Macintosh External Disk Drive
Release date: May 4, 1984
The Macintosh External Disk Drive is the original model in a series of external 3.5-inch floppy disk drives manufactured and sold by Apple Computer exclusively for the Macintosh series of computers introduced in January 1984. Later, Apple would unify their external drives to work cross-platform between the Macintosh and Apple II product lines, dropping the name "Macintosh" from the drives. Though Apple had been producing external floppy disk drives prior to 1984, they were exclusively developed for the Apple II, III and Lisa computers using the industry-standard 5.25-inch flexible disk format. The Macintosh external drives were the first to widely introduce Sony's new 3.5-inch rigid disk standard commercially and throughout their product line. Apple produced only one external 3.5-inch drive exclusively for use with the Apple II series, called the Apple UniDisk 3.5.
The original Macintosh External Disk Drive (M0130) was introduced with the Macintosh on January 24, 1984. However, it did not actually ship until May 4, 1984, sixty days after Apple had promised it to dealers. Bill Fernandez was the project manager who oversaw the design and production of the drive. The drive case was designed to match the Macintosh and included the same 400-kilobyte drive (a Sony-made 3.5-inch single-sided mechanism) installed inside the Macintosh. Although very similar to the 400-kilobyte drive which newly replaced Apple's ill-fated Twiggy drive in the Lisa, there were subtle differences relating mainly to the eject mechanism; confusingly, all of these drives were labelled identically. The Macintosh could only support one external drive, limiting the number of floppy disks mounted at once to two, but both Apple and third-party manufacturers developed external hard drives that connected to the Mac's floppy disk port and provided pass-through ports to accommodate daisy-chaining the external disk drive. Apple's Hard Disk 20 could accommodate an additional daisy-chained hard drive as well as an external floppy disk.
3.5-inch single-sided floppies had been used on several microcomputers and synthesizers in the early 1980s, including the Hewlett-Packard 150 and various MSX computers. The standard on all of these was MFM with 80 tracks and 9 sectors per track, giving 360 KB per disk. However, Apple's custom interface uses Group Coded Recording (GCR) and a unique format which puts fewer sectors on the smaller inner tracks and more sectors on the wider outer tracks of the disk. The disk speeds up when accessing the inner tracks and slows down when accessing the outer ones. This is called the "Zoned CAV" system; there are five zones of 16 tracks each. The innermost zone has 8 sectors per track, the next zone 9 sectors per track, and so on; the outermost zone has 12 sectors per track. This allows more space per disk (400 KB) and also improves reliability by reducing the number of sectors on the inner tracks, where there is less physical media available for each sector.
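As a quick check of that arithmetic, the zoned layout accounts for the stated 400 KB capacity, assuming 512-byte sectors (the sector size is an assumption made explicit here, not stated above):

# Five zones of 16 tracks each; sectors per track range from 8 on the
# innermost zone to 12 on the outermost. Sector size assumed to be 512 bytes.
SECTOR_SIZE = 512
TRACKS_PER_ZONE = 16
sectors_per_track = [8, 9, 10, 11, 12]          # one entry per zone, inner to outer

total_sectors = TRACKS_PER_ZONE * sum(sectors_per_track)   # 16 * 50 = 800 sectors
capacity_kb = total_sectors * SECTOR_SIZE // 1024
print(capacity_kb)                                          # -> 400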
The external 400-kilobyte Macintosh drive will work on any Macintosh that does not have a high density SuperDrive controller (due to electrical changes with the interface), but the disks in practice only support the MFS file system. Although a 400-kilobyte disk may be formatted with HFS, it cannot be booted from, nor is it readable in a Mac 128 or 512.
Copy protection schemes were not as elaborate or widespread on Macintosh software as they were on Apple II software for several reasons. First, the Mac drives did not afford the same degree of low-level control. Also Apple did not publish source listings for the Mac OS ROMs as they did with the Apple II. Finally, the Mac OS routines were considerably more complex and disk access had to be synchronized with the mouse and keyboard.
By early 1985, it was clear that the Macintosh needed additional storage space, in particular a hard drive. Apple announced their first hard drive for the Mac in March 1985. However, the MFS file system did not support subdirectories, making it unsuitable for a hard disk. Apple quickly began adopting for the Mac the hierarchical SOS filing system introduced with the Apple III and long since implemented in ProDOS for the Apple II series and the Lisa. This change in the Mac's filing system delayed the introduction of the double-sided Sony drives which Apple intended to offer as soon as the technology was available, a concession they made when adopting the Sony drives over their own problematic double-capacity Twiggy drives. However, based on the success of the 3.5-inch floppy drive for the Mac, there was no such obstacle to immediately implementing an 800-kilobyte drive for the Apple II, so it was introduced in September 1985, four months before the version for the Mac. While Apple simultaneously introduced their new hard drive after a 6-month delay, they chose not to implement the new floppy drive for the Macintosh at that time.
In September 1985, Apple released its first 3.5-inch drive (A2M2053) for the Apple II series utilizing Sony's new 800-kilobyte double-sided drive mechanism, which would not be released for the Macintosh until four months later. The Apple UniDisk 3.5 drive contained additional circuitry making it an "intelligent" or "smart" drive; this made it incompatible with the Macintosh, despite having the identical mechanism that was later used in the Macintosh drive. However, if the internal circuit board (which consisted of its own CPU, IWM chip, RAM and firmware) was bypassed, it could operate on a Macintosh as an 800-kilobyte drive. This gave storage-hungry Mac users a way to double their disk capacity four months before Apple officially made an 800-kilobyte drive available for the Mac. At the time, the HD20 Startup disk came with HFS and a new ".Sony" driver that supported 800K drives (in addition to the HD20). Ironically, though the drive would prove to be significantly faster than the previous 400-kilobyte drive, it was specifically slowed down to accommodate the slower 1-megahertz processor of the Apple II. It came in a Snow White-styled case colored to match the Apple IIc and had a pass-through connector for the addition of a second daisy-chained drive. It plugged directly into the Apple IIc disk port (although original IIcs needed a ROM upgrade) and required a specialized interface card on earlier Apple II models. It would later also work directly with the built-in disk port on the Apple IIc Plus and Apple IIGS through backwards compatibility, though this was not recommended for those two machines as the Apple 3.5" Drive was faster. It continued to be sold for use with the Apple IIc and IIe, which could not use the subsequent replacement Apple 3.5-inch drive, until the Apple IIc Plus redesign in 1988 and the Apple II 3.5 Disk Controller Card released in 1991. Apple developed a DuoDisk 3.5 which contained two drives vertically stacked, but never brought it to market. The 3.5-inch format was not very popular in the Apple II community (excluding the 16-bit Apple IIGS) as most software was released in the 5.25-inch format to accommodate the existing installed base of Disk II drives.
In January 1986, Apple introduced the Macintosh Plus, which had a Sony double-sided 800-kilobyte capacity disk drive and used the new HFS disk format providing directories and sub-directories. This drive was fitted into an external case as the Macintosh 800K External Drive (M0131), which was slimmer than the earlier 400-kilobyte drive. It could be used with all Macintosh models except the original 128K, which could not load the HFS disk format. The drive supported the older 400-kilobyte single-sided disks, allowing them to be shared. The use of Apple's GCR with variable speed (as used on the 400-kilobyte drive) accommodated a higher storage capacity than its 720-kilobyte PC counterparts. In addition, the mechanism was much quieter and significantly faster than its predecessor. Designed primarily to run on Macs with the new 128-kilobyte ROM, which contained the necessary code to support the drive, it could be used with Macs with older 64-kilobyte ROMs if the proper software was loaded from the system folder of a Hard Disk 20 into the Mac's RAM. The drive controlled its own speed and was no longer dependent on an external signal from the Mac, which was blocked on the early drive mechanisms compatible only with the Macintosh. Later universal mechanisms, first used on the Apple II to accommodate proprietary signals, required special cables to isolate the speed signal from the Mac to prevent damage to the drive. However, with its increased storage capacity combined with 2–4 times the RAM available on the Mac Plus, the external drive was less of a necessity than it had been with its predecessors. Nevertheless, with the only option for adding additional storage being extremely expensive hard drives, a year later Apple increased the maximum number of floppy drives that could be accessed simultaneously to three on the Macintosh SE (the Macintosh Portable was the only other Mac to do so).
Beginning in September 1986, Apple adopted a unified cross-platform product strategy, essentially eliminating platform-specific peripherals where possible. The Apple 3.5 Drive (A9M0106) is an 800K external drive released in conjunction with the Apple IIGS computer; it replaced the beige-colored Macintosh 800K External Drive and works on both the Apple IIGS and the Macintosh. It came in a case similar to the UniDisk, but in Platinum gray. Like the UniDisk 3.5, the Apple 3.5 Drive includes Apple II-specific features such as a manual disk eject button and a daisy-chain connector which allows two drives to be connected to an Apple II computer. The Macintosh, however, could still only accommodate one external drive, and ignores use of the eject button. Unlike the Macintosh 800K External Drive, the Apple 3.5 Drive can be used natively with the 64-kilobyte ROM stock Macintosh 128K & 512K computers without the HD20 INIT, albeit only with 400K MFS-formatted disks. Designed as a universal external drive replacement, the Apple 3.5 Drive was eventually made compatible with the remaining Apple II models in production upon the introduction of the Apple IIc Plus and the Apple II 3.5 Disk Controller Card for the Apple IIe.
Following the success of the Macintosh implementation of the 3.5-inch format, the format was also adopted widely by the personal computer industry. However, most of the industry adopted a different Modified Frequency Modulation (MFM) formatting scheme at a fixed rotational speed, incompatible with Apple's own GCR with variable speed, resulting in a less-expensive drive, but with a lower capacity (720 KB rather than 800 KB). In 1987 a newer, MFM-based "high-density" format was developed, which IBM first introduced in their PS/2 systems, doubling the previous storage capacity to 1.4 MB. In Apple's pursuit of cross-compatibility with DOS- and Windows-based systems to attract more business customers, they adopted the new format, thus confirming it as the first industry-wide floppy disk standard. However, Apple could not take advantage of the less expensive fixed-speed systems of the IBM-based computers, due to the need for backward compatibility with their own variable-speed formats.
Main article: SuperDrive
Later renamed the Apple SuperDrive (G7287), the Apple FDHD Drive (Floppy Disk High Density) was introduced in 1989 as Apple's first external 1.44 MB high-density double-sided 3.5-inch floppy drive. It supported all of Apple's 3.5-inch floppy disk formats as well as all standard PC formats (e.g. MS-DOS, Windows), allowing the Macintosh to read and write all industry-standard floppy disk formats. The external drive was offered only briefly with support for the Apple II, coming late in that product's life. To take advantage of the drive's extended storage and new capabilities, it required the new SWIM (Sander-Wozniak Integrated Machine) floppy disk controller chip to be present on the Macintosh and Apple II, the latter requiring the Apple II 3.5 Disk Controller Card which integrated the chip. If the drive was connected to an older Macintosh, Apple IIGS or Apple IIc Plus with the older IWM (Integrated Woz Machine) chip, the drive would act as a standard 800K drive, without any additional capabilities. The interface card was necessary for the Apple IIGS to make use of its greater storage capacity and ability to handle PC formats. The Apple IIe could not utilize the drive in any form unless it had the specialized interface card installed, much like the UniDisk 3.5 which the SuperDrive replaced. The last Mac it could be used with was the Classic II, and the drive was discontinued shortly thereafter. The drive was fitted in every desktop Mac from its introduction and was eliminated with the introduction of the iMac in 1998. PowerPC Macs dropped the original auto-inject Sony drives and went to a manual-inject mechanism.
Manufactured exclusively for use with the Macintosh PowerBook line, the Macintosh HDI-20 External 1.44MB Floppy Disk Drive (M8061) contained a low-powered, slimmer version of the SuperDrive, used a small square HDI-20 proprietary connector rather than the larger standard DE-19 desktop connector, and was powered directly by the laptop. It had a matching dark gray case and an access cover which flipped down to form a stand. The external drive was sold optionally for those PowerBooks which had no built-in drive; however, the identical drive mechanism was included internally in some PowerBook models, which otherwise had no provision to accommodate an external drive.
Compatible only with the PowerBook 2400c, the Macintosh PowerBook 2400c Floppy Disk Drive (M4327) used a unique Molex connector rather than the previous HDI-20 connector. Possibly because of the 2400c's IBM design heritage, both the drive and the computer use the same connectors as IBM ThinkPad external floppy drives from the same period; however, IBM drives are not electrically compatible. The drive was discontinued in 1998 and would be the last external floppy drive manufactured by Apple.
Transmission Control Protocol
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets between applications running on hosts communicating via an IP network. Major Internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP. Applications that do not require reliable data stream service may use the User Datagram Protocol (UDP), which provides a connectionless datagram service that emphasizes reduced latency over reliability.
- 1 Historical origin
- 2 Network function
- 3 TCP segment structure
- 4 Protocol operation
- 4.1 Connection establishment
- 4.2 Connection termination
- 4.3 Resource usage
- 4.4 Data transfer
- 4.5 Maximum segment size
- 4.6 Selective acknowledgments
- 4.7 Window scaling
- 4.8 TCP timestamps
- 4.9 Out-of-band data
- 4.10 Forcing data delivery
- 5 Vulnerabilities
- 6 TCP ports
- 7 Development
- 8 TCP over wireless networks
- 9 Hardware implementations
- 10 Debugging
- 11 Alternatives
- 12 Checksum computation
- 13 See also
- 14 References
- 15 Further reading
- 16 External links
During May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper titled A Protocol for Packet Network Intercommunication. The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet-switching among the nodes. A central control component of this model was the Transmission Control Program that incorporated both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol at the connection-oriented layer and the Internet Protocol at the internetworking (datagram) layer. The model became known informally as TCP/IP, although formally it was henceforth termed the Internet Protocol Suite.
The Transmission Control Protocol provides a communication service at an intermediate level between an application program and the Internet Protocol. It provides host-to-host connectivity at the Transport Layer of the Internet model. An application does not need to know the particular mechanisms for sending data via a link to another host, such as the required packet fragmentation on the transmission medium. At the transport layer, the protocol handles all handshaking and transmission details and presents an abstraction of the network connection to the application.
At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or other unpredictable network behaviour, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests re-transmission of lost data, rearranges out-of-order data and even helps minimize network congestion to reduce the occurrence of the other problems. If the data still remains undelivered, the source is notified of this failure. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the receiving application. Thus, TCP abstracts the application's communication from the underlying networking details.
TCP is used extensively by many applications available on the Internet, including the World Wide Web (WWW), e-mail, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and streaming media applications.
TCP is optimized for accurate delivery rather than timely delivery and can incur relatively long delays (on the order of seconds) while waiting for out-of-order messages or re-transmissions of lost messages. Therefore, it is not particularly suitable for real-time applications such as Voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) operating over the User Datagram Protocol (UDP) are usually recommended instead.
TCP is a reliable stream delivery service which guarantees that all bytes received will be identical with bytes sent and in the correct order. Since packet transfer by many networks is not reliable, a technique known as 'positive acknowledgement with re-transmission' is used to guarantee reliability. This fundamental technique requires the receiver to respond with an acknowledgement message as it receives the data. The sender keeps a record of each packet it sends and maintains a timer from when the packet was sent. The sender re-transmits a packet if the timer expires before receiving the message acknowledgement. The timer is needed in case a packet gets lost or corrupted.
While IP handles actual delivery of the data, TCP keeps track of 'segments' - the individual units of data transmission that a message is divided into for efficient routing through the network. For example, when an HTML file is sent from a web server, the TCP software layer of that server divides the sequence of file octets into segments and forwards them individually to the IP software layer (Internet Layer). The Internet Layer encapsulates each TCP segment into an IP packet by adding a header that includes (among other data) the destination IP address. When the client program on the destination computer receives them, the TCP layer (Transport Layer) re-assembles the individual segments and ensures they are correctly ordered and error-free as it streams them to an application.
TCP segment structure
Transmission Control Protocol accepts data from a data stream, divides it into chunks, and adds a TCP header creating a TCP segment. The TCP segment is then encapsulated into an Internet Protocol (IP) datagram, and exchanged with peers.
The term TCP packet appears in both informal and formal usage, whereas in more precise terminology segment refers to the TCP protocol data unit (PDU), datagram to the IP PDU, and frame to the data link layer PDU:
Processes transmit data by calling on the TCP and passing buffers of data as arguments. The TCP packages the data from these buffers into segments and calls on the internet module [e.g. IP] to transmit each segment to the destination TCP.
A TCP segment consists of a segment header and a data section. The TCP header contains 10 mandatory fields and an optional extension field (Options).
The data section follows the header. Its contents are the payload data carried for the application. The length of the data section is not specified in the TCP segment header. It can be calculated by subtracting the combined length of the TCP header and the encapsulating IP header from the total IP datagram length (specified in the IP header).
The header fields, by byte offset within the segment:
- Bytes 0–3: Source port (16 bits), Destination port (16 bits)
- Bytes 4–7: Sequence number (32 bits)
- Bytes 8–11: Acknowledgment number (32 bits, if ACK set)
- Bytes 12–15: Data offset (4 bits), Reserved (3 bits), Flags (9 bits), Window size (16 bits)
- Bytes 16–19: Checksum (16 bits), Urgent pointer (16 bits, if URG set)
- Bytes 20 and up: Options (if data offset > 5; padded at the end with "0" bytes if necessary)
- Source port (16 bits)
- Identifies the sending port
- Destination port (16 bits)
- Identifies the receiving port
- Sequence number (32 bits)
- Has a dual role:
- If the SYN flag is set (1), then this is the initial sequence number. The sequence number of the actual first data byte and the acknowledged number in the corresponding ACK are then this sequence number plus 1.
- If the SYN flag is clear (0), then this is the accumulated sequence number of the first data byte of this segment for the current session.
- Acknowledgment number (32 bits)
- If the ACK flag is set then the value of this field is the next sequence number that the sender is expecting. This acknowledges receipt of all prior bytes (if any). The first ACK sent by each end acknowledges the other end's initial sequence number itself, but no data.
- Data offset (4 bits)
- Specifies the size of the TCP header in 32-bit words. The minimum size header is 5 words and the maximum is 15 words thus giving the minimum size of 20 bytes and maximum of 60 bytes, allowing for up to 40 bytes of options in the header. This field gets its name from the fact that it is also the offset from the start of the TCP segment to the actual data.
- Reserved (3 bits)
- For future use and should be set to zero
- Flags (9 bits) (aka Control bits)
- Contains 9 1-bit flags
- NS (1 bit): ECN-nonce concealment protection (experimental: see RFC 3540).
- CWR (1 bit): Congestion Window Reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set and had responded with the congestion control mechanism (added to the header by RFC 3168).
- ECE (1 bit): ECN-Echo has a dual role, depending on the value of the SYN flag. It indicates:
- If the SYN flag is set (1), that the TCP peer is ECN capable.
- If the SYN flag is clear (0), that a packet with Congestion Experienced flag set (ECN=11) in IP header was received during normal transmission (added to header by RFC 3168). This serves as an indication of network congestion (or impending congestion) to the TCP sender.
- URG (1 bit): indicates that the Urgent pointer field is significant
- ACK (1 bit): indicates that the Acknowledgment field is significant. All packets after the initial SYN packet sent by the client should have this flag set.
- PSH (1 bit): Push function. Asks to push the buffered data to the receiving application.
- RST (1 bit): Reset the connection
- SYN (1 bit): Synchronize sequence numbers. Only the first packet sent from each end should have this flag set. Some other flags and fields change meaning based on this flag; some are only valid when it is set, and others only when it is clear.
- FIN (1 bit): Last packet from sender.
- Window size (16 bits)
- The size of the receive window, which specifies the number of window size units (by default, bytes) (beyond the segment identified by the sequence number in the acknowledgment field) that the sender of this segment is currently willing to receive (see Flow control and Window Scaling)
- Checksum (16 bits)
- The 16-bit checksum field is used for error-checking of the header, the payload, and a pseudo-header. The pseudo-header consists of the source IP address, the destination IP address, the protocol number for TCP (6), and the length of the TCP header and payload (in bytes).
- Urgent pointer (16 bits)
- if the URG flag is set, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte
- Options (Variable 0–320 bits, divisible by 32)
- The length of this field is determined by the data offset field. Options have up to three fields: Option-Kind (1 byte), Option-Length (1 byte), Option-Data (variable). The Option-Kind field indicates the type of option, and is the only field that is not optional. Depending on the kind of option, the next two fields may be present: the Option-Length field indicates the total length of the option, and the Option-Data field contains the value of the option, if applicable. For example, an Option-Kind byte of 0x01 indicates that this is a No-Op option used only for padding; it does not have an Option-Length or Option-Data byte following it. An Option-Kind byte of 0 is the End Of Options option, and is also only one byte. An Option-Kind byte of 0x02 indicates that this is the Maximum Segment Size option, and is followed by an Option-Length byte giving the total length of the option (0x04), a length that includes the Option-Kind and Option-Length bytes themselves. So while the MSS value itself occupies two bytes, the option as a whole occupies 4 bytes, and an MSS option field with a value of 0x05B4 will show up as (0x02 0x04 0x05B4) in the TCP options section.
- Some options may only be sent when SYN is set; they are indicated below as [SYN]. Option-Kind and standard lengths given as (Option-Kind,Option-Length).
- 0 (8 bits): End of options list
- 1 (8 bits): No operation (NOP, Padding) This may be used to align option fields on 32-bit boundaries for better performance.
- 2,4,SS (32 bits): Maximum segment size (see maximum segment size) [SYN]
- 3,3,S (24 bits): Window scale (see window scaling for details) [SYN]
- 4,2 (16 bits): Selective Acknowledgement permitted. [SYN] (See selective acknowledgments for details)
- 5,N,BBBB,EEEE,... (variable bits, N is either 10, 18, 26, or 34): Selective ACKnowledgement (SACK). These first two bytes are followed by a list of 1–4 blocks being selectively acknowledged, specified as 32-bit begin/end pointers.
- 8,10,TTTT,EEEE (80 bits): Timestamp and echo of previous timestamp (see TCP timestamps for details)
- (The remaining options are historical, obsolete, experimental, not yet standardized, or unassigned)
- The TCP header padding is used to ensure that the TCP header ends and data begins on a 32-bit boundary. The padding is composed of zeros. (A short sketch of parsing the fixed header follows below.)
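To make the layout above concrete, the sketch below unpacks the ten mandatory fields of the 20-byte fixed header with Python's struct module; the function name and returned dictionary are illustrative rather than part of any standard API.

import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the 20-byte fixed TCP header; any options follow it."""
    (src_port, dst_port, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (off_flags >> 12) & 0xF          # header length in 32-bit words
    flags = off_flags & 0x1FF                      # NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": data_offset * 4,             # in bytes; 20 when there are no options
        "flags": flags, "window": window,
        "checksum": checksum, "urgent_ptr": urgent,
        "options": segment[20:data_offset * 4],    # empty when data offset == 5
    }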
TCP protocol operations may be divided into three phases. Connections must be properly established in a multi-step handshake process (connection establishment) before entering the data transfer phase. After data transmission is completed, the connection termination closes established virtual circuits and releases all allocated resources.
A TCP connection is managed by an operating system through a programming interface that represents the local end-point for communications, the Internet socket. During the lifetime of a TCP connection the local end-point undergoes a series of state changes:
- LISTEN (server) represents waiting for a connection request from any remote TCP and port.
- SYN-SENT (client) represents waiting for a matching connection request after having sent a connection request.
- SYN-RECEIVED (server) represents waiting for a confirming connection request acknowledgment after having both received and sent a connection request.
- ESTABLISHED (both server and client) represents an open connection, data received can be delivered to the user. The normal state for the data transfer phase of the connection.
- FIN-WAIT-1 (both server and client) represents waiting for a connection termination request from the remote TCP, or an acknowledgment of the connection termination request previously sent.
- FIN-WAIT-2 (both server and client) represents waiting for a connection termination request from the remote TCP.
- CLOSE-WAIT (both server and client) represents waiting for a connection termination request from the local user.
- CLOSING (both server and client) represents waiting for a connection termination request acknowledgment from the remote TCP.
- LAST-ACK (both server and client) represents waiting for an acknowledgment of the connection termination request previously sent to the remote TCP (which includes an acknowledgment of its connection termination request).
- TIME-WAIT (either server or client) represents waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request. [According to RFC 793 a connection can stay in TIME-WAIT for a maximum of four minutes known as two MSL (maximum segment lifetime).]
- CLOSED (both server and client) represents no connection state at all.
To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:
- SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A.
- SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number i.e. A+1, and the sequence number that the server chooses for the packet is another random number, B.
- ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgement value i.e. A+1, and the acknowledgement number is set to one more than the received sequence number i.e. B+1.
At this point, both the client and server have received an acknowledgment of the connection. Steps 1 and 2 establish the connection parameter (sequence number) for one direction, and it is acknowledged. Steps 2 and 3 establish the connection parameter (sequence number) for the other direction, and it is acknowledged. With these, a full-duplex communication is established.
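The handshake itself is carried out by the operating system's TCP stack; from an application's point of view, the passive and active opens look like the following minimal sketch using Python's socket module (the address and port are arbitrary examples):

import socket

# Passive open (server): bind to a port and listen; the kernel now answers
# incoming SYNs with SYN-ACK and completes handshakes into the accept queue.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))
server.listen()

# Active open (client): connect() sends the SYN and returns once the
# three-way handshake (SYN, SYN-ACK, ACK) has completed.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))

conn, addr = server.accept()        # hands the established connection to the application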
The connection termination phase uses a four-way handshake, with each side of the connection terminating independently. When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an ACK. Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP endpoint. After the side that sent the first FIN has responded with the final ACK, it waits for a timeout before finally closing the connection, during which time the local port is unavailable for new connections; this prevents confusion due to delayed packets being delivered during subsequent connections.
A connection can be "half-open", in which case one side has terminated its end, but the other has not. The side that has terminated can no longer send any data into the connection, but the other side can. The terminating side should continue reading the data until the other side terminates as well.
It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK (merely combines 2 steps into one) and host A replies with an ACK.
Some host TCP stacks may implement a half-duplex close sequence, as Linux or HP-UX do. If such a host actively closes a connection but still has not read all the incoming data the stack has already received from the link, it sends a RST instead of a FIN (Section 4.2.2.13 in RFC 1122). This allows a TCP application to be sure the remote application has read all the data the former sent, by waiting for the FIN from the remote side when it actively closes the connection. But the remote TCP stack cannot distinguish between a Connection Aborting RST and a Data Loss RST; both cause the remote stack to lose all the data it has received.
Some application protocols that use the TCP open/close handshaking as their own application-level open/close signalling may encounter the RST problem on active close. As an example:
s = connect(remote); send(s, data); close(s);
For a program flow like the above, a TCP/IP stack like that described above does not guarantee that all the data arrives at the other application if unread data has arrived at this end.
Most implementations allocate an entry in a table that maps a session to a running operating system process. Because TCP packets do not include a session identifier, both endpoints identify the session using the client's address and port. Whenever a packet is received, the TCP implementation must perform a lookup on this table to find the destination process. Each entry in the table is known as a Transmission Control Block or TCB. It contains information about the endpoints (IP and port), status of the connection, running data about the packets that are being exchanged and buffers for sending and receiving data.
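A toy model of that lookup, keyed by the connection 4-tuple (the class and field names are illustrative only, not taken from any particular stack):

from dataclasses import dataclass

@dataclass
class TCB:                      # simplified Transmission Control Block
    state: str = "ESTABLISHED"
    snd_nxt: int = 0            # next sequence number to send
    rcv_nxt: int = 0            # next sequence number expected
    send_buffer: bytes = b""
    recv_buffer: bytes = b""

# No session identifier travels in the packets, so sessions are keyed by
# (source IP, source port, destination IP, destination port).
connections: dict[tuple[str, int, str, int], TCB] = {}

def demultiplex(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> TCB | None:
    """Find the TCB (and hence the owning process) for an arriving packet."""
    return connections.get((src_ip, src_port, dst_ip, dst_port))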
The number of sessions in the server side is limited only by memory and can grow as new connections arrive, but the client must allocate a random port before sending the first SYN to the server. This port remains allocated during the whole conversation, and effectively limits the number of outgoing connections from each of the client's IP addresses. If an application fails to properly close unrequired connections, a client can run out of resources and become unable to establish new TCP connections, even from other applications.
Both endpoints must also allocate space for unacknowledged packets and received (but unread) data.
There are a few key features that set TCP apart from User Datagram Protocol:
- Ordered data transfer: the destination host rearranges according to sequence number
- Retransmission of lost packets: any cumulative stream not acknowledged is retransmitted
- Error-free data transfer
- Flow control: limits the rate a sender transfers data to guarantee reliable delivery. The receiver continually hints to the sender how much data can be received (controlled by the sliding window). When the receiving host's buffer fills, the next acknowledgment contains a 0 in the window size, to stop transfer and allow the data in the buffer to be processed.
- Congestion control
TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each computer so that the data can be reconstructed in order, regardless of any packet reordering, or packet loss that may occur during transmission. The sequence number of the first byte is chosen by the transmitter for the first packet, which is flagged SYN. This number can be arbitrary, and should in fact be unpredictable to defend against TCP sequence prediction attacks.
Acknowledgements (Acks) are sent with a sequence number by the receiver of data to tell the sender that data has been received to the specified byte. Acks do not imply that the data has been delivered to the application. They merely signify that it is now the receiver's responsibility to deliver the data.
Reliability is achieved by the sender detecting lost data and retransmitting it. TCP uses two primary techniques to identify loss: retransmission timeout (abbreviated as RTO) and duplicate cumulative acknowledgements (DupAcks).
Dupack-based retransmission
If a single packet (say packet 100) in a stream is lost, then the receiver cannot acknowledge packets above 100 because it uses cumulative acks. Hence the receiver acknowledges packet 100 again on the receipt of another data packet. This duplicate acknowledgement is used as a signal for packet loss. That is, if the sender receives three duplicate acknowledgements, it retransmits the last unacknowledged packet. A threshold of three is used because the network may reorder packets causing duplicate acknowledgements. This threshold has been demonstrated to avoid spurious retransmissions due to reordering. Sometimes selective acknowledgements (SACKs) are used to give more explicit feedback on which packets have been received. This greatly improves TCP's ability to retransmit the right packets.
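A sender-side sketch of that rule (simplified; a real stack would also adjust its congestion window and consult any SACK information here, and the class name is made up for illustration):

DUPACK_THRESHOLD = 3            # three duplicates, to tolerate mild reordering

class DupAckDetector:
    def __init__(self, retransmit):
        self.retransmit = retransmit    # callback: resend the segment starting at this seq
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no: int):
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUPACK_THRESHOLD:
                self.retransmit(ack_no)         # fast retransmit of the missing segment
        else:                                   # new data acknowledged: reset the counter
            self.last_ack = ack_no
            self.dup_count = 0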
Timeout-based retransmission
Whenever a packet is sent, the sender sets a timer that is a conservative estimate of when that packet will be acked. If the sender does not receive an ack by then, it transmits that packet again. The timer is reset every time the sender receives an acknowledgement. This means that the retransmit timer fires only when the sender has received no acknowledgement for a long time. Typically the timer value is set to SRTT + max(G, 4 × RTTVAR), where SRTT is the smoothed round-trip time, RTTVAR is the round-trip time variation, and G is the clock granularity. Further, in case a retransmit timer has fired and still no acknowledgement is received, the next timer is set to twice the previous value (up to a certain threshold). Among other things, this helps defend against a man-in-the-middle denial of service attack that tries to fool the sender into making so many retransmissions that the receiver is overwhelmed.
If the sender infers that data has been lost in the network using one of the two techniques described above, it retransmits the data.
Sequence numbers allow receivers to discard duplicate packets and properly sequence reordered packets. Acknowledgments allow senders to determine when to retransmit lost packets.
To assure correctness a checksum field is included; see checksum computation section for details on checksumming. The TCP checksum is a weak check by modern standards. Data Link Layers with high bit error rates may require additional link error correction/detection capabilities. The weak checksum is partially compensated for by the common use of a CRC or better integrity check at layer 2, below both TCP and IP, such as is used in PPP or the Ethernet frame. However, this does not mean that the 16-bit TCP checksum is redundant: remarkably, introduction of errors in packets between CRC-protected hops is common, but the end-to-end 16-bit TCP checksum catches most of these simple errors. This is the end-to-end principle at work.
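For reference, the checksum is the 16-bit one's-complement sum over the IPv4 pseudo-header and the segment, with the checksum field itself zeroed before summing; a sketch:

import socket
import struct

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """One's-complement checksum over pseudo-header + TCP header + payload.

    `segment` must have its checksum field set to zero before calling.
    """
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, socket.IPPROTO_TCP, len(segment)))
    data = pseudo + segment
    if len(data) % 2:                              # pad to an even number of bytes
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF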
TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to receive and process it reliably. Having a mechanism for flow control is essential in an environment where machines of diverse network speeds communicate. For example, if a PC sends data to a smartphone that is slowly processing received data, the smartphone must regulate the data flow so as not to be overwhelmed.
TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in the receive window field the amount of additionally received data (in bytes) that it is willing to buffer for the connection. The sending host can send only up to that amount of data before it must wait for an acknowledgment and window update from the receiving host.
When a receiver advertises a window size of 0, the sender stops sending data and starts the persist timer. The persist timer is used to protect TCP from a deadlock situation that could arise if a subsequent window size update from the receiver is lost, and the sender cannot send more data until receiving a new window size update from the receiver. When the persist timer expires, the TCP sender attempts recovery by sending a small packet so that the receiver responds by sending another acknowledgement containing the new window size.
If a receiver is processing incoming data in small increments, it may repeatedly advertise a small receive window. This is referred to as the silly window syndrome, since it is inefficient to send only a few bytes of data in a TCP segment, given the relatively large overhead of the TCP header.
The final main aspect of TCP is congestion control. TCP uses a number of mechanisms to achieve high performance and avoid congestion collapse, where network performance can fall by several orders of magnitude. These mechanisms control the rate of data entering the network, keeping the data flow below a rate that would trigger collapse. They also yield an approximately max-min fair allocation between flows.
Acknowledgments for data sent, or lack of acknowledgments, are used by senders to infer network conditions between the TCP sender and receiver. Coupled with timers, TCP senders and receivers can alter the behavior of the flow of data. This is more generally referred to as congestion control and/or network congestion avoidance.
In addition, senders employ a retransmission timeout (RTO) that is based on the estimated round-trip time (or RTT) between the sender and receiver, as well as the variance in this round trip time. The behavior of this timer is specified in RFC 6298. There are subtleties in the estimation of RTT. For example, senders must be careful when calculating RTT samples for retransmitted packets; typically they use Karn's Algorithm or TCP timestamps (see RFC 1323). These individual RTT samples are then averaged over time to create a Smoothed Round Trip Time (SRTT) using Jacobson's algorithm. This SRTT value is what is finally used as the round-trip time estimate.
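A compact sketch of the RFC 6298 estimator described above; the smoothing constants, the factor of four, and the one-second lower bound come from the RFC, while the clock granularity G is assumed here to be 1 ms:

ALPHA, BETA, K = 1 / 8, 1 / 4, 4     # smoothing constants from RFC 6298
G = 0.001                            # assumed clock granularity, in seconds

class RttEstimator:
    def __init__(self):
        self.srtt = None                 # smoothed round-trip time (SRTT)
        self.rttvar = None               # round-trip time variation
        self.rto = 1.0                   # initial retransmission timeout, in seconds

    def on_measurement(self, r: float) -> float:
        if self.srtt is None:            # first RTT sample
            self.srtt = r
            self.rttvar = r / 2
        else:
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - r)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * r
        self.rto = max(1.0, self.srtt + max(G, K * self.rttvar))   # RFC 6298 lower bound
        return self.rto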
Enhancing TCP to reliably handle loss, minimize errors, manage congestion and go fast in very high-speed environments are ongoing areas of research and standards development. As a result, there are a number of TCP congestion avoidance algorithm variations.
Maximum segment size
The maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is willing to receive in a single segment. For best performance, the MSS should be set small enough to avoid IP fragmentation, which can lead to packet loss and excessive retransmissions. To try to accomplish this, typically the MSS is announced by each side using the MSS option when the TCP connection is established, in which case it is derived from the maximum transmission unit (MTU) size of the data link layer of the networks to which the sender and receiver are directly attached. Furthermore, TCP senders can use path MTU discovery to infer the minimum MTU along the network path between the sender and receiver, and use this to dynamically adjust the MSS to avoid IP fragmentation within the network.
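For example, on an Ethernet link with the common 1500-byte MTU and no IP or TCP options, the announced MSS works out to 1460 bytes:

MTU = 1500                  # typical Ethernet MTU, in bytes
IPV4_HEADER = 20            # IPv4 header without options
TCP_HEADER = 20             # TCP header without options

mss = MTU - IPV4_HEADER - TCP_HEADER
print(mss)                  # -> 1460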
MSS announcement is also often called "MSS negotiation". Strictly speaking, the MSS is not "negotiated" between the originator and the receiver, because that would imply that both originator and receiver will negotiate and agree upon a single, unified MSS that applies to all communication in both directions of the connection. In fact, two completely independent values of MSS are permitted for the two directions of data flow in a TCP connection. This situation may arise, for example, if one of the devices participating in a connection has an extremely limited amount of memory reserved (perhaps even smaller than the overall discovered Path MTU) for processing incoming TCP segments.
Relying purely on the cumulative acknowledgment scheme employed by the original TCP protocol can lead to inefficiencies when packets are lost. For example, suppose 10,000 bytes are sent in 10 different TCP packets, and the first packet is lost during transmission. In a pure cumulative acknowledgment protocol, the receiver cannot say that it received bytes 1,000 to 9,999 successfully, but failed to receive the first packet, containing bytes 0 to 999. Thus the sender may then have to resend all 10,000 bytes.
To alleviate this issue TCP employs the selective acknowledgment (SACK) option, defined in RFC 2018, which allows the receiver to acknowledge discontinuous blocks of packets which were received correctly, in addition to the sequence number of the last contiguous byte received successfully, as in the basic TCP acknowledgment. The acknowledgement can specify a number of SACK blocks, where each SACK block is conveyed by the starting and ending sequence numbers of a contiguous range that the receiver correctly received. In the example above, the receiver would send a SACK with sequence numbers 1000 and 9999. The sender would accordingly retransmit only the first packet (bytes 0 to 999).
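A receiver-side sketch of how such blocks could be assembled from the out-of-order ranges that actually arrived (illustrative only; the on-the-wire option encoding is omitted and the helper name is made up):

def sack_blocks(received_ranges, max_blocks=4):
    """Merge received byte ranges into at most `max_blocks` SACK blocks.

    `received_ranges` holds (start, end) pairs of bytes that arrived beyond
    the cumulative acknowledgment point, i.e. out of order.
    """
    blocks = []
    for start, end in sorted(received_ranges):
        if blocks and start <= blocks[-1][1] + 1:          # contiguous or overlapping
            blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))
        else:
            blocks.append((start, end))
    return blocks[:max_blocks]

# The example above: bytes 0-999 were lost, bytes 1000-9999 arrived.
print(sack_blocks([(1000, 1999), (2000, 2999), (3000, 9999)]))   # -> [(1000, 9999)]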
A TCP sender can interpret an out-of-order packet delivery as a lost packet. If it does so, the TCP sender will retransmit the packet previous to the out-of-order packet and slow its data delivery rate for that connection. The duplicate-SACK option, an extension to the SACK option that was defined in RFC 2883, solves this problem. The TCP receiver sends a D-SACK to indicate that no packets were lost, and the TCP sender can then reinstate the higher transmission rate.
The SACK option is not mandatory, and comes into operation only if both parties support it. This is negotiated when a connection is established. SACK uses the optional part of the TCP header (see TCP segment structure for details). The use of SACK has become widespread—all popular TCP stacks support it. Selective acknowledgment is also used in Stream Control Transmission Protocol (SCTP).
For more efficient use of high-bandwidth networks, a larger TCP window size may be used. The TCP window size field controls the flow of data and its value is limited to between 2 and 65,535 bytes.
Since the size field cannot be expanded, a scaling factor is used. The TCP window scale option, as defined in RFC 1323, is an option used to increase the maximum window size from 65,535 bytes to 1 gigabyte. Scaling up to larger window sizes is a part of what is necessary for TCP tuning.
The window scale option is used only during the TCP 3-way handshake. The window scale value represents the number of bits to left-shift the 16-bit window size field. The window scale value can be set from 0 (no shift) to 14 for each direction independently. Both sides must send the option in their SYN segments to enable window scaling in either direction.
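The effective window is simply the advertised 16-bit value shifted left by the agreed scale factor; for example:

def effective_window(window_field: int, scale: int) -> int:
    """Receive window in bytes after applying the window scale option (0-14)."""
    return window_field << scale

print(effective_window(65_535, 0))     # ->        65,535 bytes (no scaling)
print(effective_window(65_535, 14))    # -> 1,073,725,440 bytes (about 1 GB)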
Some routers and packet firewalls rewrite the window scaling factor during a transmission. This causes sending and receiving sides to assume different TCP window sizes. The result is non-stable traffic that may be very slow. The problem is visible on some sites behind a defective router.
TCP timestamps, defined in RFC 1323, can help TCP determine in which order packets were sent. TCP timestamps are not normally aligned to the system clock and start at some random value. Many operating systems will increment the timestamp for every elapsed millisecond; however the RFC only states that the ticks should be proportional.
There are two timestamp fields:
- a 4-byte sender timestamp value (my timestamp)
- a 4-byte echo reply timestamp value (the most recent timestamp received from you).
TCP timestamps are used in an algorithm known as Protection Against Wrapped Sequence numbers, or PAWS (see RFC 1323 for details). PAWS is used when the receive window crosses the sequence number wraparound boundary. In the case where a packet was potentially retransmitted it answers the question: "Is this sequence number in the first 4 GB or the second?" And the timestamp is used to break the tie.
Also, the Eifel detection algorithm (RFC 3522) uses TCP timestamps to determine if retransmissions are occurring because packets are lost or simply out of order.
It is possible to interrupt or abort the queued stream instead of waiting for the stream to finish. This is done by specifying the data as urgent. This tells the receiving program to process it immediately, along with the rest of the urgent data. When finished, TCP informs the application and resumes the stream queue. An example is when TCP is used for a remote login session: the user can send a keyboard sequence that interrupts or aborts the program at the other end. These signals are most often needed when a program on the remote machine fails to operate correctly. The signals must be sent without waiting for the program to finish its current transfer.
TCP OOB data was not designed for the modern Internet. The urgent pointer only alters the processing on the remote host and doesn't expedite any processing on the network itself. When it gets to the remote host there are two slightly different interpretations of the protocol, which means only single bytes of OOB data are reliable. This is assuming it is reliable at all as it is one of the least commonly used protocol elements and tends to be poorly implemented.
Forcing data delivery
Normally, TCP waits for 200 ms for a full packet of data to send (Nagle's Algorithm tries to group small messages into a single packet). This wait creates small, but potentially serious, delays if repeated constantly during a file transfer. For example, a typical send block would be 4 KB and a typical MSS is 1460, so 2 packets go out on a 10 Mbit/s Ethernet taking ~1.2 ms each, followed by a third carrying the remaining 1176 bytes after a 197 ms pause because TCP is waiting for a full buffer.
In the case of telnet, each user keystroke is echoed back by the server before the user can see it on the screen. This delay would become very annoying.
Setting the socket option TCP_NODELAY overrides the default 200 ms send delay. Application programs use this socket option to force output to be sent after writing a character or line of characters.
The RFC defines the PSH push bit as "a message to the receiving TCP stack to send this data immediately up to the receiving application". There is no way to indicate or control it in user space using Berkeley sockets; it is controlled by the protocol stack only.
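With Berkeley-style sockets, shown here through Python's socket module, disabling Nagle's algorithm is a single option on the connected socket (host and port are placeholders):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))

# Disable Nagle's algorithm so small writes are sent immediately rather than
# being coalesced while waiting for outstanding acknowledgments.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)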
TCP may be attacked in a variety of ways. The results of a thorough security assessment of TCP, along with possible mitigations for the identified issues, were published in 2009, and this work is currently being pursued within the IETF.
Denial of service
By using a spoofed IP address and repeatedly sending purposely assembled SYN packets, followed by many ACK packets, attackers can cause the server to consume large amounts of resources keeping track of the bogus connections. This is known as a SYN flood attack. Proposed solutions to this problem include SYN cookies and cryptographic puzzles, though SYN cookies come with their own set of vulnerabilities. Sockstress is a similar attack, that might be mitigated with system resource management. An advanced DoS attack involving the exploitation of the TCP Persist Timer was analyzed in Phrack #66.
An attacker who is able to eavesdrop on a TCP session and redirect packets can hijack a TCP connection. To do so, the attacker learns the sequence number from the ongoing communication and forges a false segment that looks like the next segment in the stream. Such a simple hijack can result in one packet being erroneously accepted at one end. When the receiving host acknowledges the extra segment to the other side of the connection, synchronization is lost. Hijacking might be combined with Address Resolution Protocol (ARP) or routing attacks that allow taking control of the packet flow, so as to get permanent control of the hijacked TCP connection.
Impersonating a different IP address was not difficult prior to RFC 1948, when the initial sequence number was easily guessable. That allowed an attacker to blindly send a sequence of packets that the receiver would believe to come from a different IP address, without the need to deploy ARP or routing attacks: it is enough to ensure that the legitimate host of the impersonated IP address is down, or bring it to that condition using denial-of-service attacks. This is why the initial sequence number is now chosen at random.
An attacker who can eavesdrop and predict the size of the next packet to be sent can cause the receiver to accept a malicious payload without disrupting the existing connection. The attacker injects a malicious packet with the sequence number and a payload size of the next expected packet. When the legitimate packet is ultimately received, it is found to have the same sequence number and length as a packet already received and is silently dropped as a normal duplicate packet—the legitimate packet is "vetoed" by the malicious packet. Unlike in connection hijacking, the connection is never desynchronized and communication continues as normal after the malicious payload is accepted. TCP veto gives the attacker less control over the communication, but makes the attack particularly resistant to detection. The large increase in network traffic from the ACK storm is avoided. The only evidence to the receiver that something is amiss is a single duplicate packet, a normal occurrence in an IP network. The sender of the vetoed packet never sees any evidence of an attack.
Another vulnerability is the TCP reset attack.
TCP and UDP use port numbers to identify sending and receiving application end-points on a host, often called Internet sockets. Each side of a TCP connection has an associated 16-bit unsigned port number (0-65535) reserved by the sending or receiving application. Arriving TCP packets are identified as belonging to a specific TCP connection by its sockets, that is, the combination of source host address, source port, destination host address, and destination port. This means that a server computer can provide several clients with several services simultaneously, as long as a client takes care of initiating any simultaneous connections to one destination port from different source ports.
Port numbers are categorized into three basic categories: well-known, registered, and dynamic/private. The well-known ports are assigned by the Internet Assigned Numbers Authority (IANA) and are typically used by system-level or root processes. Well-known applications running as servers and passively listening for connections typically use these ports. Some examples include: FTP (20 and 21), SSH (22), TELNET (23), SMTP (25), HTTP over SSL/TLS (443), and HTTP (80). Registered ports are typically used by end user applications as ephemeral source ports when contacting servers, but they can also identify named services that have been registered by a third party. Dynamic/private ports can also be used by end user applications, but are less commonly so. Dynamic/private ports do not contain any meaning outside of any particular TCP connection.
Network Address Translation (NAT) typically uses dynamic port numbers on the ("Internet-facing") public side to disambiguate the flow of traffic that is passing between a public network and a private subnetwork, thereby allowing many IP addresses (and their ports) on the subnet to be serviced by a single public-facing address.
TCP is a complex protocol. However, while significant enhancements have been made and proposed over the years, its most basic operation has not changed significantly since its first specification RFC 675 in 1974, and the v4 specification RFC 793, published in September 1981. RFC 1122, Host Requirements for Internet Hosts, clarified a number of TCP protocol implementation requirements. A list of the 8 required specifications and over 20 strongly encouraged enhancements is available in RFC 7414. Among this list is RFC 2581, TCP Congestion Control, one of the most important TCP-related RFCs in recent years, which describes updated algorithms that avoid undue congestion. In 2001, RFC 3168 was written to describe Explicit Congestion Notification (ECN), a congestion avoidance signaling mechanism.
The original TCP congestion avoidance algorithm was known as "TCP Tahoe", but many alternative algorithms have since been proposed (including TCP Reno, TCP Vegas, FAST TCP, TCP New Reno, and TCP Hybla).
TCP Interactive (iTCP) is a research effort into TCP extensions that allows applications to subscribe to TCP events and register handler components that can launch applications for various purposes, including application-assisted congestion control.
Multipath TCP (MPTCP) is an ongoing effort within the IETF that aims at allowing a TCP connection to use multiple paths to maximize resource usage and increase redundancy. The redundancy offered by Multipath TCP in the context of wireless networks enables the simultaneous utilization of different networks, which brings higher throughput and better handover capabilities. Multipath TCP also brings performance benefits in datacenter environments. The reference implementation of Multipath TCP is being developed in the Linux kernel. Multipath TCP is used to support the Siri voice recognition application on iPhones, iPads and Macs.
TCP Cookie Transactions (TCPCT) is an extension proposed in December 2009 to secure servers against denial-of-service attacks. Unlike SYN cookies, TCPCT does not conflict with other TCP extensions such as window scaling. TCPCT was designed due to necessities of DNSSEC, where servers have to handle large numbers of short-lived TCP connections.
tcpcrypt is an extension proposed in July 2010 to provide transport-level encryption directly in TCP itself. It is designed to work transparently and not require any configuration. Unlike TLS (SSL), tcpcrypt itself does not provide authentication, but provides simple primitives to the application to do so. As of 2010, the first tcpcrypt IETF draft has been published and implementations exist for several major platforms.
TCP Fast Open is an extension to speed up the opening of successive TCP connections between two endpoints. It works by skipping the three-way handshake using a cryptographic "cookie". It is similar to an earlier proposal called T/TCP, which was not widely adopted due to security issues. As of July 2012, it is an IETF Internet draft.
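As a rough illustration only, the sketch below shows how TCP Fast Open is commonly requested through the Linux socket API; it assumes a kernel with TFO enabled (net.ipv4.tcp_fastopen) and a Python build that exposes the TCP_FASTOPEN and MSG_FASTOPEN constants, and the port number is arbitrary. On platforms without support, the constants may be missing or the call simply falls back to an ordinary handshake.

```python
import socket

# Server side: opt in to TFO before listen(); the option value is the queue
# length for pending Fast Open requests (Linux-specific behaviour).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
srv.bind(("127.0.0.1", 8080))
srv.listen()

# Client side: sendto() with MSG_FASTOPEN initiates the connection and, once a
# Fast Open cookie from an earlier connection is cached, carries data in the SYN.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.sendto(b"hello", socket.MSG_FASTOPEN, ("127.0.0.1", 8080))
cli.close()
srv.close()
```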
Proposed in May 2013, Proportional Rate Reduction (PRR) is a TCP extension developed by Google engineers. PRR ensures that the TCP window size after recovery is as close to the Slow-start threshold as possible. The algorithm is designed to improve the speed of recovery and is the default congestion control algorithm in Linux 3.2+ kernels.
TCP over wireless networks
TCP was originally designed for wired networks. Packet loss is considered to be the result of network congestion and the congestion window size is reduced dramatically as a precaution. However, wireless links are known to experience sporadic and usually temporary losses due to fading, shadowing, handoff, interference, and other radio effects that are not strictly congestion. After the (erroneous) back-off of the congestion window size due to wireless packet loss, there may be a congestion avoidance phase with a conservative decrease in window size. This causes the radio link to be underutilized. Extensive research on combating these harmful effects has been conducted. Suggested solutions can be categorized as end-to-end solutions, which require modifications at the client or server; link-layer solutions, such as Radio Link Protocol (RLP) in cellular networks; or proxy-based solutions, which require some changes in the network without modifying end nodes.
One way to overcome the processing power requirements of TCP is to build hardware implementations of it, widely known as TCP offload engines (TOE). The main problem of TOEs is that they are hard to integrate into computing systems, requiring extensive changes in the operating system of the computer or device. One company to develop such a device was Alacritech.
A packet sniffer, which intercepts TCP traffic on a network link, can be useful in debugging networks, network stacks, and applications that use TCP by showing the user what packets are passing through a link. Some networking stacks support the SO_DEBUG socket option, which can be enabled on the socket using setsockopt. That option dumps all the packets, TCP states, and events on that socket, which is helpful in debugging. Netstat is another utility that can be used for debugging.
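For example, a hedged sketch of the SO_DEBUG option mentioned above: enabling it is a one-line setsockopt call, though whether a trace is actually produced, and how it is read back, is platform-dependent, and the option may require elevated privileges.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask the stack to record debugging information for this socket.
# May require root/administrator privileges and may be a no-op on some systems.
try:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_DEBUG, 1)
except PermissionError:
    print("SO_DEBUG needs elevated privileges on this system")

# Read the option back to confirm whether it was accepted.
print("SO_DEBUG enabled:", bool(sock.getsockopt(socket.SOL_SOCKET, socket.SO_DEBUG)))
sock.close()
```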
For many applications TCP is not appropriate. One problem (at least with normal implementations) is that the application cannot access the packets coming after a lost packet until the retransmitted copy of the lost packet is received. This causes problems for real-time applications such as streaming media, real-time multiplayer games and voice over IP (VoIP) where it is generally more useful to get most of the data in a timely fashion than it is to get all of the data in order.
Also, for embedded systems, network booting, and servers that serve simple requests from huge numbers of clients (e.g. DNS servers) the complexity of TCP can be a problem. Finally, some tricks such as transmitting data between two hosts that are both behind NAT (using STUN or similar systems) are far simpler without a relatively complex protocol like TCP in the way.
Generally, where TCP is unsuitable, the User Datagram Protocol (UDP) is used. This provides the application multiplexing and checksums that TCP does, but does not handle streams or retransmission, giving the application developer the ability to code them in a way suitable for the situation, or to replace them with other methods like forward error correction or interpolation.
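A minimal sketch of the UDP alternative over the loopback interface (port 9999 is an arbitrary choice): datagrams are multiplexed by port and checksummed, but a lost datagram is simply gone unless the application adds its own recovery.

```python
import socket

# Receiver: bound to a port; each recvfrom() returns one whole datagram.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 9999))
rx.settimeout(1.0)

# Sender: no handshake, no stream, no retransmission -- just a datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"sample datagram", ("127.0.0.1", 9999))

try:
    data, source = rx.recvfrom(2048)
    print("received", data, "from", source)
except socket.timeout:
    # A lost datagram is simply gone; any recovery (resend, forward error
    # correction, interpolation) is up to the application.
    print("datagram lost")

tx.close()
rx.close()
```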
Stream Control Transmission Protocol (SCTP) is another protocol that provides reliable stream oriented services similar to TCP. It is newer and considerably more complex than TCP, and has not yet seen widespread deployment. However, it is especially designed to be used in situations where reliability and near-real-time considerations are important.
TCP also has issues in high-bandwidth environments. The TCP congestion avoidance algorithm works very well for ad-hoc environments where the data sender is not known in advance. If the environment is predictable, a timing-based protocol such as Asynchronous Transfer Mode (ATM) can avoid TCP's retransmission overhead.
Multipurpose Transaction Protocol (MTP/IP) is patented proprietary software that is designed to adaptively achieve high throughput and transaction performance in a wide variety of network conditions, particularly those where TCP is perceived to be inefficient.
TCP checksum for IPv4
The checksum field is the 16-bit one's complement of the one's complement sum of all 16-bit words in the header and text. If a segment contains an odd number of header and text octets to be checksummed, the last octet is padded on the right with zeros to form a 16-bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros.
In other words, after appropriate padding, all 16-bit words are added using one's complement arithmetic. The sum is then bitwise complemented and inserted as the checksum field. A pseudo-header that mimics the IPv4 packet header used in the checksum computation is shown in the table below.
|Bit offset||Field(s)|
|0||Source address|
|32||Destination address|
|64||Zeros||Protocol||TCP length|
|96||Source port||Destination port (start of the TCP header and data)|
The source and destination addresses are those of the IPv4 header. The protocol value is 6 for TCP (cf. List of IP protocol numbers). The TCP length field is the length of the TCP header and data (measured in octets).
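A rough sketch of that computation in Python (not a production implementation): build the pseudo-header, take the one's complement sum of 16-bit words over the pseudo-header plus the segment, padding an odd octet count with a zero, and complement the result. The example addresses are documentation addresses, and the 20-byte all-zero segment is a stand-in for a real TCP header with its checksum field zeroed.

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """One's complement sum of 16-bit big-endian words, with end-around carry."""
    if len(data) % 2:
        data += b"\x00"                      # pad an odd octet count for summing only
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum_ipv4(src_ip: bytes, dst_ip: bytes, tcp_segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header and the TCP header plus data.

    src_ip/dst_ip are the 4-byte addresses from the IPv4 header; the checksum
    field inside tcp_segment must already be zeroed by the caller.
    """
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(tcp_segment))
    return ~ones_complement_sum(pseudo + tcp_segment) & 0xFFFF

# Example: a dummy 20-byte TCP header with the checksum field (bytes 16-17) zeroed.
segment = bytes(20)
print(hex(tcp_checksum_ipv4(bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]), segment)))
```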
TCP checksum for IPv6
- Any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of 32-bit IPv4 addresses.
A pseudo-header that mimics the IPv6 header for computation of the checksum is shown below.
|Bit offset||Field(s)|
|0||Source address|
|128||Destination address|
|256||TCP length|
|288||Zeros||Next header|
|320||Source port||Destination port (start of the TCP header and data)|
- Source address: the one in the IPv6 header
- Destination address: the final destination; if the IPv6 packet doesn't contain a Routing header, TCP uses the destination address in the IPv6 header, otherwise, at the originating node, it uses the address in the last element of the Routing header, and, at the receiving node, it uses the destination address in the IPv6 header.
- TCP length: the length of the TCP header and data
- Next Header: the protocol value for TCP
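For comparison, here is a small self-contained sketch of just the IPv6 pseudo-header layout described above (the addresses are documentation examples); the checksum arithmetic itself is the same one's complement sum used in the IPv4 sketch.

```python
import ipaddress
import struct

def ipv6_pseudo_header(src: str, dst: str, tcp_length: int) -> bytes:
    """128-bit source and destination addresses, a 32-bit TCP length,
    24 bits of zeros and the 8-bit Next Header value for TCP (6)."""
    return (ipaddress.IPv6Address(src).packed
            + ipaddress.IPv6Address(dst).packed
            + struct.pack("!I", tcp_length)
            + b"\x00\x00\x00" + bytes([6]))

print(len(ipv6_pseudo_header("2001:db8::1", "2001:db8::2", 20)))  # 40 octets = 320 bits
```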
Checksum offload
Many TCP/IP software stack implementations provide options to use hardware assistance to automatically compute the checksum in the network adapter prior to transmission onto the network or upon reception from the network for validation. This may relieve the OS from using precious CPU cycles calculating the checksum. Hence, overall network performance is increased.
This feature may cause packet analyzers that are unaware or uncertain about the use of checksum offload to report invalid checksums in outbound packets that have not yet reached the network adapter. This will only occur for packets that are intercepted before being transmitted by the network adapter; all packets transmitted by the network adapter on the wire will have valid checksums. This issue can also occur when monitoring packets being transmitted between virtual machines on the same host, where a virtual device driver may omit the checksum calculation (as an optimization), knowing that the checksum will be calculated later by the VM host kernel or its physical hardware.
- Connection-oriented communication
- Karn's algorithm
- List of TCP and UDP port numbers (a long list of ports and services)
- Maximum segment lifetime
- Maximum transmission unit
- Micro-bursting (networking)
- Nagle's algorithm
- Port (computer networking)
- T/TCP variant of TCP
- TCP congestion avoidance algorithms
- TCP global synchronization
- TCP pacing
- TCP segment
- TCP sequence prediction attack
- TCP tuning for high performance networks
- WTCP a proxy-based modification of TCP for wireless networks
- Transport Layer § Comparison of transport layer protocols
- Vinton G. Cerf; Robert E. Kahn (May 1974). "A Protocol for Packet Network Intercommunication" (PDF). IEEE Transactions on Communications. 22 (5): 637–648. doi:10.1109/tcom.1974.1092259. Archived from the original (PDF) on March 4, 2016.
- Comer, Douglas E. (2006). Internetworking with TCP/IP:Principles, Protocols, and Architecture. 1 (5th ed.). Prentice Hall. ISBN 0-13-187671-6.
- "TCP (Linktionary term)".
- "RFC 791 – section 2.1".
- "RFC 793".
- "RFC 1323, TCP Extensions for High Performance, Section 2.2".
- "RFC 2018, TCP Selective Acknowledgement Options, Section 2".
- "RFC 2018, TCP Selective Acknowledgement Options, Section 3".
- "RFC 1323, TCP Extensions for High Performance, Section 3.2".
- RFC 793 section 3.1
- RFC 793 Section 3.2
- Tanenbaum, Andrew S. (2003-03-17). Computer Networks (Fourth ed.). Prentice Hall. ISBN 0-13-066102-3.
- "TCP Definition". Retrieved 2011-03-12.
- Mathis, Matthew; Semke; Mahdavi; Ott (1997). "The macroscopic behavior of the TCP congestion avoidance algorithm". ACM SIGCOMM Computer Communication Review. 27 (3): 67–82.
- Paxson, V.; Allman, M.; Chu, J.; Sargent, M. (June 2011). "The Basic Algorithm". Computing TCP's Retransmission Timer. IETF. p. 2. sec. 2. RFC 6298. https://tools.ietf.org/html/rfc6298#section-2. Retrieved October 24, 2015.
- Stone; Partridge (2000). "When The CRC and TCP Checksum Disagree". Sigcomm.
- "RFC 879".
- "TCP window scaling and broken routers [LWN.net]".
- Gont, Fernando (November 2008). "On the implementation of TCP urgent data". 73rd IETF meeting. Retrieved 2009-01-04.
- Peterson, Larry (2003). Computer Networks. Morgan Kaufmann. p. 401. ISBN 1-55860-832-X.
- Stevens, W. Richard. TCP/IP Illustrated, Vol. 1: The Protocols. Addison-Wesley. Chapter 20. ISBN 978-0-201-63346-7.
- Security Assessment of the Transmission Control Protocol (TCP) at the Wayback Machine (archived March 6, 2009)
- Security Assessment of the Transmission Control Protocol (TCP)
- Jakob Lell. "Quick Blind TCP Connection Spoofing with SYN Cookies". Retrieved 2014-02-05.
- Some insights about the recent TCP DoS (Denial of Service) vulnerabilities
- "Exploiting TCP and the Persist Timer Infiniteness".
- "Laurent Joncheray, Simple Active Attack Against TCP, 1995".
- John T. Hagen; Barry E. Mullins (2013). "TCP veto: A novel network attack and its application to SCADA protocols". Innovative Smart Grid Technologies (ISGT), 2013 IEEE PES.
- TCP Interactive (iTCP)
- RFC 6182
- RFC 6824
- Raiciu; Barre; Pluntke; Greenhalgh; Wischik; Handley (2011). "Improving datacenter performance and robustness with multipath TCP". Sigcomm.
- "MultiPath TCP - Linux Kernel implementation".
- Raiciu; Paasch; Barre; Ford; Honda; Duchene; Bonaventure; Handley (2012). "How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP". USENIX NSDI.
- Bonaventure; Seo (2016). "Multipath TCP Deployments". IETF Journal.
- Michael Kerrisk (2012-08-01). "TCP Fast Open: expediting web services". LWN.net.
- Y. Cheng, J. Chu, S. Radhakrishnan, A. Jain (2012-07-16). TCP Fast Open. IETF. I-D draft-ietf-tcpm-fastopen-01. https://tools.ietf.org/html/draft-ietf-tcpm-fastopen-01.
- "RFC 6937 - Proportional Rate Reduction for TCP". Retrieved 6 June 2014.
- Grigorik, Ilya (2013). High-performance browser networking (1. ed.). Beijing: O'Reilly. ISBN 1449344763.
- "TCP performance over CDMA2000 RLP". Retrieved 2010-08-30
- Muhammad Adeel & Ahmad Ali Iqbal (2004). "TCP Congestion Window Optimization for CDMA2000 Packet Data Networks". International Conference on Information Technology (ITNG'07): 31–35. ISBN 978-0-7695-2776-5. doi:10.1109/ITNG.2007.190.
- Yunhong Gu, Xinwei Hong, and Robert L. Grossman. "An Analysis of AIMD Algorithm with Decreasing Increases". 2004.
- "Wireshark: Offloading".
Wireshark captures packets before they are sent to the network adapter. It won't see the correct checksum because it has not been calculated yet. Even worse, most OSes don't bother initialize this data so you're probably seeing little chunks of memory that you shouldn't. New installations of Wireshark 1.2 and above disable IP, TCP, and UDP checksum validation by default. You can disable checksum validation in each of those dissectors by hand if needed.
- "Wireshark: Checksums".
Checksum offloading often causes confusion as the network packets to be transmitted are handed over to Wireshark before the checksums are actually calculated. Wireshark gets these “empty” checksums and displays them as invalid, even though the packets will contain valid checksums when they leave the network hardware later.
- Stevens, W. Richard. TCP/IP Illustrated, Volume 1: The Protocols. ISBN 0-201-63346-9.
- Stevens, W. Richard; Wright, Gary R. TCP/IP Illustrated, Volume 2: The Implementation. ISBN 0-201-63354-X.
- Stevens, W. Richard. TCP/IP Illustrated, Volume 3: TCP for Transactions, HTTP, NNTP, and the UNIX Domain Protocols. ISBN 0-201-63495-3.
- RFC 675 – Specification of Internet Transmission Control Program, December 1974 Version
- RFC 793 – TCP v4
- RFC 1122 – includes some error corrections for TCP
- RFC 1323 – TCP Extensions for High Performance [Obsoleted by RFC 7323]
- RFC 1379 – Extending TCP for Transactions—Concepts [Obsoleted by RFC 6247]
- RFC 1948 – Defending Against Sequence Number Attacks
- RFC 2018 – TCP Selective Acknowledgment Options
- RFC 5681 – TCP Congestion Control
- RFC 6247 – Moving the Undeployed TCP Extensions RFC 1072, RFC 1106, RFC 1110, RFC 1145, RFC 1146, RFC 1379, RFC 1644, and RFC 1693 to Historic Status
- RFC 6298 – Computing TCP's Retransmission Timer
- RFC 6824 – TCP Extensions for Multipath Operation with Multiple Addresses
- RFC 7323 – TCP Extensions for High Performance
- RFC 7414 – A Roadmap for TCP Specification Documents
- Oral history interview with Robert E. Kahn, Charles Babbage Institute, University of Minnesota, Minneapolis. Focuses on Kahn's role in the development of computer networking from 1967 through the early 1980s. Beginning with his work at Bolt Beranek and Newman (BBN), Kahn discusses his involvement as the ARPANET proposal was being written, his decision to become active in its implementation, and his role in the public demonstration of the ARPANET. The interview continues into Kahn's involvement with networking when he moves to IPTO in 1972, where he was responsible for the administrative and technical evolution of the ARPANET, including programs in packet radio, the development of a new network protocol (TCP/IP), and the switch to TCP/IP to connect multiple networks.
- IANA Port Assignments
- John Kristoff's Overview of TCP (Fundamental concepts behind TCP and how it is used to transport data between two endpoints)
- TCP fast retransmit simulation animated: slow start, sliding window, duplicated Ack, congestion window
- TCP, Transmission Control Protocol
- Checksum example
- Engineer Francesco Buffa's page about Transmission Control Protocol
- TCP tutorial
- Linktionary on TCP segments
- TCP Sliding Window simulation animated (ns2)
- Multipath TCP
- TCP Technology and Testing methodologies
When we think of the history of modern computing, the story usually begins in the late 1970s. Apple released the enormously popular Apple II in 1977, and the 1980s saw graphical user interfaces rise to popularity in a battle between Mac OS and Microsoft Windows. It's common knowledge that Xerox was ahead of the curve in implementing a GUI and a computer mouse in the early 80s, but Apple made them popular with the Macintosh. Those elements of computing have been around far longer than Apple or Xerox PCs, however: computing pioneer Doug Engelbart was showing them off in 1968.
Engelbart demonstrated an early computer mouse, word processing, hypertext and video conferencing, back when computers still relied on punch cards for data input, in a presentation that's come to be known as The Mother of All Demos. It's an amazing slice of computer history that gives insight into technology we still use today. About 26 minutes into the hour-and-15-minute presentation, Engelbart suddenly remarks, "I don't know why we call it a mouse...it started that way and we never did change it."
While Engelbart's demonstration mostly covers inputting information with a keyboard and mouse, it features some crazy ahead-of-its-time functionality: sharing data between computer terminals and collaborating remotely. Remember how convenient and amazing Google Docs still seems today? It wasn't quite as pretty in 1968, but Engelbart and his team at the Stanford Research Institute were video conferencing and collaborating over forty years ago.
The Mother of All Demos is worth a watch, even if you skip around. Make sure to check out the mouse at 26 minutes and teleconferencing starting around the 56 minute mark.
6 The registration of deaths is the responsibility of the eight individual state and territory Registrars of Births, Deaths and Marriages. As part of the registration process, information about the cause of death is supplied by the medical practitioner certifying the death or by a coroner. Other information about the deceased is supplied by a relative or other person acquainted with the deceased, or by an official of the institution where the death occurred. The information is provided to the Australian Bureau of Statistics (ABS) by individual Registrars for coding and compilation into aggregate statistics. In addition, the ABS supplements this data with information from the National Coroners Information System (NCIS). The following diagram shows the process undertaken in producing cause of death statistics for Australia.
7 The data presented in this publication are also included in a series of data cubes that are available on the ABS website.
8 A Glossary is also provided which details definitions of terminology used.
2011 SCOPE AND COVERAGE
9 The statistics in chapters 1-7 relate to the number of deaths registered, not those which actually occurred, in the years shown. Numbers of deaths by year of occurrence are published in Chapter 8 and Data Cube 14.
Scope of causes of death statistics
10 The scope for each reference year of the Death Registrations includes:
11 Death records received by ABS during the March quarter 2012 which were initially registered in 2011 (but for which registration was not fully completed until 2012) were assigned to the 2011 reference year. Any registrations relating to 2011 which were received by ABS from April 2012 were assigned to the 2012 reference year. Approximately 4% to 6% of deaths occurring in one year are not registered until the following year or later.
12 Prior to 2007, the scope for the reference year of the Death Registrations collection included:
Coverage of causes of death statistics
13 Ideally, for compiling annual time series, the number of deaths should be recorded and reported as those which occurred within a given reference period, such as a calendar year. However, there can be lags in the registration of deaths with the state or territory registries and so not all deaths are registered in the year that they occur. There may also be further delays to the ABS receiving notification of the death from the registries due to processing or data transfer lags. Therefore, there are three dates attributable to each death registration: the date on which the death occurred, the date on which the death was registered, and the date on which the ABS received notification of the registration.
From 2007 onwards, data for a particular reference year includes all deaths registered in Australia for the reference year that are received by the ABS by the end of the March quarter of the subsequent year. For example, a death may occur in December of 2010, but the death may not be registered until January of 2011. Information about the death is then provided to the ABS in April of 2011. This death would have a date of occurrence in December 2010, a date of registration in January 2011, and a reference year of 2011.
14 The ABS Causes of Death collection includes all deaths that occurred and were registered in Australia, including deaths of persons whose usual residence is overseas. Deaths of Australian residents that occurred outside Australia may be registered by individual Registrars, but are not included in ABS deaths or causes of death statistics.
15 The current scope of the statistics includes:
16 The scope of the statistics excludes:
Scope of perinatal death statistics
17 The scope of the perinatal death statistics includes all fetal deaths (at least 20 weeks' gestation or at least 400 grams birth weight) and neonatal deaths (all live born babies who die within 28 completed days of birth, regardless of gestation or birth weight). This scope was adopted for the 2007 Perinatal Deaths collection, and was applied to historical data for 1999-2006. For more information on the changes in scope rules see Perinatal Deaths, Australia, 2007 (cat. no. 3304.0) Explanatory Notes 18-20.
18 Fetal deaths are registered only as a stillbirth, they are not in scope of either the Births, Australia (cat. no. 3301.0) or Deaths, Australia (cat. no. 3302.0) collections. Neonatal deaths are registered first as a birth and then as a death and are in scope of the Births and Deaths collections.
19 For 1996 and previous editions of this publication, data relating to perinatal deaths were based upon the World Health Organization (WHO) recommended definition for compiling national perinatal statistics. The WHO definition of perinatal deaths included all neonatal deaths, and those fetuses weighing at least 500 grams or having a gestational age of at least 22 weeks or body length of 25 centimetres crown-heel. A summary table based on the WHO definition of perinatal deaths is included in this release.
20 A range of socio-demographic data are available from the ABS Causes of Death collection. Standard classifications used in the presentation of causes of death statistics include age, sex, birthplace, multiple birth and Indigenous status. Statistical standards for social and demographic variables have been developed by the ABS. Where these are not published in the Causes of Death publication or data cubes, they can be sourced on request from the ABS.
International Classification of Diseases (ICD)
24 The International Classification of Diseases (ICD) is the international standard classification for epidemiological purposes and is designed to promote international comparability in the collection, processing, classification, and presentation of causes of death statistics. The classification is used to classify diseases and causes of disease or injury as recorded on many types of medical records as well as death records. The ICD has been revised periodically to incorporate changes in the medical field. Currently, the 10th revision of the ICD (ICD-10) is used for Australian causes of death statistics.
25 ICD-10 is a variable-axis classification, meaning that the classification does not group diseases only on the basis of anatomical site, but also on the type of disease. Epidemiological and statistical data are grouped according to: epidemic diseases; constitutional or general diseases; local diseases arranged by site; developmental diseases; and injuries.
26 For example, a systemic disease such as septicaemia is grouped with infectious diseases; a disease primarily affecting one body system, such as a myocardial infarction is grouped with circulatory diseases; and a congenital condition such as spina bifida is grouped with congenital conditions.
27 For further information about the ICD refer to WHO International Classification of Diseases (ICD).
28 The ICD 10th Revision is also available online.
29 An ongoing issue for the ABS Causes of Death collection has been that the quality of the data can be affected by the length of time required for the coronial process to be finalised and the coroner case closed. For some time, these concerns have been raised by key users of causes of death data regarding the quality of selected causes data (e.g. deaths due to intentional self-harm (suicides), homicides, Sudden Infant Death Syndrome (SIDS) and motor vehicle accidents). The ABS have addressed these data quality concerns in two ways:
30 Up to and including deaths registered in 2005, ABS Causes of Death processing was finalised at a point in time. At this point, not all coroners' cases had been investigated, the case closed and relevant information loaded into the National Coroners Information System (NCIS). The coronial process can take several years if an inquest is being held or complex investigations are being undertaken. In these instances, the cases remain open on the NCIS. Coroners' cases that have not been closed can impact on data quality as less specific ICD codes often need to be applied in the absence of a coroner's finding.
31 To improve the quality of ICD coding, all coroner certified deaths registered after 1 January 2006 are now subject to a revisions process. If the case remains open on the NCIS, the ABS will investigate and use additional information from police reports, toxicology reports, autopsy reports and coroners' findings to assign a more specific cause of death to these open cases. The use of this additional information at either 12 or 24 months after initial processing increases the specificity of the assigned ICD-10 codes over time. As 12 or 24 months have passed since initial processing, many Coronial cases will be closed, with the coroner having determined the underlying cause of death and allowing the ABS to code a more specific cause of death.
32 In this publication and associated data cubes, in addition to 2011 preliminary data, 2010 revised data and 2009 final data have also been published. See Technical Notes, Causes of Death Revisions, 2006 in the Causes of Death, Australia, 2010 publication, and Causes of Death Revisions, 2009 and 2010 in this publication for further information.
33 In 2009, an initial review was undertaken into the impact of the overall revisions process. Analysis of the revisions process has continued to be undertaken, up to and including the finalised 2009 causes of death data. These reviews have indicated the value of undergoing the revisions process in increasing the specificity of underlying causes of death, as data changes from preliminary, to revised, to final. As the process is still relatively new, further analysis of the impact of revisions will be conducted to monitor the efficiency and effectiveness of this process.
2011 MORTALITY CODING
34 The extensive nature of the ICD enables classification of causes of death at various levels of detail. For the purpose of this publication, data is presented according to the ICD at the chapter level, with further disaggregation for major causes of death.
35 To enable the reader to see the relationship between the various summary classifications used in this publication, all tables include the ICD codes that constitute the causes of death covered.
Updates to ICD-10
36 The Update and Revision Committee (URC), a WHO advisory group on updates to ICD-10, maintains the cumulative and annual lists of approved updates to the ICD-10 classification. The updates to ICD-10 are of numerous types including addition and deletion of codes, changes to coding instructions and modification and clarification of terms.
37 The cumulative list of ICD-10 updates can be found online.
38 The ABS uses the Medical Mortality Data System (MMDS) for automated cause of death coding. The MMDS applies ICD rules to all death records, diseases and conditions listed on the death certificate. Approximately 70-80% of records can be coded using the MMDS without manual intervention.
Types of death
39 All causes of death can be grouped to describe the type of death: whether it be from a disease or condition, from an injury, or whether the cause is unknown. These are generally described as: deaths from natural causes (a disease or condition); deaths from external causes (an injury); and deaths from unknown or ill-defined causes.
External Causes of Death
40 Where an accidental or violent death occurs, the underlying cause is classified according to the circumstances of the fatal injury, rather than the nature of the injury, which is coded separately. For example, a motorcyclist may crash into a tree (V27.4) and sustain multiple fractures to the skull and facial bones (S02.7) which leads to death. The underlying cause of death is the crash itself (V27.4), as it is the circumstance which led to the injuries that ultimately caused the death.
Leading Causes of Death
41 Ranking causes of death is a useful method of describing patterns of mortality in a population and allows comparison over time and between populations. However, different methods of grouping causes of death can result in a vastly different list of leading causes for any given population. A ranking of leading causes of death based on broad cause groupings such as 'cancers' or 'heart disease' does not identify the leading causes within these groups, which is needed to inform policy on interventions and health advocacy. Similarly, a ranking based on very narrow cause groupings or including diseases that have a low frequency, can be meaningless in informing policy.
42 Tabulations of leading causes presented in this publication are based on research presented in the Bulletin of the World Health Organisation, Volume 84, Number 4, April 2006, 297-304. The determination of groupings in this list is primarily driven by data from individual countries representing different regions of the world. Other groupings are based on prevention strategies, or to maintain homogeneity within the groups of cause categories. Since the aforementioned bulletin was published, a decision was made by WHO to include deaths associated with the H1N1 influenza strain (commonly known as swine flu) in the ICD-10 classification as Influenza due to certain identified influenza virus (J09). This code has been included with the Influenza and Pneumonia leading cause grouping in the Causes of Death publication since the 2009 reference year.
43 A number of organisations publish lists of leading causes of death. However, the basis for determining the leading causes may vary. For example, many lists are based on Years of Potential Life Lost (YPLL) and are designed to present data based on the burden of mortality and disease to the community. The ABS listing of leading causes is based on the numbers of deaths and is designed to present information on incidence of mortality rather than burden of mortality.
Years of Potential Life Lost (YPLL)
44 Years of Potential Life Lost (YPLL) measures the extent of 'premature' mortality, which is assumed to be any death between the ages of 1-78 years inclusive, and aids in assessing the significance of specific diseases or trauma as a cause of premature death.
45 Estimates of YPLL are calculated for deaths of persons aged 1-78 years based on the assumption that deaths occurring at these ages are untimely. The inclusion of deaths under one year would bias the YPLL calculation because of the relatively high mortality rate for that age, and 79 years was the median age at death when this series of YPLL was calculated using 2001 as the standard year. As shown below, the calculation uses the current ABS standard population of all persons in the Australian population at 30 June 2001. This standard is revised every 10 years.
46 YPLL is derived from:

YPLL = \sum_{x=1}^{78} d_x \times (79 - x^*)

where:
x^* = adjusted age at death. As age at death is only available in completed years, the midpoint of the reported age is chosen (e.g. an age at death of 34 years was adjusted to 34.5).
d_x = registered number of deaths at age x due to a particular cause of death.

YPLL is directly standardised for age using the following formula:

age-standardised YPLL = \sum_{x=1}^{78} c_x \times YPLL_x

where YPLL_x = d_x \times (79 - x^*) is the YPLL contributed at age x, and the age correction factor c_x is defined for age x as:

c_x = (S_x / S) \div (P_x / P)

where:
P = estimated number of persons resident in Australia aged 1-78 years at 30 June 2009
P_x = estimated number of persons resident in Australia aged x years at 30 June 2009
S_x = estimated number of persons resident in Australia aged x years at 30 June 2001 (standard population)
S = estimated number of persons resident in Australia aged 1-78 years at 30 June 2001 (standard population)
47 The data cubes contain directly standardised death rates and YPLL for males, females and persons. In some cases the summation of the results for males and females will not equate to persons. The reasons for this is that different standardisation factors are applied separately for males, females and persons.
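To make the arithmetic concrete, the sketch below computes YPLL and a directly age-standardised YPLL from entirely hypothetical inputs; the function names, sample counts and uniform populations are invented for illustration, and real calculations would use the estimated resident populations described above.

```python
# Illustrative only: tiny made-up death counts and populations, single years of age 1-78.
REFERENCE_AGE = 79          # deaths at ages 1-78 are treated as premature

def ypll_by_age(deaths_by_age):
    """YPLL contributed at each single year of age (midpoint-adjusted)."""
    return {age: d * (REFERENCE_AGE - (age + 0.5))
            for age, d in deaths_by_age.items() if 1 <= age <= 78}

def standardised_ypll(deaths_by_age, study_pop, standard_pop):
    """Directly age-standardise YPLL using the standard population's age shares."""
    study_total = sum(study_pop[a] for a in range(1, 79))
    standard_total = sum(standard_pop[a] for a in range(1, 79))
    total = 0.0
    for age, ypll in ypll_by_age(deaths_by_age).items():
        correction = ((standard_pop[age] / standard_total)
                      / (study_pop[age] / study_total))
        total += correction * ypll
    return total

# Hypothetical example data
deaths = {45: 10, 60: 25, 70: 40}
study_pop = {a: 100_000 for a in range(1, 79)}      # study-year population
standard_pop = {a: 90_000 for a in range(1, 79)}    # standard population

print(sum(ypll_by_age(deaths).values()))
print(standardised_ypll(deaths, study_pop, standard_pop))
```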
Age-Standardised death rates
48 Age-standardised rates enable the comparison of death rates over time. Along with adult, infant and child mortality rates, they are used to determine whether the mortality rate of the Aboriginal and Torres Strait Islander population is declining over time, and whether the gap between Aboriginal and Torres Strait Islander and non-Indigenous populations is narrowing. However, there have been inconsistencies in the way different government agencies have calculated age-standardised death rates in the past. The ABS hosted a workshop on age-standardisation on 19 April 2011 to discuss the best method of age-standardisation (direct or indirect) and to produce a clear set of guidelines specifically for the analysis and reporting of COAG "Closing the Gap" indicators. Workshop participants agreed that the direct method is the most preferred method of age-standardisation as it allows for valid comparisons of mortality rates between different study populations and across time.
49 The direct method has been used throughout the publication and data cubes for age standardised death rates. Age-standardised death rates for specific causes of death with less than a total of 20 deaths are not available for publication, due to issues of robustness.
50 For further information, see Appendix: Principles on the use of direct age-standardisation, from Deaths, Australia, 2010 (cat. no. 3302.0).
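As a simple illustration of the direct method, the sketch below weights age-specific death rates from a hypothetical study population by a hypothetical standard population's age distribution; all numbers and group boundaries are invented for illustration.

```python
# Illustrative only: five broad age groups, made-up counts.
deaths =     {"0-24": 50,    "25-44": 120,   "45-64": 600,   "65-84": 2500,  "85+": 1800}
population = {"0-24": 900e3, "25-44": 800e3, "45-64": 700e3, "65-84": 300e3, "85+": 50e3}
standard =   {"0-24": 850e3, "25-44": 820e3, "45-64": 680e3, "65-84": 280e3, "85+": 40e3}

standard_total = sum(standard.values())

# Directly standardised rate: sum of age-specific rates weighted by the
# standard population's share in each age group (expressed per 100,000 here).
asr = sum((deaths[g] / population[g]) * (standard[g] / standard_total)
          for g in deaths) * 100_000
print(round(asr, 1))
```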
State and Territory Data
51 Causes of death statistics for states and territories in this publication have been compiled based on the state or territory of usual residence of the deceased, regardless of where in Australia the death occurred and was registered. Deaths of persons usually resident overseas which occur in Australia are included in the state/territory in which their death was registered.
52 Statistics compiled on a state or territory of registration basis are available on request.
Perinatals State and Territory Data
53 Given the small number of perinatals death which occur in some states and territories, some data provided on a state/territory basis in this publication have been aggregated for South Australia, Western Australia, Northern Territory, Australian Capital Territory and Other Territories.
Potentially Avoidable Deaths
54 Potentially avoidable deaths data based on the Indigenous status of the deceased has been included in this publication. The progress measure for potentially avoidable deaths comprises potentially preventable deaths and potentially treatable deaths. Potentially preventable deaths are those which are amenable to screening and primary prevention, such as immunisation, and reflect the effectiveness of the current preventive health activities of the health sector. Deaths from potentially treatable conditions are those which are amenable to therapeutic interventions, and reflect the safety and quality of the current treatment system. For the list of ICD codes which are used to calculate potentially avoidable mortality, see the Avoidable Mortality Appendix.
55 For further information, see National Healthcare Agreement: PI 20 - Potentially avoidable deaths, 2011.
Coroner Certified Deaths
56 In compiling causes of death statistics, the ABS employs a variety of measures to improve quality, which include:
57 The quality of causes of death coding can be affected by changes in the way information is reported by certifiers, by lags in completion of coroner cases and the processing of the findings. While changes in reporting and lags in coronial processes can affect coding of all causes of death, those coded to Chapter XVIII: Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified and Chapter XX: External causes of morbidity and mortality are more likely to be affected because the code assigned within the chapter may vary depending on the coroner's findings (in accordance with ICD-10 coding rules).
58 Over time, improvements have been made to the quality of the causes of death data published by the ABS. Two processing improvements were introduced to the ABS Causes of Death collection in 2008 (the context and details of these improvements are described below). These improvements relate to the way the ABS codes coroner certified deaths and have had the effect of significantly improving the quality of cause of death codes assigned to coroner certified cases.
59 In order to complete a death registration, the death must be certified by either a doctor using the Medical Certificate of Cause of Death, or by a coroner. It is the role of the coroner to investigate the circumstances surrounding all reportable deaths and to establish wherever possible the circumstances surrounding the death, and the cause(s) of death. Generally most deaths due to external causes will be referred to a coroner for investigation; this includes those deaths which are possible instances of Intentional self-harm (suicide). See Explanatory Notes 29-33 and Technical Note Causes of Death Revisions, 2009 and 2010 for further information.
60 When coronial investigations are complete, causes of death information is passed to the Registrar of Births, Deaths and Marriages, as well as to the NCIS. The ABS uses the NCIS as the only source of data to code coroner certified deaths. Where a case remains open on the NCIS at the time the ABS ceases processing and insufficient information is available to code a cause of death (e.g. a coroner certified death was yet to be finalised by the coroner), less specific ICD codes are assigned as required by the ICD coding rules.
61 The specificity with which open cases are able to be allocated an ICD-10 code is directly related to the amount and type of information available on the NCIS. The amount of information available for open cases varies considerably from no information to detailed police, autopsy and toxicology reports. There may also be interim findings of 'intent'.
62 The manner or intent of an injury which leads to death, is determined by whether the injury was inflicted purposefully or not (in some cases, intent cannot be determined) and, when it is inflicted purposefully (intentional), whether the injury was self-inflicted (suicide) or inflicted by another person (assault).
63 The first of the new processing improvements introduced from 2008 relates to the way that the ABS utilises information on the Medical Certificate of Cause of Death. For both open and closed coroners' cases, more time is now taken to investigate the certificate to consistently apply ICD-10 coding rules when a non-specific underlying cause was shown in part 1. Part 2 of the certificate details conditions that may have contributed to the death but were not part of the sequence of events that led to death.
64 The second new processing improvement relates to the use of additional information available on the NCIS. Increased resources and time were spent investigating coroners' reports to identify specific causes of death. This involved making increased use of police reports, toxicology reports, autopsy reports and coroners' findings for both open and closed cases to increase the specificity of causes and clarity of intents.
65 The introduction of these processes has resulted in improved data quality in relation to assigning unspecified cause codes to coroner certified deaths. There has been a decrease of 457 (39.4%) in the number of coroner certified deaths attributed to Other ill-defined and unspecified causes of mortality (R99) from 1,160 in 2007 (preliminary) to 703 in 2011 (preliminary).
66 As less specific codes are generally associated with open rather than closed coroner certified cases, the new processes have had the effect of significantly improving the quality of cause of death codes assigned to open cases. Additionally, a large number of deaths investigated by coroners are due to external causes, therefore the new processes have also had the effect of improving these data.
67 The 2011 data provided in this publication has not yet been subjected to the revisions process, which will further improve the quality of the data. Therefore, the data on 2011 causes of death is considered preliminary and refers to the point in time when initial 2011 processing was finalised. The 2011 data will go through the revisions process twice, and will be released in the ABS Causes of Death publications in 2014 (2011 revised) and 2015 (2011 final).
68 The Indigenous status of a deceased person is captured through the death registration process. It can be noted on the Death Registration Form and/or the Medical Certificate of Causes of Death. However it is recognised that not all Indigenous deaths are captured through these processes, leading to under-identification. While data are provided to the ABS for the Indigenous status question for 99.5% of all deaths, there are concerns regarding the accuracy of the data.
69 There are several data collection forms on which people are asked to state whether they are of Aboriginal and/or Torres Strait Islander origin. Due to a number of factors, the results are not always consistent. The likelihood that a person will identify, or be identified, as an Aboriginal and/or Torres Strait Islander on a specific form is known as their propensity to identify.
70 Propensity to identify as an Aboriginal and/or Torres Strait Islander is determined by a range of factors, including:
71 In addition to those deaths where the deceased is identified as an Aboriginal and/or Torres Strait Islander, a number of deaths occur each year where Indigenous status is not stated on the death registration form. In 2011, there were 794 deaths registered in Australia for whom Indigenous status was not stated, representing 0.5% of all deaths registered.
72 Data presented in this publication may therefore underestimate the level of Aboriginal and/or Torres Strait Islander deaths and mortality in Australia. Caution should be exercised when interpreting data for Aboriginal and/or Torres Strait Islander Australians presented in this publication, especially with regard to year-to-year changes.
73 Chapter 6 of this publication and data cube 12 provide information on causes of death for Aboriginal and/or Torres Strait Islander Australians. Due to the data quality issues outlined below, detailed disaggregations of deaths of Aboriginal and/or Torres Strait Islander Australians are provided only for New South Wales, Queensland, Western Australia and the Northern Territory.
74 Due to the increased focus on the mortality rates of Aboriginal and/or Torres Strait Islander Australians, a number of projects have been undertaken to investigate the quality of these data. These include:
75 The ABS undertakes significant work aimed at improving Indigenous identification. Quality studies conducted as part of the Census Data Enhancement project have investigated the levels and consistency of Indigenous identification between the 2006 Census and death registrations. See Information Paper: Census Data Enhancement - Indigenous Mortality Quality Study, 2006-07 (cat. no. 4723.0), released on 17 November 2008. The ABS is currently undertaking work to repeat the Census Data Enhancement (CDE) project for 2011 Census and post-census deaths. See Census Data Enhancement Project: An Update, Oct 2010 (cat. no. 2062.0).
76 An assessment of various methods for adjusting incomplete Indigenous death registration data for use in compiling Indigenous life tables and life expectancy estimates is presented in Discussion Paper: Assessment of Methods for Developing Life Tables for Aboriginal and Torres Strait Islander Australians, 2006 (cat. no. 3302.0.55.002), released on 17 November 2008. Final tables based on feedback received from this discussion paper, using information from the Census Data Enhancement (CDE) study, can be found in Experimental Life Tables for Aboriginal and Torres Strait Islander Australians (cat. no. 3302.0.55.003).
Perinatal data quality over time
Perinatal data processing system
77 Perinatal deaths (both neonatals and stillbirths) are manually coded within a section of the ABS mortality system. Data quality checks that are run on perinatal deaths (both doctor and coroner certified) ensure closer alignment with perinatal coding requirements (i.e. ensuring that a mother's condition code is not accepted in the fetus/infant's field, and vice versa).
Treatment of 'not stated' data in the ABS application of perinatal scope rules in relation to fetal deaths.
78 The ABS scope rules include fetal deaths based on a gestation of at least 20 weeks or a birth weight of at least 400 grams. This scope is consistent with the legislated requirement for all state and territory Registrars of Births, Deaths and Marriages to register all fetal deaths of at least 20 weeks' gestation or 400 grams birth weight. Based on this legislative requirement, in the case of missing gestation and/or birth weight data, the fetal record is considered in scope and included in the dataset. A record is only considered out of scope if both gestation and birth weight data are present, and both fall outside the scope criteria (i.e. gestation of 19 weeks or less and birth weight of 399 grams or less). This rule has been applied to all perinatal data presented in this publication.
79 All 'live births' are considered in scope of the collection regardless of gestation or birth weight. When gestation or birth weight is not stated, it only affects the application of scope rules for fetal deaths.
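A hedged sketch of the scope rule described in the two paragraphs above (the function name and arguments are invented for illustration): a fetal record is excluded only when both gestation and birth weight are reported and both fall below the thresholds; live births are always in scope and are not modelled here.

```python
def fetal_death_in_scope(gestation_weeks=None, birth_weight_grams=None):
    """Apply the fetal-death scope rule described above.

    Missing (None) values are treated as in scope; a record is excluded only
    when both values are reported and both fall outside the criteria.
    """
    if gestation_weeks is None or birth_weight_grams is None:
        return True
    return gestation_weeks >= 20 or birth_weight_grams >= 400

print(fetal_death_in_scope(19, 350))      # False: both reported, both below threshold
print(fetal_death_in_scope(None, 350))    # True: gestation not stated
print(fetal_death_in_scope(22, None))     # True: 20+ weeks' gestation
```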
DEATHS BY TYPE OF CERTIFIER
80 For deaths registered in 2011, 11.5% were certified by a coroner. There are variations between jurisdictions in the proportion of deaths certified by a coroner, ranging from 9.3% of deaths in New South Wales to 28.0% in the Northern Territory. The proportion of deaths certified by a coroner in 2011 is comparable to previous years.
SPECIFIC ISSUES FOR 2011 DATA
81 Users analysing 2011 cause of death data should take into account a number of issues, as outlined below:
82 Increased number of deaths, New South Wales
In September quarter 2011 the high number of death registrations in New South Wales was queried with the New South Wales Registry of Births, Deaths and Marriages. Information provided by the Registry indicates that these fluctuations may be the result of changes in processing rates. This may have contributed to the increase in the number of death registrations for New South Wales in 2011. New South Wales deaths in 2011 (50,661) were 5.7% higher than in 2010 (47,945).
83 The number of deaths attributable to Accident to watercraft causing drowning and submersion (V90) increased from 26 in 2010 to 75 in 2011. This increase is primarily due to deaths resulting from an incident in December 2010 when a boat collided with cliffs on Christmas Island. These deaths were registered with the Western Australian Registry of Births, Deaths and Marriages in January 2011, resulting in an increase in the number of deaths coded to V90 in Western Australia.
Intentional Self-Harm (Suicide) (X60-X84, Y87.0)
92 The number of deaths attributed to suicide for 2011 is expected to increase as data is subject to the revisions process. For further information see Explanatory Notes 29-33 and Technical Notes, Causes of Death Revisions, 2006 in the Causes of Death, Australia, 2010 (cat. no. 3303.0) publication, and Causes of Death Revisions, 2009 and 2010.
93 In addition to the revisions process, new coding guidelines were applied for deaths registered from 1 January 2007. The new guidelines improve data quality by enabling deaths to be coded to suicide if evidence indicates the death was from intentional self-harm. Previously, coding rules required a coroner to determine a death as intentional self-harm for it to be coded to suicide. However, in some instances the coroner does not make a finding on intent. The reasons for this may include legislative or regulatory barriers around the requirement to determine intent, or sensitivity to the feelings, cultural practices and religious beliefs of the family of the deceased. Further, for some mechanisms of death it may be very difficult to determine suicidal intent (e.g. single vehicle incidents, drowning). In these cases the burden of proof required for the coroner to establish that the death was as a result of intentional self-harm may make a finding of suicide less likely.
94 Under the new coding guidelines, in addition to coroner-determined suicides, deaths may also be coded to suicide following further investigation of information on the NCIS. Further investigation of a death would be initiated when the mechanism of death indicates a possible suicide and the coroner does not specifically state the intent as accidental or homicidal. Information that would support a determination of suicide includes indications by the person that they intended to take their own life, the presence of a suicide note, or knowledge of previous suicide attempts. The processes for coding open and closed coroner cases are illustrated below (open/closed case coding decision trees).
95 Suicide deaths of children are an extremely sensitive issue for families and coroners. The number of child suicides registered each year is small and is likely to be underestimated, more so than for other age groups. Consequently, data produced for child suicides would likely be subject to ABS procedures to protect confidentiality and, as a result, could not be reliably analysed. For these reasons, this publication does not include detailed annual information about suicides for children aged under 15 years in the commentary or data cubes. However, aggregated data for suicide deaths of persons under 15 years of age for the reference years 2007-2011 is available in Appendix 1.
Undetermined intent (Y10-Y34, Y87.2)
96 Due to changes in coding rules for ICD-10 in 2007, deaths up to and including the 2006 reference year were assigned a finding of 'Undetermined intent' only where this was the official coronial finding. Other deaths, where the intent was 'not known' or 'blank' on the NCIS record, were coded with an intent of 'accidental'. From 2007, a death is coded to an 'Undetermined intent' code where the NCIS intent field is: 'could not be determined'; 'unlikely to be known'; or 'blank'. This change in coding practice has resulted in a significant increase in deaths allocated to these codes from 2006 onwards. However, it is important to note that it is expected that the number of deaths attributed to 'Undetermined intent' codes will decrease as revisions of preliminary data are undertaken; see Explanatory Notes 35-39 and Technical Notes, Causes of Death Revisions, 2006 in the Causes of Death, Australia, 2010 (cat. no. 3303.0) publication, and Causes of Death Revisions, 2009 and 2010 in this publication.
Registration of Outstanding Deaths, Queensland
97 In November 2010, the Queensland Registrar of Births, Deaths and Marriages advised the ABS of an outstanding deaths registration initiative undertaken by the Registry. This initiative resulted in the November 2010 registration of 374 previously unregistered deaths which occurred between 1992 and 2006 (including a few for which a date of death was unknown). Of these, around three-quarters (284) were deaths of Aboriginal and Torres Strait Islander Australians. A data adjustment has been made for tables in this publication which include Indigenous data for Queensland for 2010. For further information refer to Technical Notes, Registration of Outstanding Deaths, Queensland, 2010 in Deaths, Australia, 2010 (cat. no. 3302.0) and Retrospective Deaths by Causes of Death, Queensland, 2010, in Causes of Death, Australia, 2010 (cat. no. 3303.0).
Issues for Multiple Cause of Death data -Table 4.2 Reporting Underlying Causes with Associated Causes
98 Table 4.2, Reporting Underlying Causes with Associated Causes, contains data which differs slightly from that which was provided in previous publications. In previous years, when the underlying cause was paired with the equivalent condition as an associated cause, these variables were calculated on the basis of multiple causes of death principles. Multiple causes of death include all conditions and diseases on the death certificate, including both the underlying cause and the associated causes. Therefore, when data is analysed using multiple cause of death methods, the underlying cause is also included in the associated cause count. This resulted in a figure of 100% when identical variables were paired together.
99 In Causes of Death, Australia, 2011, the data for Table 4.2 has been calculated identifying the number of deaths where an underlying cause appeared with a selected associated cause. Utilising this method changes the data only for percentages where the underlying cause and the associated cause are equivalent. This method eliminates the inclusion of the underlying cause count from the associated causes, providing a figure which describes the number of times conditions are appearing together on death certificates. This change has been made to facilitate better understanding of the relationships between conditions and diseases, as it provides further insight into what morbid conditions and diseases people are experiencing as concurrent processes at the time of death. For example, rather than seeing that 100% of people who die of cancer have cancer listed somewhere on the death certificate, the data in Table 4.2 shows that 18% of people who died of cancer had multiple malignant neoplasms present at death.
SPECIFIC ISSUES FOR PERINATALS DATA
Main and leading condition in the fetus/infant
Other disorders originating in the perinatal period (P90-P96)
100 Coroner certified neonatal deaths with no cause of death information are coded to Other ill-defined and unspecified causes of mortality (R99). Doctor certified neonatal deaths with no cause of death information are coded to Conditions originating in the perinatal period, unspecified (P96.9).
Disorders related to length of gestation and fetal growth (P05-P08)
101 The number of perinatal deaths with main condition in the fetus/infant coded to Disorders related to length of gestation and fetal growth (P05-P08) has increased compared to the reference years leading up to and including 2005. Prior to 2006, deaths attributed to these causes would have been queried to obtain a more specific cause of death.
102 Appendix 2 provides details of the number of live births registered which have been used to calculate the fetal, neonatal and perinatal death rates shown in this publication. Appendix 2 also provides data on fetal deaths used in the calculation of fetal and perinatal death rates. These also enable further rates to be calculated.
CONFIDENTIALISATION OF DATA
103 Data cells with small values have been randomly assigned to protect confidentiality. As a result some totals will not equal the sum of their components. Cells with 0 values have not been affected by confidentialisation.
EFFECTS OF ROUNDING
104 Where figures have been rounded, discrepancies may occur between totals and sums of the component items.
105 ABS products and publications are available free of charge from the ABS website. Click on Statistics to gain access to the full range of ABS statistical and reference information. For details on products scheduled for release in the coming week, click on the Future Releases link on the ABS homepage.
Acceptance of general anesthesia is predicated on the assumption that its effects are entirely reversible. However, studies indicate that anesthesia and surgery are associated with cognitive impairment lasting ≥3 mo in 10%–14% of elderly patients (1). There has been much speculation about the causes of such impairment, but the etiology remains unknown. One potential candidate mechanism is general anesthesia itself. General anesthesia affects brain function at all levels, including neuronal membranes, receptors, ion channels, neurotransmitters, and cerebral blood flow and metabolism (2). Moreover, the aged brain is more susceptible to anesthetic effects and has greater sensitivity to nonanesthetic drugs (3,4). The aged brain is also different from the younger brain in several important respects, including size, distribution and type of neurotransmitters, metabolic function, and capacity for plasticity, suggesting that it might be more susceptible to anesthetic-mediated changes (5). Nevertheless, the possibility that general anesthesia contributes to cognitive deterioration in the elderly has not been directly tested, in part because it is difficult in clinical studies to differentiate between the effects of general anesthesia and those of surgery and hospitalization. Accordingly, we hypothesized that general anesthesia itself can cause prolonged cognitive alterations in aged subjects, and we tested this hypothesis in rats exposed to general anesthesia without surgery.
This study was approved by the Standing Committee on the Use of Animals in Research and Teaching, Harvard University/Faculty of Arts and Sciences. Young (6 mo old; n = 12) and aged (18 mo old; n = 13) Fischer 344 rats were acquired from the National Institutes of Health aged rat colony. After a 1-wk acclimation period, rats were food-restricted to 85% of free-feeding body weight and trained in a 12-arm radial arm maze (RAM). Fischer 344 rats were chosen because they have a median life expectancy of 26 mo, are frequently used to study both aging and cognitive impairment, and develop progressive cognitive impairment with age but are not so impaired that ceiling and floor effects are a problem (6–8).
Testing of cognitive function was performed in a 12-arm RAM. This procedure tests spatial working and reference memory and assesses the integrity of the frontal cortex, entorhinal cortex, and hippocampus. We chose the RAM because it allows for repetitive testing and can detect subtle differences in learning and memory caused by aging or sedative medications (9–11). The maze consists of a central platform that communicates with 12 arms, each of which was baited with a hidden food reward. The walls of the maze display simple geometric designs that provide fixed, extramaze cues to assist in spatial navigation. To ensure motivated performance, rats were food-restricted but had free access to water in the home cage.
Rats were adapted to the maze for 10 min daily over 3 days. During this interval, the rat was able to freely explore the maze, in which food rewards were scattered randomly. Initial training consisted of a daily 10-min session in which the rat was placed on the central platform of the maze and all arms were baited. The rat was allowed to choose arms in any order until all 12 arms were visited or 10 min elapsed. A correct choice was defined as one in which the rat entered a baited arm not previously explored, whereas an incorrect choice was scored when the rat entered and proceeded more than 80% down an arm it had previously visited or failed to enter the arm in 10 min. Formal training was concluded when all rats met standardized performance criteria for 2 days, defined as 11 correct choices with 1 or fewer errors in <10 min. The number of days required to meet standardized performance criteria was recorded, as were error rate and time to complete the maze during initial training. Furthermore, to increase task difficulty, rats were trained on delay trials. These consisted of removing the rat from the maze for 30 s or 2 h between the first and last 6 correct arm choices.
After this initial training, we excluded two aged rats that never learned the maze and one young adult rat whose performance was more than 2 sd below that of the other young rats. The remaining rats were randomized to an anesthesia or control group. Rats randomized to the anesthesia group (n = 5 young and 6 aged) received 1.2% isoflurane in 70% nitrous oxide/30% oxygen for 2 h in a Plexiglas anesthetizing chamber, whereas the control group (n = 6 young and 5 aged) received air/oxygen (fraction of inspired oxygen, 0.3) at identical flow rates for 2 h. These anesthetics were selected because isoflurane and nitrous oxide are commonly used anesthetics; dosages are an extrapolation from halothane and nitrous oxide minimum alveolar anesthetic concentration (MAC) studies and represent 1.2 and 1.0 MAC in aged and adult rats, respectively (12). Anesthetic (Datex, Tewksbury, MA) and oxygen (Ohmeda, Madison, WI) concentrations were measured continuously, and the temperature of the Plexiglas anesthetizing chamber was controlled to maintain rat temperature at 37°C ± 0.5°C. Anesthesia was terminated by discontinuing the anesthetics; all rats received 100% oxygen for 5 min before removal from the chamber. Rats were allowed to recover for 24 h to avoid the confounding influence of residual anesthetic and then were retested in the maze during postanesthesia weeks 1, 3, and 8. This testing was conducted over six consecutive days, with 30-s and 2-h delay trials on alternate days, and the results of the three trials were averaged.
Because hypotension, hypercarbia, and hypoxia are potential causes of cognitive deterioration, we assessed physiologic status, including mean arterial blood pressure (MAP) and arterial blood gases, in a separate group of young (n = 3) and aged (n = 6 or 9) Fischer 344 rats. These rats were anesthetized with isoflurane 1.2% for insertion and externalization of a femoral arterial catheter, as described previously (13). Nitrous oxide 70% was added thereafter, and rectal temperature, MAP, and arterial blood gases were measured after a 2-h equilibration period, as well as 2 and 24 h after recovery.
Initial training-trial variables for young and aged rats were analyzed with Student’s t-test. Physiologic data were analyzed by one-way analysis of variance (ANOVA), followed by the Student-Newman-Keuls test. For within-group analysis of delay trials, the average group score for the last three pretreatment trials was taken as the baseline, and the result was compared with the average score for each posttreatment time point by a two-way ANOVA, with time and treatment as the two factors. Significant differences on the ANOVA were subjected to the Student-Newman-Keuls test to clarify significant effects. The same statistics were used for between-group comparisons of age-matched control and anesthetized groups at each time point.
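As an illustration of the delay-trial analysis described above, the following sketch (in R, with simulated data and hypothetical variable names) fits a two-way ANOVA with time and treatment as factors and follows significant effects with Student-Newman-Keuls comparisons; it is a minimal sketch of the approach, not the authors’ original code.

library(agricolae)   # provides SNK.test (Student-Newman-Keuls comparisons)
set.seed(1)

# Simulated stand-in for the maze data: one row per rat x time point
maze <- expand.grid(rat = factor(1:10),
                    time_point = factor(c("baseline", "wk1", "wk3", "wk8")),
                    treatment = factor(c("control", "anesthesia")))
maze$errors <- rpois(nrow(maze), lambda = 3)

fit <- aov(errors ~ time_point * treatment, data = maze)
summary(fit)                              # two-way ANOVA: time, treatment, interaction

snk_time <- SNK.test(fit, "time_point")   # post hoc comparisons across time points
snk_trt  <- SNK.test(fit, "treatment")
snk_time$groups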
Before randomization, aged rats required more training trials to meet standardized performance criteria (13.8 ± 1.2 versus 9.1 ± 0.7; P ≤ 0.01; Student’s t-test), made more errors, and took longer to complete the maze than young rats (Figs 1 and 2). Spontaneous ventilation with isoflurane/nitrous oxide/oxygen was well tolerated physiologically in both age groups (Table 1). Anesthesia was associated with a small but statistically significant decrease in MAP in young and aged rats (−14% and −9%, respectively, versus the corresponding 2-h postanesthesia value; P < 0.01). In addition, PaO2 was higher and pH lower during anesthesia in young rats. However, MAP and arterial blood gases remained well within physiologic limits in both age groups during and after anesthesia.
One of the aged anesthetized rats died before the 8-wk testing period; data from this animal were included in the 1- and 3-wk results, but a post hoc analysis demonstrates that the findings do not differ whether this animal is included or excluded. On the 2-h delay trials, there were no differences in any of the groups, regardless of age or anesthesia condition, in time to complete the maze or error rate (data not shown), suggesting that the task was too difficult. On the 30-s delay trials, in contrast, anesthesia had a differential effect with age. In the young control rats, time to complete the maze and error rate remained stable relative to the pretreatment baseline throughout the experiment, except at 8 wk, when they made more errors (Figs 3 and 4). Prior anesthesia in young rats did not affect the time to complete the maze but reduced the error rate compared with the group’s preanesthesia baseline at 1, 3, and 8 wk after anesthesia (Fig 4; P < 0.05). However, it is possible that this improvement could partially reflect the relatively higher error rate in this group at baseline. Indeed, there were no differences between young control and young anesthetized rats in time or error rate except at 2 mo after anesthesia, when previously anesthetized rats made fewer errors than the controls. With respect to errors in aged rats, there were no differences within the aged control or anesthetized groups compared with baseline and no differences between the groups at any testing interval (Fig 4). However, aged control rats ran the maze significantly faster 1 and 3 wk after treatment than at baseline, signifying improvements in performance with repeated testing, but returned to the pretreatment baseline by 8 wk (P < 0.05; Fig 3). In comparison, aged rats that received anesthesia made no such improvement. These rats appeared to be indecisive; they often looked down an arm repeatedly, and sometimes looked down several arms, before making a choice, whereas unanesthetized aged and young rats were much more deliberate. Consistent with this apparent indecision, the time required for aged anesthetized rats to complete the maze did not improve with repeated testing and, at 1 and 3 wk after anesthesia, was significantly worse than that of age-matched control rats (P < 0.05; Fig 3).
The main findings of this study are that general anesthesia produces long-lasting but reversible impairment in aged rats on a previously learned spatial memory task, whereas it appears to improve maze performance in young rats. This is unlikely to be a consequence of physiologic changes associated with general anesthesia, because blood pressure and arterial blood gases remained within the physiologic range during and after anesthesia and were similar in both age groups. During weeks 1 and 3 after anesthesia, impairment in the aged rats was manifested not as deterioration from baseline performance but rather as failure of previously anesthetized aged rats to improve at the same rate or to the same degree as unanesthetized controls. This difference was evident in the time to complete the maze, but not the error rate, and had resolved by eight weeks after anesthesia, suggesting a persistent change in memory function rather than a toxic effect of isoflurane/nitrous oxide anesthesia. In contrast, young rats made fewer errors after anesthesia, and this improvement lasted for two months. No changes were noted in the time to complete the maze, however, most likely because they had reached maximal performance (i.e., a floor) before anesthesia and further improvement could not be detected. Although the learning impairment in aged rats was limited to the time to complete the maze, it is unlikely to be the result of isolated motor impairment, because all rats ambulated normally and otherwise navigated and explored the maze and cage without difficulty. Accordingly, we infer that anesthesia with isoflurane and nitrous oxide produces a sustained, differential effect on established spatial memory in young and aged rats, with improvement in the former and impairment in the latter, and that these effects last considerably longer than previously realized.
Short-term impairment of cognitive and psychomotor performance is common after general anesthesia and is typically attributed to incomplete drug clearance (14). It is unlikely that incomplete clearance explains the results observed here because behavioral testing did not resume until 24 hours after anesthesia, opposite effects were seen in young and aged rats, and the changes lasted 3–8 weeks. Although long lasting memory impairment after uncomplicated anesthesia has not previously been reported, there is some evidence that general anesthetics can produce sustained effects. Halothane and nitrous oxide anesthesia during the perinatal period leads to learning deficits and delayed behavioral development (15), and N-methyl-d-aspartate receptor blockade, which is a property of nitrous oxide, can produce long lasting memory deficits (16). In addition, nitrous oxide produces a distinctive, but apparently reversible, neurotoxic reaction in the cerebral cortex of adult rats at concentrations within the range used for human anesthesia (17,18). Also, isoflurane-induced burst suppression is more effective and longer lasting than electroconvulsive therapy for the treatment of refractory depression, implying that sustained brain changes result from the treatment (19). The relevance of these observations to our results is unclear, but it is not surprising that postanesthetic memory impairment occurs in aged rats. The aged brain is different from the young brain in most respects, including size, neurotransmitter levels, and capacity for neuroplasticity. In addition, the fact that aged rats are more susceptible to the amnesic effects of anticholinergic drugs than young rats suggests that neural systems underlying spatial learning may be more fragile in aged rats (20,21). Indeed, aging itself is associated with an impairment in memory (22–24).
Interest in postoperative cognitive dysfunction has been fueled recently by prospective clinical studies showing that general anesthesia and surgery are associated with long lasting cognitive impairment in elderly patients (1). The largest study prospectively entered more than 1000 elderly patients (median age, 68 years) and demonstrated deterioration lasting at least 3 months on a battery of cognitive tests in nearly 10% of those who underwent surgery and general anesthesia, whereas only approximately 3% of age-matched controls (median age, 67 years) got worse (1). Among patients older than 75 years, 14% were worse 3 months after surgery and anesthesia. This impairment seems to resolve over time, however, because there was no difference in the incidence of cognitive deterioration between small subgroups of control and surgery/anesthesia patients followed up for one to two years (25). This suggests a lasting, but ultimately reversible, functional change in learning and memory after general anesthesia and surgery in aged persons. The cause has not been established, however, in part because clinical studies have not controlled for the anesthetics used and cannot differentiate among the effects of illness, hospitalization, surgery, and anesthesia. What seems relatively clear is that physiologic changes typical of anesthesia are not sufficient cause; in fact, episodes of significant intraoperative or perioperative arterial hypotension (MAP <60% of baseline for >30 minutes) or hypoxemia (arterial oxygen saturation <80% for >2 minutes) do not correlate with the development of cognitive impairment in elderly patients (1). Indeed, our results show that postanesthetic cognitive impairment can occur without systemic physiologic abnormalities and, for the first time, implicate general anesthesia itself in sustained memory impairment in aged subjects.
General anesthesia has memory-enhancing effects in young rats, but ours is not the first study to demonstrate this. In studies in young adult mice, the volatile anesthetics halothane, enflurane, and isoflurane have been shown to enhance memory (26). In fact, 4 consecutive days of enflurane anesthesia for one hour immediately after RAM training reduced the error rate by 60%–70% in mice (27). This improvement occurred on a novel task, and the anesthetic was administered during the consolidation phase (immediately after learning). We too observed a decrease in errors on a spatial memory task, but the task was familiar, and the improvement lasted two months. Our study is therefore unique in demonstrating sustained memory enhancement after general anesthesia, but the mechanism is unknown and the phenomenon has not been described clinically, perhaps because it is too subtle or is negated by other effects of illness/surgery.
Our study has several important limitations. First, our model does not reproduce the clinical situation, in which multiple factors are likely to contribute to postoperative cognitive dysfunction. Second, because we studied a combination of anesthetics at a single dose, it is impossible to say whether the effects are dose dependent or drug specific. Third, we did not correct for age-related decreases in MAC and, thus, cannot exclude the possibility that similar memory impairment would be detected in young rats after deeper anesthesia or that aged rats would be unaffected by lighter anesthesia. Another consideration is that the experimental design tested the effect of general anesthesia on prelearned behavior, not a novel task. This is relevant because although the young and aged rats received equivalent numbers of training trials before anesthesia, the young rats reached the standardized performance criteria more quickly. Thus, at the time of anesthesia, young rats were effectively overtrained relative to aged rats. This does not negate the essential results of the study, however, because performance was compared with aged-matched control rats in each case. Moreover, the testing procedures we used are reliable indicators of age-related learning impairments in that they demonstrate test-retest reliability and consistency and, unlike the Morris water maze, permit repeated testing over time. Finally, our results could underestimate the severity of age-related postanesthetic cognitive impairment because, at 18 months of age, Fischer rats are only at late middle age. Clinical data, for example, demonstrate a more frequent incidence of cognitive deterioration among patients older than 75 years. In practice, however, working with older rats is difficult because some cannot learn and because baseline performance is so poor that a decline is difficult to detect.
This study raises at least as many questions as it answers. It is difficult to understand how general anesthesia ablates memory during the time it is administered, regardless of age, and yet subsequently can enhance memory performance in young animals and impair it in old ones. However, memory itself is a complicated and poorly understood phenomenon that has multiple temporal phases, involves widely distributed neuronal circuits, and ultimately requires new gene expression, protein synthesis, and structural changes within neurons (28). Only further research will help determine how general anesthetics affect these processes.
In the meantime, we can draw several conclusions from these results. First, this paradigm is useful for studying anesthesia-induced cognitive deterioration in aging because it is possible to minimize potential confounders in a way that is not possible in a clinical situation. Second, general anesthesia produces sustained impairment in spatial memory performance in aged animals, which suggests that it may contribute to the cognitive dysfunction observed in some aged patients after anesthesia and surgery. Moreover, such learning impairment can occur in the absence of appreciable systemic physiologic changes. Third, sustained impairment of cognitive function after general anesthesia in an aged animal model, and improvement in young animals, provides a basis for examining the neurobiological substrates of sustained anesthesia-related alterations in learning and memory in humans. Finally, it appears that general anesthesia affects learning and memory longer than previously recognized.
1. Moller JT, Cluitmans P, Rasmussen LS, et al. Long-term postoperative cognitive dysfunction in the elderly ISPOCD1 study: ISPOCD investigators—International Study of Post-Operative Cognitive Dysfunction. Lancet 1998; 351: 857–61.
2. Franks NP, Lieb WR. Molecular and cellular mechanisms of general anaesthesia. Nature 1994; 367: 607–14.
3. Magnusson KR, Scanga C, Wagner AE, Dunlop C. Changes in anesthetic sensitivity and glutamate receptors in the aging canine brain. J Gerontol A Biol Sci Med Sci 2000; 55: B448–54.
4. Ingram DK, Garofalo P, Spangler EL, et al. Reduced density of NMDA receptors and increased sensitivity to dizocilpine-induced learning impairment in aged rats. Brain Res 1992; 580: 273–80.
5. Mrak RE, Griffin ST, Graham DI. Aging-associated changes in human brain. J Neuropathol Exp Neurol 1997; 56: 1269–75.
6. Frick KM, Baxter MG, Markowska AL, et al. Age-related spatial reference and working memory deficits assessed in the water maze. Neurobiol Aging 1995; 16: 149–60.
7. Ghirardi O, Cozzolino R, Guaraldi D, Giuliani A. Within- and between-strain variability in longevity of inbred and outbred rats under the same environmental conditions. Exp Gerontol 1995; 30: 485–94.
8. Baxter MG, Gallagher M. Neurobiological substrates of behavioral decline: models and data analytic strategies for individual differences in aging. Neurobiol Aging 1996; 17: 491–5.
9. Borde N, Jaffard R, Beracochea D. Effects of chronic alcohol consumption or diazepam administration on item recognition and temporal ordering in a spatial working memory task in mice. Eur J Neurosci 1998; 10: 2380–7.
10. Luine V, Rodriguez M. Effects of estradiol on radial arm maze performance of young and aged rats. Behav Neural Biol 1994; 62: 230–6.
11. Decker MW, Gallagher M. Scopolamine-disruption of radial arm maze performance: modification by noradrenergic depletion. Brain Res 1987; 417: 59–69.
12. Loss GEJ, Seifen E, Kennedy RH, Seifen AB. Aging: effects on minimum alveolar concentration (MAC) for halothane in Fischer-344 rats. Anesth Analg 1989; 68: 359–62.
13. Marota JJ, Crosby G, Uhl GR. Selective effects of pentobarbital and halothane on c-fos and jun-B gene expression in rat brain. Anesthesiology 1992; 77: 365–71.
14. Moller JT, Svennild I, Johannessen NW, et al. Perioperative monitoring with pulse oximetry and late postoperative cognitive dysfunction. Br J Anaesth 1993; 71: 340–7.
15. Levin ED, Uemura E, Bowman RE. Neurobehavioral toxicology of halothane in rats. Neurotoxicol Teratol 1991; 13: 461–70.
16. Lukoyanov NV, Paula-Barbosa MM. A single high dose of dizocilpine produces long-lasting impairment of the water maze performance in adult rats. Neurosci Lett 2000; 285: 139–42.
17. Olney JW, Farber NB, Wozniak DF, et al. Environmental agents that have the potential to trigger massive apoptotic neurodegeneration in the developing brain. Environ Health Perspect 2000; 108 (Suppl 3): 383–8.
18. Jevtovic-Todorovic V, Todorovic SM, Mennerick S, et al. Nitrous oxide (laughing gas) is an NMDA antagonist, neuroprotectant and neurotoxin. Nat Med 1998; 4: 460–3.
19. Langer G, Karazman R, Neumark J, et al. Isoflurane narcotherapy in depressive patients refractory to conventional antidepressant drug treatment: a double-blind comparison with electroconvulsive treatment. Neuropsychobiology 1995; 31: 182–94.
20. Poe GR, Teed RG, Insel N, et al. Partial hippocampal inactivation: effects on spatial memory performance in aged and young rats. Behav Neurosci 2000; 114: 940–9.
21. Stemmelin J, Cassel JC, Will B, Kelche C. Sensitivity to cholinergic drug treatments of aged rats with variable degrees of spatial memory impairment. Behav Brain Res 1999; 98: 53–66.
22. Gallagher M, Rapp PR. The use of animal models to study the effects of aging on cognition. Annu Rev Psychol 1997; 48: 339–70.
23. Rapp PR, Amaral DG. Recognition memory deficits in a subpopulation of aged monkeys resemble the effects of medial temporal lobe damage. Neurobiol Aging 1991; 12: 481–6.
24. Gallagher M, Burwell R, Burchinal M. Severity of spatial learning impairment in aging: development of a learning index for performance in the Morris water maze. Behav Neurosci 1993; 107: 618–26.
25. Abildstrom H, Rasmussen LS, Rentowl P, et al. Cognitive dysfunction 1–2 years after non-cardiac surgery in the elderly: ISPOCD group—International Study of Post-Operative Cognitive Dysfunction. Acta Anaesthesiol Scand 2000; 44: 1246–51.
26. Komatsu H, Nogaya J, Anabuki D, et al. Memory facilitation by posttraining exposure to halothane, enflurane, and isoflurane in ddN mice. Anesth Analg 1993; 76: 609–12.
27. Komatsu H, Nogaya J, Kuratani N, et al. Repetitive post-training exposure to enflurane modifies spatial memory in mice. Anesthesiology 1998; 89: 1184–90.
28. Bailey CH, Bartsch D, Kandel ER. Toward a molecular definition of long-term memory storage. Proc Natl Acad Sci U S A 1996; 93: 13445–52.
© 2003 International Anesthesia Research Society
Figure 1. Fragments from a Nottingham-type stoneware mug discovered during the excavation of the courthouse.
Quenching one’s thirst with a mug of ale or hard cider was a fitting end to a long day in court in the colonial period. Taverns and ordinaries, often located near courthouses, were the scenes of celebrations as well as sadder occasions for individuals drowning their sorrows after an unwanted legal outcome. A fragment from a Nottingham-type English stoneware mug, recovered from the site of the first courthouse in Charles County, Maryland, was probably witness to many such revelries or disappointing endings.
Figure 2. 1697 Plat map of the Charles County Courthouse and Ordinary. Maryland State Archives.
A beautifully detailed plat map prepared in 1697 depicts the first Charles County courthouse. Standing in a cluster of buildings, including an ordinary, several outbuildings, a fenced orchard and a set of stocks, this timber-framed structure was graced with a porch tower and glass windows. It served as the courthouse from 1674 until its abandonment in 1727, when the location of the county court was moved to nearby Port Tobacco. The courthouse was demolished for salvage in 1731 (King et al. 2008b) and over time, as the other buildings disappeared and people’s memories of them faded, the location of this first courthouse was lost.
The supermoon of August 2014 competes with the Domino sign on the waterfront in Baltimore. Photograph by Jerry Jackson of the Baltimore Sun. http://darkroom.baltimoresun.com/2014/08/supermoon-seen-around-the-world/#1
Domino Sugar, with its iconic neon sign, has been a Baltimore institution for over 90 years. The plant was built in 1922, but Baltimore’s sugar history extends back to the late eighteenth century. After becoming a major port of entry for raw sugar during the Revolutionary War, Baltimore took its place as a regional center for sugar production, with eleven refineries in operation by around 1825 (Williams et al. 2000; Magid 2005). Similar refineries in Washington D.C. and Alexandria, Virginia were all established in the early nineteenth century in reaction to international trade restrictions imposed by the Napoleonic Wars (Williams et al. 2000:279).
Among the archaeological collections curated at the Maryland Archaeological Conservation Lab is an assemblage from the sugar processing plant owned by Augustus Shutt and John Tool, in operation between 1804 and 1829 on Green (now Exeter) Street in Baltimore (Magid 2005).
Iron stirrup recovered from the stable (1711-1730 context) at the Smith St. Leonard site (18CV91).
May and June bring the Triple Crown of Thoroughbred Racing—the Kentucky Derby, the Preakness Stakes, and the Belmont Stakes—and Maryland is proud to claim the Preakness as its own.
Horse racing has a long and storied history in Maryland and this stirrup from the Smith St. Leonard Site (18CV91), a 1711-1754 tobacco plantation in Calvert County, is representative of the state’s long history with horses. This site contains remains of the only known eighteenth-century stable (c. 1711-1730) in Maryland, from which this stirrup was recovered. Estate details from the inventory, taken at the time of plantation owner Richard Smith Jr.’s death in 1715, reveal that he was breeding horses for sale. The value of the individual horses, however, indicates they were work, rather than racing, animals (Cohen, personal communication 2010).
This conclusion is perhaps not surprising, since Thoroughbred breeding and racing did not really get underway in Maryland until the mid-eighteenth century; indeed the first Thoroughbred horse in the American Colonies was imported to Virginia in 1730 (Robertson 1964:16).
Figure 1. Temperance Movement cup found in the fill of the privy.
Alcoholics Anonymous, the highly successful organization that helps individuals fight alcohol addiction, was founded in Akron, Ohio in 1935 (Anonymous 2015). The organization (commonly known as “AA”) remained small before the 1939 publication of the group’s philosophy and methods of practice. The “Big Book”, as it came to be known, set out the all-important Twelve Steps of Recovery and contained personal stories from group members—another critical component of the organization. Alcoholics Anonymous has become an international organization; in 2012, AA Census estimated that there were 114,642 groups and 2,131,549 members (S., Arthur, 2014).
This English-made ceramic teacup (Figure 1), dating to the second quarter of the nineteenth century and found in a Baltimore privy (Basalik and Payne 1982), is a tangible reminder that overuse of alcohol is not just a modern-day problem. The cup contains a printed design of a man and woman flanking a shield-shaped motif from which sprouts an oak tree. A banner above the heads of the figures proclaims “Firm as an Oak”, while banners beneath their feet state “Be Thou Faithful Unto Death”. The male and female each appear to be holding flags, although these portions of the cup are missing. Complete vessels suggest that the flags would have read “Sobriety” (male) and “Domestic Comfort” (female).
The cup’s motif, sometimes referred to as “The Teetotal Coat of Arms”, symbolizes the moral reform movement that supported abstinence from alcoholic beverages. This crusade, aimed at the working class, was popular in both Britain and the United States in the nineteenth century (Smith 1993).
Note from the author: I would like to thank Justine Schaeffer, Naturalist/Director at the Benjamin Banneker Historical Park and Museum for reading a draft of this blog and correcting several errors.
Figure 1. Ground glass lens and slate pencil fragments recovered during excavations of the Banneker homestead. Photograph courtesy of Maryland Archaeological Conservation Laboratory.
Sometimes artifacts that aren’t all that impressive in appearance turn out to have really interesting histories. The circular fragment of glass in Figure 1 is a ground lens from a telescope or similar optical instrument. The objects surrounding the lens are slate pencils, used for marking on slate tablets. What makes these artifacts notable is that they were excavated from the eighteenth-century home of Benjamin Banneker (Hurry 2002). A self-taught astronomer and mathematician, Banneker is known as America’s first African-American man of science. He was born in 1731 to free parents in Baltimore County, Maryland and grew up on a small farm in present-day Oella (18BA282).
Taught to read and write by his grandmother, an English woman who married a former slave, Banneker later attended a small Quaker school (Bedini 1972:39). As an adult, Banneker became friends with George Ellicott, son of a nearby land and mill owner. The Ellicotts were Quakers who contracted with the Banneker family to provide their mill workers with produce. Although twenty-nine years Banneker’s junior, George Ellicott shared many of Banneker’s interests.
Note from author: I would like to acknowledge the assistance of Ed Chaney, Deputy Director of the MAC Lab and Dr. Julia A. King, St. Mary’s College of Maryland in the preparation of this blog. Any errors are my own.
Figure 1. Tulip shaped tobacco pipe from the Pine Bluff site. Tobacco had social and spiritual significance for native peoples and in some cultures, stone pipes were used in treaty ceremonies.
This week’s Maryland artifact is a tobacco pipe recovered in the 1970s during an excavation at the Pine Bluff site (18WC20) near modern-day Salisbury in Wicomico County. The pipe, made from fired clay, is in a shape associated with the Susquehannock Indians and often described as a “tulip” pipe. Other materials found during the excavation, including gun parts, glass pharmaceutical bottle fragments and English ceramics, suggest that some components of this possible village site post-dated English contact (Marshall 1977).
By the time of English colonization, the Eastern Shore had been home to Maryland’s native peoples for at least 13,000 years (Rountree and Davidson 1997:20). Archaeological surveys have revealed evidence of short-term camps, villages and places where resources were procured and processed. The abundant natural resources of the Eastern Shore—fish, shellfish, wild game and wild plants—made this area a favorable place to live.
Figure 1. The two center safety pins with stamped numbers marked net bags in commercial laundries and were used to track individual orders. The smaller open pins surrounding the safety pins were probably used to pin paper tags on finished clothing. The object to the top center is a soapstone pencil, used to mark stains.
During the 1980 excavation of the Federal Reserve site (18BC27), archaeologists uncovered the remains of a stoneware drainpipe that was clogged during the 1920s with debris from a commercial laundry. When the pipe was broken open by earthmoving equipment, it was found to have filled over time with artifacts set in a concreted matrix of iron corrosion. Among the artifacts recovered from the pipe were laundry bag net pins—the two odd looking safety pins with the stamped numbers seen in the photograph to the left. Since these large brass safety pins were rustproof, they could be attached to the net bags that separated individual orders in the washing machines. The solid flat heads were stamped with number designations that could be used to track bagged laundry to specific individuals. These pins are still being manufactured today for use in commercial laundries. They were just a few of the large number of commercial laundry-related artifacts found in the pipe.
In agricultural regions worldwide, linear networks of vegetation such as hedges, fencerows and live fences provide habitat for plant and animal species in heavily modified landscapes. In Australia, networks of remnant native vegetation along roadsides are a distinctive feature of many rural landscapes. Here, we investigated the richness and composition of woodland-dependent bird communities in networks of eucalypt woodland vegetation along roadsides, in an agricultural region in which >80% of native woodland and forest vegetation has been cleared. We stratified sites in a) cross sections and b) linear strips of roadside vegetation, to test the influence on woodland birds of site location and configuration in the linear network (the ‘intersection effect’). We also examined the influence of tree size at the site, the amount of wooded vegetation surrounding the site, and the abundance of an aggressive native species, the noisy miner Manorina melanocephala. Birds were surveyed at 26 pairs of sites (cross section or linear strip) on four occasions. A total of 66 species was recorded, including 35 woodland species. The richness of woodland bird species was influenced by site configuration, with more species present at cross sections, particularly those with larger trees (>30 cm diameter). However, the strongest influence on species richness was the relative abundance of the noisy miner. The richness of woodland birds at sites where noisy miners were abundant was ~20% of that where miners were absent. These results recognise the value of networks of roadside vegetation as habitat for woodland birds in depleted agricultural landscapes; but highlight that this value is not realised for much of this vast vegetation network because of the dominance of the noisy miner. Nevertheless, roadside vegetation is particularly important where the configuration of networks create nodes that facilitate movement. Globally, the protection, conservation and restoration of such linear networks has an important influence on the persistence of biota within human-dominated landscapes.
Citation: Hall M, Nimmo D, Bennett AF (2016) At the Crossroads: Does the Configuration of Roadside Vegetation Affect Woodland Bird Communities in Rural Landscapes? PLoS ONE 11(5): e0155219. https://doi.org/10.1371/journal.pone.0155219
Editor: Govindhaswamy Umapathy, Centre for Cellular and Molecular Biology, INDIA
Received: July 27, 2015; Accepted: April 26, 2016; Published: May 16, 2016
Copyright: © 2016 Hall et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data are available as S1 Data.
Funding: The authors have no support or funding to report.
Competing interests: The authors have declared that no competing interests exist.
Landscape modification to meet human needs for food, fibre and living space is a major influence on global biodiversity. A common legacy of such modification, particularly in agricultural environments, is the creation of networks of linear vegetation, such as hedgerows in Europe [3,4], fencerows in North America [5,6], live fences in southern and central America, and roadside vegetation in Australia [8,9]. In highly modified regions, such linear elements potentially play an important role in biodiversity conservation [10,11]. Hedgerows and arable field margins in European farmland, for example, provide nest and roost sites, food resources and movement pathways for birds [4,12]; while in the Americas, fencerows and live fences provide refuge, foraging resources and movement corridors for diverse assemblages of birds, butterflies, bats and beetles [5,7].
The creation of roads and highways is among the most extensive and pervasive forms of landscape change on Earth. Roadside vegetation, the vegetation between the road surface and boundary of the road reserve, varies greatly in width and composition, but collectively represents a vast linear network. In many regions in Australia, roadside vegetation is comprised of remnant native vegetation including grasslands, shrublands, woodlands or forest [16–18]. Typically, it occurs as strips from 5–30 m in width (e.g. ), although in some regions ‘travelling stock reserves’ may be greater than 500 m in width.
The spatial configuration of linear networks has implications for their value for biota in modified landscapes [21,22]. One aspect of the spatial configuration is the ecological role of intersections, where two or more linear habitats meet within a network. In a study of breeding birds in agricultural environments, Lack found more species at intersections (or ‘nodes’) of hedges compared with straight sections of hedge of the same length. He postulated this was due to intersections making it easier for smaller birds to defend territories, find food, obtain shelter, and have enhanced movement/retreat options. This ‘intersection effect’ (see Fig 1) in hedge networks has been supported by observations of greater species richness of corridor-dependent bird species at or near intersections. Van Langevelde and Grashof-Bokdam modeled bird movement in hedgerow networks and found that species with limited movement ability occurred at higher densities at intersections than in linear strips. They concluded this was likely due to the species’ ability to recolonise intersections more quickly following mortality of other individuals. The intersection effect has also been associated with increased richness of other taxa, including plants and arthropods. Few studies, however, have tested whether there may be a similar intersection effect on faunal occurrence in other types of linear networks that occur worldwide (but see ).
(1) The intersection or ‘node’ provides a meeting point of two or more vegetated pathways. It may provide more resources within a smaller, more defendable space for species dependent on wooded vegetation (ie woodland-dependent birds). (2) Cross sections provide multiple movement pathways (in black), enhancing a species ability to reach these nodes of potentially high resources (food, shelter, nest sites) as well as offering multiple movement/escape routes to other areas. (3) The configuration of vegetation surrounding an intersection also influences the likelihood that species will be able to cross spaces diagonally between road sections (in red); more vegetation provides more movement pathways and increases their chances of meeting resource needs. (4) Linear strips provide only two possible (continuously vegetated) movement pathways, potentially limiting a species ability to reach resources and thus potentially making these sites less likely to support species dependent on vegetation networks.
Here, we examine the use of roadside vegetation by woodland birds in southeastern Australia, to test whether intersections in networks of remnant roadside vegetation support a greater number of bird species than linear strips. Woodland birds in southern Australia have experienced serious decline and their conservation is of great concern [28,29]. In many regions, forest and woodland vegetation has been extensively cleared (e.g. >80% loss), such that networks of roadside vegetation form a substantial component of the remaining wooded habitat.
In addition to examining the hypothesis that roadside configuration affects woodland bird communities, the (1) ‘configuration’ hypothesis, we also examine three alternative hypotheses of drivers of woodland bird communities in roadside networks in the study region. These are (2) the ‘tree size’ hypothesis, which predicts that larger, older trees at a site are important for woodland birds, (3) the ‘habitat amount’ hypothesis, which predicts that sites surrounded by a greater amount of tree cover will contain more woodland bird species (based on ); and (4) the ‘biotic interaction’ hypothesis, which relates to the influence of an avian competitor, the noisy miner (Manorina melanocephala), known for its negative effects on woodland bird communities in south-eastern Australia [32,33]. Noisy miners aggressively out-compete or exclude smaller insectivorous species and have become abundant, and dominate bird communities, in many fragmented environments. We predicted that sites with more noisy miners will have fewer woodland bird species.
This research was undertaken with the approval of Deakin University Animal Ethics permit B8-2012.
The study area spans a region of ~10,000 km2 of the Victorian Riverina plains in north-central Victoria, Australia. Mean annual rainfall ranges from 500–750 mm, with most rain in winter and spring (Bureau of Meteorology, 2012. http://www.bom.gov.au/climate/data/). The native vegetation of the region is eucalypt woodland dominated by grey box (Eucalyptus microcarpa) and yellow box (E. melliodora) across drier areas of the plains, with river red gum (E. camaldulensis) common along streams. Canopy height of these major tree species is typically 10–25 m. Vegetation in the region has been extensively cleared or modified, primarily for agriculture: less than 20% of the original tree-cover remains, mainly concentrated in a few large forest blocks, but also as extensive linear networks along roads and streams, and as scattered paddock trees. Land use in the region is comprised largely of grazing by domestic stock (sheep, cattle) and mixed cropping (predominantly wheat, canola).
We used satellite images to identify potentially suitable pairs of sites, each comprising a four-way road intersection (cross section) and an adjacent straight section of road (linear strip). These potential sites were then field-checked to assess whether they met the following requirements. First, sites were set within a roadside vegetation network with canopy gaps of no more than 50 m, and were surrounded by largely cleared farmland. Second, scattered trees and remnant wooded vegetation surrounding sites were visually identified (from satellite images) to represent a range in cover from ~5–30%, typical of this physiographic region. Third, sites were each 1.0 ha in area, and each pair of sites (linear, cross section) was situated along the same road but separated by at least 500 m to enhance independence of samples, and to avoid overlap of surrounding vegetation buffers. Linear sites followed a north-south orientation in relation to the adjoining intersection site. Fourth, to limit the influence of vegetation type and microclimatic conditions on woodland bird communities, suitable sites were chosen to be dominated by a single canopy species, Eucalyptus microcarpa (at least 80% by abundance), and were similar in topographic position. If pairs of sites met the above criteria they were retained: 26 pairs (52 sites in total) were selected (Fig 2) from all potentially suitable sites (~80).
Insets (top) show (a) location of study area within Victoria; (b, c) examples of two paired sites (cross section and linear strip) with different levels of surrounding vegetation, and (d) the 52 study sites across the region. Insets (bottom) show examples of two sites, a cross section (e) and linear strip (f), dominated by E. microcarpa. The study area lies within south-eastern Australia (g).
Typically, survey transects were 500 m in length and encompassed the 20 m width of the road reserve (i.e. 500 m x 20 m = 1.0 ha), on minor roads originally surveyed as ‘one chain’ roads (22 yards = ~20 m width). In four instances, roads were ‘two chains’ (~40 m) wide, so the transect was 250 m in length. Secondary and minor roads were chosen to reduce the potential effects of traffic.
Bird surveys were conducted at each site for 20 mins (n = 52 sites) in suitable weather conditions (i.e. fine weather, little or no wind). All individuals detected visually or aurally whilst walking the transect midline (along the road) were recorded, and distinction was made between those ‘on site’ and ‘off site’. At intersection sites, transects were walked from south to north for 250 m (10 mins), with the intersection falling at the 125 m mark. The diagonal gap between the north and westerly points was then crossed where possible, to avoid unnecessary disturbance to species by retracing steps. The west-east line was then followed for 250 m (further 10 mins), again with the intersection at the midway point. Care was taken not to record individuals twice near the mid-point.
Surveys were conducted four times at each site on separate days between April and June 2012, by the same observer (MH). Sites were surveyed once each in the early morning, mid-morning, mid-afternoon and late afternoon time periods, respectively, rotated across the study timeframe in a logistically feasible fashion to avoid travel and time delays.
To ensure data reflected potential habitat use at a site, birds flying more than a few metres above the canopy were regarded as off-site; except for aerial foragers such as raptors and swallows, which were included as on-site if they were foraging overhead, and birds flying above the canopy that landed on site. For birds observed flying, the direction in relation to the survey transect (along, across, circling) was recorded to determine patterns in the use of roadside vegetation as potential corridors for movement.
The species and diameter at breast height (DBH) were recorded for all trees within a randomly selected 0.5 ha section of each transect. The number of juvenile Eucalypts was also counted.
Response and predictor variables
We first grouped bird species to reflect their habitat associations (woodland-dependent, open-country, open-tolerant). For analyses, we focused on woodland-dependent species because of their dependence on native vegetation. Three response variables were calculated to represent the number of woodland species at each site (on-site only) in the following categories (a minimal calculation sketch follows the list).
- Woodland-dependent: the total number of species categorised as woodland-dependent.
- Resident: the number of woodland species recorded at a site on >50% of surveys.
- Species’ movement: the proportion of woodland species seen flying along the transect in either direction (i.e. parallel with the roadside vegetation, rather than flying across between farmland and roadside vegetation or circling above).
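The following minimal sketch (in R) shows how these three measures could be derived from a long-format survey table with one row per detection; the table, column names and species codes are hypothetical, not the study data.

# Hypothetical detections: site, survey number (1-4), species, habitat group,
# and flight direction ("along", "across", "circling", or NA if not in flight).
surveys <- data.frame(
  site    = c("A", "A", "A", "A", "B", "B"),
  survey  = c(1, 2, 3, 4, 1, 2),
  species = c("sp1", "sp1", "sp1", "sp2", "sp1", "sp3"),
  group   = "woodland",
  flight  = c("along", NA, NA, "across", "along", NA)
)
wl <- surveys[surveys$group == "woodland", ]

# Woodland-dependent richness per site
richness <- tapply(wl$species, wl$site, function(x) length(unique(x)))

# Residents: species recorded on more than half (i.e. 3 or 4) of the 4 surveys
residents <- sapply(split(wl, wl$site), function(d)
  sum(tapply(d$survey, d$species, function(s) length(unique(s)) > 2)))

# Proportion of woodland species seen flying along the transect
prop_along <- sapply(split(wl, wl$site), function(d)
  length(unique(d$species[d$flight %in% "along"])) / length(unique(d$species)))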
Predictor variables (Table 1) were derived from vegetation data collected at each site, from GIS analysis, and the abundance of noisy miners from surveys at each site. They correspond with the four hypotheses.
First, the configuration hypothesis was represented by the main treatment in the study design; that is, a categorical variable with two levels, cross section or linear strip. Second, the tree size hypothesis was represented by counting the density of trees within the survey transect (number per ha) in size categories: 10–30 cm and >30 cm diameter. Saplings smaller than 10 cm diameter were not included. Third, the habitat amount hypothesis was represented by calculating the area of tree cover within buffers of radius 100, 250, 500 and 750 m, surrounding the mid-point of each site. Tree cover was calculated using the Tree25 layer (Department of Environment, Land, Water & Planning, Victoria) in ArcGIS10 (ESRI, 2011). A 500 m buffer was selected because it provided an ecologically meaningful area of surrounding vegetation, did not overlap with buffers of adjoining sites, and provided the strongest fit with the data. Last, the biotic interaction hypothesis was represented by calculating the average abundance (individuals ha-1) of the noisy miner from the four surveys at each site.
All predictor variables were standardised to allow a direct comparison of coefficients (mean = 0, standard deviation = 1). Pairwise correlations (Spearman’s rank correlation) between predictor variables were all < 0.55.
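A brief sketch of this predictor-screening step is given below (in R, with simulated values standing in for the site-level predictors in Table 1).

set.seed(42)
preds <- data.frame(trees_small = rpois(52, 20),    # stems 10-30 cm dbh per ha
                    trees_large = rpois(52, 8),     # stems >30 cm dbh per ha
                    cover_500m  = runif(52, 5, 30), # tree cover within 500 m buffer
                    miner_abund = rpois(52, 4))     # mean noisy miners per ha

preds_std <- as.data.frame(scale(preds))            # standardised: mean = 0, sd = 1
cor(preds, method = "spearman")                     # pairwise Spearman rank correlations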
For two of the three response variables (woodland-dependent, resident species), generalised linear mixed models (GLMMs) were developed, assuming a Poisson distribution and log-link function. For the species movement (flying along) response variable, GLMMs were developed assuming a binomial distribution and logit-link function. Overdispersion was assessed in the global model. Where the dispersion parameter was >1, an observation-level random effect was fitted to account for additional variance. All models were fitted in R (R Core Team, 2012). As sites were spatially paired into treatments (i.e. cross section and linear strip), the pair to which a site belonged was entered as a random effect to account for potential lack of independence. All other environmental variables were regarded as fixed effects.
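A minimal lme4 sketch of this model structure is shown below; the data are simulated and the variable names are assumptions, but the random-effect structure (site pair, plus an observation-level term for overdispersion) follows the description above.

library(lme4)
set.seed(1)

# Hypothetical site-level data: one row per site (26 pairs = 52 sites)
dat <- data.frame(
  pair        = factor(rep(1:26, each = 2)),
  config      = factor(rep(c("cross", "linear"), 26)),
  trees_small = rnorm(52), trees_large = rnorm(52),  # standardised predictors
  cover_500m  = rnorm(52), miner_abund = rnorm(52),
  richness    = rpois(52, 6)                         # count of woodland species
)
dat$obs <- factor(seq_len(nrow(dat)))                # observation-level random effect

global <- glmer(richness ~ config + trees_small + trees_large + cover_500m +
                  miner_abund + (1 | pair) + (1 | obs),
                data = dat, family = poisson)
summary(global)

# For the proportion flying along, a binomial (logit-link) GLMM would be used:
# glmer(cbind(n_along, n_woodland - n_along) ~ ... , family = binomial)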
An information theoretic approach was used to compare competing models that represented the four hypotheses and to evaluate the relative support for each model. Candidate models were developed to compare all subsets of hypotheses (i.e. each competing hypothesis and all combinations of hypotheses) (Table 2). We calculated Akaike’s Information Criterion corrected for small sample sizes (AICc) to compare and rank the multiple competing models, and to determine the most parsimonious model. Ranking was undertaken by comparing the AICc difference (Δi) between each model and that with the lowest AICc value (i.e. the ‘best’ model). Models with Δi ≤2 are considered to have substantial support, and those with Δi of 2–7 have some support and should not necessarily be dismissed. Akaike weights (wi) were generated to assess the probability that the model is the best of the candidate set. We summed wi for the top models to generate a 95% confidence set of the most parsimonious (best fitting) models.
Also shown for each model are the number of parameters (K), AICc values, AICc differences (Δi) and Akaike weights (wi). Variables are described in Table 1.
Summing wi for all models within which a particular hypothesis (variable) occurs (∑wi) gives an importance value ranging from 0–1, indicating the relative importance of that hypothesis in explaining the data. The larger the value, the more importance that hypothesis has relative to others. We summed wi for models that included each of the four hypotheses (Table 2) to calculate the probability that each respective hypothesis was in the best model (i.e. the summed Akaike weight, ∑wi). When no single model was considered ‘clearly best’ (i.e. no models had wi >0.90), model averaging was performed using the MuMIn package. We regarded predictor variables as influential when the 95% confidence interval of the model-averaged coefficient did not overlap with zero.
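Continuing the simulated example above, a hedged MuMIn sketch of the AICc ranking and model averaging might look like the following (a reduced candidate set is shown for brevity; the study compared all subsets of the four hypotheses, as in Table 2).

library(MuMIn)

# A few candidate GLMMs representing single hypotheses and a combination
m_config <- glmer(richness ~ config + (1 | pair) + (1 | obs),
                  data = dat, family = poisson)
m_miner  <- glmer(richness ~ miner_abund + (1 | pair) + (1 | obs),
                  data = dat, family = poisson)
m_both   <- glmer(richness ~ config + miner_abund + (1 | pair) + (1 | obs),
                  data = dat, family = poisson)

cand <- model.sel(m_config, m_miner, m_both)  # AICc, delta (Δi) and Akaike weights (wi)
cand

# Model averaging; the study restricted this to the 95% confidence set, e.g.
# model.avg(cand, subset = cumsum(weight) <= 0.95)
avg <- model.avg(cand)
summary(avg)
confint(avg)   # a predictor is influential if the 95% CI excludes zero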
A Moran’s I test was performed to test for spatial autocorrelation in model residuals for each of the response variables using the spdep package (R Core Team, 2015). No spatial autocorrelation was detected (Woodland-dependent species: p = 0.78, Resident species: p = 0.39, Flying along: p = 0.44).
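A sketch of such a residual check using spdep is shown below; the coordinates are simulated and the neighbour definition (four nearest neighbours) is an assumption, not necessarily the construction used in the study. The fitted model 'global' is the GLMM sketched earlier.

library(spdep)
set.seed(1)

# Hypothetical site coordinates (e.g. transect mid-points, in metres)
coords <- cbind(x = runif(52, 0, 100000), y = runif(52, 0, 100000))

nb <- knn2nb(knearneigh(coords, k = 4))     # neighbour list from k nearest neighbours
lw <- nb2listw(nb, style = "W")             # row-standardised spatial weights
moran.test(residuals(global), listw = lw)   # Moran's I on model residuals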
Bird species recorded
A total of 66 species was recorded during the four survey rounds (n = 208 surveys in total), including 35 woodland-dependent species (S1 Table, S1 Data). At linear strips (n = 26 sites) 52 species were recorded (27 woodland species), whilst at cross sections (n = 26 sites) 54 species were recorded (28 woodland species) (Table 3).
The most common species included the eastern rosella (Platycercus eximius), Australian magpie (Gymnorhina tibicen) and galah (Eolophus roseicapillus), all suited to open country landscapes with sparse tree cover (S1 Table). Some of the least common species were woodland birds, such as the black-chinned honeyeater (Melithreptus gularis), eastern spinebill (Acanthorhynchus tenuirostris), crested shrike-tit (Falcunculus frontatus) and rufous whistler (Pachycephala rufiventris). Of the 35 woodland species, 27 occurred at <10 sites overall (Table 3). The noisy miner was widespread and abundant, being recorded at 48 of the 52 sites (92%), including 24 cross sections and 24 linear sites.
The most parsimonious model (lowest AIC value) for each response group included the configuration (linear or cross section) and the biotic interaction (mean abundance of noisy miners) hypotheses (Table 2); while the woodland-dependent model also included the tree size hypothesis, and the species movement (birds flying along) model included the habitat amount hypothesis (Table 2). The deviance explained (R2) by these models was 53% for all woodland-dependent species, 36% for resident woodland species and 49% for species flying along transects (Table 2).
However, for all three response variables, multiple models had substantial support (i.e. Δi ≤2) (Table 2) and there was no ‘clearly best’ model (i.e. wi > 0.90). Consequently, model averaging was performed to gain an understanding of the direction and size of the effect of each predictor variable in relation to each response variable. Summed Akaike weights (∑wi) and model-averaged coefficients for predictor variables are shown in Fig 3.
Black circles indicate values for which the confidence intervals of coefficients do not overlap with zero. The configuration variable is tested in the model by using cross section as the reference category. Therefore, a negative coefficient for this variable implies that linear strips have fewer species, or in the case of species’ movement, that more species are moving along linear strips.
Relative importance of hypotheses for response groups
The configuration hypothesis was well supported for woodland-dependent species (∑wi = 0.83), with model averaging revealing that species richness was greater at cross sections than linear strips (i.e. a negative coefficient for linear strips, with cross section used as the reference category) (Figs 3 & 4). The tree size hypothesis was also well supported (∑wi = 0.71), with parameter estimates revealing a positive association with the density of trees >30 cm diameter (Fig 3). Small trees (<30 cm) were less influential (Fig 3). The habitat amount hypothesis had little support (∑wi = 0.29), with model-averaged coefficients overlapping zero.
The configuration hypothesis was well supported for resident species (∑wi = 0.74), with model averaging revealing that species richness was greater at cross sections than linear strips (i.e. a negative coefficient for linear strips, with cross section used as the reference category) (Figs 3 & 4). The tree size (∑wi = 0.23) and habitat amount hypotheses received little support (∑wi = 0.26). Model-averaged coefficients for these hypotheses overlapped with zero, indicating little influence on the residency of woodland birds at these sites.
The configuration hypothesis was well supported for woodland-dependent species flying along transects (∑wi = 0.89). Parameter estimates reveal the proportion of woodland species at a site observed flying along roadside vegetation was greater for linear strips (Figs 3 & 5). The tree size hypothesis was not well supported for this group (∑wi = 0.13), with coefficients overlapping zero for both small and larger trees (Fig 3). The habitat amount hypothesis was well supported (∑wi = 0.79). Model-averaged coefficients revealed that species were responding to a greater level of tree cover within the landscape, particularly whilst flying along linear roadside strips (Fig 3).
Biotic interaction: the influence of the noisy miner.
The biotic interaction hypothesis had a high level of support for all response groups: woodland-dependent (∑wi = 1.0), resident (∑wi = 1.0) and species movement (∑wi = 0.98) (Fig 3). Model averaging revealed that noisy miner abundance had an important influence on all response groups (Fig 3). In each case, richness for response groups decreased as noisy miner abundance at sites increased; both when taking into consideration the treatment effect of configuration (Fig 4) and when pooling all sites regardless of treatment (Fig 6). For example, the richness of woodland-dependent species was predicted to decline from approximately eight species per site when no noisy miners were present, to just two with high abundance of noisy miners (Figs 4 & 6). In contrast, the proportion of woodland species observed flying along a transect increased by around 60% as noisy miner abundance increased from low to high (Fig 6).
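As an illustration of how such predicted declines (e.g. from approximately eight woodland species to two as noisy miner abundance increases) can be generated, the sketch below produces model-averaged predictions across a gradient of miner abundance, reusing the hypothetical `dat` and `avg` objects from the earlier sketch; the fixed values chosen for the other predictors and the factor level name are assumptions.

```r
## Minimal sketch: predicted richness across a noisy miner abundance gradient,
## holding other (hypothetical) predictors at representative values.
new_dat <- data.frame(
  config          = "cross section",                 # assumed factor level
  small_trees     = mean(dat$small_trees),
  large_trees     = mean(dat$large_trees),
  tree_cover_500m = mean(dat$tree_cover_500m),
  miner_abund     = seq(0, max(dat$miner_abund), length.out = 100)
)

# Model-averaged predictions on the link (log) scale, then back-transformed
# to the response scale of the assumed Poisson GLM.
pred_link <- predict(avg, newdata = new_dat, full = TRUE)
pred      <- exp(pred_link)

plot(new_dat$miner_abund, pred, type = "l",
     xlab = "Mean noisy miner abundance",
     ylab = "Predicted woodland species richness")
```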
Agricultural landscapes worldwide are characterised by linear networks of vegetation [7,19,42]. As these networks can comprise a large proportion of the remaining native vegetation [4,43], it is important to understand their role for biodiversity conservation. Here, we found that the configuration of roadside vegetation affects woodland bird species in a heavily modified agricultural region of southeastern Australia. Cross sections of roadside vegetation had more woodland bird species than did linear sections. The structure of roadside vegetation also influenced the biodiversity value, as the overall richness of woodland birds was positively associated with the density of larger trees. However, the potential value of roadside vegetation for woodland bird conservation is not currently being met due to a widespread aggressive species, the noisy miner, whose negative impact on all species’ groups exceeded that of all other predictor variables.
Factors influencing woodland species in roadside vegetation
Our finding that intersections had greater numbers of woodland birds compared with linear sections is consistent with the ‘intersection effect’ reported by Lack [23]. The intersection effect predicts enhanced foraging efficiency by species at intersections compared with linear strips due to intersections having a greater number of movement pathways, both to vegetation surrounding the intersection and to other sources further along the network (Fig 1). This could facilitate easier access to food, particularly for species reluctant to use open spaces [24,27]. As this is the first study, of which we are aware, that has examined the intersection effect for roadside vegetation dominated by tree species (cf. shrub-dominated hedgerows and live-fences), we must infer similarities between our findings and those of studies conducted with different types of linear networks (e.g. [24,44]). Other factors may also come into play, such as the width, structure and composition of vegetation [4,7,45]; however, our sites were chosen to be similar with regard to these factors. These results strongly align with the intersection effect recorded for a variety of linear networks worldwide [23,27,46].
The conservation value of intersections was further affirmed by the greater residency of woodland species in cross sections compared with linear strips, indicating that intersections are more than movement pathways and may comprise important permanent habitat for woodland birds. We observed a greater proportion of individual woodland birds moving ‘along’ linear sections (cf. cross sections). This finding further underscores the differing functional roles of intersections and linear strips, with the former having a higher potential to act as habitat and the latter being used to a greater extent as movement pathways.
The tree-size structure of roadside vegetation also affected woodland birds. A greater number of woodland species were found at sites with a higher density of larger trees (>30 cm diameter). Very large trees (typically those >60 cm diameter) in the study region often pre-date European settlement and were relatively rare, comprising around 10% of all trees recorded at sites. However, given their size, they are likely to contribute disproportionately to the overall canopy cover within sites. Large eucalypt trees are considered ‘keystone structures’ in agricultural landscapes of southern Australia [49,50] as they provide resources for many biota (including woodland birds), such as tree hollows, perches and food [52–54]. The continued protection and provision of large trees in roadside vegetation is vital to woodland bird conservation in southern Australia.
An unexpected result of this study was that the amount of tree cover surrounding sites did not strongly influence the response groups, other than positively influencing the proportion of species observed flying along transects. Some species may have shown a preference for sites along linear strips with greater surrounding tree cover whilst flying, for the shelter, refuge or foraging resources the surrounding cover provides, or because these strips act as movement pathways between more highly connected permanent habitat patches. In this region, the extent of tree cover surrounding a site and across the landscape is an important driver of woodland bird richness. Here, the study sites were amongst highly modified farmland: the average tree cover in a 500 m buffer was 13%, and sites were on average 16.5 km from the closest relatively large (>40 ha) woodland remnant. In these relatively isolated sites, the connected nature of the roadside vegetation may be more important than low levels of surrounding tree cover for species richness of woodland species.
The impact of the noisy miner
The aggressive native species, the noisy miner, consistently exerted the greatest influence on woodland bird species richness at sites. The impact of the noisy miner on woodland bird communities is well documented and is supported by both correlative [55,56] and experimental studies. The richness of all response groups declined as the mean abundance of noisy miners at a site increased. Noisy miners were common across the study region, being present at 48 of 52 sites, supporting previous findings of this species’ preference for edge habitats such as roadside networks.
The dominance of noisy miners across these rural landscapes greatly diminishes the value of all linear elements (i.e. both cross section and linear strips) to woodland bird conservation. The number of woodland bird species recorded in linear elements with high abundance of noisy miners is just 19% of that at sites at which no noisy miners were recorded. Even stronger results were evident for richness of resident species: the predicted number of resident species occupying sites in roadside networks was 85% lower in sites with high abundance of noisy miners compared with unoccupied sites. There also was evidence that noisy miners altered movement patterns of woodland species. Species were more often seen to be moving ‘along’ linear elements (as opposed to perching, roosting or foraging at sites) when noisy miners were abundant. Together, these results suggest that where noisy miners are abundant the available habitat for woodland birds is greatly reduced, likely leading to further isolation of populations as they seek to find suitable habitat but avoid areas dominated by the noisy miner [33,59].
Previous studies have suggested that the negative impacts of the noisy miner can be ameliorated by two primary means: 1) by habitat restoration, specifically increasing the amount of understory vegetation, to which noisy miners respond negatively; or 2) by direct removal of noisy miners (i.e. culling). Habitat restoration has the dual benefit of improving habitat quality for a range of taxonomic groups [61–63], while simultaneously minimising the impacts of noisy miners. However, it is also a longer-term solution, as restoration of understory vegetation can take years or decades [64,65]. Culling is a more immediate solution that could provide woodland bird communities with a reprieve, particularly given that such communities have recently been affected by a severe, long-term drought (the Millennium Drought, 2001–2009). Experimental removal of noisy miners led to a rapid increase in the abundance and diversity of woodland birds. However, management options that require ongoing and continuous intervention are not desirable as long-term solutions. Thus, a combination of understory restoration and culling over the short term could provide woodland birds with the opportunity to recolonise sites and persist in the longer term.
Enhancing the value of linear networks for fauna
Our findings have several clear implications for enhancing the conservation value of linear networks in modified agricultural regions. First, protection, maintenance and restoration of vegetation associated with the intersections of linear strips will have value by targeting these key locations in the linear network. This can be complemented by restoration of native vegetation in farm paddocks across the corners of intersections, thus creating larger ‘nodes’ of connected habitat. Second, the results support the benefits of protecting and retaining larger trees along roadsides to enhance the conservation value of roadside vegetation for woodland birds. Third, the restoration of a complex understory, combined with a program of large-scale removal of noisy miners, could reduce the detrimental effect of this species in the short term and substantially increase the effective area of habitat for woodland birds in rural landscapes.
S1 Table. All birds recorded on transect over the four survey rounds, showing habitat association, foraging guild, presence and abundance at sites.
** OT = Open tolerant, OC = Open country, Wdl = Woodland-dependent. + I = Insectivore, P = Predatory, N = Nectarivore, S = Granivore, F = Frugivore, V = Vegetation, R = Raptorial.
This research was undertaken with the approval of Deakin University Animal Ethics permit B8-2012. Sincere thanks to those who aided this research in the inception stage, with field work and technical support: Jemima Connell, Hana Benjema, Simon Cassidy, Peter Spooner, Rodney van der Ree, Kate Stevens, Desley Whisson and Greg Holland. Valuable comments from reviewers helped improve this manuscript.
Conceived and designed the experiments: MH DN AFB. Performed the experiments: MH. Analyzed the data: MH DN. Contributed reagents/materials/analysis tools: AFB. Wrote the paper: MH DN AFB.
- 1. Foley JA, DeFries R, Asner GP, Barford C, Bonan G, Carpenter SR, et al. Global consequences of land use. Science. 2005;309: 570–574. pmid:16040698
- 2. Forman RT, Godron M. Landscape Ecology. New York, NY: John Wiley & Sons; 1986.
- 3. Baudry J, Bunce RGH, Burel F. Hedgerows: an international perspective on their origin, function and management. J Environ Manage. 2000;60: 7–22.
- 4. Hinsley SA, Bellamy PE. The influence of hedge structure, management and landscape context on the value of hedgerows to birds: a review. J Environ Manage. 2000;60: 33–49.
- 5. Best LB. Bird use of fencerows: implications of contemporary fencerow management practices. Wildl Soc Bull. 1983; 343–347.
- 6. Boutin C, Jobin B, Bélanger L, Choinière L. Plant diversity in three types of hedgerows adjacent to cropfields. Biodivers Conserv. 2002;11: 1–25.
- 7. Harvey CA, Villanueva C, Villacís J, Chacón M, Muñoz D, López M, et al. Contribution of live fences to the ecological integrity of agricultural landscapes. Agric Ecosyst Environ. 2005;111: 200–230.
- 8. Bennett AF. Roads, roadsides and wildlife conservation: a review. In: Nature Conservation 2: The Role of Corridors. 1991.
- 9. Spooner PG, Lunt ID. The influence of land-use history on roadside conservation values in an Australian agricultural landscape. Aust J Bot. 2004;52: 445–458.
- 10. van Langevelde F, Grashof-Bokdam CJ. Modelling the effect of intersections in linear habitat on spatial distribution and local population density. Int J Geogr Inf Sci. 2011;25: 367–378.
- 11. Bennett AF, Nimmo DG, Radford JQ. Riparian vegetation has disproportionate benefits for landscape-scale conservation of woodland birds in highly modified environments. J Appl Ecol. 2014;51: 514–523.
- 12. Vickery JA, Feber RE, Fuller RJ. Arable field margins managed for biodiversity conservation: a review of food resource provision for farmland birds. Agric Ecosyst Environ. 2009;133: 1–13.
- 13. Trombulak SC, Frissell CA. Review of ecological effects of roads on terrestrial and aquatic communities. Conserv Biol. 2000;14: 18–30.
- 14. Forman RT, Alexander LE. Roads and their major ecological effects. Annu Rev Ecol Syst. 1998;29: 207–231.
- 15. Forman RT. Land mosaics: the ecology of landscapes and regions [Internet]. Cambridge University Press; 1995.
- 16. Lynch JF, Saunders DA. Responses of bird species to habitat fragmentation in the wheatbelt of Western Australia: interiors, edges and corridors. Nat Conserv. 1991;2: 143–158.
- 17. Leach GJ, Recher HF. Use of roadside remnants of softwood scrub vegetation by birds in south-eastern Queensland. Wildl Res. 1993;20: 233–249.
- 18. Spooner PG, Lunt ID, Briggs SV, Freudenberger D. Effects of soil disturbance from roadworks on roadside shrubs in a fragmented agricultural landscape. Biol Conserv. 2004;117: 393–406.
- 19. van der Ree R. The population ecology of the squirrel glider (Petaurus norfolcensis) within a network of remnant linear habitats. Wildl Res. 2002;29: 329–340.
- 20. Lentini PE, Fischer J, Gibbons P, Hanspach J, Martin TG. Value of large-scale linear networks for bird conservation: a case study from travelling stock routes, Australia. Agric Ecosyst Environ. 2011;141: 302–309.
- 21. Fahrig L, Merriam G. Conservation of fragmented populations. Conserv Biol. 1994;8: 50–59.
- 22. Coffin AW. From roadkill to road ecology: A review of the ecological effects of roads. J Transp Geogr. 2007;15: 396–406.
- 23. Lack PC. Hedge intersections and breeding bird distribution in farmland. Bird Study. 1988;35: 133–136.
- 24. Némethová D, Tirinda A. The influence of intersections and dead-ends of line-corridor networks on the breeding bird distribution. Folia Zool. 2005;54:123.
- 25. Riffell SK, Gutzwiller KJ. Plant-species richness in corridor intersections: is intersection shape influential? Landsc Ecol. 1996;11: 157–168.
- 26. Pollard KA, Holland JM. Arthropods within the woody element of hedgerows and their distribution pattern. Agric For Entomol. 2006;8: 203–211.
- 27. Lindenmayer DB, Cunningham R, Crane M, Michael D, Montague-Drake R. Farmland bird responses to intersecting replanted areas. Landsc Ecol. 2007;22: 1555–1562.
- 28. Ford HA. The causes of decline of birds of eucalypt woodlands: advances in our knowledge over the last 10 years. Emu. 2011;111: 1–9.
- 29. Bennett JM, Nimmo DG, Clarke RH, Thomson JR, Cheers G, Horrocks GFB, et al. Resistance and resilience: can the abrupt end of extreme drought reverse avifaunal collapse? Divers Distrib. 2014;20: 1321–1332.
- 30. Manning AD, Lindenmayer DB, Barry SC. The conservation implications of bird reproduction in the agricultural “matrix”: a case study of the vulnerable superb parrot of south-eastern Australia. Biol Conserv. 2004;120: 363–374.
- 31. Radford JQ, Bennett AF, Cheers GJ. Landscape-level thresholds of habitat cover for woodland-dependent birds. Biol Conserv. 2005;124: 317–337.
- 32. Mac Nally R, Bowen M, Howes A, McAlpine CA, Maron M. Despotic, high-impact species and the subcontinental scale control of avian assemblage structure. Ecology. 2012;93: 668–678. pmid:22624220
- 33. Maron M, Grey MJ, Catterall CP, Major RE, Oliver DL, Clarke MF, et al. Avifaunal disarray due to a single despotic species. Divers Distrib. 2013;19: 1468–1479.
- 34. Environment Conservation Council. Box-Ironbark Forests and Woodlands Investigation: Final Report. East Melbourne, Victoria: Environment Conservation Council; 2001.
- 35. Radford JQ, Bennett AF. Terrestrial avifauna of the Gippsland Plain and Strzelecki Ranges, Victoria, Australia: insights from Atlas data. Wildl Res. 2005;32: 531–555.
- 36. Zuur AF, Saveliev AA, Ieno EN. Zero inflated models and generalized linear mixed models with R [Internet]. Highland Statistics Limited Newburgh; 2012.
- 37. Zuur A, Ieno EN, Walker N, Saveliev AA, Smith GM. Mixed effects models and extensions in ecology with R [Internet]. Springer Science & Business Media; 2009.
- 38. Burnham KP, Anderson DR. Model selection and multimodel inference: a practical information-theoretic approach [Internet]. Springer Science & Business Media; 2002.
- 39. Symonds MR, Moussalli A. A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike’s information criterion. Behav Ecol Sociobiol. 2011;65: 13–21.
- 40. Burnham KP, Anderson DR, Huyvaert KP. AIC model selection and multimodel inference in behavioral ecology: some background, observations, and comparisons. Behav Ecol Sociobiol. 2011;65: 23–35.
- 41. Barton K. MuMIn: Multi-Model Inference. R package version 1.0.0. 2011. Available at: http://CRAN.R-project.org/package=MuMIn.
- 42. Burel F. Hedgerows and their role in agricultural landscapes. Crit Rev Plant Sci. 1996;15: 169–190.
- 43. Lunt ID, Spooner PG. Using historical ecology to understand patterns of biodiversity in fragmented agricultural landscapes. J Biogeogr. 2005;32: 1859–1873.
- 44. Joyce KA, Holland JM, Doncaster CP. Influences of hedgerow intersections and gaps on the movement of carabid beetles. Bull Entomol Res. 1999;89: 523–531.
- 45. Gelling M, Macdonald DW, Mathews F. Are hedgerows the route to increased farmland small mammal density? Use of hedgerows in British pastoral habitats. Landsc Ecol. 2007;22: 1019–1032.
- 46. Geslin T, Lefeuvre J-C, Le Pajolec Y, Questiau S, Eybert MC. Salt exploitation and landscape structure in a breeding population of the threatened bluethroat (Luscinia svecica) in salt-pans in western France. Biol Conserv. 2002;107: 283–289.
- 47. Haddad NM, Bowne DR, Cunningham A, Danielson BJ, Levey DJ, Sargent S, et al. Corridor use by diverse taxa. Ecology. 2003;84: 609–615.
- 48. Spooner PG, Smallbone L. Effects of road age on the structure of roadside vegetation in south-eastern Australia. Agric Ecosyst Environ. 2009;129: 57–64.
- 49. Manning AD, Fischer J, Lindenmayer DB. Scattered trees are keystone structures–implications for conservation. Biol Conserv. 2006;132: 311–321.
- 50. Lindenmayer DB, Laurance WF, Franklin JF, Likens GE, Banks SC, Blanchard W, et al. New Policies for Old Trees: Averting a Global Crisis in a Keystone Ecological Structure. Conserv Lett. 2014;7: 61–69.
- 51. Fischer J, Zerger A, Gibbons P, Stott J, Law BS. Tree decline and the future of Australian farmland biodiversity. Proc Natl Acad Sci. 2010;107: 19597–19602. pmid:20974946
- 52. Newton I. The role of nest sites in limiting the numbers of hole-nesting birds: a review. Biol Conserv. 1994;70: 265–276.
- 53. McGoldrick JM, Macnally R. Impact of flowering on bird community dynamics in some central Victorian eucalypt forests. Ecol Res. 1998;13: 125–139.
- 54. Gibbons P, Lindenmayer D. Tree hollows and wildlife conservation in Australia [Internet]. CSIRO publishing; 2002.
- 55. Howes A, Mac Nally R, Loyn R, Kath J, Bowen M, McAlpine C, et al. Foraging guild perturbations and ecological homogenization driven by a despotic native bird species. Ibis. 2014;156: 341–354.
- 56. Montague-Drake R, Lindenmayer D, Cunningham R, Stein J. A reverse keystone species affects the landscape distribution of woodland avifauna: a case study using the Noisy Miner (Manorina melanocephala) and other Australian birds. Landsc Ecol. 2011;26: 1383–1394.
- 57. Grey MJ, Clarke MF, Loyn RH. Initial changes in the avian communities of remnant eucalypt woodlands following a reduction in the abundance of noisy miners, Manorina melanocephala. Wildl Res. 1997;24: 631–648.
- 58. Maron M. Nesting, foraging and aggression of Noisy Miners relative to road edges in an extensive Queensland forest. Emu. 2009;109: 75–81.
- 59. Eyre TJ, Maron M, Mathieson MT, Haseler M. Impacts of grazing, selective logging and hyper-aggressors on diurnal bird fauna in intact forest landscapes of the Brigalow Belt, Queensland. Austral Ecol. 2009;34: 705–716.
- 60. Hastings RA, Beattie AJ. Stop the bullying in the corridors: can including shrubs make your revegetation more Noisy Miner free? Ecol Manag Restor. 2006;7: 105–112.
- 61. Law BS, Chidel M. Eucalypt plantings on farms: Use by insectivorous bats in south-eastern Australia. Biol Conserv. 2006;133: 236–249.
- 62. Barrett GW, Freudenberger D, Drew A, Stol J, Nicholls AO, Cawsey EM. Colonisation of native tree and shrub plantings by woodland birds in an agricultural landscape. Wildl Res. 2008;35: 19–32.
- 63. Jellinek S, Parris KM, McCarthy MA, Wintle BA, Driscoll DA. Reptiles in restored agricultural landscapes: the value of linear strips, patches and habitat condition. Anim Conserv. 2014;17: 544–554.
- 64. Vesk PA, Mac Nally R. The clock is ticking—revegetation and habitat for birds and arboreal mammals in rural landscapes of southern Australia. Agric Ecosyst Environ. 2006;112: 356–366.
- 65. Cunningham RB, Lindenmayer DB, Crane M, Michael D, MacGregor C. Reptile and arboreal marsupial response to replanted vegetation in agricultural landscapes. Ecol Appl. 2007;17: 609–619. pmid:17489264
A key requirement of the immune system is to distinguish self from nonself. While the concept is simple, the implementation is a complex system that has taken decades to understand. At the center of this process is recognition and binding of a T-cell receptor (TCR) to an antigen displayed in the major histocompatibility complex (MHC) on the surface of an antigen-presenting cell (APC). Multiple other factors then influence whether this binding results in T-cell activation or anergy.
The life of a T cell begins in the thymus, where immature cells proliferate and create a wide repertoire of TCRs through recombination of the TCR gene segments. A selection process then begins, and T cells with strong reactivity to self-peptides are deleted in the thymus to prevent autoreactivity in a process called central tolerance.1 T cells with insufficient MHC binding undergo apoptosis, but those that can weakly respond to MHC molecules and self-peptides are not deleted and are released as naive cells to circulate through the blood, spleen, and lymphatic organs. There they are exposed to professional APCs displaying foreign antigens (in the case of infection) or mutated self-proteins (in the case of malignancy). Some TCRs may have specificity that is cross-reactive with self-antigens. To prevent autoimmunity, numerous immune checkpoint pathways regulate activation of T cells at multiple steps during an immune response, a process called peripheral tolerance.1,2 Central to this process are the cytotoxic T-lymphocyte–associated antigen 4 (CTLA-4) and programmed death 1 (PD-1) immune checkpoint pathways.3 The CTLA-4 and PD-1 pathways are thought to operate at different stages of an immune response. CTLA-4 is considered the “leader” of the immune checkpoint inhibitors, as it stops potentially autoreactive T cells at the initial stage of naive T-cell activation, typically in lymph nodes.2,4 The PD-1 pathway regulates previously activated T cells at the later stages of an immune response, primarily in peripheral tissues.2 A core concept in cancer immunotherapy is that tumor cells, which would normally be recognized by T cells, have developed ways to evade the host immune system by taking advantage of peripheral tolerance.5,6 Inhibition of the immune checkpoint pathways has led to the approval of several new drugs: ipilimumab (anti-CTLA-4), pembrolizumab (anti-PD-1), and nivolumab (anti-PD-1). There are key similarities and differences in these pathways, with implications for cancer therapy.
T-cell activation is a complex process that requires >1 stimulatory signal. TCR binding to MHC provides specificity to T-cell activation, but further costimulatory signals are required. Binding of B7-1 (CD80) or B7-2 (CD86) molecules on the APC with CD28 molecules on the T cell leads to signaling within the T cell. Sufficient levels of CD28:B7-1/2 binding lead to proliferation of T cells, increased T-cell survival, and differentiation through the production of growth cytokines such as interleukin-2 (IL-2), increased energy metabolism, and upregulation of cell survival genes.
CTLA-4 is a CD28 homolog with much higher binding affinity for B77,8; however, unlike CD28, binding of CTLA-4 to B7 does not produce a stimulatory signal. As such, this competitive binding can prevent the costimulatory signal normally provided by CD28:B7 binding7,9,10 (Fig. 1). The relative amount of CD28:B7 binding versus CTLA-4:B7 binding determines whether a T cell will undergo activation or anergy.4 Furthermore, some evidence suggests that CTLA-4 binding to B7 may actually produce inhibitory signals that counteract the stimulatory signals from CD28:B7 and TCR:MHC binding.11,12 Proposed mechanisms for such inhibitory signals include direct inhibition at the TCR immune synapse, inhibition of CD28 or its signaling pathway, or increased mobility of T cells leading to decreased ability to interact with APCs.9,12,13
CTLA-4 itself is subject to regulation, particularly by localization within the cell. In resting naive T cells CTLA-4 is located primarily in the intracellular compartment.14 Stimulatory signals resulting from both TCR and CD28:B7 binding induce upregulation of CTLA-4 on the cell surface by exocytosis of CTLA-4-containing vesicles.14 This process operates in a graded feedback loop whereby stronger TCR signaling elicits more CTLA-4 translocation to the cell surface. In case of a net negative signal through CTLA-4:B7 binding, full activation of T cells is prevented by inhibition of IL-2 production and cell cycle progression.15
CTLA-4 is also involved in other aspects of immune control. Regulatory T cells (Tregs) control functions of the effector T cells, and thus are key players in maintaining peripheral tolerance.16,17 Unlike effector T cells, Tregs constitutively express CTLA-4, and this is thought to be important for their suppressive functions.17 In animal models, genetic CTLA-4 deficiency in Tregs impaired their suppressive functions.17,18 One mechanism whereby Tregs are thought to control effector T cells is downregulation of B7 ligands on APCs, leading to reduced CD28 costimulation (Fig. 2).18,19
PD-1 is a member of the B7/CD28 family of costimulatory receptors. It regulates T-cell activation through binding to its ligands, programmed death ligand 1 (PD-L1) and programmed death ligand 2 (PD-L2).20 Similar to CTLA-4 signaling, PD-1 binding inhibits T-cell proliferation, and interferon-γ (IFN-γ), tumor necrosis factor-α, and IL-2 production, and reduces T-cell survival20 (Fig. 3). If a T cell experiences coincident TCR and PD-1 binding, PD-1-generated signals prevent phosphorylation of key TCR signaling intermediates, which terminates early TCR signaling and reduces activation of T cells.10,21 PD-1 expression is a hallmark of “exhausted” T cells that have experienced high levels of stimulation or reduced CD4+ T-cell help.22 This state of exhaustion, which occurs during chronic infections and cancer, is characterized by T-cell dysfunction, resulting in suboptimal control of infections and tumors.
Both CTLA-4 and PD-1 binding have similar negative effects on T-cell activity; however, the timing of downregulation, the responsible signaling mechanisms, and the anatomic locations of immune inhibition by these 2 immune checkpoints differ. Unlike CTLA-4, which is confined to T cells, PD-1 is more broadly expressed on activated T cells, B cells, and myeloid cells.2,20 While CTLA-4 functions during the priming phase of T-cell activation, PD-1 functions during the effector phase, predominantly within peripheral tissues.20
The distribution of PD-1 ligands also differs from those for CTLA-4. The B7 ligands for CTLA-4 are expressed by professional APCs, which typically reside in lymph nodes or spleen2; however, PD-L1 and PD-L2 are more widely expressed.2,10,23,24 PD-L1 is expressed on leukocytes, on nonhematopoietic cells, and in nonlymphoid tissues, and can be induced on parenchymal cells by inflammatory cytokines (IFN-γ) or tumorigenic signaling pathways.25 PD-L1 expression is also found on many different tumor types, and is associated with an increased amount of tumor-infiltrating lymphocytes (TILs) and poorer prognosis.26–28 PD-L2 is primarily expressed on dendritic cells and monocytes, but can be induced on a wide variety of other immune cells and nonimmune cells, depending on the local microenvironment.29 PD-1 has a higher binding affinity for PD-L2 than for PD-L1, and this difference may be responsible for differential contributions of these ligands to immune responses.30 Because PD-1 ligands are expressed in peripheral tissues, PD-1–PD-L1/PD-L2 interactions are thought to maintain tolerance within locally infiltrated tissues.2
As might be expected, the plurality of ligands for PD-1 leads to variation in biological effects, depending upon which ligand is bound. One model showed opposing roles of PD-L1 and PD-L2 signaling in activation of natural killer T cells.31 Inhibition of PD-L2 binding leads to enhanced TH2 activity,32 whereas PD-L1 binding to CD80 has been shown to inhibit T-cell responses.33 These different biological effects are likely to contribute to differences in activity and toxicity between antibodies directed at PD-1 (preventing binding to both ligands) as opposed to those directed at PD-L1, and therefore have potential therapeutic implications.
Although Tregs express PD-1 as well as CTLA-4, the function of PD-1 expression on these cells remains unclear. PD-L1 has been shown to contribute to the conversion of naive CD4+ T cells to Treg cells34 and to inhibit T-cell responses by promoting the induction and maintenance of Tregs.35 Consistent with these findings, PD-1 blockade can reverse Treg-mediated suppression of effector T cells in vitro.36
PD-1 binding with its ligands decreases the magnitude of the immune response in T cells that are already engaged in an effector T-cell response.22 This results in a more restricted spectrum of T-cell activation compared with CTLA-4 blockade, which may explain the apparently lower incidence of immune-mediated adverse events (AEs) associated with PD-1 compared with a CTLA-4 blockade (see below).37 Similarities and differences between the CTLA-4 and PD-1 receptors, and the consequences of their engagement, are detailed in Box 1.
Box 1: A Comparison of CTLA-4 and PD-1
IMPLICATIONS OF CTLA-4 AND PD-1 PATHWAY BLOCKADE IN CANCER
Preclinical studies showing decreased tumor growth and improved survival with CTLA-4 or PD-1 pathway blockade provide the rationale for immune checkpoint inhibition for cancer treatment.39,40 Monoclonal antibodies that block CTLA-4 or PD-1 are now approved for melanoma and lung cancer, and are in development for other tumor types, including kidney cancer, prostate cancer, and head and neck cancer (Table 1).41–44 Other agents targeting PD-L1 specifically are also in development (Table 1).41–44
The exact mechanism by which anti-CTLA-4 antibodies induce an antitumor response is unclear, although research to date suggests that CTLA-4 blockade affects the immune priming phase by supporting the activation and proliferation of a higher number of effector T cells, regardless of TCR specificity, and by reducing Treg-mediated suppression of T-cell responses (Fig. 4).2 An increase in the diversity of the peripheral T-cell pool following CTLA-4 blockade in patients with melanoma has recently been reported.45 An ipilimumab study in patients with melanoma or prostate cancer provided evidence that baseline T-cell profile may also be important. An immediate turnover of the T-cell repertoire on initial treatment was shown, and it continued to evolve with further treatment; both expansion and loss of individual T-cell clonotypes were identified, but there was a net increase in TCR diversity.46 Overall survival, however, was associated with the maintenance of clones present in high frequency at baseline. In patients with shorter overall survival, numbers of these highest frequency clones decreased with treatment. These findings suggest that effective CTLA-4 blockade may depend on the ability to retain preexisting high-avidity T cells with relevance to the antitumor response.
PD-1 blockade works during the effector phase to restore the immune function of T cells in the periphery that have been turned off following extended or high levels of antigen exposure, as in advanced cancer.22,23 As mentioned above, the ligands for PD-1 can be expressed by tumor cells as well as tumor-infiltrating immune cells. PD-L1 expression on tumor cells varies by tumor type and also within a given tumor type, but appears to be particularly abundant in melanoma, non–small cell lung cancer (NSCLC), and ovarian cancer.27,28,47 In a recent study, PD-L1 expression on tumor cells was shown to be significantly associated with PD-1 expression on TILs, and was locally associated with PD-L2 expression when this ligand was also expressed.27 In the same study, tumor PD-L1 expression was the single factor most closely correlated with response to anti-PD-1 blockade, whereas PD-L1 expression on TILs was not associated with response.27 Another study, however, found that patient response to anti-PD-L1 blockade was strongest when PD-L1 was expressed by tumor-infiltrating immune cells.48
Inhibiting PD-L1 specifically, as opposed to PD-1 inhibition, will block PD-1:PD-L1 interactions while preserving PD-1:PD-L2 interactions. This has the potential to provide a more targeted signal with less unwanted toxicity, as self-tolerance mediated through PD-1:PD-L2 interactions should be preserved.37,49 Furthermore, as PD-L1 is known to bind CD80 as well as PD-1 to deliver inhibitory signals to T cells,33 PD-L1 inhibition with an appropriate antibody could in theory also prevent PD-L1 reverse signaling and its resulting T-cell downregulation through CD80; a PD-L1-directed antibody could also interrupt the PD-L1:CD80 axis on other cells where they are coexpressed, such as dendritic cells.20,23
The differences in timing, location, and nonredundant effects of their actions suggest that anti-CTLA-4–targeted therapies and anti-PD-1 therapies have the potential for additive or possibly synergistic effects in the treatment of advanced malignancy. Further evidence that supports this theory and highlights the different role of each immune checkpoint comes from a study that investigated the biological effect of CTLA-4 and PD-1 blockade in patients undergoing single-agent or combination treatment.50 While CTLA-4 inhibition induced a proliferative signal found predominantly in a subset of transitional memory T cells, PD-1 inhibition was associated with changes in genes thought to be involved in cytolysis and natural killer cell function; dual blockade led to nonoverlapping changes in gene expression. The 2 treatment types also produced different effects on levels of circulating cytokines. This study confirms that CTLA-4 and PD-1 blockade lead to distinct patterns of immune activation, supporting the rationale for the investigation of immune checkpoint combinations in the clinic.
CLINICAL EFFICACY AND CHARACTERISTICS OF RESPONSES WITH IMMUNE CHECKPOINT INHIBITORS
Anti-CTLA-4 blockade with ipilimumab was the first treatment to prolong overall survival in patients with advanced melanoma in a randomized setting.51,52 Analysis of long-term survival data pooled across several phase II and phase III trials showed that the survival curve begins to plateau at about 3 years, with 3-year survival rates of 22%, 26%, and 20% in all patients with sufficient follow-up, in treatment-naive patients, and in previously treated patients, respectively.53 Consistent with its survival benefit, CTLA-4 blockade is associated with durable responses in a proportion of patients treated, with some responses reported to last >3 years.51,54
More recently, PD-1 blockade has been shown to improve survival and progression-free survival in patients with metastatic melanoma and in patients with previously treated metastatic squamous and nonsquamous NSCLC.55–62 The longest follow-up data available indicate that highly durable responses can also occur with PD-1 blockade in patients with melanoma, NSCLC, or renal cell carcinoma (RCC).48,63–66 The response rates with PD-1 pathway blockade were higher than with CTLA-4 blockade in advanced melanoma: 33% to 34% versus 12% of patients in a phase III head-to-head trial of pembrolizumab versus ipilimumab. This trial also reported higher 1-year survival rates with pembrolizumab versus ipilimumab: 68% to 74% versus 58%.54
Because immune checkpoint inhibitors work by restarting an effective antitumor immune response, response patterns can differ from those seen with chemotherapy or targeted agents. Delayed or unconventional responses may be related to variations in the kinetics and efficacy of each patient’s individual immune system, as well as its interplay with tumors and metastases. An initial increase in target lesion tumor volume could reflect true tumor growth before the generation of an effective antitumor response. Conversely, faster activation of an antitumor immune response could lead to inflammation and an influx of immune cells into the tumor site, which could masquerade as tumor progression. In clinical trials of ipilimumab, approximately 10% of patients were initially characterized as having progressive disease by World Health Organization criteria, but subsequently had favorable survival.67 Approximately 4% to 8% of patients with advanced melanoma receiving nivolumab or pembrolizumab in clinical trials had unconventional responses that did not meet Response Evaluation Criteria in Solid Tumors (RECIST) criteria, but were nevertheless associated with patient benefit.55,66,68,69 Unconventional response patterns have also been observed in patients with lung cancer or RCC receiving PD-1 pathway inhibitors.56,64,65,70 These atypical responses have led to the development of modified response criteria called immune-related response criteria (Supplemental Table 1, Supplemental Digital Content 1, http://links.lww.com/AJCO/A110).67,68,71
A frustration with ipilimumab has been the inability to predict prospectively which patients are most likely to benefit from treatment. The low level of inducible CTLA-4 expression and the widespread expression of its B7 ligands are not useful as predictive biomarkers. Retrospective studies have identified several markers associated with response, including absolute lymphocyte count, upregulation of the T-cell activation marker inducible costimulator (ICOS), and the development of a polyfunctional T-cell response to the tumor antigen NY-ESO-1.72 To date, none of these potential markers have been validated prospectively. An association between melanoma mutational load and clinical benefit with CTLA-4 blockade has been shown, but was insufficient alone to predict patients who are likely to respond to treatment73; however, work examining tumor neoantigens has shown promise, with the identification of a neoantigen signature present in tumors that correlated with overall survival of individuals treated with CTLA-4 blockade.73
In contrast, the upregulation of PD-1 on exhausted cells and of its ligand PD-L1 on tumor cells or tumor-infiltrating immune cells may offer the potential for identifying patients responsive to PD-1 or PD-L1 blockade.27,48 Preliminary data across tumor types suggest that patients with PD-L1-expressing tumors or infiltrating immune cells typically have a higher response rate to anti-PD-1 or anti-PD-L1 therapy and may also have improved survival outcomes compared with patients with low or negative PD-L1 expression.27,48,55,63,64,70,74,75 However, in most studies, responses have also been seen in patients with PD-L1-low or PD-L1-negative tumors, and thus these patients should not be excluded from treatment. In a trial comparing the combination of ipilimumab and nivolumab against each agent alone, responses in PD-L1-positive patients were similar with the combination versus nivolumab alone, whereas PD-L1-negative patients did better receiving the combination.58 While all of these results are provocative, more research is needed to establish the validity and utility of PD-L1 expression as a predictive biomarker.
Other markers of response to anti-PD-1 or PD-L1 therapy have also been explored and include features associated with PD-L1-mediated suppression of preexisting immunity.48,76 As with CTLA-4, mutational burden and higher neoantigen burden have recently been shown to be associated with efficacy in patients with NSCLC treated with PD-1 blockade.77
Immune checkpoint blockade is associated with AEs with potential immunologic etiologies, so-called immune-mediated AEs. Commonly reported immune-mediated AEs include rash or pruritus, gastrointestinal disorders, and endocrinopathies.54,55,57,66,69,78,79
The overall rate of grade ≥3 AEs was higher with ipilimumab (20%) compared with pembrolizumab (10% to 13%) in a phase III trial.54 Theoretically, this could be a consequence of a greater magnitude of T-cell proliferation or reduced Treg-mediated immunosuppression with CTLA-4 blockade, or activation of a smaller number of T-cell clones with PD-1 blockade.
Hypophysitis is reported in about 2% to 4% of patients receiving ipilimumab but in <1% of patients receiving PD-1 inhibitors51,54,55,69,80; however, this variation in incidence may not be related to differences in immune mechanism of action, but may be explained by ectopic expression of CTLA-4 in the pituitary gland, leading to ipilimumab binding to endocrine cells, followed by complement fixation and inflammation.80
Inhibiting PD-L1 rather than PD-1 may result in a slightly different toxicity profile, although clinical data are currently limited. Treatment-related grade 3-4 AEs were reported in 4% to 13% of patients receiving PD-L1 inhibitors in phase I/II trials across 2 different agents and multiple tumor types.48,74,75,81 While data from comparative trials are not yet available, the incidence of grade 3-4 treatment-related AEs may trend lower with PD-L1 inhibitors than with PD-1 inhibitors; however, the immune-mediated AEs reported to date have been similar between the 2 types of agents.
BLOCKADE OF BOTH CTLA-4 AND PD-1/PD-L1
Blockade of both CTLA-4 and PD-1 or PD-L1 could, in theory, induce proliferation of a higher number of T cells early in an immune response, restore immune responses of previously activated T cells that have become exhausted, and reduce Treg-mediated immunosuppression (Fig. 4). Preclinical studies showed enhanced antitumor responses using dual blockade compared with single-agent blockade, which was also observed in initial clinical trials.82–85 This synergistic effect validates the different roles these agents play in immune regulation.
An increased response rate and improved progression-free survival were reported with the ipilimumab-nivolumab combination when compared with ipilimumab alone in a randomized phase III trial in treatment-naive patients with metastatic melanoma.58 The objective response rate was 58% versus 19%, and the median progression-free survival was 11.5 versus 2.9 months for the combination and monotherapy, respectively.58 Combinations of CTLA-4 and PD-1 inhibitors are also being investigated in patients with several other tumor types, including advanced NSCLC and RCC. In metastatic RCC, preliminary data suggest that the objective response rate is higher with combination blockade (38% to 43%) than was seen with PD-1 inhibition alone in a different trial (20% to 22%).70,86 Early data from lung cancer trials do not suggest increased antitumor activity with combination blockade in NSCLC87,88; however, in small cell lung cancer (SCLC), increased antitumor activity was seen with combination blockade compared with nivolumab alone.89
Combining CTLA-4 and PD-1 blockade with the aim of increasing efficacy is highly desirable, but combination treatment could prove more toxic. In patients with previously untreated melanoma or recurrent SCLC, the incidence of drug-related grade 3-4 AEs was 54% to 55% with concurrent blockade compared with 24% to 27% with ipilimumab alone and 15% to 16% with nivolumab alone.58,83,89 Prior CTLA-4 inhibition does not appear to predispose patients to development of immune-mediated AEs with PD-1 inhibition,57,90,91 which may therefore support sequential rather than combination treatment.
The CTLA-4 and PD-1 immune checkpoint pathways downregulate T-cell activation to maintain peripheral tolerance, and can be exploited by tumors to induce an immunosuppressive state that allows the tumors to grow and develop instead of being eliminated by the immune system. The differential patterns of the CTLA-4 and PD-1 ligand expression—found primarily in lymphoid tissue and in peripheral tissues, respectively—are central to the hypothesis that CTLA-4 acts early in tolerance induction and PD-1 acts late to maintain long-term tolerance. Inhibitors of CTLA-4 and PD-1 or its ligand, PD-L1, can restore antitumor immune responses, leading to long-term benefit in a substantial proportion of treated patients. As a likely result of their mechanism of action, immune checkpoint inhibitors are associated with immune-mediated toxicities, most of which can be managed successfully with corticosteroids. Preliminary data suggest that simultaneous blockade of both CTLA-4 and PD-1 pathways leads to increased efficacy over CTLA-4 or PD-1 inhibition alone or in sequence, providing additional evidence of the separate roles of these checkpoints in regulating antitumor immune responses. Further trials are needed to confirm these data and validate a combination strategy.
To date, 3 immune checkpoint inhibitors have been approved for use in melanoma; 2 of the 3 are also approved for lung cancer. These and other investigational CTLA-4, PD-1, and PD-L1 inhibitors are in active clinical development for multiple indications and have the potential to revolutionize future treatment options for many patients with advanced cancer.
1. Goldrath AW, Bevan MJ. Selecting and maintaining a diverse T-cell repertoire. Nature. 1999;402:255–262.
2. Fife BT, Bluestone JA. Control of peripheral T-cell tolerance and autoimmunity via the CTLA-4 and PD-1 pathways. Immunol Rev. 2008;224:166–182.
3. Greenwald RJ, Freeman GJ, Sharpe AH. The B7 family revisited. Annu Rev Immunol. 2005;23:515–548.
4. Krummel MF, Allison JP. CD28 and CTLA-4 have opposing effects on the response of T cells to stimulation. J Exp Med. 1995;182:459–465.
5. Dunn GP, Old LJ, Schreiber RD. The immunobiology of cancer immunosurveillance and immunoediting. Immunity. 2004;21:137–148.
6. Poschke I, Mougiakakos D, Kiessling R. Camouflage and sabotage: tumor escape from the immune system. Cancer Immunol Immunother. 2011;60:1161–1711.
7. Chambers CA, Kuhns MS, Egen JG, et al.. CTLA-4-mediated inhibition in regulation of T cell responses: mechanisms and manipulation in tumor immunotherapy. Annu Rev Immunol. 2001;19:565–594.
8. Collins AV, Brodie DW, Gilbert RJ, et al.. The interaction properties of costimulatory molecules revisited. Immunity. 2002;17:201–210.
9. Egen JG, Kuhns MS, Allison JP. CTLA-4: new insights into its biological function and use in tumor immunotherapy. Nat Immunol. 2002;3:611–618.
10. Parry RV, Chemnitz JM, Frauwirth KA, et al.. CTLA-4 and PD-1 receptors inhibit T-cell activation by distinct mechanisms. Mol Cell Biol. 2005;25:9543–9553.
11. Fallarino F, Fields PE, Gajewski TF. B7–1 engagement of cytotoxic T lymphocyte antigen 4 inhibits T cell activation in the absence of CD28. J Exp Med. 1998;188:205–210.
12. Masteller EL, Chuang E, Mullen AC, et al.. Structural analysis of CTLA-4 function in vivo. J Immunol. 2000;164:5319–5327.
13. Schneider H, Downey J, Smith A, et al.. Reversal of the TCR stop signal by CTLA-4. Science. 2006;313:1972–1975.
14. Linsley PS, Bradshaw J, Greene J, et al.. Intracellular trafficking of CTLA-4 and focal localization towards sites of TCR engagement. Immunity. 1996;4:535–543.
15. Krummel MF, Allison JP. CTLA-4 engagement inhibits IL-2 accumulation and cell cycle progression upon activation of resting T cells. J Exp Med. 1996;183:2533–2540.
16. Piccirillo CA, Shevach EM. Naturally-occurring CD4+CD25+ immunoregulatory T cells: central players in the arena of peripheral tolerance. Semin Immunol. 2004;16:81–88.
17. Takahashi T, Tagami T, Yamazaki S, et al.. Immunologic self-tolerance maintained by CD25(+)CD4(+) regulatory T cells constitutively expressing cytotoxic T lymphocyte-associated antigen 4. J Exp Med. 2000;192:303–309.
18. Wing K, Onishi Y, Prieto-Martin P, et al.. CTLA-4 control over Foxp3+ regulatory T cell function. Science. 2008;322:271–275.
19. Qureshi OS, Zheng Y, Nakamura K, et al.. Trans-endocytosis of CD80 and CD86: a molecular basis for the cell-extrinsic function of CTLA-4. Science. 2011;332:600–603.
20. Keir ME, Butte MJ, Freeman GJ, et al.. PD-1 and its ligands in tolerance and immunity. Annu Rev Immunol. 2008;26:677–704.
21. Bennett F, Luxenberg D, Ling V, et al.. Program death-1 engagement upon TCR activation has distinct effects on costimulation and cytokine-driven proliferation: attenuation of ICOS, IL-4, and IL-21, but not CD28, IL-7, and IL-15 responses. J Immunol. 2003;170:711–718.
22. Wherry EJ. T cell exhaustion. Nat Immunol. 2011;12:492–499.
23. Chen DS, Irving BA, Hodi FS. Molecular pathways: next-generation immunotherapy—inhibiting programmed death-ligand 1 and programmed death-1. Clin Cancer Res. 2012;18:6580–6587.
24. Latchman YE, Liang SC, Wu Y, et al.. PD-L1-deficient mice show that PD-L1 on T cells, antigen-presenting cells, and host tissues negatively regulates T cells. Proc Natl Acad Sci. 2004;101:10691–10696.
25. Chen L. Co-inhibitory molecules of the B7-CD28 family in the control of T-cell immunity. Nat Rev Immunol. 2004;4:336–347.
26. Hino R, Kabashima K, Kato Y, et al.. Tumor cell expression of programmed cell death-1 ligand 1 is a prognostic factor for malignant melanoma. Cancer. 2010;116:1757–1766.
27. Taube JM, Klein AP, Brahmer JR, et al.. Association of PD-1, PD-1 ligands, and other features of the tumor immune microenvironment with response to anti-PD-1 therapy. Clin Cancer Res. 2014;20:5064–5074.
28. Zou W, Chen L. Inhibitory B7-family molecules in the tumour microenvironment. Nat Rev Immunol. 2008;8:467–477.
29. Rozali EN, Hato SV, Robinson BW, et al.. Programmed death ligand 2 in cancer-induced immune suppression. Clin Dev Immunol. 2012;2012:656340.
30. Youngnak P, Kozono Y, Kozono H, et al.. Differential binding properties of B7-H1 and B7-DC to death-1. Biochem Biophys Res Commun. 2003;307:672–677.
31. Akbari O, Stock P, Singh AK, et al.. PD-L1 and PD-L2 modulate airway inflammation and iNKT-cell-dependent airway hyperreactivity in opposing directions. Mucosal Immunol. 2010;3:81–91.
32. Huber S, Hoffmann R, Muskens F, et al.. Alternatively activated macrophages inhibit T-cell proliferation by Stat6-dependent expression of PD-L2. Blood. 2010;116:3311–3320.
33. Butte MJ, Keir ME, Phamduy TB, et al.. Programmed death-1 ligand 1 interacts specifically with the B7–1 costimulatory molecule to inhibit T cell responses. Immunity. 2007;27:111–122.
34. Wang L, Pino-Lagos K, de Vries VC, et al.. Programmed death 1 ligand signaling regulates the generation of adaptive Foxp3+CD4+ regulatory T cells. Proc Natl Acad Sci. 2008;105:9331–9336.
35. Francisco LM, Salinas VH, Brown KE, et al.. PD-L1 regulates the development, maintenance, and function of induced regulatory T cells. J Exp Med. 2009;206:3015–3029.
36. Wang C, Thudium KB, Han M, et al.. In vitro characterization of the anti-PD-1 antibody nivolumab, BMS-936558, and in vivo toxicology in non-human primates. Cancer Immunol Res. 2014;2:846–856.
37. Ott PA, Hodi FS, Robert C. CTLA-4 and PD-1/PD-L1 blockade: new immunotherapeutic modalities with durable clinical benefit in melanoma patients. Clin Cancer Res. 2013;19:5300–5309.
38. Egen JG, Allison JP. Cytotoxic T lymphocyte antigen-4 accumulation in the immunological synapse is regulated by TCR signal strength. Immunity. 2002;16:23–35.
39. Leach DR, Krummel MF, Allison JP. Enhancement of antitumor immunity by CTLA-4 blockade. Science. 1996;271:1734–1736.
40. Hirano F, Kaneko K, Tamura H, et al.. Blockade of B7-H1 and PD-1 by monoclonal antibodies potentiates cancer therapeutic immunity. Cancer Res. 2005;65:1089–1096.
41. Bristol-Myers Squibb Company. Yervoy (Ipilimumab) [package insert]. Princeton, NJ: Bristol-Myers Squibb Company; 2013.
43. Merck & Co Inc. Keytruda (Pembrolizumab) [package insert]. Whitehouse Station, NJ: Merck & Co Inc; 2015.
44. Bristol-Myers Squibb Company. Opdivo (Nivolumab) [package insert]. Princeton, NJ: Bristol-Myers Squibb Company; 2015.
45. Robert L, Tsoi J, Wang X, et al.. CTLA4 blockade broadens the peripheral T-cell receptor repertoire. Clin Cancer Res. 2014;20:2424–2432.
- 46. Cha E, Klinger M, Hou Y, et al.. Improved survival with T cell clonotype stability after anti-CTLA-4 treatment in cancer patients. Sci Transl Med. 2014;6:238ra70.
47. Patel SP, Kurzrock R. PD-L1 expression as a predictive biomarker in cancer immunotherapy. Mol Cancer Ther. 2015;14:847–856.
48. Herbst RS, Soria JC, Kowanetz M. Predictive correlates of response to the anti-PD-L1 antibody MPDL3280A in cancer patients. Nature. 2014;515:563–567.
49. Pardoll DM. The blockade of immune checkpoints in cancer immunotherapy. Nat Rev Cancer. 2012;12:252–264.
50. Das R, Verma R, Sznol M, et al.. Combination therapy with anti-CTLA-4 and anti-PD-1 leads to distinct immunologic changes in vivo. J Immunol. 2015;194:950–959.
51. Hodi FS, O’Day SJ, McDermott DF, et al.. Improved survival with ipilimumab in patients with metastatic melanoma. N Engl J Med. 2010;363:711–723.
52. Robert C, Thomas L, Bondarenko I, et al.. Ipilimumab plus dacarbazine for previously untreated metastatic melanoma. N Engl J Med. 2011;364:2517–2526.
53. Schadendorf D, Hodi FS, Robert C, et al.. Pooled analysis of long-term survival data from phase II and phase III trials of ipilimumab in unresectable or metastatic melanoma. J Clin Oncol. 2015;33:1889–1894.
54. Farolfi A, Ridolfi L, Guidoboni M, et al.. Ipilimumab in advanced melanoma: reports of long-lasting responses. Melanoma Res. 2012;22:263–270.
55. Robert C, Schachter J, Long GV, et al.. KEYNOTE-006 Investigators. Pembrolizumab versus ipilimumab in advanced melanoma. N Engl J Med. 2015;372:2521–2532.
56. Robert C, Long GV, Brady B, et al.. Nivolumab in previously untreated melanoma without BRAF mutation. N Engl J Med. 2015;372:320–330.
57. Rizvi NA, Mazières J, Planchard D, et al.. Activity and safety of nivolumab, an anti-PD-1 immune checkpoint inhibitor, for patients with advanced, refractory squamous non-small-cell lung cancer (CheckMate 063): a phase 2, single-arm trial. Lancet Oncol. 2015;16:257–265.
58. Weber JS, D’Angelo SP, Minor D, et al.. Nivolumab versus chemotherapy in patients with advanced melanoma who progressed after anti-CTLA-4 treatment (CheckMate 037): a randomised, controlled, open-label, phase 3 trial. Lancet Oncol. 2015;16:375–384.
59. Larkin J, Chiarion-Sileni V, Gonzalez R, et al.. Combined nivolumab and ipilimumab or monotherapy in untreated melanoma. N Engl J Med. 2015;373:23–34.
60. Brahmer J, Reckamp KL, Baas P, et al.. Nivolumab versus docetaxel in advanced squamous-cell non-small-cell lung cancer. N Engl J Med. 2015;373:123–135.
62. Ribas A, Puzanov I, Dummer R, et al.. Pembrolizumab versus investigator-choice chemotherapy for ipilimumab-refractory melanoma (KEYNOTE-002): a randomised, controlled, phase 2 trial. Lancet Oncol. 2015;16:908–918.
63. Garon EB, Rizvi NA, Hui R, et al.. Pembrolizumab for the treatment of non-small-cell lung cancer. N Engl J Med. 2015;372:2018–2028.
64. Gettinger SN, Horn L, Gandhi L, et al.. Overall survival and long-term safety of nivolumab (anti-programmed death-1 antibody, BMS-936558, ONO-4538) in patients with previously treated advanced non-small-cell lung cancer. J Clin Oncol. 2015;33:2004–2012.
65. McDermott DF, Drake CG, Sznol M, et al.. Survival, durable response, and long-term safety in patients with previously treated advanced renal cell carcinoma receiving nivolumab. J Clin Oncol. 2015;33:2013–2020.
66. Topalian SL, Sznol M, McDermott DF, et al.. Survival, durable tumor remission, and long-term safety in patients with advanced melanoma receiving nivolumab. J Clin Oncol. 2014;32:1020–1030.
67. Wolchok JD, Hoos A, O’Day S, et al.. Guidelines for the evaluation of immune therapy activity in solid tumors: immune-related response criteria. Clin Cancer Res. 2009;15:7412–7420.
68. Hodi FS, Ribas A, Daud A, et al.. Evaluation of immune-related response criteria (irRC) in patients (pts) with advanced melanoma (MEL) treated with the anti-PD-1 monoclonal antibody MK-3475. J Clin Oncol. 2014;32(suppl):3006.
69. Robert C, Ribas A, Wolchok JD, et al.. Anti-programmed-death-receptor-1 treatment with pembrolizumab in ipilimumab-refractory advanced melanoma: a randomised dose-comparison cohort of a phase 1 trial. Lancet. 2014;384:1109–1117.
70. Motzer R, Rini B, McDermott D, et al.. Nivolumab for metastatic renal cell carcinoma: results of a randomized phase II trial. J Clin Oncol. 2015;33:1430–1437.
71. Eisenhauer EA, Therasse P, Bogaerts J, et al.. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. 2009;45:228–247.
72. Callahan MK, Postow MA, Wolchok JD. Immunomodulatory therapy for melanoma: ipilimumab and beyond. Clin Dermatol. 2013;31:191–199.
73. Snyder A, Makarov V, Merghoub T, et al.. Genetic basis for clinical response to CTLA-4 blockade in melanoma. N Engl J Med. 2014;371:2189–2199.
74. Powles T, Eder JP, Fine GD, et al.. MPDL3280A (anti-PD-L1) treatment leads to clinical activity in metastatic bladder cancer. Nature. 2014;515:558–562.
75. Segal NH, Ou S-H, Balmanoukian AS, et al.. Safety and efficacy of MEDI4736, an anti-PD-L1 antibody, in patients from a squamous cell carcinoma of the head and neck (SCCHN) expansion cohort. J Clin Oncol. 2015;33(suppl):3011.
76. Tumeh PC, Harview CL, Yearley JH, et al.. PD-1 blockade induces responses by inhibiting adaptive immune resistance. Nature. 2014;515:568–571.
77. Rizvi NA, Hellmann MD, Snyder A, et al.. Mutational landscape determines sensitivity to PD-1 blockade in non-small cell lung cancer. Science. 2015;348:124–128.
78. Topalian SL, Hodi FS, Brahmer JR, et al.. Safety, activity, and immune correlates of anti-PD-1 antibody in cancer. N Engl J Med. 2012;366:2443–2454.
79. Weber JS, Kahler KC, Hauschild A. Management of immune-related adverse events and kinetics of response with ipilimumab. J Clin Oncol. 2012;30:2691–2697.
80. Iwama S, De RA, Callahan MK, et al.. Pituitary expression of CTLA-4 mediates hypophysitis secondary to administration of CTLA-4 blocking antibody. Sci Transl Med. 2014;6:230–245.
81. Rizvi N, Brahmer J, Ou S-H, et al.. Safety and clinical activity of MEDI4736, an anti-programmed cell death-ligand 1 (PD-L1) antibody, in patients with non-small cell lung cancer (NSCLC). J Clin Oncol. 2015;33(suppl):8032.
82. Curran MA, Montalvo W, Yagita H, et al.. PD-1 and CTLA-4 combination blockade expands infiltrating T cells and reduces regulatory T and myeloid cells within B16 melanoma tumors. Proc Natl Acad Sci. 2010;107:4275–4280.
83. Postow MA, Chesney J, Pavlick AC, et al.. Nivolumab and ipilimumab versus ipilimumab in untreated melanoma. N Engl J Med. 2015;372:2006–2017.
84. Selby M, Engelhardt J, Lu LS, et al.. Antitumor activity of concurrent blockade of immune checkpoint molecules CTLA-4 and PD-1 in preclinical models. J Clin Oncol. 2013;31(suppl):3061.
85. Wolchok JD, Kluger H, Callahan MK, et al.. Nivolumab plus ipilimumab in advanced melanoma. N Engl J Med. 2013;369:122–133.
86. Hammers H, Plimack ER, Infante JR, et al.. Phase I study of nivolumab in combination with ipilimumab in metastatic renal cell carcinoma (mRCC). J Clin Oncol. 2015;33(suppl):4516.
87. Antonia SJ, Gettinger S, Chow LQ, et al.. Nivolumab (anti-PD-1; BMS-936558, ONO-4538) and ipilimumab in first-line NSCLC: interim phase I results. J Clin Oncol. 2014;32(suppl):8023.
88. Gettinger S, Hellmann MD, Shepherd FA, et al.. First-line monotherapy with nivolumab (NIVO; anti-programmed death-1 [PD-1]) in advanced non-small cell lung cancer (NSCLC): safety, efficacy and correlation of outcomes with PD-1 expression. J Clin Oncol. 2015;33(suppl):8025.
89. Antonia SJ, Bendell JC, Taylor MH, et al.. Phase I/II study of nivolumab with or without ipilimumab for treatment of recurrent small cell lung cancer (SCLC): CA209-032. J Clin Oncol. 2015;33:7503.
90. Ribas A, Hodi FS, Kefford R, et al.. Efficacy and safety of the anti-PD-1 monoclonal antibody MK-3475 in 411 patients (pts) with melanoma (MEL). J Clin Oncol. 2014;32(suppl):LBA9000.
91. Weber JS, Kudchadkar RR, Yu B, et al.. Safety, efficacy, and biomarkers of nivolumab with vaccine in ipilimumab-refractory or -naive melanoma. J Clin Oncol. 2013;31:4311–4318.
by Jann Swanson
Using the farm crisis of the early 1980s as a model, two economists have refuted several of the arguments against legislation that would permit bankruptcy judges to cram down, or strip down, mortgage loans. Thomas J. Fitzpatrick IV and James B. Thomson, economists with the Federal Reserve Bank of Cleveland, published their paper, Stripdowns and Bankruptcy: Lessons from Agricultural Bankruptcy Reform, in the bank's Economic Commentary on its website.
Allowing stripdowns of mortgages during Chapter 13 bankruptcy reorganization has been suggested as one way to deal with the housing crisis. If such legislation were passed, bankruptcy judges would be allowed to reduce the outstanding balance on a mortgage loan to the actual value of the underlying collateral, turning the remaining balance of the mortgage into an unsecured claim which would receive the same proportionate payout as other unsecured debts included in the bankruptcy petition. Some proponents of this provision maintain it could be a partial solution to the foreclosure crisis, reducing the number of homes going into foreclosure by improving the chances of a successful loan modification. Others favor the law on the basis of equity, saying that mortgages on rental properties and vacation homes as well as virtually every other type of secured loan can be stripped down during Chapter 13 proceedings.
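The mechanics described above are simple enough to express as a short calculation. The sketch below is purely illustrative: the dollar figures, the 10 percent unsecured payout rate, and the function name are assumptions made up for this example, not figures from the proposed legislation or from Fitzpatrick and Thomson's paper.

```python
def stripdown(mortgage_balance, collateral_value, unsecured_payout_rate):
    """Split a mortgage into secured and unsecured portions under a stripdown.

    The secured claim is capped at the current value of the collateral; the
    remainder becomes an unsecured claim paid at the same proportional rate
    as the debtor's other unsecured debts.
    """
    secured_claim = min(mortgage_balance, collateral_value)
    unsecured_claim = mortgage_balance - secured_claim
    expected_recovery = unsecured_claim * unsecured_payout_rate
    return secured_claim, unsecured_claim, expected_recovery


# Hypothetical case: a $300,000 mortgage on a home now worth $200,000, with
# unsecured creditors receiving 10 cents on the dollar under the plan.
secured, unsecured, recovery = stripdown(300_000, 200_000, 0.10)
print(secured, unsecured, recovery)  # 200000 100000 10000.0
```

On these assumed numbers, the lender's secured claim shrinks to the home's current value and most of the remaining $100,000 is written off, which is why lenders describe stripdowns as shifting losses from borrowers to lenders.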
Those opposing stripdown legislation fear an increase in mortgage interest rates, apparently in response to any increase in loan modifications rather than to the stripdown itself. The unintended consequences of this, they argue, might be to make homeownership less affordable and accessible to low and moderate income families. Opponents also cite the possibility of an avalanche of Chapter 13 filings should stripdowns become law in the midst of the current financial crisis. Lenders have been the most vocal of opponents, arguing that stripdowns would shift losses from borrowers to lenders, give bankruptcy judges too much discretion, and that such shifting is unfair in that it changes the rules of contracts after the fact.
The economists maintain that such arguments are best viewed against the empirical evidence from the actual experience with stripdowns done under the Bankruptcy Judges, United States Trustees, and Family Farmer Bankruptcy Act of 1986. This legislation established a separate chapter in the U.S. Bankruptcy Code, Chapter 12, intended solely for farmers. The legislation was passed in response to an agricultural and bank crisis in the 1980s and originally had a sunset provision, but worked well enough that it was twice extended and then made permanent in 2005.
The agricultural lending crisis had some strong parallels with the more recent home lending meltdown as well as, Fitzpatrick and Thomson point out, some distinct differences, and many of the claims and concerns expressed in the current debate were central to the debate over Chapter 12.
The agricultural lending crisis started in the 1970s when US farm exports rose more than fivefold, from $8.24 billion to $43.78 billion, in the nine-year period starting in 1972. This led to a dramatic rise in commodity prices and farm incomes over that time period. Net farm income peaked at over $27 billion in 1979, a rise of 41 percent over the decade.
It was a typical boom-bust scenario: When prices for their goods were rising, farms expanded and farm real estate prices increased significantly; in Iowa, for example, the price of farm land more than quadrupled from 1970 to 1982. But, while demand for their products had increased sharply in the early 1970s, farmers watched it fall almost as fast in the late 1970s and early 80s. With the drop in demand and price for products the demand and price for land fell too. That Iowa land lost nearly two-thirds of its value in five years, and the same thing happened nationally. The average price of farmland increased more than 350 percent by 1982 then fell by more than a third in the next five years.
As land prices went up, so did agricultural debt loads, as many farmers borrowed to acquire additional acreage. Cash-short and expecting increased income, many farmers used variable-rate notes to purchase real estate. Caught up in the boom, lenders eased underwriting standards, relying on the continued appreciation of the land for security rather than on the ability of the farmers to service their debt. But as prices and cash flows decreased and the variable-rate notes used to purchase farm real estate reset, many farmers saw their interest rates increase, found that they could not make payments, and were underwater on their mortgages.
Farmland values peaked in 1981 in the Midwest, where the land-price appreciation had been the greatest, and declined by as much as 49 percent over the next few years before bottoming out in 1987. Farm-sector debt quadrupled from the early 1970s through the mid-1980s. Debt declined by one-third from 1984 through 1987, but much of this reduction reflected the liquidation of farms.
Many farmers, especially in the South and Midwest, were underwater with their agricultural loans and were in danger of losing their primary residences with little relief possible under the existing bankruptcy laws. Chapter 13 did not allow for modification of debt secured by a primary residence, and Chapter 11, intended for corporations, was too complex for most small and medium sized farmers and also contained provisions that made a stripdown problematic.
Some states enacted moratoriums on foreclosures, but these provided only temporary relief given the underlying economic factors (does any of this sound familiar yet?) and left many farmers unable to service their debt and with almost no possibility of renegotiating their secured loans.
Fitzpatrick and Thomson point out that, unlike in the current foreclosure crisis, the troubled debt then was highly concentrated in a few Farm Credit Banks, Farmer Mac and commercial banks in the affected regions. Nonetheless, these agriculturally related banks began to fail in 1984 and accounted for a third of all bank failures between 1983 and 1987. This led to the Chapter 12 legislation and its related stripdown provisions. Despite the same arguments we hear today, Congress permitted stripdowns for farmers because voluntary modification efforts, even when subsidized by the government, did not lead agricultural lenders to negotiate loan modifications.
The actual negative impact of the legislation was minor. Even though the new section of the Bankruptcy Code was created specifically for farmers, it did not change the cost and availability of farm credit dramatically. In fact, a United States General Accounting Office (1989) survey of a small group of bankers found that none of them raised interest rates to farmers by more than 50 basis points. The economists say that while this rate change may have been a response to Chapter 12, it is also consistent with increasing risk premiums due to the economic environment, and they suggest that the changes in the cost and availability of farm credit after the bankruptcy reform differed little from what would be expected in that economic environment, absent reform.
The Commentary says, "What was most interesting about Chapter 12 is that it worked without working. According to studies by Robert Collender (1993) and Jerome Stam and Bruce Dixon (2004), instead of flooding bankruptcy courts, Chapter 12 drove the parties to make private loan modifications. In fact, although the U.S. General Accounting Office reports that more than 30,000 bankruptcy filings were expected the year Chapter 12 went into effect, only 8,500 were filed in the first two years. Since then, Chapter 12 bankruptcy filings have continued to fall."
Despite the controversy that accompanied Chapter 12 and is stirring around the idea of a stripdown authority today, economists say that the "effects of the stripdown provision, in place for more than two decades, on the availability and terms of agricultural credit suggest that there has been little if any economically significant impact on the cost and availability of that credit." They do, however, point out some significant differences between the agricultural foreclosure crisis of the 1980s and the current home foreclosure crisis.
"First, the structure of the underlying loan markets is different. Unlike mortgages today, few if any of the farm loans in the 1980s were sold or securitized. Moreover, there was more direct government involvement in agricultural loan markets in the 1980s than there was in the mortgage markets leading up to the current housing crisis. Finally, the scale of the current foreclosure crisis is several times larger than the 1980s agricultural crisis, which was limited geographically to the Midwest and Great Plains states. Yet, despite these differences, the response to the farm foreclosure crisis and the impact of bankruptcy reform on agricultural credit markets is still informative for the current debate."
1) ARM is some kind of open-source processor architecture right? It seems like everybody and nobody owns ARM. From my understanding, the ARM company just engineers the cores/architecture and licenses that out to other companies to fab. They kind of act like the processor R&D team at AMD or Intel for example. You can "buy" their cores to use on your SoC, or you can "buy" their architecture as a building block to engineer your own optimized cores.
Except for your first bit about it being open source and nobody owning it, that's essentially correct. The ARM people own the IP and license it out, either as complete core designs or as a license to use the ARM ISA.
2) Why/how did ARM gain such a strong foothold? Was it just because they were the first to make it to the lowest power usage arena? I have a hard time believing that AMD or Intel didn't see the need for this coming a long time ago when smartphones first started emerging and didn't think of something similar.
To a large extent, I think they were just in the right place at the right time. They had the right combination of performance and power usage, at exactly the time when the mobile device market was exploding.
Other lower-power processors were (and are) out there, like the PIC microcontroller line. But these did not have the compute horsepower to handle a smartphone. PICs get used a lot in appliance and automotive applications.
The MIPS processor line could've been what ARM is today, but back in the '90s they got acquired by former workstation/server vendor (and inventor of OpenGL) SGI, and subsequently spun back off again a few years later when SGI made the ill-fated decision to bet the farm on Itanium. IMO this little detour derailed any chances MIPS may have had of dominating the embedded market. They're still around (used in some consumer electronics devices like Blu-Ray players, set top boxes, and the PSP), but haven't managed to achieve the dominance that ARM has.
3) What's preventing others (AMD/Intel) from beating them? Is it just a big black box that nobody can reverse engineer? I have a hard time believing that since you can license their actual architecture like Apple did with the A6.
Beating who? ARM Holdings? They're not a semiconductor company, they are an IP licensing operation. They don't build actual chips, so they don't compete directly with Intel or AMD.
Coming up with a different (but equivalent from a performance per watt standpoint) RISC CPU design that doesn't use any ARM IP is certainly doable (especially for someone with deep pockets like Intel), but there's also a huge existing ecosystem for ARM development. Compilers, OSes, and APIs (Linux, Android, etc.) all exist today. If you rolled a new design from scratch you'd have to port or re-invent all of the support infrastructure too.
Doing it with x86 (to leverage the existing x86 ecosystem) is difficult, because x86 is a complicated ISA with a lot of excess baggage that isn't needed for mobile devices. Atom was Intel's attempt at this, but it was still too power hungry for the sort of applications ARM targets, and too wimpy for low-end laptops and netbooks.
If the world isn't making sense to you, you're either drinking too much or not drinking enough.
1 This publication contains summary statistics on causes of death for the general population, together with selected statistics on perinatal deaths. The registration of deaths is the responsibility of the individual state and territory Registrars of Births, Deaths and Marriages. As part of the registration process, information about the cause of death is supplied by the medical practitioner certifying the death or by a coroner. Other information about the deceased is supplied by a relative or other person acquainted with the deceased, or by an official of the institution where the death occurred. This information is provided to the Australian Bureau of Statistics (ABS) by individual Registrars for coding and compilation into aggregate statistics shown in this publication. In addition, the ABS supplements this data with information from the National Coroners Information Service (NCIS). Statistics of perinatal deaths for years prior to 1994 were published separately in Perinatal Deaths, Australia (cat. no. 3304.0).
SCOPE AND COVERAGE
2 The statistics in sections 1, 2 and 3 relate to the number of deaths registered, not those which actually occurred, in the years shown. About 4% to 6% of deaths occurring in one year are not registered until the following year or later. Statistics in section 4 relate to deaths by year of occurrence.
Tourism Related Deaths
3 The ABS deaths collection includes all deaths that occurred and were registered in Australia including deaths of persons whose usual residence is overseas. Deaths of Australian residents that occurred outside Australia may be registered by individual Registrars, but are not included in ABS statistics.
Perinatal death statistics
4 The perinatal death statistics contained in this publication, unless otherwise stated, include all fetuses and infants delivered weighing at least 400 grams or (when birthweight is unavailable) the corresponding gestational age (20 weeks), whether alive or dead. This definition recognises the availability of reliable 400 grams/20 weeks data from all state and territory Registrars of Births, Deaths and Marriages. The ABS has adopted the legal requirement for registration of a perinatal death as the statistical standard as it meets the requirements of major users in Australia.
5 For 1996 and previous editions of this publication, data relating to perinatal deaths were based upon the World Health Organization (WHO) recommended definition for compiling national perinatal statistics. The WHO definition of perinatal deaths included infants and fetuses weighing at least 500 grams or having a gestational age of 22 weeks or body length of 25 centimetres crown-heel.
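The difference between the two inclusion rules can be expressed as a simple decision function. The following sketch is illustrative only; it assumes that birthweight is used when reported and gestational age only as a fallback, as described above, and it omits the body-length criterion of the earlier WHO definition.

```python
def in_scope(birthweight_g=None, gestation_weeks=None, pre_1997_who_rule=False):
    """Return True if a delivery falls within scope of the perinatal statistics.

    ABS rule (1997 onwards): at least 400 grams or, when birthweight is
    unavailable, at least 20 weeks gestation.
    WHO-based rule (used up to 1996): at least 500 grams or, when birthweight
    is unavailable, at least 22 weeks gestation (body length ignored here).
    """
    min_weight, min_weeks = (500, 22) if pre_1997_who_rule else (400, 20)
    if birthweight_g is not None:
        return birthweight_g >= min_weight
    if gestation_weeks is not None:
        return gestation_weeks >= min_weeks
    return False  # neither item reported; such records are not handled here


print(in_scope(birthweight_g=450))                          # True under the ABS rule
print(in_scope(birthweight_g=450, pre_1997_who_rule=True))  # False under the WHO-based rule
print(in_scope(gestation_weeks=21))                         # True (no birthweight recorded)
```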
6 The birth statistics used to calculate the perinatal and neonatal death rates in this publication are shown in Appendix 3. Appendix tables A3.1-A3.3 detail registered live birth statistics and stillbirth statistics adjusted to exclude infants who are known to have weighed under 400 grams. Such births are identified from the medical certificate of perinatal death, which records birthweight. Appendix table A3.4 shows similar adjusted information but it is based on the 500 grams definition.
7 The adjusted birth statistics differ from the birth statistics used to derive the infant death rates in this publication. The statistics used to calculate infant death rates include all registered live births regardless of birthweight. These statistics are shown in tables A2.1 of Appendix 2.
8 The adjusted birth statistics also differ from the statistics published in Births, Australia (cat. no. 3301.0), which are unadjusted for birthweight, i.e. have not had births known to have weighed less than 400 grams excluded. For years 1993 to 1996, births which occurred in Other Territories were excluded from adjusted live births used in calculating perinatal rates.
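The practical effect of paragraphs 6 to 8 is that different denominators are used for different rates. The sketch below shows the two calculations side by side; the counts are invented for illustration, and it assumes the conventional expression of both rates per 1,000 (all registered live births for the infant death rate, adjusted live births plus stillbirths for the perinatal death rate).

```python
def rate_per_1000(deaths, births):
    """Deaths per 1,000 of the relevant birth denominator."""
    return 1000.0 * deaths / births


# Invented counts, not actual ABS figures.
registered_live_births = 254_000   # all registered live births (Appendix 2)
adjusted_total_births = 255_500    # live births plus stillbirths, excluding
                                   # those known to weigh under 400 grams (Appendix 3)
infant_deaths = 1_200              # deaths of infants under one year of age
perinatal_deaths = 2_100           # stillbirths plus neonatal deaths in scope

print(round(rate_per_1000(infant_deaths, registered_live_births), 1))    # 4.7
print(round(rate_per_1000(perinatal_deaths, adjusted_total_births), 1))  # 8.2
```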
STATISTICS FOR STATES AND TERRITORIES
9 Cause of death statistics for states and territories in this publication have been compiled in respect of the state or territory of usual residence of the deceased, regardless of where in Australia the death occurred and was registered. The state or territory of usual residence for a perinatal death is determined by the state or territory of usual residence of the mother.
10 Statistics compiled on a state or territory of registration basis are available on request.
11 The Australian Standard Geographical Classification versions used since 1993 have a category 'Other Territories' comprising Jervis Bay, Christmas Island and Cocos (Keeling) Islands. In the past, Jervis Bay was included with Australian Capital Territory and the two island Territories were included in Off-Shore Areas and Migratory. From 1997, statistics for 'Other Territories' are included in the Australian totals.
CAUSE OF DEATH CLASSIFICATION USED
12 The tenth revision of the International Classification of Diseases and Health Related Problems (ICD-10) was adopted for Australian use for deaths registered from 1 January 1999. However, to identify changes between the ninth and tenth revisions, deaths for 1997 and 1998 were coded to both revisions.
13 The extensive nature of the ICD enables classification of causes of death at various levels of detail. For the purpose of this publication, two summary classifications are used. They are:
- the ICD at the chapter level (with further disaggregation for major causes of death).
- selected Causes of Death for age groups.
14 Tables 1.1, 1.3, 2.2, 2.3 and 4.1 present statistics at the ICD chapter level with further disaggregation for major causes of death. Background on this summary classification is given in Volume 1 of the ICD.
15 Tables 1.2 and 1.4 present data for main causes of death for age groups. For each age group, a summary classification of the selected causes of death relevant to the age group has been used. These consist of causes of death significant in that age group, at the chapter level, with further disaggregation below the chapter level where appropriate.
16 To enable the reader to see the relationship between the various summary classifications used in this publication, all tables show in brackets the ICD codes which constitute the causes of death covered.
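Operationally, each summary classification amounts to mapping a death's ICD-10 code to a broader grouping defined by code ranges. The sketch below illustrates the idea with a handful of ranges mentioned elsewhere in these notes; it is a simplified illustration, not the full classification used in the tables.

```python
# Illustrative subset of ICD-10 code ranges mentioned in these notes.
SUMMARY_GROUPS = [
    ("B20", "B24", "HIV disease (AIDS-related)"),
    ("P00", "P96", "Certain conditions originating in the perinatal period"),
    ("W00", "W19", "Falls"),
    ("X60", "X84", "Intentional self-harm"),
    ("X85", "Y09", "Assault"),
]


def classify(icd10_code):
    """Map a three-character ICD-10 category (e.g. 'X67') to a summary group."""
    category = icd10_code[:3].upper()
    for start, end, label in SUMMARY_GROUPS:
        # String comparison is sufficient here because every range keeps the
        # same letter-plus-two-digits pattern.
        if start <= category <= end:
            return label
    return "Other causes (outside this illustrative subset)"


print(classify("X70"))    # Intentional self-harm
print(classify("W06"))    # Falls
print(classify("B21.9"))  # HIV disease (AIDS-related)
```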
17 As ICD-9 did not directly accommodate the coding of Acquired Immune Deficiency Syndrome (AIDS) and AIDS-related deaths, cases where AIDS was the underlying cause were coded to ICD-9 deficiency of cell-mediated immunity (279.1), from 1988 to 1995. In 1996, ABS adopted ICD-9 Clinically Modified (CM) for coding of AIDS and AIDS-related deaths. Hence, for 1996 to 1998, all AIDS-related deaths (i.e. deaths where AIDS was mentioned in any place on the death certificate) were coded to HIV infection (042-044). ICD-10 adopted from 1999 allows for the coding of AIDS and AIDS-related deaths (B20-B24).
18 All data in this publication refer to AIDS-related deaths rather than only those deaths where AIDS is the underlying cause. Hence in table 1.1 and 1.3, AIDS-related deaths differ from the data provided for all other causes in that table since for all other causes, only data for underlying cause are given.
19 For perinatal deaths, both the main condition in the fetus/infant, and the main condition in the mother are coded to the full four-digit level of the tenth revision of ICD. Causes selected for publication in this issue are those categories which were responsible for a significant proportion of perinatal deaths.
EXTERNAL CAUSES OF DEATH
20 Deaths that are classified as External Causes are generally of the kind that are reported to coroners for investigation. Although what constitutes a reportable death varies across jurisdictions, they are generally reported in circumstances such as:
- Where the person died unexpectedly and the cause of death is unknown;
- Where the person died in a violent or unnatural manner;
- Where the person died during or as a result of an anaesthetic;
- Where the person was 'held in care' or in custody immediately before they died; and
- Where the identity of the person who has died is unknown.
21 Where an accidental or violent death occurs, the underlying cause is classified according to the circumstances of the fatal injury, rather than the nature of the injury which is coded separately.
22 In compiling causes of death statistics, the ABS employs a variety of quality control measures which include:
- providing certifiers with certification booklets for guidance in reporting cause of death on medical certificates;
- seeking additional information, where necessary, from medical practitioners, from coroners and from the National Coroners Information Service (NCIS);
- check-coding of cause of death; and
- editing checks at the individual record and aggregate levels.
23 The quality of causes of death coding can be affected by changes in the way information is reported by certifiers, by lags in completion of coroner cases and the processing of the findings. While changes in reporting and lags in coronial processes can affect coding of all causes of death, those coded to Chapter XX: External causes of morbidity and mortality are more likely to be affected because the code assigned within the chapter may vary depending on the coroner's findings.
Specific Issues for 2004 data
24 Care should be taken in interpreting results in recent years for the following areas within Chapter XX: External causes of morbidity and mortality. In regard to the impacts on quality resulting from lags in finalising coronial processes, ABS is investigating options for revising deaths data to capture more complete cause of death information.
25 Falls (W00-W19) - To reduce risk factors for falls in nursing homes in Victoria, all deaths where the medical certificate mentions falls are now referred to the coroner for verification, and the Coroner Clinical Liaison Service implemented a falls awareness campaign mid 2003. The number of deaths due to falls recorded in Victoria increased significantly in 2003 (up 50%) and again in 2004 (more than double the 2003 recorded level), whereas in previous years the deaths may have been attributed to other causes such as hypostatic pneumonia.
Analysing small numbers
26 Perinatals (P00-P96) - There is some variability over time across a range of the perinatal death categories and where the numbers are small, caution should be applied in drawing inferences about change over time. In particular, the number of deaths coded to Disorders related to short gestation and low birth weight, not elsewhere classified (P07) more than doubled between 2003 and 2004.
Quality affected by delays
27 Suicide (X60-X84) - There has been an increase in recent years in the number of open coroners' cases. Where cases are not finalised and the findings are not available to the ABS in time for publication of causes of death statistics, deaths are coded to other accidental, ill-defined or unspecified causes rather than suicide. The causes of death statistics are not revised once a coronial enquiry is finalised.
28 The number of deaths coded to Intentional self-harm (suicides) has declined in recent years which may in part reflect the increase in open coroners' cases when the statistics were finalised.
29 Assault (X85-Y09) - The increase in the number of coroners' cases not closed at the time the ABS finalised the 2004 deaths file is expected to have contributed significantly to the 41% decline in the number of deaths coded as due to assaults in 2004.
30 All states and territories have provision for the identification of Indigenous deaths on their death registration forms. However, the coverage of deaths identified as Indigenous varies across states and territories and over time. This publication presents, in table 1.6, Indigenous deaths data for 2004 for all states and territories except Victoria, Tasmania and the Australian Capital Territory, which are not separately published due to a combination of comparatively small numbers, and relatively low coverage, of reported Indigenous deaths. A higher proportion of Indigenous deaths than non-Indigenous deaths are due to external causes, so users should refer to Explanatory note 20 when interpreting 2004 data.
31 In fulfilling its functions the ABS collects information in pursuance of sections 10 and 11 of the Census and Statistics Act. Once supplied to the ABS this information is deemed to have been "furnished in pursuance of the Act" and is protected by the secrecy provisions of section 19 of the Act.
32 The provisions of subsection 12(2) of the Act place a requirement on the Statistician to publish and disseminate statistics, but not in a manner that is likely to enable the identification of a particular person or organisation.
33 To maintain the confidentiality of individuals, affected cells are replaced with np.
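The publication does not spell out here which cells are 'affected', so the sketch below simply illustrates the mechanics of the substitution, assuming a hypothetical small-count threshold; it is not a description of the actual ABS confidentialisation rules.

```python
def confidentialise(counts, threshold=3):
    """Replace small non-zero counts with 'np' (not available for publication).

    The threshold of 3 is a made-up value used only for this illustration.
    """
    return {
        label: ("np" if 0 < count < threshold else count)
        for label, count in counts.items()
    }


deaths_by_cause = {"Cause A": 154, "Cause B": 2, "Cause C": 0}
print(confidentialise(deaths_by_cause))
# {'Cause A': 154, 'Cause B': 'np', 'Cause C': 0}
```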
34 Appendix 2 provides details of the number of live births registered which have been used to calculate the infant death rates shown in this publication. Appendix 3 provides data on adjusted births used for calculating perinatal death rates. These also enable further rates to be calculated.
35 The ABS publications draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated: without it, the wide range of statistics published by the ABS would not be available. Information received and collected by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.
36 Other ABS products which may be of interest include:
Statistics - electronic data available at www.abs.gov.au.
Australian Social Trends, cat. no. 4102.0 - issued annually
Births, Australia, cat. no. 3301.0 - issued annually
Causes of Deaths, Australia: Summary Tables, cat. no. 3303.0.55.001 - issued irregularly
Causes of Infant and Child Deaths, Australia, 1982-96, cat. no. 4398.0 - issued irregularly - discontinued
Deaths, Australia, cat. no. 3302.0 - issued annually
Deaths due to Diseases and Cancers of the Respiratory System, Australia, 1979-1994, cat. no. 3314.0 - issued irregularly
Drug Induced Deaths, cat. no. 3321.0.55.001 - single issue
Information Paper: Drug-induced Deaths - A Guide to ABS Causes of Death Data, cat. no. 4809.0 - single issue
Information Paper: Multiple Cause of Death Analysis, cat. no. 3319.0.55.001 - issued irregularly
Mortality Atlas Australia 1997-2000, cat. no. 3318.0 - single issue
Suicides, Australia, 1921-1998, cat. no. 3309.0 - issued irregularly
Suicides: Recent Trends Australia, 1993-2003, cat. no. 3309.0.55.001 - issued irregularly
37 The ABS has a web based information service called Statistics (previously known as AusStats) which provides the ABS full standard product range on line. It allows you to conveniently access a large range of ABS statistical and reference information, free of charge. It also includes companion data in multidimensional datasets in SuperTable format, and time series spreadsheets.
38 Current publications and other products released by the ABS are listed in the Catalogue of Publications and Products (cat. no. 1101.0). The catalogue is available from any ABS office or the ABS web site at <http://www.abs.gov.au>. The ABS also issues a daily Release Advice on the web site which details products to be released in the week ahead.
39 As well as the statistics included in this and related publications, additional information is available from the ABS web site at <http://www.abs.gov.au> by accessing Themes/Health.
DATA AVAILABLE ON REQUEST
40 More detailed cause of death information is available upon request from the ABS. This information can comprise standard tables (see Appendix 1) or customised tabulations (by hardcopy or electronic media). Unit record files are available to approved users upon application. Generally, a charge is made for providing information upon request.
41 Perinatal tabulations for Australia based on national (see Explanatory Notes, paragraph 4 and 5) and international definitions are available upon request. The WHO international definition comprises all fetuses and infants (who die within seven days of birth) weighing at least 1,000 grams or (when birthweight is unavailable) having the corresponding gestational age (28 weeks) or body length (35 centimetres crown-heel). A charge is made for providing this information.
42 For more information about cause of death statistics or data concepts contact the National Information Service on 1300 135 070.
EFFECTS OF ROUNDING
43 Where figures have been rounded, discrepancies may occur between totals and sums of the component items.
An ethics training specific for European public health
© Camps et al. 2015
Received: 23 June 2015
Accepted: 13 August 2015
Published: 25 August 2015
Training in public health ethics is not at the core of public health programmes in Europe. The fruitful progress of the United States could stimulate the European schools of public health and other academic institutions to develop specifically European teaching programmes for ethics that embrace both transatlantic innovations and some adaptations based on the evolution of moral values in European societies. This paper reviews the arguments for a European public health ethics curriculum and recommends the main features of such a programme. Europe shares common values and, above all, the three major ethical principles that were socially and politically crystallized by the French Revolution: liberty, equality, and fraternity. Fraternity, otherwise known as solidarity, although rarely mentioned in the literature on ethical issues, is the moral value that best defines the European concept of public health, expressed as a common good, mutual aid, and a collective or shared responsibility for the health of the population. Specific political motivations were responsible for the origin of European health systems and for current policy proposals led by the European Union, such as Europe's commitments, at least in theory, to reduce social inequities in health and to develop the health in all policies approach. These and other initiatives, albeit not exclusively European, have political and legal repercussions that pose unique ethical challenges. Europe combines homogeneity in social determinants of health with heterogeneity in public health approaches and interventions. It is therefore necessary to develop training in ethics and good government for all public health workers in Europe, including those in non-EU countries of the European Region, especially since a large segment of the population's health depends on actions and decisions adopted by the European Commission and its regulatory agencies. Based on these arguments, the paper concludes with several recommendations for a common nucleus for the ethics curriculum in Europe.
Keywords: Public health ethics; Curriculum; Europe
The influence of ethics on public health is by no means a new issue, although it has not generated either an operative deontology or such a specific application as research ethics or clinical ethics. However, the application of ethical considerations in the clinical fields most closely related to healthcare has led to a development that, in Gostin's words, is generated by, for, and in public health.
This development affects the different areas of action in public health, which particularly include, according to Callahan & Jennings: 1) the protection and promotion of health (including the prevention of disease); 2) etiological and evaluative research (epidemiology and other types); and, 3) unjust and avoidable social inequities that, according to Upshur, are related above all to the professional dimension and its social and political legitimacy. All of this has a marked orientation towards advocacy of the collective dimension of the population’s health based on equity and social justice and whose practical application must be sufficiently critical to take adequate advantage of the strengths of each of its contributions.
There is thus no doubt about the relevance of ethics in public health and therefore the need for the corresponding training, an issue that was addressed in the previous chapter. Another matter is whether there are currently enough experienced teachers (most of whom, like the bibliography, are of North American origin) and whether a specifically European initiative would be useful. We owe to Dawson & Verweij some of the most prominent conceptual considerations of ethics applied to public health as an academic discipline, specifically in relation to bioethics. They are prominent pioneers in public health ethics issues, proposing a form of ethics that transcends individual considerations in order to consider collective interventions that protect and foster a population's health. It is not, however, a case of tracking the geographic or cultural origins of the foundations of public health ethics, but rather of considering whether it would be useful to design and develop specifically European teaching programmes for ethics and public health and, if so, why.
As most European schools of public health do not include ethics in their training programmes, at least not in any generalized fashion, and as some of the available programmes adopt a perspective that is closer to clinical bioethics than the community approach that typifies public health, now is the right time to design a training programme.
Although it should take into account the positive experience of the model ethics curriculum observed by North American schools, this design should do more than merely copy it. It is necessary to consider the wide variety of subjects included in the ethics programmes of North American schools of public health and then incorporate the innovations, together with some adaptations based on the evolution of moral values in European societies. Although ethical foundations and perspectives are universal, the dilemmas faced have their local particularities. For example, regulations for the use of safety belts and helmets illustrate the differences between the expectations, preferences, and values of Europeans and North Americans, which also affect vaccination policies and the rights of immigrants, among other issues.
Below, therefore, are arguments that justify the proposal and that, in observance of the suggestions made by Maecklberghe & Schröder-Bäck, share the philosophical perspective of ethics and both the academic and professional view of public health.
Historical and philosophical arguments
Europe shares a rich cultural history, and its most prominent common values include those that typify the core values of public health and, above all, the three major ethical principles that were socially and politically crystallized by the French Revolution: liberty, equality, and fraternity. People's liberties and rights regarding health matters were finally recognized in the Nuremberg Code of 1947. This code is a set of research ethics principles for human experimentation that resulted from the Nuremberg Trials against the doctors involved in the human experiments in concentration camps as part of the Nazi programs of genocide. The Nuremberg Code laid the foundations for present-day bioethics and the defence of the principle of autonomy. Equality is currently included in most European texts that advocate the right to health protection and universal healthcare. And fraternity, otherwise known as solidarity, is the basis for the welfare state policies shared by European countries.
Solidarity is the moral value that best defines the European concept of public health and is expressed as a common good, mutual aid, and a collective or shared responsibility for health. Considering that vulnerability is associated with poor health, public health ethics places special emphasis on the needy sectors of society and on reducing social inequalities. These models are the inspiration for the characteristic welfare state model in European countries, which includes a universal public healthcare system, social policies to reduce health inequalities and community prevention programmes.
This understanding of solidarity contrasts with the values that we usually associate with the individualism and personal behaviours that, at least apparently, typify health policy in the United States of America. Many Europeans find it hard to understand how there can be forty-five million people in that country who do not have medical insurance, which is why we applaud Obamacare. Meanwhile, in Europe we tend to feel proud of the solidarity at the core of European health policy, which is extended to social security systems to support the unemployed and the elderly. Paradoxically, solidarity is rarely mentioned in the literature on the ethical issues that arise in public health policy and practice as described by Dawson and Jennings, who suggest that taking solidarity seriously will enrich our approaches to public health ethics.
Although European health systems, based on solidarity, are currently in crisis due to budget cuts, the ageing of the population, and also a recent tendency for over-individualism (partly caused, paradoxical though it may sound, by the welfare state itself), the achievement of a sustainable welfare state and public health policies is an ethical imperative supported by fundamental rights. We therefore need to rethink the welfare state model that we want, as well as the measures that should be taken to strengthen solidarity between people.
European institutions have, however, shown signs of their commitment to solidarity, which is characteristic of the European culture with regard to health matters. Examples of this are the European Council's insistence in June 2008 on reducing the differences in terms of healthcare and life expectancy between and within member states, the EU's Health Strategy, which encourages work to continue on reducing health inequalities, the Announcement by the 2008 Commission on the Renewed Social Agenda, which reasserted Europe's fundamental ethical objectives in relation to opportunities, equal access, and solidarity; and the 2009 Announcement by the Commission of the European Communities, aimed at the European Parliament, the European Council, the European Economic and Social Committee, and the Committee of the Regions, titled Solidarity in health: reducing health inequalities in the EU. Non-EU countries of the WHO European Region are in transition but are influenced by EU and WHO standards.
Europe’s commitment to reducing inequalities in social factors that influence health was reflected in the report by the WHO Commission on Social Determinants of Health, supported by certain European presidencies and by the WHO Regional Office for Europe itself, although its demand for specific consideration of public health ethics has yet to be sufficiently reflected in political practice [20–22].
Another European initiative was that by the Finnish Government to highlight health as one of the priorities for public policy, thus reinforcing the lead set by the Ottawa Charter. This is a perspective that, albeit not exclusively European, has had its greatest political and legal repercussions in Europe [25, 26]. Ten years ago, it was noted that there was a need to evaluate the impact on health of interventions in sectors such as urban planning, industry and employment, as well as the public relevance that achieving their introduction to European policy could have. This public health focus obligates us to balance healthcare directly against other social values and to evaluate different political options that have heterogeneous social and economic repercussions. These are challenges, such as water fluoridation and the prevention of injuries, that could benefit from analysis using public health ethics approaches.
Other common challenges are those derived from immigration from outside and within the EU, the free movement of people throughout European territory, health tourism in the EU and policies for the prevention of epidemics and possible natural and other catastrophes. Immigration is at the core of relevant ethical issues, since it is central to the rhetoric of the neo-fascist and xenophobic political parties that are far from being residual in Europe. As Lindert et al. remind us, public health has to learn the lessons of ethical failures in Europe, and the public health community has an obligation – in the words of McKee – to "not remain silent when others seek to divide us from our fellow human beings" and not, as Lindert states, to "humiliate, separate and murder the 'others'" [28, 29].
In Europe, the determinant agents of population health are, despite their differences, fairly homogeneous. Its health services also share the purposes of the welfare state, assuring (through different organizations) a public and accessible supply of health services. Neither is the focus of public health very different in the different European countries, thanks to the work of European health agencies. In this regard, it is worth highlighting the alliance established between the primary care and public health mechanisms in countries like the United Kingdom and Spain, where community health is a common objective of both institutions. The Nuffield Council reported some joint work experiences involving ethics scholars and public health professionals. We should also cite work such as that conducted by the ethics and public health work group at the Spanish Society for Public Health and Health Administration (SESPAS) which, in collaboration with the Grífols Foundation, maintains a platform for joint exchange and deliberation that has already produced tangible results [33, 34].
Beyond the European homogeneities in terms of vision and organization, there are situations and conflicts regarding health issues in Europe that will benefit from an ethical approach. Heterogeneities can enrich the focuses of public health ethics and, when they express conflicts, they can provide a stimulus for ethical considerations to help find balanced solutions. Prominent among these is the diversity of vaccination policies in Europe. The European Centre for Disease Prevention and Control (ECDC), the European Union agency for infectious diseases, provides regular information on the highly diverse national vaccination policies and makes policy recommendations. Despite this, in some cases, there is major heterogeneity in health policy, as is the case with the varicella vaccination, for example. Four countries recommend population-based administration during infancy, including Germany and some regions of Italy and Spain, while other countries like Poland, the United Kingdom, France, and Spain only recommend it for susceptible adolescents and risk groups. Finally, Holland, Sweden, and Norway make no recommendations at all. There are many values and interests involved in the conflicts surrounding the adoption of decisions in each country, along with a wide variety of adopted solutions, and these are worth scrutinizing in terms of ethics. Spain and the United Kingdom, for example, have very different policies. The vaccination is barely administered at all in the countries that do not recommend it in infancy, except in Spain, where a de facto alliance between the production company and various scientific societies has, through administration in private healthcare, achieved coverage of 40%, which represents a risk for the non-vaccinated population, who could be infected at older ages when there is a greater risk of complications. In the United Kingdom, cost-effectiveness studies have produced inconclusive results and the regulator recommended such a low price for the vaccine that the company decided not to market it. The same company, when it failed to be included on the Spanish vaccine schedule, sold it on the private market at the highest price in Europe and achieved very high sales by imitating the strategy initiated by companies producing other vaccinations. Meanwhile, in public health, each country decides on its investments in accordance with different values. For example, for some it is enough for a vaccination to be cost-effective for it to be included on schedules, without considering the opportunity cost, i.e. the value that the investment would have in other health and social areas or even the value that it could provide for other public health interventions. It is in this diverse setting that the population and its free mobility comes into play, which sometimes demands the right to vaccination and rejects measures such as those adopted in Spain to withdraw the vaccination from public sale.
The attitude of European populations to regulations that restrict personal freedom, such as the use of safety belts by drivers and passengers or making helmets compulsory for motorcyclists, is one of general acceptance and would seem worthy of a positive evaluation, while for some North American and European populations such initiatives are an example of unacceptable paternalism.
The right to healthcare in Europe is another matter. Although European healthcare models have much in common, the different focuses represent different limitations both among European citizens, depending on what countries they live in, and with respect to people from non-EU countries, the situation of undocumented immigrants being particularly noteworthy. The European Directive on the application of patients' rights in cross-border healthcare revealed the difficulties imposed on Europe due to the diversity of methods for organizing healthcare. This might be why the preamble stated that no provision of this Directive should be interpreted in such a way as to undermine the fundamental ethical choices of Member States. The ways that these choices are reflected in the treatment of undocumented immigrants are currently putting the values on which European states are based and those of international agreements to the test. Beyond legal issues, the fact is that these challenges are unique and there is no historic background of collaboration in terms of healthcare comparable with what is happening in Europe, so an approach from public health ethics could be highly useful and, consequently, should be considered in the training of people who will work in key positions in European health services.
Meanwhile, European society's response to its healthcare problems pervades, at least rhetorically, its exterior actions. A glance at the situation confirms that the problems and quandaries arising in global healthcare (intellectual property, emergency action as opposed to action on underlying determinants of health; technological focuses as opposed to basic needs; the fight against disease as opposed to public health services; etc.) require an approach that would benefit from the ethical application of public health from a specifically European perspective.
Even the problems with poor government, such as corruption, tend to be shared. European public health has to deal with the influence of lobby groups and corporations on political decision-making that affects the European population's healthcare, meaning there is a need for coordinated work. It is therefore necessary to develop training in ethics and good government for all public health workers in Europe, especially since a large amount of the population's health depends on actions and decisions adopted by the European Commission and its regulatory agencies. This should also apply to non-EU European Region countries.
The political and legislative particularities, as well as those in relation to healthcare, that distinguish the European Region’s population from other populations, justify Europe having its own design that spans a common nucleus for European schools as a whole, which will in turn contribute to the current process of political construction of the new Europe. This nucleus should be complemented by tackling dilemmas of an ethical nature that on a local level are generating conflict between individual and community interests in the different geographical and social scenarios of the continent.
▪ Familiarize graduates with the basic concepts of ethics and of political and moral philosophy sustained by public health ethics.
▪ Foster sensitivity to ethics and the acquisition of criteria for the application of ethical considerations among public health professionals in such a way that ethical arguments are integrated into the design and practice of all public health interventions.
▪ Improve students’ ability to recognize any ethical tensions and conflicts associated with public health interventions.
▪ Provide information to develop skills to enable students to apply ethical values to the analysis of dilemmas and conflicts and, if relevant, to making practical decisions.
▪ Facilitate public health professionals’ ability to reflect on their own moral convictions in relation to other health agents and affected populations in a way that fosters debate and negotiation.
▪ Respect rules and standards of professional conduct that specifically affect research procedures, which include respect for privacy and confidentiality, as well as the declaration of interests that might sometimes be involved in conflicts.
▪ Know and understand the most relevant ethics history, theories and concepts for public health, prominent among which are autonomy, paternalism, induced interventions, individual and collective responsibility, respect for dignity and discrepancy and human rights, as well as the most significant aspects of the history of ethics and its applications, not forgetting the cases of misuse of the principles of public health for political purposes and outright mass murder.
▪ Know and understand the criteria and international evidence based “best practices” that enable work by professionals to be qualified as good practice in relation to personal information, confidentiality, privacy or conflicts of interests and in general the ethical dimensions of creating strategies and designing and implementing any public health interventions, as well as those that affect the behaviour of professionals when it comes to assuming personal or institutional responsibilities.
▪ Know and understand the nature and characteristics of ethics committees in the field of healthcare and the ethical requirements of for funding or publication of any research project in the field of public health.
▪ Identify and recognize the ethical dimensions and aspects of certain public health policies, strategies and interventions.
▪ Include the basic principles of ethics in the creation and design of public health strategies, and in non-discriminatory approaches with respect to the target populations and in the management of human resources.
▪ Respect and assume ethical principles with regard to data protection and confidentiality in relation to any information obtained when exercising one’s professional duties.
▪ Maintain relations with the system of ethical committees in one’s own country in relation to public health research projects.
▪ Bear in mind all the characteristics that may influence ethical dilemmas in the field of public health in Europe.
In short, this is a potential initiative for the useful development of ethics in, by and for European public health, and not only within the European Union. Given the differences among policies in the European states, this diversity requires public health practitioners to build the competencies needed to learn from and understand their neighbours’ approaches.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Noun
- Plural of movie
- The cinema
- I'm taking my husband to the movies for our anniversary
Film is a term that encompasses individual motion pictures, the field of film as an art form, and the motion picture industry. Films are produced by recording images from the world with cameras, or by creating images using animation techniques or special effects.
Films are cultural artifacts created by specific cultures, which reflect those cultures, and, in turn, affect them. Film is considered to be an important art form, a source of popular entertainment and a powerful method for educating — or indoctrinating — citizens. The visual elements of cinema give motion pictures a universal power of communication. Some movies have become popular worldwide attractions by using dubbing or subtitles that translate the dialogue. Traditional films are made up of a series of individual images called frames. When these images are shown rapidly in succession, a viewer has the illusion that motion is occurring. The viewer cannot see the flickering between frames due to an effect known as persistence of vision, whereby the eye retains a visual image for a fraction of a second after the source has been removed. Viewers perceive motion due to a psychological effect called beta movement.
The origin of the name "film" comes from the fact that photographic film (also called film stock) had historically been the primary medium for recording and displaying motion pictures. Many other terms exist for an individual motion picture, including picture, picture show, photo-play, flick, and most commonly, movie. Additional terms for the field in general include the big screen, the silver screen, the cinema, and the movies.
History
In the 1860s, mechanisms for producing artificially created, two-dimensional images in motion were demonstrated with devices such as the zoetrope and the praxinoscope. These machines were outgrowths of simple optical devices (such as magic lanterns) and would display sequences of still pictures at sufficient speed for the images on the pictures to appear to be moving, a phenomenon called persistence of vision. Naturally, the images needed to be carefully designed to achieve the desired effect — and the underlying principle became the basis for the development of film animation.
With the development of celluloid film for still photography, it became possible to directly capture objects in motion in real time. Early versions of the technology sometimes required a person to look into a viewing machine to see the pictures which were separate paper prints attached to a drum turned by a handcrank. The pictures were shown at a variable speed of about 5 to 10 pictures per second depending on how rapidly the crank was turned. Some of these machines were coin operated. By the 1880s, the development of the motion picture camera allowed the individual component images to be captured and stored on a single reel, and led quickly to the development of a motion picture projector to shine light through the processed and printed film and magnify these "moving picture shows" onto a screen for an entire audience. These reels, so exhibited, came to be known as "motion pictures". Early motion pictures were static shots that showed an event or action with no editing or other cinematic techniques.
Ignoring Dickson's early sound experiments (1894), commercial motion pictures were purely visual art through the late 19th century, but these innovative silent films had gained a hold on the public imagination. Around the turn of the twentieth century, films began developing a narrative structure by stringing scenes together to tell narratives. The scenes were later broken up into multiple shots of varying sizes and angles. Other techniques such as camera movement were realized as effective ways to portray a story on film. Rather than leave the audience in silence, theater owners would hire a pianist or organist or a full orchestra to play music fitting the mood of the film at any given moment. By the early 1920s, most films came with a prepared list of sheet music for this purpose, with complete film scores being composed for major productions.
The rise of European cinema was interrupted by the breakout of World War I while the film industry in United States flourished with the rise of Hollywood. However in the 1920s, European filmmakers such as Sergei Eisenstein, F. W. Murnau, and Fritz Lang, along with American innovator D. W. Griffith and the contributions of Charles Chaplin, Buster Keaton and others, continued to advance the medium. In the 1920s, new technology allowed filmmakers to attach to each film a soundtrack of speech, music and sound effects synchronized with the action on the screen. These sound films were initially distinguished by calling them "talking pictures", or talkies.
The next major step in the development of cinema was the introduction of so-called "natural" color. While the addition of sound quickly eclipsed silent film and theater musicians, color was adopted more gradually as methods evolved making it more practical and cost effective to produce "natural color" films. The public was relatively indifferent to color photography as opposed to black-and-white, but as color processes improved and became as affordable as black-and-white film, more and more movies were filmed in color after the end of World War II, as the industry in America came to view color as essential to attracting audiences in its competition with television, which remained a black-and-white medium until the mid-1960s. By the end of the 1960s, color had become the norm for film makers.
Since the decline of the studio system in the 1960s, the succeeding decades saw changes in the production and style of film. New Hollywood, French New Wave and the rise of film school educated independent filmmakers were all part of the changes the medium experienced in the latter half of the 20th century. Digital technology has been the driving force in change throughout the 1990s and into the 21st century.
Film theory seeks to develop concise and systematic concepts that apply to the study of film as art. It was started by Ricciotto Canudo's The Birth of the Sixth Art. Formalist film theory, led by Rudolf Arnheim, Béla Balázs, and Siegfried Kracauer, emphasized how film differed from reality, and thus could be considered a valid fine art. André Bazin reacted against this theory by arguing that film's artistic essence lay in its ability to mechanically reproduce reality not in its differences from reality, and this gave rise to realist theory. More recent analysis spurred by Lacan's psychoanalysis and Ferdinand de Saussure's semiotics among other things has given rise to psychoanalytical film theory, structuralist film theory, feminist film theory and others.
Film criticism is the analysis and evaluation of films. In general, these works can be divided into two categories: academic criticism by film scholars and journalistic film criticism that appears regularly in newspapers and other media.
Film critics working for newspapers, magazines, and broadcast media mainly review new releases. Normally they only see any given film once and have only a day or two to formulate opinions. Despite this, critics have an important impact on films, especially those of certain genres. Mass marketed action, horror, and comedy films tend not to be greatly affected by a critic's overall judgment of a film. The plot summary and description of a film that makes up the majority of any film review can still have an important impact on whether people decide to see a film. For prestige films such as most dramas, the influence of reviews is extremely important. Poor reviews will often doom a film to obscurity and financial loss.
The impact of a reviewer on a given film's box office performance is a matter of debate. Some claim that movie marketing is now so intense and well financed that reviewers cannot make an impact against it. However, the cataclysmic failure of some heavily-promoted movies which were harshly reviewed, as well as the unexpected success of critically praised independent movies indicates that extreme critical reactions can have considerable influence. Others note that positive film reviews have been shown to spark interest in little-known films. Conversely, there have been several films in which film companies have so little confidence that they refuse to give reviewers an advanced viewing to avoid widespread panning of the film. However, this usually backfires as reviewers are wise to the tactic and warn the public that the film may not be worth seeing and the films often do poorly as a result.
It is argued that journalist film critics should only be known as film reviewers, and true film critics are those who take a more academic approach to films. This line of work is more often known as film theory or film studies. These film critics attempt to come to understand how film and filming techniques work, and what effect they have on people. Rather than having their works published in newspapers or appear on television, their articles are published in scholarly journals, or sometimes in up-market magazines. They also tend to be affiliated with colleges or universities.
The making and showing of motion pictures became a source of profit almost as soon as the process was invented. Upon seeing how successful their new invention, and its product, was in their native France, the Lumières quickly set about touring the Continent to exhibit the first films privately to royalty and publicly to the masses. In each country, they would normally add new, local scenes to their catalogue and, quickly enough, found local entrepreneurs in the various countries of Europe to buy their equipment and photograph, export, import and screen additional product commercially. The Oberammergau Passion Play of 1898 was the first commercial motion picture ever produced. Other pictures soon followed, and motion pictures became a separate industry that overshadowed the vaudeville world. Dedicated theaters and companies formed specifically to produce and distribute films, while motion picture actors became major celebrities and commanded huge fees for their performances. Already by 1917, Charlie Chaplin had a contract that called for an annual salary of one million dollars.
In the United States today, much of the film industry is centered around Hollywood. Other regional centers exist in many parts of the world, such as Mumbai-centered Bollywood, the Indian film industry's Hindi cinema which produces the largest number of films in the world. Whether the ten thousand-plus feature length films a year produced by the Valley pornographic film industry should qualify for this title is the source of some debate. Though the expense involved in making movies has led cinema production to concentrate under the auspices of movie studios, recent advances in affordable film making equipment have allowed independent film productions to flourish.
Profit is a key force in the industry, due to the costly and risky nature of filmmaking; many films have large cost overruns, a notorious example being Kevin Costner's Waterworld. Yet many filmmakers strive to create works of lasting social significance. The Academy Awards (also known as "the Oscars") are the most prominent film awards in the United States, providing recognition each year to films, ostensibly based on their artistic merits.
There is also a large industry for educational and instructional films made in lieu of or in addition to lectures and texts.
Preview
A preview performance refers to a showing of a movie to a select audience, usually for the purposes of corporate promotions, before the public film premiere itself. Previews are sometimes used to judge audience reaction, which, if unexpectedly negative, may result in recutting or even refilming certain sections. (cf Audience response.)
Trailer
Trailers or previews are film advertisements for films that will be exhibited in the future at a cinema, on whose screen they are shown. The term "trailer" comes from their having originally been shown at the end of a film programme. That practice did not last long, because patrons tended to leave the theater after the films ended, but the name has stuck. Trailers are now shown before the film (or the A movie in a double feature program) begins.
The nature of the film determines the size and type of crew required during filmmaking. Many Hollywood adventure films need computer generated imagery (CGI), created by dozens of 3D modellers, animators, rotoscopers and compositors. However, a low-budget, independent film may be made with a skeleton crew, often paid very little. Also, an open source film may be produced through open, collaborative processes. Filmmaking takes place all over the world using different technologies, styles of acting and genre, and is produced in a variety of economic contexts that range from state-sponsored documentary in China to profit-oriented movie making within the American studio system.
A typical Hollywood-style filmmaking production cycle comprises five main stages: development, pre-production, production, post-production, and distribution.
This production cycle typically takes three years. The first year is taken up with development. The second year comprises preproduction and production. The third year, post-production and distribution.
A film crew is a group of people hired by a film company, employed during the "production" or "photography" phase, for the purpose of producing a film or motion picture. Crew are distinguished from cast, the actors who appear in front of the camera or provide voices for characters in the film. The crew interacts with but is also distinct from the production staff, consisting of producers, managers, company representatives, their assistants, and those whose primary responsibility falls in pre-production or post-production phases, such as writers and editors. Communication between production and crew generally passes through the director and his/her staff of assistants. Medium-to-large crews are generally divided into departments with well defined hierarchies and standards for interaction and cooperation between the departments. Other than acting, the crew handles everything in the photography phase: props and costumes, shooting, sound, electrics (i.e., lights), sets, and production special effects. Caterers (known in the film industry as "craft services") are usually not considered part of the crew.
Technology
Film stock consists of transparent celluloid, acetate, or polyester base coated with an emulsion containing light-sensitive chemicals. Cellulose nitrate was the first type of film base used to record motion pictures, but due to its flammability was eventually replaced by safer materials. Stock widths and the film format for images on the reel have had a rich history, though most large commercial films are still shot on (and distributed to theaters as) 35 mm prints.
Originally, moving picture film was shot and projected at various speeds using hand-cranked cameras and projectors; though 1000 frames per minute (16⅔ frame/s) is generally cited as a standard silent speed, research indicates most films were shot between 16 frame/s and 23 frame/s and projected from 18 frame/s on up, and reels often included instructions on how fast each scene should be shown (see http://www.cinemaweb.com/silentfilm/bookshelf/18_car_1.htm). When sound film was introduced in the late 1920s, a constant speed was required for the sound head; 24 frames per second was chosen because it was the slowest (and thus cheapest) speed that allowed for sufficient sound quality. Improvements since the late 19th century include the mechanization of cameras, allowing them to record at a consistent speed; quiet camera design, allowing sound recorded on-set to be usable without requiring large "blimps" to encase the camera; more sophisticated film stocks and lenses, allowing directors to film in increasingly dim conditions; and synchronized sound, allowing sound to be recorded at exactly the same speed as its corresponding action. The soundtrack can be recorded separately from shooting the film, but for live-action pictures many parts of the soundtrack are usually recorded simultaneously.
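To make the frame-rate arithmetic above concrete, the short sketch below converts between frame counts, frame rates, and running time, and shows why footage shot at a silent-era rate looks sped up when projected at 24 frame/s. It is an illustrative calculation only; the specific numbers are examples and not from the original article.

```python
# Frame-rate arithmetic: running time and apparent speed-up.

def running_time_seconds(total_frames, fps):
    """Duration of a reel when projected at the given frame rate."""
    return total_frames / fps

silent_fps = 1000 / 60        # "1000 frames per minute" = 16 2/3 frame/s
sound_fps = 24                # standard sound-film speed

# One minute of action shot at ~16.7 frame/s...
frames = round(silent_fps * 60)                  # 1000 frames
# ...lasts only about 41.7 s when projected at 24 frame/s,
# so the motion appears roughly 1.44x faster than life.
print(running_time_seconds(frames, sound_fps))   # ~41.7 seconds
print(sound_fps / silent_fps)                    # ~1.44 speed-up factor
```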
As a medium, film is not limited to motion pictures, since the technology developed as the basis for photography. It can be used to present a progressive sequence of still images in the form of a slideshow. Film has also been incorporated into multimedia presentations, and often has importance as primary historical documentation. However, historic films have problems in terms of preservation and storage, and the motion picture industry is exploring many alternatives. Most movies on cellulose nitrate base have been copied onto modern safety films. Some studios save color films through the use of separation masters — three B&W negatives each exposed through red, green, or blue filters (essentially a reverse of the Technicolor process). Digital methods have also been used to restore films, although their continued obsolescence cycle makes them (as of 2006) a poor choice for long-term preservation. Film preservation of decaying film stock is a matter of concern to both film historians and archivists, and to companies interested in preserving their existing products in order to make them available to future generations (and thereby increase revenue). Preservation is generally a higher-concern for nitrate and single-strip color films, due to their high decay rates; black and white films on safety bases and color films preserved on Technicolor imbibition prints tend to keep up much better, assuming proper handling and storage.
Some films in recent decades have been recorded using analog video technology similar to that used in television production. Modern digital video cameras and digital projectors are gaining ground as well. These approaches are extremely beneficial to moviemakers, especially because footage can be evaluated and edited without waiting for the film stock to be processed. Yet the migration is gradual, and as of 2005 most major motion pictures are still recorded on film.
Independent filmmaking often takes place outside of Hollywood, or other major studio systems. An independent film (or indie film) is a film initially produced without financing or distribution from a major movie studio. Creative, business, and technological reasons have all contributed to the growth of the indie film scene in the late 20th and early 21st century.
On the business side, the costs of big-budget studio films also lead to conservative choices in cast and crew. There is a trend in Hollywood towards co-financing (over two-thirds of the films put out by Warner Bros. in 2000 were joint ventures, up from 10% in 1987). A hopeful director is almost never given the opportunity to get a job on a big-budget studio film unless he or she has significant industry experience in film or television. Also, the studios rarely produce films with unknown actors, particularly in lead roles.
Before the advent of digital alternatives, the cost of professional film equipment and stock was also a hurdle to being able to produce, direct, or star in a traditional studio film. The cost of 35 mm film is outpacing inflation: in 2002 alone, film negative costs were up 23%, according to Variety.
Although most animation studios are now using digital technologies in their productions, there is a specific style of animation that depends on film. Cameraless animation, made famous by moviemakers like Norman McLaren, Len Lye and Stan Brakhage, is painted and drawn directly onto pieces of film, and then run through a projector.
Venues
When it is initially produced, a feature film is often shown to audiences in a movie theater or cinema. The first theater designed exclusively for cinema opened in Pittsburgh, Pennsylvania in 1905. Thousands of such theaters were built or converted from existing facilities within a few years. In the United States, these theaters came to be known as nickelodeons, because admission typically cost a nickel (five cents).
Typically, one film is the featured presentation (or feature film). Before the 1970s, there were "double features"; typically, a high quality "A picture" rented by an independent theater for a lump sum, and a "B picture" of lower quality rented for a percentage of the gross receipts. Today, the bulk of the material shown before the feature film consists of previews for upcoming movies and paid advertisements (also known as trailers or "The Twenty").
Historically, all mass marketed feature films were made to be shown in movie theaters. The development of television has allowed films to be broadcast to larger audiences, usually after the film is no longer being shown in theaters. Recording technology has also enabled consumers to rent or buy copies of films on VHS or DVD (and the older formats of laserdisc, VCD and SelectaVision — see also videodisc), and Internet downloads may be available and have started to become revenue sources for the film companies. Some films are now made specifically for these other venues, being released as made-for-TV movies or direct-to-video movies. The production values on these films are often considered to be of inferior quality compared to theatrical releases in similar genres, and indeed, some films that are rejected by their own studios upon completion are distributed through these markets.
The movie theater pays an average of about 50-55% of its ticket sales to the movie studio, as film rental fees. The actual percentage starts with a number higher than that, and decreases as the duration of a film's showing continues, as an incentive to theaters to keep movies in the theater longer. However, today's barrage of highly marketed movies ensures that most movies are shown in first-run theaters for less than 8 weeks. There are a few movies every year that defy this rule, often limited-release movies that start in only a few theaters and actually grow their theater count through good word-of-mouth and reviews. According to a 2000 study by ABN AMRO, about 26% of Hollywood movie studios' worldwide income came from box office ticket sales; 46% came from VHS and DVD sales to consumers; and 28% came from television (broadcast, cable, and pay-per-view).
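As a toy illustration of the splits described above, the sketch below applies a declining weekly rental rate to a hypothetical run and restates the ABN AMRO revenue mix. The ticket grosses and the week-by-week percentages are invented; only the general shape (a studio share that starts above 55% and declines, plus the 26/46/28 split) comes from the text.

```python
# Illustrative split of box-office revenue between theater and studio.
# Weekly grosses and rental rates below are hypothetical examples.

weekly_gross = [900_000, 600_000, 400_000, 250_000]   # ticket sales, USD
rental_rate = [0.70, 0.60, 0.50, 0.40]                 # studio share per week

studio_take = sum(g * r for g, r in zip(weekly_gross, rental_rate))
theater_take = sum(weekly_gross) - studio_take
print(f"studio: ${studio_take:,.0f}  theater: ${theater_take:,.0f}")
print(f"average rental rate: {studio_take / sum(weekly_gross):.0%}")  # 60%

# Worldwide studio revenue mix reported in the 2000 ABN AMRO study:
mix = {"box office": 0.26, "VHS/DVD": 0.46, "television": 0.28}
assert abs(sum(mix.values()) - 1.0) < 1e-9
```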
Future state
While motion picture films have been around for more than a century, film is still a relative newcomer in the pantheon of fine arts. In the 1950s, when television became widely available, industry analysts predicted the demise of local movie theaters. Despite competition from television's increasing technological sophistication over the 1960s and 1970s, such as the development of color television and large screens, motion picture cinemas continued. In the 1980s, when the widespread availability of inexpensive videocassette recorders enabled people to select films for home viewing, industry analysts again wrongly predicted the death of the local cinemas.
In the 1990s and 2000s the development of digital DVD players, home theater amplification systems with surround sound and subwoofers, and large LCD or plasma screens enabled people to select and view films at home with greatly improved audio and visual reproduction. These new technologies provided audio and visual that in the past only local cinemas had been able to provide: a large, clear widescreen presentation of a film with a full-range, high-quality multi-speaker sound system. Once again industry analysts predicted the demise of the local cinema. Local cinemas will be changing in the 2000s and moving towards digital screens, a new approach which will allow for easier and quicker distribution of films (via satellite or hard disks), a development which may give local theaters a reprieve from their predicted demise.
The cinema now faces a new challenge from home video in the form of Blu-ray, a newer disc format that can provide full HD 1080p playback at near cinema quality. Video formats are gradually catching up with the resolution and quality that film offers: Blu-ray's 1080p provides a pixel resolution of 1920×1080, a leap from the DVD's 720×480 and the paltry 330×480 offered by the first home video standard, VHS. The maximum resolutions that film currently offers are 2485×2970 or 1420×3390. UHD, a future digital video format, will offer a massive resolution of 7680×4320, surpassing all current film resolutions. The only viable competitor to these new innovations is IMAX, which can play film content at an extreme 10000×7000 resolution.
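The resolution figures quoted above are easier to compare as total pixel counts; the short sketch below does that arithmetic, simply restating the numbers from the paragraph.

```python
# Compare the formats mentioned above by total pixel count.
formats = {
    "VHS":       (330, 480),
    "DVD":       (720, 480),
    "Blu-ray":   (1920, 1080),
    "35mm scan": (2485, 2970),
    "UHD":       (7680, 4320),
    "IMAX":      (10000, 7000),
}

blu_ray_pixels = 1920 * 1080
for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name:10s} {w}x{h} = {pixels / 1e6:6.2f} Mpx "
          f"({pixels / blu_ray_pixels:5.2f}x Blu-ray)")
# UHD has exactly 16x the pixels of Blu-ray; IMAX roughly 34x.
```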
Despite the rise of all new technologies, the development of the home video market and a surge of online piracy, 2007 was a record year in film that showed the highest ever box-office grosses. Many expected film to suffer as a result of the effects listed above but it has flourished, strengthening film studio expectations for the future.
- Reel Women: Pioneers of the Cinema, 1896 to the Present
- Glorious Technicolor: The Movies' Magic Rainbow
- Theories of Cinema, 1945-1995
- Animation Unlimited: Innovative Short Films Since 1940
- Film: An International Bibliography
- The Oxford Guide to Film Studies
- New Hollywood Cinema: An Introduction
- Complete Anime Guide: Japanese Animation Film Directory and Resource Guide
- Celluloid Mavericks: A History of American Independent Film
- The Oxford History of World Cinema
- Reel Racism: Confronting Hollywood's Construction of Afro-American Culture
- Africa Shoots Back: Alternative Perspectives in Sub-Saharan Francophone African Film
- Film as a Subversive Art
- All Movie Guide - Information on films: actors, directors, biographies, reviews, cast and production credits, box office sales, and other movie data.
- Film Site - Reviews of classic films
- The Internet Movie Database (IMDb) - Information on current and historical films and cast listings.
- Rottentomatoes.com - Movie reviews, previews, forums, photos, cast info, and more.
The role of epigenetic processes in the control of gene expression has been known for a number of years. DNA methylation at cytosine residues is of particular interest for epigenetic studies as it has been demonstrated to be both a long lasting and a dynamic regulator of gene expression. Efforts to examine epigenetic changes in health and disease have been hindered by the lack of high-throughput, quantitatively accurate methods. With the advent and popularization of next-generation sequencing (NGS) technologies, these tools are now being applied to epigenomics in addition to existing genomic and transcriptomic methodologies. For epigenetic investigations of cytosine methylation where regions of interest, such as specific gene promoters or CpG islands, have been identified and there is a need to examine significant numbers of samples with high quantitative accuracy, we have developed a method called Bisulfite Amplicon Sequencing (BSAS). This method combines bisulfite conversion with targeted amplification of regions of interest, transposome-mediated library construction and benchtop NGS. BSAS offers a rapid and efficient method for analysis of up to 10 kb of targeted regions in up to 96 samples at a time that can be performed by most research groups with basic molecular biology skills. The results provide absolute quantitation of cytosine methylation with base specificity. BSAS can be applied to any genomic region from any DNA source. This method is useful for hypothesis testing studies of target regions of interest as well as confirmation of regions identified in genome-wide methylation analyses such as whole genome bisulfite sequencing, reduced representation bisulfite sequencing, and methylated DNA immunoprecipitation sequencing.
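As a minimal illustration of the kind of quantitation BSAS yields, the sketch below tallies unconverted (C) versus converted (T) base calls at known CpG positions in aligned amplicon reads and reports percent methylation per site. The read strings, CpG coordinates, and function name are hypothetical placeholders, not part of the published protocol.

```python
# Minimal sketch: per-CpG percent methylation from bisulfite amplicon reads.
# Assumes reads are already aligned to the amplicon reference and that the
# listed positions are cytosines of CpG dinucleotides on the sequenced strand.

from collections import Counter

def percent_methylation(reads, cpg_positions):
    """Return {position: percent methylated} for each CpG site.

    In bisulfite-converted DNA, unmethylated C reads as T after PCR,
    while methylated C remains C, so %mC = C / (C + T) at each site.
    """
    results = {}
    for pos in cpg_positions:
        counts = Counter(read[pos] for read in reads if len(read) > pos)
        c, t = counts.get("C", 0), counts.get("T", 0)
        results[pos] = 100.0 * c / (c + t) if (c + t) else float("nan")
    return results

# Hypothetical toy data: three reads covering two CpG sites (positions 2 and 7).
reads = ["ATCGATTCGA",
         "ATTGATTCGA",
         "ATCGATTTGA"]
print(percent_methylation(reads, cpg_positions=[2, 7]))
# -> roughly 66.7% at each site (two of three reads retain C)
```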
21 Related JoVE Articles
Determination of DNA Methylation of Imprinted Genes in Arabidopsis Endosperm
Institutions: Saint Louis University.
Arabidopsis is an excellent model organism for studying epigenetic mechanisms. One of the reasons is that the loss-of-function null mutant of DNA methyltransferases is viable, thus providing a system to study how loss of DNA methylation in a genome affects growth and development. Imprinting refers to differential expression of maternal and paternal alleles and plays an important role in reproductive development in both mammals and plants. DNA methylation is critical for determining whether the maternal or paternal allele of an imprinted gene is expressed or silenced. In flowering plants, there is a double fertilization event in reproduction: one sperm cell fertilizes the egg cell to form the embryo and a second sperm fuses with the central cell to give rise to endosperm. Endosperm is the tissue where imprinting occurs in plants. MEDEA, a SET domain Polycomb group gene, and FWA, a transcription factor regulating flowering, are the first two genes shown to be imprinted in endosperm, and their expression is controlled by DNA methylation and demethylation in plants. In order to determine the imprinting status of a gene and its methylation pattern in endosperm, we need to be able to isolate endosperm first. Since the seed is tiny in Arabidopsis, it remains challenging to isolate Arabidopsis endosperm and examine its methylation. In this video protocol, we report how to conduct a genetic cross, to isolate endosperm tissue from seeds, and to determine the methylation status by bisulfite sequencing.
Plant Biology, Issue 47, DNA methylation, imprinting, bisulfite sequencing, endosperm, Arabidopsis
Optimized Analysis of DNA Methylation and Gene Expression from Small, Anatomically-defined Areas of the Brain
Institutions: Max Planck Institute of Psychiatry.
Exposure to diet, drugs and early life adversity during sensitive windows of life1,2 can lead to lasting changes in gene expression that contribute to the display of physiological and behavioural phenotypes. Such environmental programming is likely to increase the susceptibility to metabolic, cardiovascular and mental diseases3,4.
DNA methylation and histone modifications are considered key processes in the mediation of the gene-environment dialogue and appear also to underlie environmental programming5. In mammals, DNA methylation typically comprises the covalent addition of a methyl group at the 5-position of cytosine within the context of CpG dinucleotides.
CpG methylation occurs in a highly tissue- and cell-specific manner, making it a challenge to study discrete, small regions of the brain where cellular heterogeneity is high and tissue quantity limited. Moreover, because gene expression and methylation are closely linked events, increased value can be gained by comparing both parameters in the same sample.
Here, a step-by-step protocol (Figure 1) for the investigation of epigenetic programming in the brain is presented using the 'maternal separation' paradigm of early life adversity for illustrative purposes. The protocol describes the preparation of micropunches from differentially-aged mouse brains from which DNA and RNA can be simultaneously isolated, thus allowing DNA methylation and gene expression analyses in the same sample.
Neuroscience, Issue 65, Genetics, Physiology, Epigenetics, DNA methylation, early-life stress, maternal separation, bisulfite sequencing
Detection of Histone Modifications in Plant Leaves
Institutions: RWTH Aachen University, RWTH Aachen University, Leibniz University.
Chromatin structure is important for the regulation of gene expression in eukaryotes. In this process, chromatin remodeling, DNA methylation, and covalent modifications on the amino-terminal tails of histones H3 and H4 play essential roles1-2. H3 and H4 histone modifications include methylation of lysine and arginine, acetylation of lysine, and phosphorylation of serine residues1-2. These modifications are associated either with gene activation, repression, or a primed state of gene that supports more rapid and robust activation of expression after perception of appropriate signals (microbe-associated molecular patterns, light, hormones, etc.)3-7.
Here, we present a method for the reliable and sensitive detection of specific chromatin modifications on selected plant genes. The technique is based on the crosslinking of (modified) histones and DNA with formaldehyde8,9, extraction and sonication of chromatin, chromatin immunoprecipitation (ChIP) with modification-specific antibodies9,10, de-crosslinking of histone-DNA complexes, and gene-specific real-time quantitative PCR. The approach has proven useful for detecting specific histone modifications associated with C4 photosynthesis in maize5,11 and systemic immunity in Arabidopsis3.
Molecular Biology, Issue 55, chromatin, chromatin immunoprecipitation, ChIP, histone modifications, PCR, plant molecular biology, plant promoter control, gene regulation
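The final step of the protocol summarized above is gene-specific real-time quantitative PCR on the immunoprecipitated chromatin. A common way to express such results is percent of input; the sketch below shows that calculation under the usual simplifying assumption of roughly 100% primer efficiency. The Ct values, input fraction, and target descriptions are hypothetical examples, not values from the article.

```python
# Percent-of-input calculation for ChIP-qPCR (assumes ~100% PCR efficiency,
# i.e. one Ct unit corresponds to a two-fold difference in template).

import math

def percent_input(ct_ip, ct_input, input_fraction):
    """ct_ip: Ct of the ChIP sample; ct_input: Ct of the saved input aliquot;
    input_fraction: fraction of chromatin kept as input (e.g. 0.01 for 1%)."""
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Hypothetical values: 1% input, a histone-mark ChIP at an active promoter
# versus a control region.
print(percent_input(ct_ip=24.0, ct_input=26.5, input_fraction=0.01))  # ~5.7% of input
print(percent_input(ct_ip=29.5, ct_input=26.5, input_fraction=0.01))  # ~0.13% of input
```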
Application of MassSQUIRM for Quantitative Measurements of Lysine Demethylase Activity
Institutions: University of Arkansas for Medical Sciences .
Recently, epigenetic regulators have been discovered as key players in many different diseases1-3. As a result, these enzymes are prime targets for small molecule studies and drug development4. Many epigenetic regulators have only recently been discovered and are still in the process of being classified. Among these enzymes are lysine demethylases which remove methyl groups from lysines on histones and other proteins. Due to the novel nature of this class of enzymes, few assays have been developed to study their activity. This has been a road block to both the classification and high throughput study of histone demethylases. Currently, very few demethylase assays exist. Those that do exist tend to be qualitative in nature and cannot simultaneously discern between the different lysine methylation states (un-, mono-, di- and tri-). Mass spectrometry is commonly used to determine demethylase activity but current mass spectrometric assays do not address whether differentially methylated peptides ionize differently. Differential ionization of methylated peptides makes comparing methylation states difficult and certainly not quantitative (Figure 1A). Thus available assays are not optimized for the comprehensive analysis of demethylase activity.
Here we describe a method called MassSQUIRM (mass spectrometric quantitation using isotopic reductive methylation) that is based on reductive methylation of amine groups with deuterated formaldehyde to force all lysines to be di-methylated, thus making them essentially the same chemical species and therefore ionize the same (Figure 1B). The only chemical difference following the reductive methylation is hydrogen and deuterium, which does not affect MALDI ionization efficiencies. The MassSQUIRM assay is specific for demethylase reaction products with un-, mono- or di-methylated lysines. The assay is also applicable to lysine methyltransferases giving the same reaction products. Here, we use a combination of reductive methylation chemistry and MALDI mass spectrometry to measure the activity of LSD1, a lysine demethylase capable of removing di- and mono-methyl groups, on a synthetic peptide substrate5. This assay is simple and easily amenable to any lab with access to a MALDI mass spectrometer in lab or through a proteomics facility. The assay has ~8-fold dynamic range and is readily scalable to plate format5.
Molecular Biology, Issue 61, LSD1, lysine demethylase, mass spectrometry, reductive methylation, demethylase quantification
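Because the deuterated reductive methylation described above makes all methylation states ionize equivalently, relative quantitation reduces to comparing MALDI peak areas for the mass-shifted species. The sketch below illustrates that bookkeeping only; the peak areas and labels are hypothetical and the mapping of species to states is simplified.

```python
# Simplified relative quantitation of lysine methylation states from MALDI
# peak areas after MassSQUIRM-style isotopic reductive methylation.
# Peak areas are hypothetical; after labeling, un-, mono- and di-methylated
# substrate appear as distinct, equally ionizing mass species.

peak_areas = {            # arbitrary units from a hypothetical spectrum
    "unmethylated": 1500.0,   # product carrying two deuterated methyl groups
    "monomethyl":   2500.0,   # one native plus one deuterated methyl group
    "dimethyl":     6000.0,   # two native methyl groups (unreacted substrate)
}

total = sum(peak_areas.values())
fractions = {state: area / total for state, area in peak_areas.items()}
for state, frac in fractions.items():
    print(f"{state:13s} {frac:.1%}")

# For an LSD1 reaction starting from fully dimethylated substrate, the fraction
# of demethylated product is simply everything that is no longer dimethyl:
print(f"demethylated: {1 - fractions['dimethyl']:.1%}")   # 40.0%
```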
Pyrosequencing: A Simple Method for Accurate Genotyping
Institutions: Washington University in St. Louis.
Pharmacogenetic research benefits first-hand from the abundance of information provided by the completion of the Human Genome Project. With such a tremendous amount of data available comes an explosion of genotyping methods. Pyrosequencing® is one of the most thorough yet simple methods to date used to analyze polymorphisms. It also has the ability to identify tri-allelic, indel, and short-repeat polymorphisms, along with determining allele percentages for methylation or pooled sample assessment. In addition, there is a standardized control sequence that provides internal quality control. This method has led to rapid and efficient single-nucleotide polymorphism evaluation including many clinically relevant polymorphisms. The technique and methodology of Pyrosequencing are explained.
Cellular Biology, Issue 11, Springer Protocols, Pyrosequencing, genotype, polymorphism, SNP, pharmacogenetics, pharmacogenomics, PCR
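Pyrosequencing reports allele percentages from the relative peak heights at the polymorphic dispensation; a minimal sketch of that calculation follows. The peak heights and the SNP alleles are invented for illustration and do not come from the article.

```python
# Allele frequency from Pyrosequencing peak heights at a biallelic SNP.
# Peak heights are hypothetical; in practice the instrument software applies
# additional corrections (e.g. for homopolymers and background).

def allele_percentages(peak_height_a, peak_height_b):
    total = peak_height_a + peak_height_b
    return (100.0 * peak_height_a / total, 100.0 * peak_height_b / total)

# Example: C/T SNP measured in a pooled DNA sample.
c_pct, t_pct = allele_percentages(peak_height_a=37.5, peak_height_b=62.5)
print(f"C allele: {c_pct:.1f}%  T allele: {t_pct:.1f}%")   # 37.5% / 62.5%

# The same arithmetic applies to methylation assays, where the C and T peaks
# at a CpG site after bisulfite conversion give percent methylation directly.
```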
Coculture Analysis of Extracellular Protein Interactions Affecting Insulin Secretion by Pancreatic Beta Cells
Institutions: University of California, San Diego, Janssen Research & Development, University of California, San Diego.
Interactions between cell-surface proteins help coordinate the function of neighboring cells. Pancreatic beta cells are clustered together within pancreatic islets and act in a coordinated fashion to maintain glucose homeostasis. It is becoming increasingly clear that interactions between transmembrane proteins on the surfaces of adjacent beta cells are important determinants of beta-cell function.
Elucidation of the roles of particular transcellular interactions by knockdown, knockout or overexpression studies in cultured beta cells or in vivo necessitates direct perturbation of mRNA and protein expression, potentially affecting beta-cell health and/or function in ways that could confound analyses of the effects of specific interactions. These approaches also alter levels of the intracellular domains of the targeted proteins and may prevent effects due to interactions between proteins within the same cell membrane from being distinguished from the effects of transcellular interactions.
Here a method for determining the effect of specific transcellular interactions on the insulin secreting capacity and responsiveness of beta cells is presented. This method is applicable to beta-cell lines, such as INS-1 cells, and to dissociated primary beta cells. It is based on coculture models developed by neurobiologists, who found that exposure of cultured neurons to specific neuronal proteins expressed on HEK293 (or COS) cell layers identified proteins important for driving synapse formation. Given the parallels between the secretory machinery of neuronal synapses and of beta cells, we reasoned that beta-cell functional maturation might be driven by similar transcellular interactions. We developed a system where beta cells are cultured on a layer of HEK293 cells expressing a protein of interest. In this model, the beta-cell cytoplasm is untouched while extracellular protein-protein interactions are manipulated. Although we focus here primarily on studies of glucose-stimulated insulin secretion, other processes can be analyzed; for example, changes in gene expression as determined by immunoblotting or qPCR.
Medicine, Issue 76, Cellular Biology, Molecular Biology, Biomedical Engineering, Immunology, Hepatology, Islets of Langerhans, islet, Insulin, Coculture, pancreatic beta cells, INS-1 cells, extracellular contact, transmembrane protein, transcellular interactions, insulin secretion, diabetes, cell culture
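Glucose-stimulated insulin secretion in coculture experiments like the one described above is often summarized as a stimulation index: insulin released at high glucose divided by insulin released at low glucose, each normalized to insulin content. The sketch below shows that calculation with invented numbers; it is not taken from the article.

```python
# Stimulation index for glucose-stimulated insulin secretion (GSIS).
# All measurements are hypothetical (e.g. ng insulin per well per hour,
# normalized to total insulin content per well).

def stimulation_index(secreted_low, secreted_high, content):
    """Fraction of content secreted at high glucose over fraction at low glucose."""
    return (secreted_high / content) / (secreted_low / content)

# INS-1 cells cultured on HEK293 layers expressing a candidate protein vs. control:
control = stimulation_index(secreted_low=2.0, secreted_high=6.0, content=400.0)
candidate = stimulation_index(secreted_low=2.0, secreted_high=11.0, content=410.0)
print(f"control SI: {control:.1f}   candidate SI: {candidate:.1f}")  # 3.0 vs 5.5
```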
Neo-Islet Formation in Liver of Diabetic Mice by Helper-dependent Adenoviral Vector-Mediated Gene Transfer
Institutions: Baylor College of Medicine , Baylor College of Medicine , Baylor College of Medicine .
Type 1 diabetes is caused by T cell-mediated autoimmune destruction of insulin-producing cells in the pancreas. Until now insulin replacement is still the major therapy, because islet transplantation has been limited by donor availability and by the need for long-term immunosuppression. Induced islet neogenesis by gene transfer of Neuogenin3 (Ngn3), the islet lineage-defining specific transcription factor and Betacellulin (Btc), an islet growth factor has the potential to cure type 1 diabetes.
Adenoviral vectors (Ads) are highly efficient gene transfer vectors; however, early generation Ads have several disadvantages for in vivo use. Helper-dependent Ads (HDAds) are the most advanced Ads that were developed to improve the safety profile of early generation Ads and to prolong transgene expression1. They lack chronic toxicity because they lack viral coding sequences2-5 and retain only Ad cis elements necessary for vector replication and packaging. This allows cloning of up to 36 kb genes.
In this protocol, we describe the method to generate HDAd-Ngn3 and HDAd-Btc and to deliver these vectors into STZ-induced diabetic mice. Our results show that co-injection of HDAd-Ngn3 and HDAd-Btc induces 'neo islets' in the liver and reverses hyperglycemia in diabetic mice.
Medicine, Issue 68, Genetics, Physiology, Gene therapy, Neurogenin3, Betacellulin, helper-dependent adenoviral vectors, Type 1 diabetes, islet neogenesis
Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production as many technological obstacles to scale up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and monosaccharide had no effect on the protein production level. Infiltrating plants under 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with Agrobacterium laboratory strain GV3101 showed the highest protein production compared to Agrobacteria laboratory strains LBA4404 and C58C1 and wild-type Agrobacteria strains at6, at10, at77 and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of target protein (influenza virus hemagglutinin).
Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria
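The protocol above relies on diluting an Agrobacterium culture grown in AB medium directly in Milli-Q water to the cell density used for vacuum infiltration. A simple C1V1 = C2V2 calculation covers that step; the OD values and volumes below are illustrative placeholders, not the article's working concentrations.

```python
# Dilution of an Agrobacterium culture to a target OD600 for vacuum infiltration.
# Uses C1*V1 = C2*V2; all numbers are illustrative examples.

def culture_volume_needed(od_culture, od_target, final_volume_ml):
    """Volume of overnight culture (ml) to dilute into water for infiltration."""
    return od_target * final_volume_ml / od_culture

culture_ml = culture_volume_needed(od_culture=1.8, od_target=0.3, final_volume_ml=2000)
water_ml = 2000 - culture_ml
print(f"add {culture_ml:.0f} ml culture to {water_ml:.0f} ml Milli-Q water")
# -> add 333 ml culture to 1667 ml water for 2 L of infiltration suspension
```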
Production and Use of Lentivirus to Selectively Transduce Primary Oligodendrocyte Precursor Cells for In Vitro Myelination Assays
Institutions: The University of Melbourne.
Myelination is a complex process that involves both neurons and the myelin-forming glial cells: oligodendrocytes in the central nervous system (CNS) and Schwann cells in the peripheral nervous system (PNS). We use an in vitro myelination assay, an established model for studying CNS myelination in vitro. To do this, oligodendrocyte precursor cells (OPCs) are added to purified primary rodent dorsal root ganglion (DRG) neurons to form myelinating co-cultures. In order to specifically interrogate the roles that particular proteins expressed by oligodendrocytes exert upon myelination, we have developed protocols that selectively transduce OPCs with lentivirus overexpressing wild-type, constitutively active or dominant-negative proteins before they are seeded onto the DRG neurons. This allows us to specifically interrogate the roles of these oligodendroglial proteins in regulating myelination. The protocols can also be applied to the study of other cell types, thus providing an approach that allows selective manipulation of proteins expressed by a desired cell type, such as oligodendrocytes, for the targeted study of signaling and compensation mechanisms. In conclusion, combining the in vitro myelination assay with lentivirally infected OPCs provides a strategic tool for the analysis of molecular mechanisms involved in myelination.
Developmental Biology, Issue 95, lentivirus, cocultures, oligodendrocyte, myelination, oligodendrocyte precursor cells, dorsal root ganglion neurons
Methylated DNA Immunoprecipitation
Institutions: BC Cancer Research Centre, BC Cancer Agency, University of British Columbia (UBC).
The identification of DNA methylation patterns is a common procedure in the study of epigenetics, as methylation is known to have significant effects on gene expression and is involved in normal development as well as disease [1-4]. Thus, the ability to discriminate between methylated DNA and non-methylated DNA is essential for generating methylation profiles for such studies. Methylated DNA immunoprecipitation (MeDIP) is an efficient technique for the extraction of methylated DNA from a sample of interest [5-7]. A sample of as little as 200 ng of DNA is sufficient for the antibody, or immunoprecipitation (IP), reaction. DNA is sonicated into fragments ranging in size from 300-1000 bp and is divided into immunoprecipitated (IP) and input (IN) portions. IP DNA is subsequently heat denatured and then incubated with anti-5'mC, allowing the monoclonal antibody to bind methylated DNA. After this, magnetic beads carrying a secondary antibody with affinity for the primary antibody are added and incubated. These bead-linked antibodies bind the monoclonal antibody used in the first step. DNA bound to the antibody complex (methylated DNA) is separated from the rest of the DNA by using a magnet to pull the complexes out of solution. Several washes with IP buffer are then performed to remove the unbound, non-methylated DNA. The methylated DNA/antibody complexes are then digested with Proteinase K to digest the antibodies, leaving only the methylated DNA intact. The enriched DNA is purified by phenol:chloroform extraction to remove the protein matter and then precipitated and resuspended in water for later use. PCR techniques can be used to validate the efficiency of the MeDIP procedure by analyzing the amplification products of IP and IN DNA for regions known to lack and known to contain methylated sequences. The purified methylated DNA can then be used for locus-specific (PCR) or genome-wide (microarray and sequencing) methylation studies, and is particularly useful when applied in conjunction with other research tools such as gene expression profiling and array comparative genome hybridization (CGH) [8]. Further investigation into DNA methylation will lead to the discovery of new epigenetic targets, which, in turn, may be useful in developing new therapeutic or prognostic research tools for diseases such as cancer that are characterized by aberrantly methylated DNA [2, 4, 9-11].
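As an aside, qPCR validation of an IP experiment like this is often expressed as percent recovery of input, computed from the Ct values of the IP and input fractions. The snippet below is only a generic sketch of that arithmetic; the input fraction and Ct values are made-up numbers, and the exact formula used should follow the qPCR protocol actually in hand.

```python
import math

def percent_recovery(ct_ip, ct_input, input_fraction=0.10):
    """Percent of starting material recovered in the IP fraction,
    assuming 100% PCR efficiency (a factor of 2 per cycle)."""
    # The input Ct was measured on only a fraction of the material,
    # so shift it to what 100% of the input would have given.
    ct_input_adjusted = ct_input - math.log2(1 / input_fraction)
    return 100 * 2 ** (ct_input_adjusted - ct_ip)

# Made-up numbers: a methylated region should recover far more than an unmethylated one.
print(round(percent_recovery(ct_ip=24.0, ct_input=26.0), 2))  # strong enrichment
print(round(percent_recovery(ct_ip=30.0, ct_input=26.0), 3))  # little recovery
```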
Cell Biology, Issue 23, DNA methylation, immunoprecipitation, epigenomics, epigenetics, methylcytosine, MeDIP protocol, 5-methylcytosine antibody, anti-5-methylcytosine, microarray
High Sensitivity 5-hydroxymethylcytosine Detection in Balb/C Brain Tissue
Institutions: New England Biolabs.
DNA hydroxymethylation is a long-known modification of DNA, but has recently become a focus of epigenetic research. Mammalian DNA is enzymatically modified at the 5th carbon position of cytosine (C) residues to 5-mC, predominantly in the context of CpG dinucleotides. 5-mC is amenable to enzymatic oxidation to 5-hmC by the Tet family of enzymes, which are believed to be involved in development and disease. Currently, the biological role of 5-hmC is not fully understood, but it is generating a lot of interest due to its potential as a biomarker. This interest follows several groundbreaking studies identifying 5-hydroxymethylcytosine in mouse embryonic stem (ES) and neuronal cells. Research techniques, including bisulfite sequencing methods, are unable to easily distinguish between 5-mC and 5-hmC. A few protocols exist that can measure global amounts of 5-hydroxymethylcytosine in the genome, including liquid chromatography coupled with mass spectrometry analysis or thin layer chromatography of single nucleosides digested from genomic DNA. Antibodies that target 5-hydroxymethylcytosine also exist, which can be used for dot blot analysis, immunofluorescence, or precipitation of hydroxymethylated DNA, but these antibodies do not have single-base resolution. In addition, resolution depends on the size of the immunoprecipitated DNA and, for microarray experiments, on probe design. Since it is unknown exactly where 5-hydroxymethylcytosine exists in the genome, or what its role in epigenetic regulation is, new techniques are required that can identify locus-specific hydroxymethylation.
The EpiMark 5-hmC and 5-mC Analysis Kit provides a solution for distinguishing between these two modifications at specific loci. It is a simple and robust method for the identification and quantitation of 5-methylcytosine and 5-hydroxymethylcytosine within a specific DNA locus. This enzymatic approach utilizes the differential methylation sensitivity of the isoschizomers MspI and HpaII in a simple 3-step protocol. Genomic DNA of interest is treated with T4-BGT, adding a glucose moiety to 5-hydroxymethylcytosine. This reaction is sequence-independent, therefore all 5-hmC will be glucosylated; unmodified or 5-mC-containing DNA will not be affected. This glucosylation is then followed by restriction endonuclease digestion. MspI and HpaII recognize the same sequence (CCGG) but are sensitive to different methylation states. HpaII cleaves only a completely unmodified site: any modification (5-mC, 5-hmC or 5-ghmC) at either cytosine blocks cleavage. MspI recognizes and cleaves 5-mC and 5-hmC, but not 5-ghmC. The third part of the protocol is interrogation of the locus by PCR. As little as 20 ng of input DNA can be used. Amplification of the experimental (glucosylated and digested) and control (mock glucosylated and digested) target DNA with primers flanking a CCGG site of interest (100-200 bp) is performed. If the CpG site contains 5-hydroxymethylcytosine, a band is detected after glucosylation and digestion, but not in the non-glucosylated control reaction. Real-time PCR will give an approximation of how much hydroxymethylcytosine is in this particular site. In this experiment, we analyze the 5-hydroxymethylcytosine content of a mouse Balb/C brain sample by end-point PCR.
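The interpretation logic of the enzyme pair can be summarised in a few lines of code. This is only an illustrative decision table based on the sensitivities described above, not part of the kit; the 5-hmC call shown here uses the MspI arm only.

```python
# Illustrative decision table for the MspI/HpaII read-out described above.
# Modification states of the internal C in a CCGG site:
#   "C" = unmodified, "5mC" = methylcytosine,
#   "5hmC" = hydroxymethylcytosine, "5ghmC" = glucosylated 5hmC (after T4-BGT)
def cuts(enzyme, state):
    if enzyme == "HpaII":
        return state == "C"                   # any modification blocks HpaII
    if enzyme == "MspI":
        return state in ("C", "5mC", "5hmC")  # only glucosyl-5hmC blocks MspI
    raise ValueError("unknown enzyme")

def pcr_band(state, glucosylated):
    """A PCR band across the CCGG site appears only if the site was NOT cut.
    HpaII is defined above for completeness; this 5-hmC call only needs MspI."""
    if glucosylated and state == "5hmC":
        state = "5ghmC"                       # T4-BGT converts 5hmC to 5ghmC
    return not cuts("MspI", state)

for state in ("C", "5mC", "5hmC"):
    print(state,
          "band with glucosylation:", pcr_band(state, glucosylated=True),
          "| band in mock control:", pcr_band(state, glucosylated=False))
# Only 5hmC gives a band after glucosylation + digestion but not in the mock reaction.
```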
Neuroscience, Issue 48, EpiMark, Epigenetics, 5-hydroxymethylcytosine, 5-methylcytosine, methylation, hydroxymethylation
DNA Methylation: Bisulphite Modification and Analysis
Institutions: Garvan Institute of Medical Research, University of NSW.
Epigenetics describes the heritable changes in gene function that occur independently of the DNA sequence. The molecular basis of epigenetic gene regulation is complex, but essentially involves modifications to the DNA itself or the proteins with which DNA associates. The predominant epigenetic modification of DNA in mammalian genomes is methylation of cytosine nucleotides (5-MeC). DNA methylation provides instruction to the gene expression machinery as to where and when the gene should be expressed. The primary target sequence for DNA methylation in mammals is 5'-CpG-3' dinucleotides (Figure 1). CpG dinucleotides are not uniformly distributed throughout the genome, but are concentrated in regions of repetitive genomic sequences and CpG "islands" commonly associated with gene promoters (Figure 1). DNA methylation patterns are established early in development, modulated during tissue-specific differentiation and disrupted in many disease states including cancer. To understand the biological role of DNA methylation and its role in human disease, precise, efficient and reproducible methods are required to detect and quantify individual 5-MeCs.
This protocol for bisulphite conversion is the "gold standard" for DNA methylation analysis and facilitates identification and quantification of DNA methylation at single-nucleotide resolution. The chemistry of cytosine deamination by sodium bisulphite involves three steps (Figure 2): (1) Sulphonation: the addition of bisulphite to the 5-6 double bond of cytosine; (2) Hydrolytic deamination: hydrolytic deamination of the resulting cytosine-bisulphite derivative to give a uracil-bisulphite derivative; (3) Alkali desulphonation: removal of the sulphonate group by alkali treatment, to give uracil. Bisulphite preferentially deaminates cytosine to uracil in single-stranded DNA, whereas 5-MeC is refractory to bisulphite-mediated deamination. Upon PCR amplification, uracil is amplified as thymine while 5-MeC residues remain as cytosines, allowing methylated CpGs to be distinguished from unmethylated CpGs by the presence of a cytosine "C" versus thymine "T" residue during sequencing.
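To make the read-out concrete, here is a minimal in-silico sketch (not part of the published protocol) of what bisulphite conversion does to a sequence: unmethylated cytosines are reported as T after PCR, while methylated cytosines remain C. The sequence and methylated position are invented for illustration.

```python
# Minimal in-silico illustration of the bisulphite conversion read-out.
# Unmethylated C -> U -> read as T after PCR; 5-MeC stays C.
def bisulphite_convert(seq, methylated_positions):
    """Return the sequence as it would read after bisulphite treatment and PCR,
    given a set of 0-based positions that carry 5-MeC."""
    out = []
    for i, base in enumerate(seq.upper()):
        if base == "C" and i not in methylated_positions:
            out.append("T")   # deaminated to uracil, amplified as thymine
        else:
            out.append(base)  # methylated C (or any other base) is unchanged
    return "".join(out)

original  = "ACGTCGAC"                        # two CpG sites: positions 1-2 and 4-5
converted = bisulphite_convert(original, methylated_positions={4})
print(converted)  # "ATGTCGAT": the methylated CpG keeps its C, all other Cs read as T
```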
DNA modification by bisulphite conversion is a well-established protocol that can be exploited for many methods of DNA methylation analysis. Since the detection of 5-MeC by bisulphite conversion was first demonstrated by Frommer et al. [1] and Clark et al. [2], methods based around bisulphite conversion of genomic DNA account for the majority of new data on DNA methylation. Different methods of post-PCR analysis may be utilized, depending on the degree of specificity and resolution of methylation required. Cloning and sequencing is still the most readily available method that can give single-nucleotide resolution for methylation across the DNA molecule.
Genetics, Issue 56, epigenetics, DNA methylation, Bisulphite, 5-methylcytosine (5-MeC), PCR
Single Oocyte Bisulfite Mutagenesis
Institutions: Schulich School of Medicine and Dentistry, University of Western Ontario; Children's Health Research Institute.
Epigenetics encompasses all heritable and reversible modifications to chromatin that alter gene accessibility, and thus are the primary mechanisms for regulating gene transcription [1]. DNA methylation is an epigenetic modification that acts predominantly as a repressive mark. Through the covalent addition of a methyl group onto cytosines in CpG dinucleotides, it can recruit additional repressive proteins and histone modifications to initiate processes involved in condensing chromatin and silencing genes [2]. DNA methylation is essential for normal development as it plays a critical role in developmental programming, cell differentiation, repression of retroviral elements, X-chromosome inactivation and genomic imprinting.
One of the most powerful methods for DNA methylation analysis is bisulfite mutagenesis. Sodium bisulfite is a DNA mutagen that deaminates cytosines into uracils. Following PCR amplification and sequencing, these conversion events are detected as thymines. Methylated cytosines are protected from deamination and thus remain as cytosines, enabling identification of DNA methylation at the individual nucleotide level [3]. Development of the bisulfite mutagenesis assay has advanced from those originally reported [4-6] towards ones that are more sensitive and reproducible [7]. One key advancement was embedding smaller amounts of DNA in an agarose bead, thereby protecting DNA from the harsh bisulfite treatment [8]. This enabled methylation analysis to be performed on pools of oocytes and blastocyst-stage embryos [9]. The most sophisticated bisulfite mutagenesis protocol to date is for individual blastocyst-stage embryos [10]. However, since blastocysts have on average 64 cells (containing 120-720 pg of genomic DNA), this method is not efficacious for methylation studies on individual oocytes or cleavage-stage embryos.
Taking clues from agarose embedding of minute DNA amounts, including oocytes [11], here we present a method whereby oocytes are directly embedded in an agarose and lysis solution bead immediately following retrieval and removal of the zona pellucida from the oocyte. This enables us to bypass the two main challenges of single oocyte bisulfite mutagenesis: protecting a minute amount of DNA from degradation, and subsequent loss during the numerous protocol steps. Importantly, as data are obtained from single oocytes, the issue of PCR bias within pools is eliminated. Furthermore, inadvertent cumulus cell contamination is detectable by this method, since any sample with more than one methylation pattern may be excluded from analysis [12]. This protocol provides an improved method for successful and reproducible analyses of DNA methylation at the single-cell level and is ideally suited for individual oocytes as well as cleavage-stage embryos.
Genetics, Issue 64, Developmental Biology, Biochemistry, Bisulfite mutagenesis, DNA methylation, individual oocyte, individual embryo, mouse model, PCR, epigenetics
Human Pluripotent Stem Cell Based Developmental Toxicity Assays for Chemical Safety Screening and Systems Biology Data Generation
Institutions: University of Cologne, University of Konstanz, Technical University of Dortmund, Technical University of Dortmund.
Efficient protocols to differentiate human pluripotent stem cells to various tissues, in combination with -omics technologies, opened up new horizons for in vitro toxicity testing of potential drugs. To provide a solid scientific basis for such assays, it will be important to gain quantitative information on the time course of development and on the underlying regulatory mechanisms by systems biology approaches. Two assays have therefore been tuned here for these requirements. In the UKK test system, human embryonic stem cells (hESC) (or other pluripotent cells) are left to spontaneously differentiate for 14 days in embryoid bodies, to allow generation of cells of all three germ layers. This system recapitulates key steps of early human embryonic development, and it can predict human-specific early embryonic toxicity/teratogenicity, if cells are exposed to chemicals during differentiation. The UKN1 test system is based on hESC differentiating to a population of neuroectodermal progenitor (NEP) cells for 6 days. This system recapitulates early neural development and predicts early developmental neurotoxicity and epigenetic changes triggered by chemicals. Both systems, in combination with transcriptome microarray studies, are suitable for identifying toxicity biomarkers. Moreover, they may be used in combination to generate input data for systems biology analysis. These test systems have advantages over traditional toxicological studies requiring large numbers of animals. The test systems may contribute to a reduction of the costs for drug development and chemical safety evaluation. Their combination sheds light especially on compounds that may influence neurodevelopment specifically.
Developmental Biology, Issue 100, Human embryonic stem cells, developmental toxicity, neurotoxicity, neuroectodermal progenitor cells, immunoprecipitation, differentiation, cytotoxicity, embryopathy, embryoid body
A Method for Mouse Pancreatic Islet Isolation and Intracellular cAMP Determination
Institutions: University of Wisconsin-Madison, University of Wisconsin-Madison, University of Waterloo.
Uncontrolled glycemia is a hallmark of diabetes mellitus and promotes morbidities like neuropathy, nephropathy, and retinopathy. With the increasing prevalence of diabetes, both immune-mediated type 1 and obesity-linked type 2, studies aimed at delineating diabetes pathophysiology and therapeutic mechanisms are of critical importance. The β-cells of the pancreatic islets of Langerhans are responsible for appropriately secreting insulin in response to elevated blood glucose concentrations. In addition to glucose and other nutrients, the β-cells are also stimulated by specific hormones, termed incretins, which are secreted from the gut in response to a meal and act on β-cell receptors that increase the production of intracellular cyclic adenosine monophosphate (cAMP). Decreased β-cell function, mass, and incretin responsiveness are well-understood to contribute to the pathophysiology of type 2 diabetes, and are also being increasingly linked with type 1 diabetes. The present mouse islet isolation and cAMP determination protocol can be a tool to help delineate mechanisms promoting disease progression and therapeutic interventions, particularly those that are mediated by the incretin receptors or related receptors that act through modulation of intracellular cAMP production. While only cAMP measurements will be described, the described islet isolation protocol creates a clean preparation that also allows for many other downstream applications, including glucose-stimulated insulin secretion, [3H]-thymidine incorporation, protein abundance, and mRNA expression.
Physiology, Issue 88, islet, isolation, insulin secretion, β-cell, diabetes, cAMP production, mouse
A Toolkit to Enable Hydrocarbon Conversion in Aqueous Environments
Institutions: Delft University of Technology, Delft University of Technology.
This work puts forward a toolkit that enables the conversion of alkanes by Escherichia coli and presents a proof of principle of its applicability. The toolkit consists of multiple standard interchangeable parts (BioBricks) [9] addressing the conversion of alkanes, regulation of gene expression and survival in toxic hydrocarbon-rich environments.
A three-step pathway for alkane degradation was implemented in E. coli to enable the conversion of medium- and long-chain alkanes to their respective alkanols, alkanals and ultimately alkanoic acids. The latter were metabolized via the native β-oxidation pathway. To facilitate the oxidation of medium-chain alkanes (C5-C13) and cycloalkanes (C5-C8), four genes (including alkB2) of the alkane hydroxylase system from Gordonia were transformed into E. coli. For the conversion of long-chain alkanes (C15-C36), the ladA gene from Geobacillus thermodenitrificans was implemented. For the required further steps of the degradation process, ADH and ALDH (originating from G. thermodenitrificans) were introduced [10,11]. The activity was measured by resting cell assays. For each oxidative step, enzyme activity was observed.
To optimize the process efficiency, the expression was only induced under low glucose conditions: a substrate-regulated promoter, pCaiF, was used. pCaiF is present in E. coli K12 and regulates the expression of the genes involved in the degradation of non-glucose carbon sources.
The last part of the toolkit - targeting survival - was implemented using solvent tolerance genes, PhPFDα and β, both from Pyrococcus horikoshii OT3. Organic solvents can induce cell stress and decreased survivability by negatively affecting protein folding. As chaperones, PhPFDα and β improve the protein folding process, e.g. in the presence of alkanes. The expression of these genes led to improved hydrocarbon tolerance, shown by an increased growth rate (up to 50%) in the presence of 10% n-hexane in the culture medium.
Summarizing, the results indicate that the toolkit enables E. coli to convert and tolerate hydrocarbons in aqueous environments. As such, it represents an initial step towards a sustainable solution for oil remediation using a synthetic biology approach.
Bioengineering, Issue 68, Microbiology, Biochemistry, Chemistry, Chemical Engineering, Oil remediation, alkane metabolism, alkane hydroxylase system, resting cell assay, prefoldin, Escherichia coli, synthetic biology, homologous interaction mapping, mathematical model, BioBrick, iGEM
Enhanced Reduced Representation Bisulfite Sequencing for Assessment of DNA Methylation at Base Pair Resolution
Institutions: Weill Cornell Medical College, Weill Cornell Medical College, Weill Cornell Medical College, University of Michigan.
DNA methylation pattern mapping is heavily studied in normal and diseased tissues. A variety of methods have been established to interrogate the cytosine methylation patterns in cells. Reduced representation of whole genome bisulfite sequencing was developed to detect quantitative base pair resolution cytosine methylation patterns at GC-rich genomic loci. This is accomplished by combining the use of a restriction enzyme followed by bisulfite conversion. Enhanced Reduced Representation Bisulfite Sequencing (ERRBS) increases the biologically relevant genomic loci covered and has been used to profile cytosine methylation in DNA from human, mouse and other organisms. ERRBS initiates with restriction enzyme digestion of DNA to generate low molecular weight fragments for use in library preparation. These fragments are subjected to standard library construction for next generation sequencing. Bisulfite conversion of unmethylated cytosines prior to the final amplification step allows for quantitative base resolution of cytosine methylation levels in covered genomic loci. The protocol can be completed within four days. Despite low complexity in the first three bases sequenced, ERRBS libraries yield high quality data when using a designated sequencing control lane. Mapping and bioinformatics analysis is then performed and yields data that can be easily integrated with a variety of genome-wide platforms. ERRBS can utilize small input material quantities making it feasible to process human clinical samples and applicable in a range of research applications. The video produced demonstrates critical steps of the ERRBS protocol.
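As a rough illustration of what quantitative base pair resolution means downstream (not a step of the ERRBS protocol itself), the methylation level of a covered cytosine is typically estimated from the aligned reads as the fraction that still report a C. The positions and counts below are hypothetical.

```python
# Toy per-CpG methylation call from bisulfite-sequencing read counts.
def methylation_level(c_reads, t_reads):
    """Fraction of reads that retained C (i.e. were methylated) at one position."""
    total = c_reads + t_reads
    return c_reads / total if total else float("nan")

# Hypothetical coverage at three CpG sites: (C reads, T reads)
sites = {"chr1:1000": (18, 2), "chr1:1050": (3, 27), "chr1:1100": (10, 10)}
for site, (c, t) in sites.items():
    print(site, f"{methylation_level(c, t):.0%}")
```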
Genetics, Issue 96, Epigenetics, bisulfite sequencing, DNA methylation, genomic DNA, 5-methylcytosine, high-throughput
Using Fluorescence Activated Cell Sorting to Examine Cell-Type-Specific Gene Expression in Rat Brain Tissue
Institutions: University of Delaware.
The brain is comprised of four primary cell types including neurons, astrocytes, microglia and oligodendrocytes. Though they are not the most abundant cell type in the brain, neurons are the most widely studied of these cell types given their direct role in impacting behaviors. Other cell types in the brain also impact neuronal function and behavior via the signaling molecules they produce. Neuroscientists must understand the interactions between the cell types in the brain to better understand how these interactions impact neural function and disease. To date, the most common method of analyzing protein or gene expression utilizes the homogenization of whole tissue samples, usually with blood, and without regard for cell type. This is an informative approach for examining general changes in gene or protein expression that may influence neural function and behavior; however, this method of analysis does not lend itself to a greater understanding of cell-type-specific gene expression and the effect of cell-to-cell communication on neural function. Analysis of behavioral epigenetics has been an area of growing focus which examines how modifications of the deoxyribonucleic acid (DNA) structure impact long-term gene expression and behavior; however, this information may only be relevant if analyzed in a cell-type-specific manner given the differential lineage and thus epigenetic markers that may be present on certain genes of individual neural cell types. The Fluorescence Activated Cell Sorting (FACS) technique described below provides a simple and effective way to isolate individual neural cells for the subsequent analysis of gene expression, protein expression, or epigenetic modifications of DNA. This technique can also be modified to isolate more specific neural cell types in the brain for subsequent cell-type-specific analysis.
Neuroscience, Issue 99, Fluorescence activated cell sorting, GLT-1, Thy1, CD11b, real-time PCR, gene expression
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
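To give a feel for the "software-guided setup of optimal experiment combinations" step, the snippet below simply enumerates a full-factorial design for three hypothetical factors. The factor names and levels are invented; in practice the authors used dedicated DoE software and fractional, step-wise augmented designs rather than this exhaustive listing.

```python
# Full-factorial enumeration of three hypothetical factors.
# Real DoE software would typically select a fractional subset of these runs.
from itertools import product

factors = {
    "promoter":        ["35S", "double 35S"],   # regulatory element (hypothetical levels)
    "plant_age_days":  [35, 42, 49],            # growth/development parameter
    "incubation_temp": [22, 25],                # condition during expression
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs), "runs")   # 2 * 3 * 2 = 12 combinations
for run in runs[:3]:
    print(run)
```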
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Investigating Protein-protein Interactions in Live Cells Using Bioluminescence Resonance Energy Transfer
Institutions: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition and Behaviour.
Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a 'donor' luciferase enzyme to an 'acceptor' fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.
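The read-out of such an assay is commonly reported as a BRET ratio: acceptor-channel emission divided by donor-channel emission, corrected with a donor-only control. The sketch below shows only that arithmetic; the plate-reader counts are invented and details vary between instruments and protocols.

```python
# Minimal BRET-ratio calculation with donor-only background correction.
def bret_ratio(acceptor_em, donor_em, acceptor_em_donor_only, donor_em_donor_only):
    """Background-corrected BRET ratio for one well."""
    raw        = acceptor_em / donor_em
    background = acceptor_em_donor_only / donor_em_donor_only  # luciferase bleed-through
    return raw - background

# Invented counts: luciferase-X + YFP-Y co-expression vs. luciferase-X alone.
ratio = bret_ratio(acceptor_em=5200, donor_em=20000,
                   acceptor_em_donor_only=1600, donor_em_donor_only=20000)
print(round(ratio, 3))
# A ratio clearly above zero suggests energy transfer, i.e. proximity of the two fusions.
```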
Cellular Biology, Issue 87, Protein-protein interactions, Bioluminescence Resonance Energy Transfer, Live cell, Transfection, Luciferase, Yellow Fluorescent Protein, Mutations
Isolation, Culture, and Imaging of Human Fetal Pancreatic Cell Clusters
Institutions: University of California, San Diego.
For almost 30 years, scientists have demonstrated that human fetal ICCs transplanted under the kidney capsule of nude mice matured into functioning endocrine cells, as evidenced by a significant increase in circulating human C-peptide following glucose stimulation [1-9]. However, in vitro, genesis of insulin-producing cells from human fetal ICCs is low [10]; results reminiscent of recent experiments performed with human embryonic stem cells (hESC), a renewable source of cells that hold great promise as a potential therapeutic treatment for type 1 diabetes. Like ICCs, transplantation of partially differentiated hESC generates glucose-responsive, insulin-producing cells, but in vitro genesis of insulin-producing cells from hESC is much less robust [11-17]. A complete understanding of the factors that influence the growth and differentiation of endocrine precursor cells will likely require data generated from both ICCs and hESC. While a number of protocols exist to generate insulin-producing cells from hESC in vitro [11-22], far fewer exist for ICCs [10,23,24]. Part of that discrepancy likely comes from the difficulty of working with human fetal pancreas. Towards that end, we have continued to build upon existing methods to isolate fetal islets from human pancreases with gestational ages ranging from 12 to 23 weeks, grow the cells as a monolayer or in suspension, and image for cell proliferation, pancreatic markers and human hormones including glucagon and C-peptide. ICCs generated by the protocol described below result in C-peptide release after transplantation under the kidney capsule of nude mice that is similar to C-peptide levels obtained by transplantation of fresh tissue [6]. Although the examples presented here focus upon pancreatic endoderm proliferation and β cell genesis, the protocol can be employed to study other aspects of pancreatic development, including exocrine, ductal, and other hormone-producing cells.
Medicine, Issue 87, human fetal pancreas, islet cell cluster (ICC), transplantation, immunofluorescence, endocrine cell proliferation, differentiation, C-peptide
No such thing as a perfect match
This post was prompted after reading a social media post stating that matching for kidney transplants was no longer necessary. That of course is not true, but it may have been the hope of transplant doctors as immunosuppressive drugs were developing apace during the latter part of the 20th century. I suspect the heading of a 2016 Medscape article: “Good match not always needed for living donor kidney transplant” could be the source of the confusion. That article was essentially saying that compared to staying on the waiting list, supported by dialysis, then even an unmatched kidney provided better survival statistics.
So what does go into the matching process? Does it depend on where you live? What does it mean to have a mismatched transplant?
ANTIGEN = a substance that is recognised as foreign to your body and can trigger an immune response which is aimed at defending the body
ANTIBODY = a protein made in B cells that attaches to the antigen and so labels it as “for destruction”. Some antibodies will directly neutralise the antigen while others just tag them for other cells to deal with.
HLA = Human Leukocyte Antigens = a group of molecules that sit on the membrane of a cell acting as an ID label. They are unique to you and so can be used by the immune system to identify “self” from “non-self”. They are also referred to by the name “Major Histocompatibility Complex” or MHC.
LEUKOCYTES = white cells
LYMPHOCYTE = a type of white cell. The two main kinds are:
B-CELLS = white cells that produce antibodies
T-CELLS = white cells involved in the immune response; 3 main groups are
HELPER T-CELLS which trigger immune response
KILLER T-CELLS which attack and kill the foreign material
REGULATOR T-CELLS which suppress the immune response
Matching for a kidney transplant
For kidney transplants the first element to be matched will be the blood group. From your point of view, as the recipient, the simplest blood type to be would be AB because this would allow you to receive an organ from a donor with any other blood group. Blood group O, however, means you can only have an organ from another person with blood group O. The rhesus antibodies that are commonly also mentioned alongside ABO grouping – the positive/negative – are not important when it comes to kidney transplants.
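For the programmatically minded, the ABO rule above boils down to a subset check: the donor must not carry an A or B antigen that the recipient lacks. This is just an illustrative sketch, not a clinical tool, and it deliberately ignores rhesus status as explained above.

```python
# ABO compatibility for kidney transplantation (Rhesus status deliberately ignored).
# A recipient can accept a kidney if the donor carries no A/B antigen the recipient lacks.
ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def abo_compatible(donor, recipient):
    return ANTIGENS[donor] <= ANTIGENS[recipient]  # subset test

print(abo_compatible("O", "A"))   # True  - group O is the universal kidney donor
print(abo_compatible("A", "O"))   # False - group O recipients can only take group O
print(abo_compatible("B", "AB"))  # True  - AB recipients can take any group
```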
The next matching test is also called 'tissue typing', and this is where those HLA markers come into play.
While the ABO system is a way of labelling red blood cells, the HLA system labels the white blood cells, which are the fundamental cells of the immune system. This system was described in 1952. The 1980 Nobel Prize in Physiology or Medicine was awarded to Jean Dausset, Baruj Benacerraf and George Snell for their discoveries and work with HLA. Snell introduced the concept of “H antigens”, Dausset demonstrated their existence and Benacerraf showed that the immune response was controlled by genetic factors.
All cells have identification molecules on their surface, like a fingerprint. The immune system has to distinguish foreign antigens from those belonging to the body. For this they use the HLA molecules. They are called ‘human’ for obvious reasons – mice and other animals do have similar ones called MHC antigens. They are ‘antigens’ because they can provoke an immune response in another person, and the term ‘leukocyte’ means white cell and they were first discovered on human white blood cells.
Broadly speaking HLA sit on the surface of cells, waiting to be recognised by other cells. Imagine driving along a road looking at car number plates – you know which are local to your country and which are foreign visitors by the specific arrangement of letters and numbers.
Class I and Class II
HLA are grouped into “class I”, which are the ones usually carrying the label “self” and occur on every cell with a nucleus, and “class II”, which are found on the so-called ‘professional antigen-presenting cells’ (APC) and lymphocytes.
Class I HLA will effectively wave a piece of your own protein, a recognisable sign along the lines of “Hi, I belong to you, Have a nice day!”
Class II HLA will be displaying foreign proteins, shouting “Look what I found, you might want to take a closer look. This spells trouble!”
APC are akin to an army of guards trained to notice non-conformity – they ingest the abnormal foreign proteins, chomp them down into fragments (peptides) and then stick these fragments onto the HLA arms that protrude from the cell so other T cells can come along and eliminate them.
Humans have 3 Class I HLA (A, B, C) and 6 Class II HLA (DPA1, DPB1, DQA1, DQB1, DRA, DRB1).
Chromosome 6 – the short arm
HLA molecules are proteins and like all other proteins the code or recipe for making them is in the DNA. In particular, HLA are coded by part of chromosome 6. There is huge variation in these genes – they are polymorphic – so there are thousands of possible combinations. Each of us has an almost unique set of HLA. We inherit them in blocks, called haplotypes, from our parents. This means a parent and child will have at least a 50% match. Siblings, however, could be any match between 0 and 100%.
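Because HLA genes travel as two parental haplotype blocks, the parent-child and sibling figures quoted above can be reproduced with a few lines of simulation. This sketch treats each haplotype as an opaque label and ignores recombination within the HLA region, which is rare; it is only an illustration of the arithmetic.

```python
# Haplotype inheritance: each child receives one of two haplotypes from each parent.
import random

def child(mother=("M1", "M2"), father=("F1", "F2")):
    return (random.choice(mother), random.choice(father))

random.seed(0)
shared = [len(set(child()) & set(child())) for _ in range(100_000)]
for n in (0, 1, 2):
    print(f"siblings sharing {n} haplotype(s): {shared.count(n) / len(shared):.0%}")
# Expected: roughly 25% share none, 50% share one, 25% share both.
# A parent and child always share exactly one haplotype, i.e. at least a 50% match.
```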
HLA testing has become increasingly detailed over the years, from testing with serum in the 1950s when HLA were first discovered, to now using the actual DNA code. Typing one person costs in the region of £700, but considering that a transplant will cost between £80,000 and £100,000, HLA testing is a relatively small outlay.
The alpha polypeptides on the class I HLA are encoded by genes at specific parts of chromosome 6 that have been called HLA-A, HLA-B and HLA-C loci (‘locus’ just means ‘position’, loci is the plural)
[There are some other codes that will make HLA-G which is used to protect a foetus or baby from the mother, but thats not relevant to this post]
The class II HLA has two polypeptide chains, alpha and beta, each with their own trans-membrane region and tail.
These polypeptide chains are encoded by genes in the HLA-DP, HLA-DQ and HLA-DR regions of chromosome 6.
In the diagram the green box represents the “self” flag while the red box represents the “foreign”.
The class I and class II molecules are the most immunogenic antigens for rejection in solid organ transplants.
The most significant one is HLA-DR, followed by HLA-B and HLA-A. So these three loci are the most important for matching donor and recipient.
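Allocation schemes commonly summarise this as a mismatch count: for each of HLA-A, -B and -DR, how many of the donor's antigens are absent in the recipient, giving anything from 0-0-0 (fully matched) to 2-2-2. The sketch below shows that count with made-up typing results; it is illustrative only and not how any particular allocation algorithm is implemented.

```python
# Count HLA mismatches at the A, B and DR loci (0-2 per locus).
# A mismatch = a donor antigen at that locus which the recipient does not carry.
def hla_mismatches(donor, recipient, loci=("A", "B", "DR")):
    return {locus: sum(1 for antigen in donor[locus]
                       if antigen not in recipient[locus])
            for locus in loci}

# Made-up typing results for illustration only.
donor     = {"A": ("A1", "A2"), "B": ("B8", "B44"), "DR": ("DR4", "DR15")}
recipient = {"A": ("A1", "A3"), "B": ("B8", "B7"),  "DR": ("DR4", "DR4")}

print(hla_mismatches(donor, recipient))  # {'A': 1, 'B': 1, 'DR': 1} -> a "1-1-1" mismatch
```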
Where you live
There is geographical variation in the tests used for HLA typing.
In the UK the current tests look at -A, -B, -C, -DR and -DQ, but recent research on cross-matching has suggested that up to one third of positive reactions are caused by antibodies to HLA-DP, and so this should be included in screening protocols. The BTS guidelines in the UK now require that laboratories are capable of testing for HLA-DP. An updated protocol was introduced in the UK in 2006 to reduce some of the inequities of transplant allocation.
I think in the US the loci -A, -B, -C, -DRB1, -DRB3/4/5 and -DQB1 are used for matching for kidney transplants (happy to be corrected if someone across the pond knows better). The US also updated its allocation system with respect to HLA, so as to increase the number of minorities receiving organs.
It is a slightly confusing area to read about as another source told me that US test for 94 antigens, 25 of them HLA-A, 51 HLA-B and 18 HLA-DR – no mention of -C or -DQ. This same source stated that Europe test for 51 antigens and U.K. for 49 antigens. I guess the bottom line is that laboratories will use different processes and results may not be directly comparable.
In the UK around 40% of patients on the waiting list for a kidney transplant are sensitised, meaning they already carry HLA antibodies. This happens with pregnancy, transfusions and previous transplants. Having antibodies can increase the wait. This is what is measured by the "Panel Reactive Antibody" test – sometimes called the percentage reactive antibody test because results are quoted as a percentage. These tests are undertaken as a third level of tissue matching, looking for donor-specific antibodies. Historically they were done by mixing donor and recipient samples and waiting to see if one destroyed the other; new methods of virtual cross-matching have led to a more accurate assessment of immunological risk.
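The PRA figure itself is just a percentage: the proportion of a reference panel of donor samples against which the patient's serum reacts. A toy calculation with an entirely invented panel:

```python
# Panel Reactive Antibody (PRA) as a percentage of reactive panel members.
def pra_percent(reactions):
    """reactions: list of True/False, one per donor sample in the reference panel."""
    return 100 * sum(reactions) / len(reactions)

# Invented panel of 20 donor samples; True means the patient's serum reacted.
panel = [True] * 8 + [False] * 12
print(f"PRA = {pra_percent(panel):.0f}%")  # 40% - a moderately sensitised patient
```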
Of course everyone wants a perfect match
The survival advantage from a well-matched kidney was established during the 1980s. In 1985 the 10-year survival for the new kidney was 41% if there were no mismatches but 25% if there were some mismatches. Now we are looking at figures of 75% 10 year survival for a well-matched kidney.
Advances in immunosuppression have dramatically increased these figures and now we have a situation whereby having a mismatched living donor is sometimes better than having a well-matched deceased donor organ. This is thought to be because the process of death releases cytokines (chemicals) that damage the kidneys. The age of the donor and the 'cold ischaemia time' (the length of time the new kidney is out of a body without a blood supply) are now thought more important than absolute precision in matching.
Achieving a perfect match may be more important for younger patients because they are quite likely to need a second transplant later in life. If the kidney was mismatched in the first transplant then of course there will be more antibodies and more difficulty matching for the second transplant.
There are methods that can be used when a recipient has lots of antibodies – plasmapheresis is like washing the blood to lower the antibodies. But it is costly and cannot be performed at all centres. It is also possible that antibodies will recur after plasmapheresis.
Not all HLA mismatches are equal
Some mismatches will be more significant than others. Studies have shown that the major impact comes from -B and -DR antigens. HLA-DR mismatches are correlated with poor long term survival. Another study from Holland has suggested that the combination of mismatches is also relevant, even going so far as describing some combinations as “taboo” combinations that significantly lowered graft survival times.
It has also been shown that -DR mismatches tend to cause rejection problems within the first six months post-transplant, but -B mismatches lead to problems around 2 years post-transplant.
A recent study from Perth (Lim et al.) has indicated that -DQ mismatches are associated with acute rejection, independent of the immunosuppression used.
In the end we still need more donors
The fundamental problem with obtaining “the perfect match” is lack of donors. In particular donors from minority ethnic groups are needed, because despite all of the advances, it remains harder to find a match for ethnic minorities and indigenous populations and this is true regardless of the country in question.
Lim WH, Chapman DR, Coates PT et al. HLA-DQ mismatches and rejection in kidney transplant recipients. Clin J Am Soc Nephrol. 2016 May;11(5):875-83
Williams RC, Opelz G, McGarvey CJ, et al. The Risk of Transplant Failure with HLA Mismatch in First Adult Kidney Allografts from Deceased Donors. Transplantation 2016; 100:1094
IBM Personal Computer
|Release date||August 12, 1981|
|Discontinued||April 2, 1987|
|Operating system||IBM BASIC / PC DOS 1.0|
|CPU||Intel 8088 @ 4.77 MHz|
|Memory||16 kB ~ 256 kB|
|Successor||IBM Personal Computer XT, IBM Portable Personal Computer, IBM Personal Computer/AT, IBM PC Convertible|
The IBM Personal Computer, commonly known as the IBM PC, is the original version and progenitor of the IBM PC compatible hardware platform. It is IBM model number 5150, and was introduced on August 12, 1981. It was created by a team of engineers and designers under the direction of Don Estridge of the IBM Entry Systems Division in Boca Raton, Florida.
The generic term "personal computer" was in use before 1981, applied as early as 1972 to the Xerox PARC's Alto, but because of the success of the IBM Personal Computer, the term "PC" came to mean more specifically a desktop microcomputer compatible with IBM's PC products. Within a short time of the introduction, third-party suppliers of peripheral devices, expansion cards, and software proliferated; the influence of the IBM PC on the personal computer market was substantial in standardizing a platform for personal computers. "IBM compatible" became an important criterion for sales growth; only the Apple Macintosh family kept significant market share without compatibility with the IBM personal computer.
International Business Machines (IBM), one of the world's largest companies, had a 62% share of the mainframe computer market in 1981. Its share of the overall computer market, however, had declined from 60% in 1970 to 32% in 1980. Perhaps distracted by a long-running antitrust lawsuit, the "Colossus of Armonk" completely missed the fast-growing minicomputer market during the 1970s, and was behind rivals such as Wang, Hewlett-Packard (HP), and Control Data in other areas.
In 1979 BusinessWeek asked, "Is IBM just another stodgy, mature company?" By 1981 its stock price had declined by 22%. IBM's earnings for the first half of the year grew by 5.3%—one third of the inflation rate—while those of minicomputer maker Digital Equipment Corporation (DEC) grew by more than 35%. The company began selling minicomputers, but in January 1982 the United States Department of Justice ended the antitrust suit because, The New York Times reported, the government "recognized what computer experts and securities analysts had long since concluded: I.B.M. no longer dominates the computer business".
IBM wished to avoid the same outcome with the new personal computer industry, dominated by the Commodore PET, Atari 8-bit family, Apple II, Tandy Corporation's TRS-80, and various CP/M machines. With $150 million in sales by 1979 and projected annual growth of more than 40% in the early 1980s, the microcomputer market was large enough for IBM's attention. Other large technology companies such as HP, Texas Instruments, and Data General had entered it, and some large IBM customers were buying Apples, so the company saw introducing its own personal computer as both an experiment in a new market and a defense against rivals, large and small.
In 1980 and 1981 rumors spread of an IBM personal computer, perhaps a miniaturized version of the IBM System/370, while Matsushita acknowledged that it had discussed with IBM the possibility of manufacturing a personal computer for the American company. The Japanese project, codenamed "Go", ended before the 1981 release of the American-designed IBM PC codenamed "Chess", but two simultaneous projects further confused rumors about the forthcoming product.
Data General and Texas Instruments' small computers were not very successful, but observers expected AT&T to soon enter the computer industry, and other large companies such as Exxon, Montgomery Ward, Pentel, and Sony were designing their own microcomputers. Whether IBM had waited too long to enter an industry in which Apple and others were already successful was unclear.
An observer stated that "IBM bringing out a personal computer would be like teaching an elephant to tap dance." Successful microcomputer company Vector Graphic's fiscal 1980 revenue was $12 million. A single IBM computer in the early 1960s cost as much as $9 million, occupied one quarter acre of air-conditioned space, and had a staff of 60 people; in 1980 its least-expensive computer, the 5120, still cost about $13,500. The company only sold through its internal sales force, had no experience with resellers or retail stores, and did not introduce the first product designed to work with non-IBM equipment until 1980.
Another observer claimed that IBM made decisions so slowly that, when tested, "what they found is that it would take at least nine months to ship an empty box". As with other large computer companies, its new products typically required about four to five years for development. IBM had to learn how to quickly develop, mass-produce, and market new computers. While the company traditionally let others pioneer a new market—IBM released its first commercial computer a year after Remington Rand's UNIVAC in 1951, but within five years had 85% of the market—the personal-computer development and pricing cycles were much faster than for mainframes, with products designed in a few months and obsolete quickly.
Many in the microcomputer industry resented IBM's power and wealth, and disliked the perception that an industry founded by startups needed a latecomer so staid that it had a strict dress code and employee songbook. The potential importance to microcomputers of a company so prestigious, that a popular saying in American companies stated "No one ever got fired for buying IBM", was nonetheless clear. InfoWorld, which described itself as "The Newsweekly for Microcomputer Users", stated that "for my grandmother, and for millions of people like her, IBM and computer are synonymous". Byte ("The Small Systems Journal") stated in an editorial just before the announcement of the IBM PC:
Rumors abound about personal computers to come from giants such as Digital Equipment Corporation and the General Electric Company. But there is no contest. IBM's new personal computer ... is far and away the media star, not because of its features, but because it exists at all. When the number eight company in the Fortune 500 enters the field, that is news ... The influence of a personal computer made by a company whose name has literally come to mean "computer" to most of the world is hard to contemplate.
The editorial acknowledged that "some factions in our industry have looked upon IBM as the 'enemy'", but concluded with optimism: "I want to see personal computing take a giant step."
Desktop sized programmable calculators by Hewlett Packard had evolved into the HP 9830 BASIC language computer by 1972. In 1972–1973 a team led by Dr. Paul Friedl at the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable) based on the IBM PALM processor with a Philips compact cassette drive, small CRT, and full-function keyboard. SCAMP emulated an IBM 1130 minicomputer to run APL\1130. In 1973 APL was generally available only on mainframe computers, and most desktop sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because it was the first to emulate APL\1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". The prototype is in the Smithsonian Institution. A non-working industrial design model was also created in 1973 illustrating how the SCAMP engineering prototype could be transformed into a usable product design for the marketplace. IBM executive Bill Lowe used the engineering prototype and design model in his early efforts to demonstrate the viability of creating a single-user computer.
Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer in 1975. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed about half a ton. The 5100 was a complete computer system programmable in BASIC or APL, with a small built-in CRT monitor, keyboard, and tape drive for data storage. It was also very expensive, up to US$20,000; the computer was designed for professional and scientific customers, not business users or hobbyists. BYTE in 1975 announced the 5100 with the headline "Welcome, IBM, to personal computing", but PC Magazine in 1984 described 5100s as "little mainframes" and stated that "as personal computers, these machines were dismal failures ... the antithesis of user-friendly", with no IBM support for third-party software. Despite news reports that it was the first IBM product without a model number, when the PC was introduced in 1981 it was designated as the IBM 5150, putting it in the "5100" series though its architecture was not directly descended from the IBM 5100. Later models followed in the trend: For example, the IBM Portable Personal Computer, PC/XT, and PC AT are IBM machine types 5155, 5160, and 5170, respectively.
Following SCAMP, the IBM Boca Raton Laboratory created several single-user computer design concepts to support Lowe's ongoing effort to convince IBM there was a strategic opportunity in the personal computer business. A selection of these early IBM design concepts created in the infancy of personal computing is highlighted in the book "DELETE: A Design History of Computer Vapourware". One such concept in 1977, code-named Aquarius, was a working prototype utilizing advanced bubble memory cartridges. While this design was more powerful and smaller than the Apple II, launched the same year, the advanced bubble technology was deemed unstable and not ready for mass production.
Some employees opposed IBM entering the market. One said, "Why on earth would you care about the personal computer? It has nothing at all to do with office automation." "Besides", he added, "all it can do is cause embarrassment for IBM". The company had determined from studying the market for years, and building the prototypes during the 1970s, that IBM was unable to internally build a personal computer profitably.
IBM President John Opel was not among those skeptical of personal computers. He and CEO Frank Cary had created more than one dozen semi-autonomous "Independent Business Units" (IBU) to encourage innovation; Fortune called them "How to start your own company without leaving IBM". After Lowe became the first head of the Entry Level Systems IBU in Boca Raton his team researched the market. Computer dealers were very interested in selling an IBM product, but told Lowe that the company could not design, sell, or service it as IBM had previously done. An IBM microcomputer, they said, must be composed of standard parts that store employees could repair. While dealers disliked Apple's business practices, including a shortage of the Apple II while the company focused on the more sophisticated Apple III, they saw no alternative because they doubted that IBM's traditional sales methods and bureaucracy would change.
Atari in 1980 proposed that it act as original equipment manufacturer for an IBM microcomputer. Aware that the company needed to enter the market quickly—even the schools in Broward County, near Boca Raton, purchased Apples—in July 1980 Lowe met with Opel, Cary, and others on the important Corporate Management Committee. Lowe demonstrated the proposal with an industrial design model based on the Atari 800 platform, and suggested acquiring Atari "because we can't do this within the culture of IBM".
Cary agreed about the culture, observing that IBM would need "four years and three hundred people" to develop its own personal computer; Lowe, however, promised one in a year if done without traditional IBM methods. Instead of acquiring Atari, the committee allowed him to form an independent group of employees—"the Dirty Dozen", led by engineer Bill Sydnes—which, Lowe promised, could design a prototype in 30 days. The crude prototype barely worked when he demonstrated it in August, but Lowe presented a detailed business plan that proposed that the new computer have an open architecture, use non-proprietary components and software, and be sold through retail stores, all contrary to IBM practice.
The committee agreed that Lowe's approach was the most likely to succeed. With Opel's strong support, in October it approved turning the group into another IBU codenamed "Project Chess" to develop "Acorn", with unusually large funding to help achieve the goal of introducing the product within one year of the August demonstration. After Lowe's promotion Don Estridge became the head of Chess, and by January 1981 the team made its first demonstration of the computer within IBM. Other key members included Sydnes, Lewis Eggebrecht, David Bradley, Mark Dean, and David O'Connor. Many were already hobbyists who owned their own computers including Estridge, who had an Apple II. After the team received permission to expand to 150 by the end of 1980, it received more than 500 calls in one day from IBM employees interested in joining the IBU.
IBM normally was vertically integrated, internally developing all hardware and software and discouraging customers from purchasing third-party products compatible with IBM products. For the PC the company avoided doing so as much as possible; choosing, for example, to license Microsoft BASIC despite having a BASIC of its own for mainframes. Although the company denied doing so, many observers concluded that IBM intentionally emulated Apple when designing the PC. The many Apple II owners on the team influenced its decision to design the computer with an open architecture and publish technical information so others could create software and expansion slot peripherals.
Although the company knew that it could not avoid competition from third-party software on proprietary hardware—Digital Research released CP/M-86 for the IBM Displaywriter, for example—it considered using the IBM 801 RISC processor and its operating system, developed at the Thomas J. Watson Research Center in Yorktown Heights, New York. The 801 processor was more than an order of magnitude more powerful than the Intel 8088, and the operating system more advanced than the PC DOS 1.0 operating system from Microsoft. Ruling out an in-house solution made the team’s job much easier and may have avoided a delay in the schedule, but the ultimate consequences of this decision for IBM were far-reaching.
IBM had recently developed the Datamaster business microcomputer, which used a processor and other chips from Intel; familiarity with them and the immediate availability of the 8088 was a reason for choosing it for the PC. The 62-pin expansion bus slots were designed to be similar to the Datamaster slots. Differences from the Datamaster included avoiding an all-in-one design while limiting the computer's size so that it would still fit on a standard desktop with the keyboard (also similar to the Datamaster's), and 5.25" disk drives instead of 8". Delays caused by in-house development of the Datamaster software were one reason IBM chose Microsoft BASIC—already available for the 8088—and published available technical information to encourage third-party developers. IBM chose the 8088 over the similar but superior 8086 because Intel offered a better price on the former and could provide more units, and the 8088's 8-bit bus reduced the cost of the rest of the computer.
The design for the computer was essentially complete by April 1981, when the manufacturing team took over the project. IBM could not use only its own hardware and still make a profit on "Acorn". To save time and money, the IBU built the machine with commercial off-the-shelf parts from original equipment manufacturers whenever possible, with assembly occurring in Boca Raton; for each manufacturing step the IBU decided whether it was more economical to "make or buy". Various IBM divisions for the first time competed with outsiders to build parts of the new computer: a North Carolina IBM factory built the keyboard, the Endicott, New York factory had to lower its bid for printed circuit boards, and a Taiwanese company built the monitor. The IBU chose an existing monitor from IBM Japan and an Epson printer. Because of the off-the-shelf parts, only the system unit and keyboard have unique IBM industrial design elements; the IBM copyright appears only in the ROM BIOS and on the company logo, and the company reportedly received no patents on the PC.
Because the product would carry the IBM logo, the only corporate division the IBU could not bypass was the Quality Assurance Unit. Another aspect of IBM that did not change was its emphasis on secrecy. Those working on the project were under strict confidentiality agreements. When an individual mentioned in public on a Saturday that his company was working on software for a new IBM computer, IBM security appeared at the company on Monday to investigate the leak. Developers received prototype computers in boxes lined with lead to block X-rays and sealed with solder, and had to keep them in locked, windowless rooms; to develop software Microsoft emulated the PC on a DEC minicomputer and used the prototype for debugging. After the PC's debut, IBM Boca Raton employees continued to decline to discuss their jobs in public. One writer compared the "silence" after asking one about his role at the company to "hit[ting] the wall at the Boston Marathon: the conversation is over".
IBM is proud to announce a product you may have a personal interest in. It's a tool that could soon be on your desk, in your home or in your child's schoolroom. It can make a surprising difference in the way you work, learn or otherwise approach the complexities (and some of the simple pleasures) of living.
It's the computer we're making for you.— IBM PC advertisement, 1982
After developing it in 12 months—faster than any other hardware product in company history—IBM announced the Personal Computer on 12 August 1981. Pricing started at US$1,565 (equivalent to $4,123 in 2016) for a configuration with 16K RAM, Color Graphics Adapter, and no disk drives. The company intentionally set prices for it and other configurations that were comparable to those of Apple and other rivals; one analyst stated that IBM "has taken the gloves off", while the company said "we suggest [the PC's price] invites comparison". Microsoft, Personal Software, and Peachtree Software were among the developers of nine launch titles, including EasyWriter and VisiCalc. In addition to the existing corporate sales force IBM opened its own Product Center retail stores. After studying Apple's successful distribution network, the company for the first time sold through others, ComputerLand and Sears Roebuck. Because retail stores receive revenue from repairing computers and providing warranty service, IBM broke a 70-year tradition by permitting and training non-IBM service personnel to fix the PC.
BYTE described IBM as having "the strongest marketing organization in the world", but the PC's marketing also differed from that of previous products. The company was aware of its strong corporate reputation among potential customers; an early advertisement began "Presenting the IBM of Personal Computers". The advertisements emphasized the novelty of an individual owning an IBM computer, describing "a product you may have a personal interest in" and asking readers to think of "'My own IBM computer. Imagine that' ... it's yours. For your business, your project, your department, your class, your family and, indeed, for yourself."
The Little Tramp
After considering Alan Alda, Beverly Sills, Kermit the Frog, and Billy Martin as celebrity endorsers IBM chose Charlie Chaplin's The Little Tramp character—played by Billy Scudder—for a series of advertisements based on Chaplin's films. The very popular and award-winning $36-million marketing campaign made the star of Modern Times—a film that expresses Chaplin's opposition to big business, mechanization, and technological efficiency—the (as Creative Computing described him) "warm cuddly" mascot of one of the world's largest companies.
Chaplin and his character became so widely associated with IBM—Time stated that "The Tramp ... has given [it] a human face"—that others used his bowler hat and cane to represent or satirize the company. Although the Chaplin estate sued those like Otrona who used the trademark without permission, PC Magazine's April 1983 issue had 12 advertisements that referred to the Little Tramp.
It's a very different IBM.— An outside developer surprised by the company's cooperation, 1981
"We encourage third-part suppliers [for the PC] ... we are delighted to have them", IBM stated. It did not sell internally developed PC software until April 1984, instead relying on already established software companies. The company contacted Microsoft even before the official approval of Chess, and it and others received cooperation that was, one writer said, "unheard of" for IBM. Such openness surprised observers; BYTE called it "striking" and "startling", and one developer reported that "it's a very different IBM." Another said "They were very open and helpful about giving us all the technical information we needed. The feeling was so radically different—it's like stepping out into a warm breeze." He concluded, "After years of hassling—fighting the Not-Invented-Here attitude—we're the gods."
Most other personal-computer companies did not disclose technical details; Texas Instruments, for example, intentionally made developing third-party TI 99/4A software difficult, even requiring a lockout chip in cartridges. IBM itself kept its mainframe technology so secret that rivals were indicted for industrial espionage. For the PC, however, IBM immediately released detailed information. The US$36 IBM PC Technical Reference Manual included complete circuit schematics, commented ROM BIOS source code, and other engineering and programming information for all of IBM's PC-related hardware, plus instructions on designing third-party peripherals. It was so comprehensive that one reviewer suggested that the manual could serve as a university textbook, and so clear that a developer claimed that he could design an expansion card without seeing the physical computer.
IBM marketed the technical manual in full-page color print advertisements, stating that "our software story is still being written. Maybe by you". Sydnes stated that "The definition of a personal computer is third-party hardware and software". Estridge said that IBM did not keep software development proprietary because it would have to "out-VisiCalc VisiCorp and out-Peachtree Peachtree—and you just can't do that", and unlike IBM's own version "Microsoft BASIC had hundreds of thousands of users around the world. How are you going to argue with that?"
Another advertisement told developers that the company would consider publishing software for "Education. Entertainment. Personal finance. Data management. Self-improvement. Games. Communications. And yes, business." Estridge explicitly invited small, "cottage" amateur and professional developers to create products "with", he said, "our logo and our support". IBM sold the PC at a large discount to employees, encouraged them to write software, and distributed a catalog of inexpensive software written by individuals that might not otherwise appear in public.
BYTE was correct in predicting that an IBM personal computer would receive much public attention. Its rapid development amazed observers, as did the willingness of the Colossus of Armonk to sell as a launch title Microsoft Adventure (a video game that, its press release stated, brought "players into a fantasy world of caves and treasures"); the company even offered an optional joystick port. Future Computing estimated that "IBM's Billion Dollar Baby" would have $2.3 billion in hardware sales by 1986. David Bunnell, an editor at Osborne/McGraw-Hill, recalled that
None of my associates wanted to talk about the Apple II or the Osborne I computer anymore, nor did they want to fantasize about writing the next super-selling program ... All they wanted to talk about was the IBM Personal Computer—what it was, its potential and limitations, and most of all, the impact IBM would have on the business of personal computing.
Competitors were more skeptical. Adam Osborne said "when you buy a computer from IBM, you buy a la carte. By the time you have a computer that does anything, it will cost more than an Apple. I don't think Apple has anything to worry about." Apple's Mike Markkula agreed that IBM's product was more expensive than the Apple II, and claimed that the Apple III "offers better performance". He denied that the IBM PC offered more memory, stating that his company could offer more than 128K "but frankly we don't know what anyone would do with that memory". Jon Shirley of Tandy admitted that IBM had a "legendary service reputation" but claimed that its thousands of Radio Shack stores "can provide better service", while predicting the IBM PC's "major market will be IBM addicts"; another executive claimed that Tandy could undersell a $3,000 IBM computer by $1,000. Many criticized the PC's design as not innovative and outdated, and believed that its alleged weaknesses, such as the use of single-sided, single-density disks with less storage than the computer's RAM, existed because the company was uncertain about the market and was experimenting before releasing a better computer. (Estridge later boasted, "Many ... said that there was nothing technologically new in this machine. That was the best news we could have had; we actually had done what we had set out to do.")
Rivals such as Apple, Tandy, and Commodore—which together held more than 50% of the personal-computer market—had many advantages. While IBM began with one microcomputer, little available hardware or software, and a couple of hundred dealers, Radio Shack had 14 million customers and 8,000 stores—more than McDonald's—that sold only its own broad range of computers and accessories. Apple had five times as many dealers in the US as IBM, an established international distribution network, and an installed base of more than 250,000 customers. Hundreds of independent developers produced software and peripherals for both companies' computers; at least ten Apple databases and ten word processors were available, while the PC had no databases and one word processor. The IBM PC itself had very limited graphics capability, and customers who wanted both color and high-quality text had to purchase two graphics cards and two monitors.
Steve Jobs at Apple ordered a team to examine an IBM PC. After finding it unimpressive—Chris Espinosa called the computer "a half-assed, hackneyed attempt"—the company confidently purchased a full-page advertisement in The Wall Street Journal with the headline "Welcome, IBM. Seriously". Microsoft head Bill Gates was at Apple headquarters the day of IBM's announcement and later said "They didn't seem to care. It took them a full year to realize what had happened".
The IBM PC was immediately successful. BYTE reported a rumor that more than 40,000 were ordered on the day of the announcement; John Dvorak recalled that one dealer that day praised the computer as an "incredible winner, and IBM knows how to treat us — none of the Apple arrogance". One dealer received 22 $1,000 deposits from customers although he could not promise a delivery date. The company could have sold its entire projected first-year production to employees, and IBM customers that were reluctant to purchase Apples were glad to buy microcomputers from its traditional supplier. By October some referred to the computer simply as the "PC".
BYTE estimated that 90% of the 40,000 first-day orders were from software developers. By COMDEX in November Tecmar had developed 20 products, including memory expansion and expansion chassis, surprising even IBM. Jerry Pournelle reported after attending the West Coast Computer Faire in early 1982 that because IBM "encourages amateurs" with "documents that tell all", "an explosion of [third-party] hardware and software" was visible at the convention. Many manufacturers of professional business application software who had been planning or developing versions for the Apple II promptly switched their efforts to the IBM PC when it was announced. Often these products needed the capacity and speed of a hard disk. Although IBM did not offer a hard-disk option for almost two years after the PC's introduction, business sales were nonetheless catalyzed by the simultaneous availability of third-party hard-disk subsystems, such as those from Tallgrass Technologies, which were sold in ComputerLand stores alongside the IBM 5150 from its 1981 introduction.
One year after the PC's release, although IBM had sold fewer than 100,000 computers, PC World counted 753 software packages for the PC—more than four times the number available for the Apple Macintosh one year after its 1984 release—including 422 applications and almost 200 utilities and languages. InfoWorld reported that "most of the major software houses have been frantically adapting their programs to run on the PC", with new PC-specific developers composing "an entire subindustry that has formed around the PC's open system", which Dvorak described as a "de facto standard microcomputer". The magazine estimated that "hundreds of tiny garage-shop operations" were in "bloodthirsty" competition to sell peripherals, with 30 to 40 companies in a price war for memory-expansion cards, for example. PC Magazine renamed its planned "1001 Products to Use with Your IBM PC" special issue after the number of product listings it received exceeded the figure. Tecmar and other companies that benefited from IBM's openness rapidly grew in size and importance, as did PC Magazine; within two years it expanded from 96 bimonthly to 800 monthly pages, including almost 500 pages of advertisements.
By the end of 1982 IBM was selling one PC every minute of the business day. It estimated that 50 to 70% of PCs sold in retail stores went to the home, and the publicity from selling a popular product to consumers caused IBM to, a spokesman said, "enter the world" by familiarizing them with the Colossus of Armonk. Although the PC only provided two to three percent of sales the company found that it had underestimated demand by as much as 800%. Because its prices were based on forecasts of much lower volume—250,000 over five years, which would have made the PC a very successful IBM product—the PC became very profitable; at times the company sold almost that many computers per month. Estridge claimed in 1983 that from October 1982 to March 1983 customer demand quadrupled. He stated that the company had increased production three times in one year, and warned of a component shortage if demand continued to increase.
By mid-1983 Yankee Group estimated that ten new IBM PC-related products appeared every day. In August 1983 the Chess IBU, with 4,000 employees, became the Entry Systems Division, which observers believed indicated that the PC was significantly important to IBM overall, and no longer an experiment. The PC surpassed the Apple II as the best-selling personal computer with more than 750,000 sold by the end of the year, while DEC only sold 69,000 microcomputers in the first nine months of the year despite offering three models for different markets. Retailers also benefited, with 65% of BusinessLand's revenue coming from the PC. Demand still so exceeded supply two years after its debut that, despite IBM shipping 40,000 PCs a month, dealers reportedly received 60% or less of their desired quantity. Pournelle received the PC he paid for in early July 1983 on 1 November, and IBM Boca Raton employees and neighbors had to wait five weeks to buy the computers assembled there.
Yankee Group also stated that the PC had by 1983 "destroyed the market for some older machines" from companies like Vector Graphic, North Star, and Cromemco. inCider wrote "This may be an Apple magazine, but let's not kid ourselves, IBM has devoured competitors like a cloud of locusts". By February 1984 BYTE reported on "the phenomenal market acceptance of the IBM PC", and by fall concluded that the company "has given the field its third major standard, after the Apple II and CP/M".
By then Apple was less welcoming of the rival that inCider stated had a "godlike" reputation. Its focus on the III had delayed improvements to the II, and the sophisticated Lisa was unsuccessful in part because, unlike the II and the PC, Apple discouraged third-party developers. The head of a retail chain said "It appears that IBM had a better understanding of why the Apple II was successful than had Apple." Jobs, after trying to recruit Estridge to become Apple's president, admitted that in two years IBM had joined Apple as "the industry's two strongest competitors". He warned in a speech before previewing the forthcoming "1984" Super Bowl commercial: "It appears IBM wants it all ... Will Big Blue dominate the entire computer industry? The entire information age? Was George Orwell right about 1984?"
IBM had $4 billion in annual PC revenue by 1984, more than twice that of Apple and as much as the sales of Apple, Commodore, HP, and Sperry combined, and 6% of IBM's total revenue. A Fortune survey found that 56% of American companies with personal computers used IBM PCs, compared to Apple's 16%. A 1983 study of corporate customers similarly found that two thirds of large customers standardizing on one computer chose the PC, compared to 9% for Apple. IBM's own documentation described the PC as inferior to competitors' less-expensive products, but the company generally did not compete on price; rather, the study found that customers preferred "IBM's hegemony" because of its support. Most companies with mainframes used their PCs with the larger computers, which likely benefited IBM's mainframe sales and discouraged them from purchasing non-IBM hardware.
In 1984 IBM introduced the PC/AT, unlike its predecessor the most sophisticated personal computer from any major company. By 1985 the PC family had more than doubled Future Computing's 1986 revenue estimate, with more than 12,000 applications and 4,500 dealers and distributors worldwide. In his obituary that year, The New York Times wrote that Estridge had led the "extraordinarily successful entry of the International Business Machines Corporation into the personal computer field". The Entry Systems Division had 10,000 employees and by itself would have been the world's third-largest computer company behind IBM and DEC, with more revenue than IBM's minicomputer business despite its much later start. IBM was the only major company with significant minicomputer and microcomputer businesses, in part because rivals like DEC and Wang did not adjust to the retail market.
Rumors of "lookalike", compatible computers, created without IBM's approval, began almost immediately after the IBM PC's release. Other manufacturers soon reverse engineered the BIOS to produce their own non-infringing functional copies. Columbia Data Products introduced the first IBM-PC compatible computer in June 1982. In November 1982, Compaq Computer Corporation announced the Compaq Portable, the first portable IBM PC compatible. The first models were shipped in January 1983.
IBM PC as standard
The success of the IBM computer led other companies to develop IBM compatibles, which in turn led to marketing such as diskettes advertised as being in "IBM format". An IBM PC clone could be built with off-the-shelf parts, but the BIOS required some reverse engineering. Companies like Compaq, Phoenix Software Associates, American Megatrends, Award, and others achieved fully functional versions of the BIOS, allowing companies like Dell, Gateway and HP to manufacture PCs that worked like IBM's product. The IBM PC became the industry standard.
Because IBM had no retail experience, the retail chains ComputerLand and Sears Roebuck provided important knowledge of the marketplace. They became the main outlets for the new product. More than 190 Computerland stores already existed, while Sears was in the process of creating a handful of in-store computer centers for sale of the new product. This guaranteed IBM widespread distribution across the U.S.
Sears Roebuck targeted the new PC at the home market, but its sales failed to live up to expectations. This unfavorable outcome showed that targeting the office market was the key to higher sales.
IBM 5150 PC with IBM 5151 monitor
| Model name | Model # | Introduced | CPU | Features |
|---|---|---|---|---|
| PC | 5150 | August 1981 | 8088 | Floppy disk or cassette system. One or two internal floppy drives were optional. |
| XT | 5160 | March 1983 | 8088 | First IBM PC to come with an internal hard drive as standard. |
| XT/370 | 5160/588 | October 1983 | 8088 | 5160 with XT/370 Option Kit and 3277 Emulation Adapter |
| 3270 PC | 5271 | October 1983 | 8088 | With 3270 terminal emulation, 20-function-key keyboard |
| PCjr | 4860 | November 1983 | 8088 | Floppy-based home computer, infrared keyboard |
| Portable | 5155 | February 1984 | 8088 | Floppy-based portable |
| AT | 5170 | August 1984 | 80286 | Faster processor, faster system bus (6 MHz, later 8 MHz, vs 4.77 MHz), jumperless configuration, real-time clock |
| AT/370 | 5170/599 | October 1984 | 80286 | 5170 with AT/370 Option Kit and 3277 Emulation Adapter |
| 3270 AT | 5281 | June 1985 | 80286 | With 3270 terminal emulation |
| Convertible | 5140 | April 1986 | 80C88 | Microfloppy laptop portable |
| XT 286 | 5162 | September 1986 | 80286 | Slow hard disk, but zero-wait-state memory on the motherboard; this 6 MHz machine was actually faster than the 8 MHz ATs (when using planar memory) because of the zero wait states |
All IBM personal computers are, in general, software backwards-compatible with each other, but not every program will work in every machine. Some programs are time-sensitive and assume a particular speed class. Older programs will not take advantage of newer higher-resolution and higher-color display standards, while some newer programs require newer display adapters. (Because the display adapter was an adapter card in all of these IBM models, newer display hardware could easily be, and often was, retrofitted to older models.) A few programs, typically very early ones, are written for and require a specific version of the IBM PC BIOS ROM. Most notably, BASICA, which was dependent on the BIOS ROM, had a sister program called GW-BASIC that supported more functions, was 100% backwards compatible, and could run independently of the BIOS ROM.
The CGA video card, with a suitable modulator, could use an NTSC television set or an RGBI monitor for display; IBM's RGBI monitor was display model 5153. The other option offered by IBM was the MDA and its monochrome display, model 5151. It was possible to install both an MDA and a CGA card and use both monitors concurrently if supported by the application program; for example, AutoCAD, Lotus 1-2-3 and others allowed use of a CGA monitor for graphics and a separate monochrome monitor for text menus. Some model 5150 PCs with CGA monitors and a printer port also included the MDA adapter by default, because IBM provided the MDA port and printer port on the same adapter card; it was in fact an MDA/printer-port combo card.
Although cassette tape was originally envisioned by IBM as a low-budget storage alternative, the most commonly used medium was the floppy disk. The 5150 was available with one or two 5.25-inch floppy drives. With two drives, the program disk(s) would be in drive A, while drive B held the disk(s) for working files; with one drive, the user had to swap program and file disks in the single drive. For models without any drives or storage medium, IBM intended users to connect their own cassette recorder via the 5150's cassette socket. The cassette socket was physically the same DIN plug as the keyboard socket and sat next to it, but was electrically completely different.
A hard disk could not be installed into the 5150's system unit without changing to a higher-rated power supply (although later drives with lower power consumption have been known to work with the standard 63.5 Watt unit). The "IBM 5161 Expansion Chassis" came with its own power supply and one 10 MB hard disk and allowed the installation of a second hard disk. The system unit had five expansion slots, and the expansion unit had eight; however, one of the system unit's slots and one of the expansion unit's slots had to be occupied by the Extender Card and Receiver Card, respectively, which were needed to connect the expansion unit to the system unit and make the expansion unit's other slots available, for a total of 11 slots. A working configuration required that some of the slots be occupied by display, disk, and I/O adapters, as none of these were built into the 5150's motherboard; the only motherboard external connectors were the keyboard and cassette ports.
The simple PC speaker sound hardware was also on board.
The original PC's maximum memory using IBM parts was 256 kB, achievable through the installation of 64 kB on the motherboard and three 64 kB expansion cards. The processor was an Intel 8088 running at 4.77 MHz, 4/3 the standard NTSC color burst frequency of 315/88 ≈ 3.579545 MHz. (Early units used a 1978 version of the Intel 8088; later units used 1978/81/82 versions, and second-sourced AMD parts were used after 1983.) Some owners replaced the 8088 with an NEC V20 for a slight increase in processing speed and support for real-mode 80186 instructions; the V20 gained its speed increase through the use of a hardware multiplier, which the 8088 lacked. An Intel 8087 co-processor could also be added for hardware floating-point arithmetic.
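As a quick check on the clock arithmetic above, the sketch below (plain Python, not from the source) derives the 4.77 MHz CPU clock from the NTSC color burst frequency; the 14.31818 MHz master-oscillator figure is the well-known intermediate step.

```python
from fractions import Fraction

# NTSC color burst frequency: 315/88 MHz, roughly 3.579545 MHz
colorburst_mhz = Fraction(315, 88)

# The PC's 14.31818 MHz master oscillator is 4x the color burst;
# dividing it by 3 yields the 8088 clock.
master_mhz = 4 * colorburst_mhz
cpu_clock_mhz = master_mhz / 3

print(float(colorburst_mhz))  # 3.579545...
print(float(cpu_clock_mhz))   # 4.772727..., i.e. the familiar 4.77 MHz
```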
IBM sold the first IBM PCs in configurations with 16 or 64 kB of RAM preinstalled using either nine or thirty-six 16-kilobit DRAM chips. (The ninth bit was used for parity checking of memory.) After the IBM XT shipped, the IBM PC motherboard was redesigned with the same RAM configuration as the IBM XT. (64 kB in one bank, expandable to 256kB by populating the other 3 banks.)
Although the TV-compatible video board, cassette port and Federal Communications Commission Class B certification were all aimed at making it a home computer, the original PC proved too expensive for the home market. At introduction, a PC with 64 kB of RAM, a single 5.25-inch floppy drive, and a monitor sold for US$3,005 (equivalent to $7,916 in 2016), while the cheapest configuration (US$1,565), which had no floppy drives, only 16 kB of RAM, and no monitor (again, under the expectation that users would connect their existing TV sets and cassette recorders), proved too unattractive and low-spec even for its time. While the 5150 did not become a top-selling home computer, its floppy-based configuration became an unexpectedly large success with businesses.
The "IBM Personal Computer XT", IBM model 5160, was introduced two years after the PC and featured a 10 megabyte hard drive. It had eight expansion slots but the same processor and clock speed as the PC. The XT had no cassette jack, but still had the Cassette Basic interpreter in ROMs.
The XT could take 256 kB of memory on the main board (using 64 kbit DRAM); later models were expandable to 640 kB. The remaining 384 kilobytes of the 8088 address space were used for the BIOS ROM, adapter ROM and RAM space, including video RAM space. It was usually sold with a Monochrome Display Adapter (MDA) video card or a CGA video card.
The eight expansion slots were the same as the model 5150 but were spaced closer together. Although rare, a card designed for the 5150 could be wide enough to obstruct the adjacent slot in an XT. Because of the spacing, an XT motherboard would not fit into a case designed for the PC motherboard, but the slots and peripheral cards were compatible. The XT expansion bus (later called "8 bit Industry Standard Architecture" (ISA) by competitors) was retained in the IBM AT, which added connectors for some slots to allow 16-bit transfers; 8 bit cards could be used in an AT.
The "IBM Personal Computer XT/370" was an XT with three custom 8-bit cards: the processor card (370PC-P) contained a modified Motorola 68000 chip, microcoded to execute System/370 instructions, a second 68000 to handle bus arbitration and memory transfers, and a modified 8087 to emulate the S/370 floating point instructions. The second card (370PC-M) connected to the first and contained 512 kB of memory. The third card (PC3277-EM), was a 3270 terminal emulator necessary to install the system software for the VM/PC software to run the processors.
The "IBM PCjr" was IBM's first attempt to enter the market for relatively inexpensive educational and home-use personal computers. The PCjr, IBM model number 4860, retained the IBM PC's 8088 CPU and BIOS interface for compatibility, but its cost and differences in the PCjr's architecture, as well as other design and implementation decisions, eventually led to the PCjr, and the related IBM JX, being commercial failures.
The "IBM Portable Personal Computer" 5155 model 68 was an early portable computer developed by IBM after the success of Compaq's suitcase-size portable machine (the Compaq Portable). It was released in February, 1984, and was eventually replaced by the IBM Convertible.
The Portable was an XT motherboard, transplanted into a Compaq-style luggable case. The system featured 256 kilobytes of memory (expandable to 512 kB), an added CGA card connected to an internal monochrome (amber) composite monitor, and one or two half-height 5.25" 360K floppy disk drives. Unlike the Compaq Portable, which used a dual-mode monitor and special display card, IBM used a stock CGA board and a composite monitor, which had lower resolution. It could however, display color if connected to an external monitor or television.
The "IBM Personal Computer/AT" (model 5170), announced August 15, 1984, used an Intel 80286 processor, originally running at 6 MHz. It had a 16-bit ISA bus and 20 MB hard drive. A faster model, running at 8 MHz and sporting a 30-megabyte hard disk was introduced in 1986.
The AT was designed to support multitasking; the new SysRq (System request key), little noted and often overlooked, is part of this design, as is the 80286 itself, the first Intel 16-bit processor with multitasking features (i.e. the 80286 protected mode). IBM made some attempt at marketing the AT as a multi-user machine, but it sold mainly as a faster PC for power users. For the most part, IBM PC/ATs were used as more powerful DOS (single-tasking) personal computers, in the literal sense of the PC name.
Early PC/ATs were plagued with reliability problems, in part because of some software and hardware incompatibilities, but mostly related to the internal 20 MB hard disk, and High Density Floppy Disk Drive.
While some people blamed IBM's hard disk controller card and others blamed the hard disk manufacturer Computer Memories Inc. (CMI), the IBM controller card worked fine with other drives, including CMI's 33-MB model. The problems introduced doubt about the computer and, for a while, even about the 286 architecture in general, but after IBM replaced the 20 MB CMI drives, the PC/AT proved reliable and became a lasting industry standard.
- The IBM AT's drive parameter table listed the CMI-33 as having 615 cylinders instead of the 640 the drive was designed with, so as to make the size an even 30 MB. Those who re-used the drives mostly found that the 616th cylinder was bad because it had been used as a landing area.
The "IBM Personal Computer AT/370" was an AT with two custom 16-bit cards, running almost exactly the same setup as the XT/370.
The IBM PC Convertible, released April 3, 1986, was IBM's first laptop computer and was also the first IBM computer to utilize the 3.5" floppy disk which went on to become the standard. Like modern laptops, it featured power management and the ability to run from batteries. It was the follow-up to the IBM Portable and was model number 5140. The concept and the design of the body was made by the German industrial designer Richard Sapper.
It utilized an Intel 80c88 CPU (a CMOS version of the Intel 8088) running at 4.77 MHz, 256 kB of RAM (expandable to 640 kB), dual 720 kB 3.5" floppy drives, and a monochrome CGA-compatible LCD screen at a price of $2,000. It weighed 13 pounds (5.8 kg) and featured a built-in carrying handle.
The PC Convertible had expansion capabilities through a proprietary ISA bus-based port on the rear of the machine. Extension modules, including a small printer and a video output module, could be snapped into place. The machine could also take an internal modem, but there was no room for an internal hard disk.
Next-generation IBM PS/2
The IBM PS/2 line was introduced in 1987. The Model 30 at the bottom end of the lineup was very similar to earlier models; it used an 8086 processor and an ISA bus. The Model 30 was not "IBM compatible" in that it did not have standard 5.25-inch drive bays; it came with a 3.5-inch floppy drive and optionally a 3.5-inch-sized hard disk. Most models in the PS/2 line further departed from "IBM compatible" by replacing the ISA bus completely with Micro Channel Architecture.
The main circuit board in a PC is called the motherboard (IBM terminology calls it a planar). This mainly carries the CPU and RAM, and it has a bus with slots for expansion cards. The motherboard also carries the ROM subsystem, DMA and IRQ controllers, the coprocessor socket, sound (PC speaker, tone generation) circuitry, and the keyboard interface. The original PC also has a cassette interface.
The bus used in the original PC became very popular, and it was subsequently named ISA. While it was popular, it was more commonly known as the PC-bus or XT-bus; the term ISA arose later when industry leaders chose to continue manufacturing machines based on the IBM PC AT architecture rather than license the PS/2 architecture and its MCA bus from IBM. The XT-bus was then retroactively named 8-bit ISA or XT ISA, while the unqualified term ISA usually refers to the 16-bit AT-bus (as better defined in the ISA specifications.) The AT-bus is an extension of the PC-/XT-bus and is in use to this day in computers for industrial use, where its relatively low speed, 5 volt signals, and relatively simple, straightforward design (all by year 2011 standards) give it technical advantages (e.g. noise immunity for reliability).
A monitor and any floppy or hard disk drives are connected to the motherboard through cables connected to graphics adapter and disk controller cards, respectively, installed in expansion slots. Each expansion slot on the motherboard has a corresponding opening in the back of the computer case through which the card can expose connectors; a blank metal cover plate covers this case opening (to prevent dust and debris intrusion and control airflow) when no expansion card is installed. Memory expansion beyond the amount installable on the motherboard was also done with boards installed in expansion slots, and I/O devices such as parallel, serial, or network ports were likewise installed as individual expansion boards. For this reason, it was easy to fill the five expansion slots of the PC, or even the eight slots of the XT, even without installing any special hardware. Companies like Quadram and AST addressed this with their popular multi-I/O cards which combine several peripherals on one adapter card that uses only one slot; Quadram offered the QuadBoard and AST the SixPak.
Intel 8086 and 8088-based PCs require expanded memory (EMS) boards to work with more than 640 kB of memory. (Though the 8088 can address one megabyte of memory, the last 384 kB of that is used or reserved for the BIOS ROM, BASIC ROM, extension ROMs installed on adapter cards, and memory address space used by devices including display adapter RAM and even the 64 kB EMS page frame itself.) The original IBM PC AT used an Intel 80286 processor which can access up to 16 MB of memory (though standard DOS applications cannot use more than one megabyte without using additional APIs.) Intel 80286-based computers running under OS/2 can work with the maximum memory.
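To make the 640 kB boundary concrete, here is a minimal sketch (illustrative Python, not from the source) of how an 8088 real-mode segment:offset pair maps to a 20-bit physical address and whether that address falls in conventional RAM or the reserved upper 384 kB; the example addresses (CGA video RAM at B800:0000 and the reset vector at F000:FFF0) are standard values.

```python
def physical_address(segment: int, offset: int) -> int:
    """8088 real-mode addressing: 16-byte segment paragraphs plus a 16-bit offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # the 8088's 20 address lines wrap at 1 MB

CONVENTIONAL_TOP = 640 * 1024  # 0xA0000, the top of RAM usable by ordinary DOS programs

def region(addr: int) -> str:
    return "conventional RAM" if addr < CONVENTIONAL_TOP else "upper 384 kB (video, ROMs, EMS page frame)"

# Standard example addresses: CGA video memory and the CPU reset vector in the BIOS ROM
for seg, off in ((0xB800, 0x0000), (0xF000, 0xFFF0)):
    addr = physical_address(seg, off)
    print(f"{seg:04X}:{off:04X} -> {addr:05X} ({region(addr)})")
```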
Peripheral integrated circuits
The set of peripheral chips selected for the original IBM PC defined the functionality of an IBM compatible. These became the de facto base for later application specific integrated circuits (ASICs) used in compatible products.
The original system chips were one Intel 8259 programmable interrupt controller (PIC) (at I/O address 0x20), one Intel 8237 direct memory access (DMA) controller (at I/O address 0x00), and an Intel 8253 programmable interval timer (PIT) (at I/O address 0x40). The PIT provides the 18.2 Hz clock ticks, dynamic memory refresh timing, and can be used for speaker output; one DMA channel is used to perform the memory refresh.
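The 18.2 Hz figure follows directly from the PIT's input clock and its maximum 16-bit divisor; the sketch below (illustrative Python, not from the source) shows the arithmetic, assuming the standard 14.31818 MHz/12 input clock.

```python
from fractions import Fraction

# PIT input clock: the 14.31818 MHz master oscillator divided by 12
pit_hz = Fraction(315, 88) * 1_000_000 * 4 / 12   # ~1,193,182 Hz

# The BIOS programs channel 0 with the maximum 16-bit divisor (65536 counts)
ticks_per_second = pit_hz / 65536

print(float(pit_hz))             # ~1193181.8 Hz
print(float(ticks_per_second))   # ~18.2065 Hz, the familiar 18.2 ticks per second
```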
The IBM PC AT added a second, slave 8259 PIC (at I/O address 0xA0), a second 8237 DMA controller for 16-bit DMA (at I/O address 0xC0), a DMA address register implemented with a 74LS612 IC (at I/O address 0x80), and a Motorola MC146818 real-time clock (RTC) with nonvolatile memory (NVRAM) used for system configuration (at I/O address 0x70), replacing the DIP switches and jumpers used for this purpose in the PC and PC/XT models. On expansion cards, the Intel 8255 programmable peripheral interface (PPI) (at I/O address 0x378) provides the parallel I/O that controls the printer, and the 8250 universal asynchronous receiver/transmitter (UART) (at I/O address 0x3F8 or 0x2F8) controls the serial communication at the (pseudo-) RS-232 port.
IBM offered a Game Control Adapter for the PC, which supported analog joysticks similar to those on the Apple II. Although analog controls proved inferior for arcade-style games, they were an asset in certain other genres such as flight simulators. The joystick port on the IBM PC supported two controllers, but required a Y-splitter cable to connect both at once. It remained the standard joystick interface on IBM compatibles until being replaced by USB during the 2000s.
The keyboard that came with the IBM 5150 was an extremely reliable and high-quality electronic keyboard originally developed in North Carolina for the Datamaster. Each key was rated to be reliable to over 100 million keystrokes. For the IBM PC, a separate keyboard housing was designed with a novel usability feature that allowed users to adjust the keyboard angle for personal comfort. Compared with the keyboards of other small computers at the time, the IBM PC keyboard was far superior and played a significant role in establishing a high-quality impression. For example, the industrial design of the adjustable keyboard, together with the system unit, was recognized with a major design award. BYTE in the fall of 1981 went so far as to state that the keyboard was 50% of the reason to buy an IBM PC. The importance of the keyboard was definitely established when the 1983 IBM PCjr flopped, in very large part for having a much different and mediocre Chiclet keyboard that made a poor impression on customers. Oddly enough, the same thing almost happened to the original IBM PC when in early 1981 management seriously considered substituting a cheaper and lower-quality keyboard. This mistake was narrowly avoided on the advice of one of the original development engineers.
However, the original 1981 IBM PC 83-key keyboard was criticized by typists for its non-standard placement of the Return and left ⇧ Shift keys, and because it did not have separate cursor and numeric pads that were popular on the pre-PC DEC VT100 series video terminals. In 1982, Key Tronic introduced the now standard 101-key PC keyboard. In 1984, IBM corrected the Return and left ⇧ Shift keys on its AT keyboard, but shortened the Backspace key, making it harder to reach. In 1986, IBM changed to the 101 key enhanced keyboard, which added the separate cursor and numeric key pads, relocated all the function keys and the Ctrl keys, and the Esc key was also relocated to the opposite side of the keyboard.
Another feature of the original keyboard is the relatively loud "click" sound each key made when pressed. Since typewriter users were accustomed to keeping their eyes on the hardcopy they were typing from and had come to rely on the mechanical sound that was made as each character was typed onto the paper to ensure that they had pressed the key hard enough (and only once), the PC keyboard used a keyswitch that produced a click and tactile bump intended to provide that same reassurance.
The IBM PC keyboard is very robust and flexible. The low-level interface for each key is the same: each key sends a signal when it is pressed and another signal when it is released. An integrated microcontroller in the keyboard scans the keyboard and encodes a "scan code" and "release code" for each key as it is pressed and released separately. Any key can be used as a shift key, and a large number of keys can be held down simultaneously and separately sensed. The controller in the keyboard handles typematic operation, issuing periodic repeat scan codes for a depressed key and then a single release code when the key is finally released.
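As an illustration of the make/break scheme described above, the sketch below (Python, not from the source) tracks which keys are held down from a stream of scan codes, relying on the convention that an XT-keyboard "break" (release) code is the "make" code with bit 7 set; the example byte values are the conventional codes for left Shift and 'A'.

```python
def decode_xt_scancode(byte: int) -> tuple[int, bool]:
    """Split one byte from the 83-key PC/XT keyboard into (key number, is_release).

    The 'break' (release) code is simply the 'make' (press) code with bit 7 set.
    """
    return byte & 0x7F, bool(byte & 0x80)

pressed: set[int] = set()
# Left Shift down (0x2A), 'A' down (0x1E), 'A' up (0x9E), left Shift up (0xAA)
for raw in (0x2A, 0x1E, 0x9E, 0xAA):
    key, released = decode_xt_scancode(raw)
    (pressed.discard if released else pressed.add)(key)
    state = "released" if released else "pressed"
    print(f"key {key:#04x} {state}, currently held: {sorted(pressed)}")
```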
An "IBM PC compatible" may have a keyboard that does not recognize every key combination a true IBM PC does, such as shifted cursor keys. In addition, the "compatible" vendors sometimes used proprietary keyboard interfaces, preventing the keyboard from being replaced.
Although the PC/XT and AT used the same style of keyboard connector, the low-level protocol for reading the keyboard was different between these two series. The AT keyboard uses a bidirectional interface which allows the computer to send commands to the keyboard. An AT keyboard could not be used in an XT, nor the reverse. Third-party keyboard manufacturers provided a switch on some of their keyboards to select either the AT-style or XT-style protocol for the keyboard.
The original IBM PC used the 7-bit ASCII alphabet as its basis, but extended it to 8 bits with nonstandard character codes. This character set was not suitable for some international applications, and soon a veritable cottage industry emerged providing national variants of the original character set. In IBM tradition, these variants were called code pages. These encodings are now obsolete, having been replaced by more systematic and standardized forms of character coding, such as ISO 8859-1, Windows-1251 and Unicode. The original character set is known as code page 437.
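Modern languages still ship a converter for this character set, so moving old PC text into Unicode is straightforward; the sketch below (illustrative Python, not from the source) decodes a few code page 437 bytes, including box-drawing characters, using the standard library's cp437 codec.

```python
# A few CP437 bytes: double-line box-drawing characters, plain ASCII, and a shade block
raw = bytes([0xC9, 0xCD, 0xCD, 0xBB]) + b" IBM PC " + bytes([0xB1])

text = raw.decode("cp437")     # the standard library includes a cp437 codec
print(text)                    # -> "╔══╗ IBM PC ▒"
print(text.encode("utf-8"))    # the same characters in a modern encoding
```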
IBM equipped the model 5150 with a cassette port for connecting a cassette drive and assumed that home users would purchase the low-end model and save files to cassette tapes as was typical of home computers of the time. However, adoption of the floppy- and monitor-less configuration was low; few (if any) IBM PCs left the factory without a floppy disk drive installed. Also, DOS was not available on cassette tape, only on floppy disks (hence "Disk Operating System"). 5150s with just external cassette recorders for storage could only use the built-in ROM BASIC as their operating system. As DOS saw increasing adoption, the incompatibility of DOS programs with PCs that used only cassettes for storage made this configuration even less attractive. The ROM BIOS supported cassette operations.
The IBM PC cassette interface encodes data using frequency modulation with a variable data rate. Either a one or a zero is represented by a single cycle of a square wave, but the square wave frequencies differ by a factor of two, with ones having the lower frequency. Therefore, the bit periods for zeros and ones also differ by a factor of two, with the unusual effect that a data stream with more zeros than ones will use less tape (and time) than an equal-length (in bits) data stream containing more ones than zeros, or equal numbers of each.
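A minimal sketch of that encoding idea in Python follows (not IBM's implementation; the 1,000 Hz and 2,000 Hz figures are commonly cited values and should be treated as illustrative): each bit becomes one square-wave cycle whose period depends on the bit value, so streams heavy in ones take twice the tape time of streams heavy in zeros.

```python
def cassette_cycle_lengths(bits, zero_hz=2000, one_hz=1000):
    """Yield the duration (seconds) of the single square-wave cycle used for each bit.

    Ones use half the frequency of zeros, so each one occupies twice the tape
    time of a zero, producing the variable data rate described above.
    """
    for bit in bits:
        yield 1.0 / (one_hz if bit else zero_hz)

payload = [1, 0, 1, 1, 0, 0, 0, 1]
total_s = sum(cassette_cycle_lengths(payload))
print(f"{total_s * 1000:.2f} ms of tape for {len(payload)} bits")  # 6.00 ms for this mix
```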
IBM also had an exclusive license agreement with Microsoft to include BASIC in the ROM of the PC; clone manufacturers could not have ROM BASIC on their machines, but the agreement also became a problem as the XT, AT, and PS/2 eliminated the cassette port and IBM was still required to install the (now useless) BASIC with them. The agreement finally expired in 1991, when Microsoft replaced BASICA/GW-BASIC with QBASIC. The main core BASIC resided in ROM and "linked" up with the RAM-resident BASIC.COM/BASICA.COM included with PC DOS (they provided disk support and other extended features not present in ROM BASIC). Because BASIC was over 50 kB in size, this split served a useful function during the first three years of the PC, when machines had only 64 to 128 kB of memory, but it became less important by 1985. For comparison, clone makers such as Compaq were forced to include a version of BASIC that resided entirely in RAM.
Most or all 5150 PCs had one or two 5.25-inch floppy disk drives. These were either single-sided double-density (SSDD) or double-sided double-density (DSDD) drives. The IBM PC never used single density floppy drives. The drives and disks were commonly referred to by capacity, such as "160KB floppy disk" or "360KB floppy drive". DSDD drives were backwards compatible; they could read and write SSDD floppies. The same type of physical diskette media could be used for both drives, but a disk formatted for double-sided use could not be read on a single-sided drive.
The disks were Modified Frequency Modulation (MFM) coded in 512-byte sectors, and were soft-sectored. They contained 40 tracks per side at the 48 track per inch (TPI) density, and initially were formatted to contain eight sectors per track. This meant that SSDD disks initially had a formatted capacity of 160 kB, while DSDD disks had a capacity of 320 kB. However, the DOS operating system was later updated to allow formatting the disks with nine sectors per track. This yielded a formatted capacity of 180 kB with SSDD disks/drives, and 360 kB with DSDD disks/drives. The unformatted capacity of the floppy disks was advertised as "250KB" for SSDD and "500KB" for DSDD ("KB" ambiguously referring to either 1000 or 1024 bytes; essentially the same for rounded-off values), however these "raw" 250/500 kB were not the same thing as the usable formatted capacity; under DOS, the maximum capacity for SSDD and DSDD disks was 180 kB and 360 kB, respectively. Regardless of type, the file system of all floppy disks (under DOS) was FAT12.
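The formatted capacities quoted above follow directly from the disk geometry; a quick check (illustrative Python, not from the source) multiplies sides × tracks × sectors × bytes per sector.

```python
SECTOR_BYTES = 512
TRACKS_PER_SIDE = 40

def formatted_kb(sides: int, sectors_per_track: int) -> float:
    return sides * TRACKS_PER_SIDE * sectors_per_track * SECTOR_BYTES / 1024

print(formatted_kb(1, 8))   # 160.0 kB (SSDD, eight-sector format)
print(formatted_kb(2, 8))   # 320.0 kB (DSDD, eight-sector format)
print(formatted_kb(1, 9))   # 180.0 kB (SSDD, nine-sector format)
print(formatted_kb(2, 9))   # 360.0 kB (DSDD, nine-sector format)
```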
The earliest IBM PCs had only single-sided floppy drives until double-sided drives became available in the spring of 1982. After the upgraded 64k-256k motherboard PCs arrived in early 1983, single-sided drives and the cassette model were discontinued.
IBM's original floppy disk controller card also included an external 37-pin D-shell connector. This allowed users to connect additional external floppy drives by third party vendors, but IBM did not offer their own external floppies until 1986.
The industry-standard way of setting floppy drive numbers was by setting jumpers on the drive unit; IBM instead chose a method known as the "cable twist", a floppy data cable with a twist in some of its conductors that swapped the drive-select and motor-control lines between the two drive connectors. This eliminated the need for users to adjust jumpers while installing a floppy drive.
The 5150 could not itself power hard drives without retrofitting a stronger power supply, but IBM later offered the 5161 Expansion Unit, which not only provided more expansion slots, but also included a 10 MB (later 20 MB) hard drive powered by the 5161's own separate 130-watt power supply. The IBM 5161 Expansion Unit was released in early 1983.
During the first year of the IBM PC, it was commonplace for users to install third-party Winchester hard disks which generally connected to the floppy controller and required a patched version of PC-DOS which treated them as a giant floppy disk (there was no subdirectory support).
IBM began offering hard disks with the XT, however the original PC was never sold with them. Nonetheless, many users installed hard disks and upgraded power supplies in them.
After floppy disks became obsolete in the early 2000s, the letters A and B became unused. But for 25 years, virtually all DOS-based PC software assumed the program installation drive was C, so the primary HDD continues to be "the C drive" even today. Other operating system families (e.g. Unix) are not bound to these designations.
Which operating system IBM customers would choose was at first unclear. Although the company expected that most would use PC DOS IBM supported using CP/M-86—which became available six months after DOS—or UCSD p-System as operating systems. IBM promised that it would not favor one operating system over the others; the CP/M-86 support surprised Gates, who claimed that IBM was "blackmailed into it". IBM was correct, nonetheless, in its expectation; one survey found that 96.3% of PCs were ordered with the $40 DOS compared to 3.4% for the $240 CP/M-86.
The IBM PC's ROM BASIC and BIOS supported cassette tape storage. PC DOS itself did not support cassette tape storage. PC DOS version 1.00 supported only 160 kB SSDD floppies, but version 1.1, which was released nine months after the PC's introduction, supported 160 kB SSDD and 320 kB DSDD floppies. Support for the slightly larger nine sector per track 180 kB and 360 kB formats arrived 10 months later in March 1983.
The BIOS (Basic Input/Output System) provided the core ROM code for the PC. It contained a library of functions that software could call for basic tasks such as video output, keyboard input, and disk access in addition to interrupt handling, loading the operating system on boot-up, and testing memory and other system components.
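Programs reached those BIOS services through software interrupts, whose handlers are located via the real-mode interrupt vector table at the bottom of memory. The sketch below (illustrative Python; the INT 10h handler address shown is made up) shows how a 4-byte vector entry, a 16-bit offset followed by a 16-bit segment, resolves to a physical handler address.

```python
import struct

def bios_vector(ivt: bytes, int_no: int) -> tuple[int, int, int]:
    """Look up an interrupt handler in the real-mode interrupt vector table.

    The table starts at physical address 0; each entry is 4 bytes,
    a 16-bit offset followed by a 16-bit segment, both little-endian.
    """
    offset, segment = struct.unpack_from("<HH", ivt, int_no * 4)
    return segment, offset, (segment << 4) + offset

# Hypothetical memory image: pretend INT 10h (video services) points into the BIOS ROM.
ivt = bytearray(1024)
struct.pack_into("<HH", ivt, 0x10 * 4, 0x0065, 0xF000)   # offset, segment (made-up values)
print("INT 10h -> %04X:%04X (phys %05X)" % bios_vector(bytes(ivt), 0x10))
```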
The original IBM PC BIOS was 8k in size and occupied four 2k ROM chips on the motherboard, with a fifth and sixth empty slot left for any extra ROMs the user wished to install. IBM offered three different BIOS revisions during the PC's lifespan. The initial BIOS was dated April 1981 and came on the earliest models with single-sided floppy drives and PC DOS 1.00. The second version was dated October 1981 and arrived on the "Revision B" models sold with double-sided drives and PC DOS 1.10. It corrected some bugs, but was otherwise unchanged. Finally, the third BIOS version was dated October 1982 and found on all IBM PCs with the newer 64k-256k motherboard. This revision was more-or-less identical to the XT's BIOS. It added support for detecting ROMs on expansion cards as well as the ability to use 640k of memory (the earlier BIOS revisions had a limit of 544k). Unlike the XT, the original PC remained functionally unchanged from 1983 until its discontinuation in early 1987 and did not get support for 101-key keyboards or 3.5" floppy drives, nor was it ever offered with half-height floppies.
IBM initially offered two video adapters for the PC, the Color/Graphics Adapter and the Monochrome Display and Printer Adapter. CGA was intended to be a typical home computer display; it had NTSC output and could be connected to a composite monitor or a TV set with an RF modulator in addition to RGB for digital RGBI-type monitors, although IBM did not offer their own RGB monitor until 1983. Supported graphics modes were 40 or 80x25 color text with 8x8 character resolution, 320x200 bitmap graphics with two fixed 4-color palettes, or 640x200 monochrome graphics.
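CGA shipped with 16 kB of video memory, which is why every mode listed above fits the same budget; a quick sketch (illustrative Python, not from the source) shows the arithmetic.

```python
def framebuffer_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    return width * height * bits_per_pixel // 8

modes = {
    "320x200, 4 colors (2 bpp)": framebuffer_bytes(320, 200, 2),
    "640x200, 2 colors (1 bpp)": framebuffer_bytes(640, 200, 1),
    "80x25 text (2 bytes per character cell)": 80 * 25 * 2,
}
for name, size in modes.items():
    print(f"{name}: {size} bytes of the 16,384-byte CGA video RAM")
```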
The MDA card and its companion 5151 monitor supported only 80x25 text with a 9x14 character resolution (total pixel resolution was 720x350). It was mainly intended for the business market and so also included a printer port.
During 1982, the first third-party video card for the PC appeared when Hercules Computer Technologies released a clone of the MDA that could use bitmap graphics. Although not supported by the BIOS, the Hercules Graphics Adapter became extremely popular for business use due to allowing sharp, high resolution graphics plus text and itself was widely cloned by other manufacturers.
In 1985, after the launch of the IBM AT, the new Enhanced Graphics Adapter became available which could support 320x200 or 640x200 in 16 colors in addition to high-resolution 640x350 16 color graphics.
IBM also offered a video board for the PC, XT, and AT known as the Professional Graphics Adapter during 1984-86, mainly intended for CAD design. It was extremely expensive, required a special monitor, and was rarely ordered by customers.
VGA graphics cards could also be installed in IBM PCs and XTs, although they were introduced after the computer's discontinuation.
Serial port addresses and interrupts
| COM port | IRQ | Base port address (hex) |
|---|---|---|
| COM1: | 4 | 3F8 |
| COM2: | 3 | 2F8 |
Only COM1: and COM2: addresses were defined by the original PC. Attempts to share IRQ3 and IRQ4 to use additional ports require special measures in hardware and software, since shared IRQs were not defined in the original PC design. The most typical devices plugged into the serial port were modems and mice. Plotters and serial printers were also among the more commonly used serial peripherals, and there were numerous other more unusual uses such as operating cash registers, factory equipment, and connecting terminals.
IBM made a deal with Japan-based Epson to produce printers for the PC and all IBM-branded printers were manufactured by that company (Epson of course also sold printers with their own name). There was a considerable amount of controversy when IBM included a printer port on the PC that did not follow the industry-standard Centronics design, and it was rumored that this had been done to prevent customers from using non-Epson/IBM printers with their machines (plugging a Centronics printer into an IBM PC could damage the printer, the parallel port, or both). Although third-party cards were available with Centronics ports on them, PC clones quickly copied the IBM printer port and by the late 80s, it had largely displaced the Centronics standard.
BYTE wrote in October 1981 that the IBM PC's "hardware is impressive, but even more striking are two decisions made by IBM: to use outside suppliers already established in the microcomputer industry, and to provide information and assistance to independent, small-scale software writers and manufacturers of peripheral devices". It praised the "smart" hardware design and stated that its price was not much higher than the 8-bit machines from Apple and others. The reviewer admitted that the computer "came as a shock. I expected that the giant would stumble by overestimating or underestimating the capabilities the public wants and stubbornly insisting on incompatibility with the rest of the microcomputer world. But IBM didn't stumble at all; instead, the giant jumped leagues in front of the competition ... the only disappointment about the IBM Personal Computer is its dull name".
In a more detailed review in January 1982, BYTE called the IBM PC "a synthesis of the best the microcomputer industry has offered to date ... as well designed on the inside as it is on the outside". The magazine praised the keyboard as "bar none, the best ... on any microcomputer", describing the unusual Shift key locations as "minor [problems] compared to some of the gigantic mistakes made on almost every other microcomputer keyboard". The review also complimented IBM's manuals, which it predicted "will set the standard for all microcomputer documentation in the future. Not only are they well packaged, well organized, and easy to understand, but they are also complete". Observing that detailed technical information was available "much earlier ... than it has been for other machines", the magazine predicted that "given a reasonable period of time, plenty of hardware and software will probably be developed for" the computer. The review stated that although the IBM PC cost more than comparably configured Apple II and TRS-80 computers, and the insufficient number of slots for all desirable expansion cards was its most serious weakness, "you get a lot more for your money" and concluded, "In two years or so, I think [it] will be one of the most popular and best-supported ... IBM should be proud of the people who designed it".
In a special 1984 issue dedicated to the IBM PC, BYTE concluded that the PC had succeeded both because of its features like an 80-column screen, open architecture, and high-quality keyboard, and "the failure of other major companies to provide these same fundamental features earlier. In retrospect, it seems IBM stepped into a void that remained, paradoxically, at the center of a crowded market".
Many IBM PCs have remained in service long after their technology became largely obsolete. In June 2006, IBM PC and XT models were still in use at the majority of U.S. National Weather Service upper-air observing sites, used to process data as it is returned from the ascending radiosonde, attached to a weather balloon, although they have been slowly phased out. Factors that have contributed to the 5150 PC's longevity are its flexible modular design, its open technical standard (making information needed to adapt, modify, and repair it readily available), use of few special nonstandard parts, and rugged high-standard IBM manufacturing, which provided for exceptional long-term reliability and durability.
Some of the mechanical aspects of the slot specifications are still used in current PCs. A few systems still come with PS/2 style keyboard and mouse connectors.
The IBM model 5150 Personal Computer has become a collectable among vintage computer collectors, due to the system being the first true “PC” as we know them today. These systems can fetch anywhere from $100 to $4500, depending on cosmetic and operational condition. The IBM model 5150 has proven to be reliable; despite their age of 30 years or more, some still function as they did when new.
- Cited references
- Salmans, Sandra (1982-01-09). "Dominance Ended, I.B.M. Fights Back". The New York Times. Retrieved 2 January 2015.
- Burton, Kathleen (February 1983). "Anatomy of a Colossus, Part II". PC Magazine. p. 316. Retrieved 21 October 2013.
- "I.B.M.'S Speedy Redirection". The New York Times. 1983-11-02. Retrieved 2011-02-25.
- Camenker, Brian (Nov 1983). "The Making of the IBM PC". BYTE. pp. 254, 256. Retrieved 19 March 2016.
- Sandler, Corey (November 1984). "IBM: Colossus of Armonk". Creative Computing. p. 298. Retrieved February 26, 2013.
- Libes, Sol (December 1981). "Bytelines". BYTE. pp. 314–318. Retrieved 29 January 2015.
- Jeffery, Brian (1985-09-30). "IBM's high-end micros encroaching on mini territory". Computerworld. pp. SR/20–21. Retrieved 2 January 2015.
- "Total share: 30 years of personal computer market share figures", Jeremy Reimer December 14, 2005 arstechnica.com
- Morgan, Christopher P (March 1980). "Hewlett-Packard's New Personal Computer". BYTE. p. 60. Retrieved 18 October 2013.
- Swaine, Michael (1981-10-05). "Tom Swift Meets the Big Boys: Small Firms Beware". InfoWorld. p. 45. Retrieved 1 January 2015.
- Gens, Frank; Christiansen, Chris (November 1983). "Could 1,000,000 IBM PC Users Be Wrong?". BYTE. p. 135. Retrieved 19 March 2016.
- "Interest Group for Possible IBM Computer". BYTE. January 1981. p. 313. Retrieved 18 October 2013.
- Libes, Sol (June 1981). "IBM and Matsushita to Join Forces?". BYTE. p. 208. Retrieved 18 October 2013.
- Morgan, Chris (July 1981). "IBM's Personal Computer". BYTE. p. 6. Retrieved 18 October 2013.
- Markoff, John (1981-10-05). "Newcomers Join Rush to Enter Personal Computing". InfoWorld. pp. 46–47. Retrieved 2 January 2015.
- Blaxill, Mark; Eckardt, Ralph (2009). The Invisible Edge: Taking Your Strategy to the Next Level Using Intellectual Property. Penguin Group. pp. 195–198. ISBN 9781591842378.
- "The birth of the IBM PC". IBM Archives. Retrieved 13 June 2014.
- "IBM 5120". IBM. Retrieved 20 March 2016.
- Wise, Deborah (1982-08-23). "The colossus runs, not plods— how the IBM PC came to be". InfoWorld. p. 13. Retrieved 29 January 2015.
- McMullen, Barbara E.; John F. (1984-02-21). "Apple Charts The Course For IBM". PC Magazine. p. 122. Retrieved 24 October 2013.
- Hormby, Tom (2006-08-12). "Origin of the IBM PC". Low End Mac. Retrieved 10 January 2015.
- Seidner, Rich (speaker); Cringely, Robert X. (June 1996). "Part II". Triumph of the Nerds: The Rise of Accidental Empires. Season 1. PBS.
- Morgan, Chris (January 1982). "Of IBM, Operating Systems, and Rosetta Stones". BYTE. p. 6. Retrieved 19 October 2013.
- Bunnell, David (Feb–Mar 1982). "The Man Behind The Machine? / A PC Exclusive Interview With Software Guru Bill Gates". PC Magazine. p. 16. Retrieved February 17, 2012.
- Edlin, Jim (February–March 1982). "Confessions of a Convert". PC Magazine. p. 12. Retrieved 20 October 2013.
- Halfhill, Tom R. (December 1986). "The MS-DOS Invasion / IBM Compatibles Are Coming Home". Compute!. p. 32. Retrieved 9 November 2013.
- Rawsthorn, Alice (2011-07-31). "The Clunky PC That Started It All". The New York Times. Retrieved 21 October 2013.
- Zussman, John Unger (1982-08-23). "Let's keep those systems open". InfoWorld. p. 29. Retrieved 29 January 2015.
- "IBM Archives". Archived from the original on 2003-02-10.
- Friedl, Paul J. (November 1983). "SCAMP: The Missing Link In The PC's Past?". PC. pp. 190–197. Retrieved 8 January 2015.
- Atkinson, P. (2013). DELETE: A Design History of Computer Vapourware. London: Bloomsbury Publishing.
- "Obsolete Technology Website". Retrieved 2008-08-14.
- "Welcome, IBM, to personal computing". BYTE. December 1975. p. 90. Retrieved 19 March 2016.
- "PCommuniques". PC Magazine. February–March 1982. p. 5. Retrieved 20 October 2013.
- Likewise, IBM's early PC video display monitors have similar numbers: The IBM Monochrome Display (IBM's MDA monitor) is machine type 5151, the IBM Color Display (their CGA monitor) is machine type 5153, and the IBM Enhanced Color Display (their EGA monitor) is machine type 5154.
- Scott, Greg (October 1988). ""Blue Magic": A Review". U-M Computing News. 3 (19): 12–15.
- Hey, Tony; Papay, Gyuri (2014). The Computing Universe: A Journey through a Revolution. Cambridge University Press. p. 153. ISBN 9780521766456.
- Musil, Steven (2013-10-28). "William Lowe, the 'father of the IBM PC,' dies at 72". CNet. Retrieved 8 January 2015.
- Porter, Martin (November 1983). "The Talk of Boca". PC Magazine. p. 162. Retrieved 22 October 2013.
- Elder, Tait (July 1989). "New Ventures: Lessons from Xerox and IBM". Harvard Business Review. Retrieved 20 January 2015.
- Porter, Martin (1984-09-18). "Ostracized PC1 Designer Still Ruminates 'Why?'". PC Magazine. p. 33. Retrieved 25 October 2013.
- Maher, Jeannette A. (May–June 1982). "Boca Boo-Boo". PC Magazine. p. 10. Retrieved 21 October 2013.
- Bradley, David J. (September 1990). "The Creation of the IBM PC". BYTE. pp. 414–420. Retrieved 2 April 2016.
- McCoy, Frank (2000-01-03). "Mark Dean / He refined the desktop PC. Now he wants to kill it". US News and World Report. Archived from the original on 2012-10-20. Retrieved 6 January 2015.
- Bunnell, David (April–May 1982). "Boca Diary". PC Magazine. p. 22. Retrieved 21 October 2013.
- Cringely, Robert X. (1996). Accidental Empires. HarperCollins. p. 121. ISBN 0887308554.
- Curran, Lawrence J., Shuford, Richard S. (November 1983). "IBM's Estridge". BYTE. pp. 88–97. Retrieved 19 March 2016.
- Freiberger, Paul (1982-08-23). "Bill Gates, Microsoft and the IBM Personal Computer". InfoWorld. p. 22. Retrieved 29 January 2015.
- "28th Annual Design Review", I.D. Magazine, Designers' Choice: IBM Personal Computer, Tom Hardy: Industrial Designer, 1982.
- Magid, Lawrence J. (2001-08-09). "The Start of a Love-Hate Affair With a Computer". Los Angeles Times. Retrieved 10 January 2015.
- Maher, Jimmy (2013-07-18). "The Unmaking and Remaking of Sierra On-Line". The Digital Antiquarian. Retrieved 5 February 2015.
- "Presenting the IBM of Personal Computers.". PC Magazine (Advertisement). February–March 1982. pp. Inside front cover. Retrieved 20 October 2013.
- Lemmons, Phil (October 1981). "The IBM Personal Computer / First Impressions". BYTE. p. 36. Retrieved 19 October 2013.
- "IBM Introduces Its New Personal Computer Line". Santa Cruz Sentinel. Associated Press. 1981-08-13. p. 41. Retrieved 6 October 2015.
- Williams, Gregg (January 1982). "A Closer Look at the IBM Personal Computer". BYTE. p. 36. Retrieved 19 October 2013.
- "My own IBM computer. Imagine that.". BYTE (Advertisement). January 1982. p. 61. Retrieved 19 October 2013.
- Cook, Karen (1984-04-03). "Now Pitching for IBM...Billy Martin?". PC Magazine. p. 34. Retrieved 24 October 2013.
- Porter, Martin (July 1983). "That's Why The PC Is A Tramp". PC Magazine. p. 328. Retrieved 21 October 2013.
- Papson, Stephen (April 1990). "The IBM tramp". Jump Cut: A Review of Contemporary Media (35): 66–72.
- Caputi, Jane (1994). "IBM's Charlie Chaplin: A Case Study". In Maasik, Sonia; Solomon, Jack. Signs of Life in the U.S.A.: Readings on Popular Culture for Writers. Boston: Bedford Books. pp. 117–121.
- "Right away, you can see a difference.". BYTE (Advertisement). August 1982. pp. 206–207. Retrieved 19 October 2013.
- "New From CompuSoft / Learning IBM BASIC For the Personal Computer". PC Magazine (Advertisement). November 1982. p. 66. Retrieved 21 October 2013.
- "NEC's New Advanced Personal Computer Gives Charlie the Blues.". Computerworld (Advertisement). 1982-08-30. p. 81. Retrieved 21 October 2013.
- "Media Magician". PC Magazine (Advertisement). February 1983. p. 372. Retrieved 21 October 2013.
- Cook, Karen (1984-03-06). "Lampoon Does IBM Double Take, Turns Little Tramp to Great Dictator". PC Magazine. p. 43. Retrieved 24 October 2013.
- Dickinson, John (1984-09-18). "IBM's Displaywriter Begets a Family of PC Software". PC. p. 238. Retrieved 29 January 2015.
- Bunnell, David (Feb–Mar 1982). "The Man Behind The Machine?". PC Magazine (interview). p. 16. Retrieved February 17, 2012.
- Green, Wayne (August 1980). "Publisher's Remarks". Kilobaud. p. 8. Retrieved 23 June 2014.
- Pournelle, Jerry (July 1982). "Computers for Humanity". BYTE. p. 396. Retrieved 19 October 2013.
- Mitchell, Peter W. (1983-09-06). "A summer-CES report". Boston Phoenix. p. 4. Retrieved 10 January 2015.
- McEntire, Norman (June–July 1982). "The Key to the PC". PC Magazine. pp. 139–140. Retrieved 21 October 2013.
- "Because we put what you want into it, you get what you want out of it.". BYTE (advertisement). December 1981. pp. 20–21. Retrieved 12 August 2015.
- "The best software for the IBM Personal Computer. Could it be yours?". BYTE (Advertisement). September 1982. pp. 116–117. Retrieved 19 October 2013.
- Freiberger, Paul (1981-10-05). "Some Confusion at the Heart of IBM Microcomputer / Which Operating System Will Prevail?". InfoWorld. pp. 50–51. Retrieved 1 January 2015.
- "IBM directory lists software". Computerworld. 1984-11-12. p. 53. Retrieved 5 January 2015.
- "Read Only". PC (Advertisement). 1985-08-20. pp. 151–154. Retrieved 5 January 2015.
- Freiberger, Paul; Swaine, Michael (2000). Fire in the Valley: The Making of the Personal Computer. McGraw-Hill Book. p. 348. ISBN 0071358927.
- "Billion Dollar Baby". PC. Feb–Mar 1982. p. 5. Retrieved 25 February 2016.
- Bunnell, David (1982-02-03). "Flying Upside Down". PC Magazine. p. 10. Retrieved 6 April 2014.
- Freiberger, Paul (1981-10-05). "Old-Timers Claim IBM Entry Doesn't Scare Them". InfoWorld. p. 5. Retrieved 1 January 2015.
- Rosen Research (1981-11-30). "From the Rosen Electronics Letter / IBM's impact on microcomputer manufacturers". InfoWorld. pp. 86–87. Retrieved 25 January 2015.
- Markoff, John (1982-07-05). "Radio Shack: set apart from the rest of the field". InfoWorld. p. 36. Retrieved 10 February 2015.
- Lundell, Allan (1981-08-31). "TRS-80 Outcrop Companies Evolve". InfoWorld. pp. 46–47. Retrieved 15 February 2015.
- Fastie, Will (June 1983). "The Graphical PC". PC Magazine.
- Isaacson, Walter (2013). Steve Jobs. Simon and Schuster. p. 135,149. ISBN 1451648545.
- Dvorak, John C. (1983-11-28). "Inside Track". InfoWorld. p. 188. Retrieved 23 March 2016.
- Edlin, Jim; Bunnell, David (February–March 1982). "IBM's New Personal Computer: Taking the Measure / Part One". PC Magazine. p. 42. Retrieved 20 October 2013.
- Edlin, Jim (February–March 1982). "TecMates / Tecmar unveils a plug-in smorgasbord". PC Magazine. pp. 57–58. Retrieved 20 October 2013.
- Watt, Peggy; McGeever, Christine (1985-01-14). "Macintosh Vs. IBM PC At One Year". InfoWorld. pp. 16–17. Retrieved 28 December 2014.
- Markoff, John (1982-08-23). "Competition and innovation mark IBM add-in market". InfoWorld. p. 20. Retrieved 29 January 2015.
- "Front cover". PC. December 1983. Retrieved 1 February 2015.
- Burton, Kathleen (March 1983). "Anatomy of a Colossus, Part III". PC. p. 467. Retrieved 30 March 2014.
- Sanger, David E. (1985-08-05). "Philip Estridge Dies in Jet Crash; Guided IBM Personal Computer". The New York Times. Retrieved 19 October 2013.
- Ahl, David H. (March 1984). "Digital". Creative Computing. pp. 38–41. Retrieved 6 February 2015.
- Hayes, Thomas C. (1983-10-24). "Eagle Computer Stays in the Race". The New York Times. Retrieved 10 January 2015.
- Pournelle, Jerry (January 1984). "Too Many Leads, or What in *;?!#"*? Goes First?". BYTE. p. 61. Retrieved 20 January 2015.
- Whitmore, Sam (November 1983). "Fermentations". inCider. p. 10. Retrieved 7 January 2015.
- Curran, Lawrence J. (Feb 1984). "The Compatibility Craze". BYTE. p. 4. Retrieved 26 August 2015.
- Lemmons, Phil (Fall 1984). "IBM and Its Personal Computers". BYTE. p. 1. Retrieved 18 March 2016.
- "1983 Apple Keynote: The "1984" Ad Introduction". YouTube. April 1, 2006. Retrieved January 22, 2014.
- Libes, Sol (September 1985). "The Top Ten". BYTE. p. 418. Retrieved 27 October 2013.
- "IBM Personal Computers At a Glance". BYTE. Fall 1984. pp. 10–26. Retrieved 18 March 2016.
- Kennedy, Don (1985-04-16). "PCs Rated Number One". PC Magazine. p. 42. Retrieved 28 October 2013.
- Killen, Michael (Fall 1984). "IBM Forecast / Market Dominance". BYTE. pp. 30–38. Retrieved 18 March 2016.
- Bartimo, Jim (1984-11-05). "Mainframe BUNCH Goes Micro". InfoWorld. pp. 47–50. Retrieved 6 January 2015.
- Mace, Scott (1981-10-05). "Where You Can Go to Purchase the New Computers". InfoWorld. p. 49. Retrieved 1 January 2015.
- IBM did not offer own brand cassette recorders, but the 5150 had a cassette player jack, and IBM anticipated that entry level home users would connect their own cassette recorders for data storage instead of using the more expensive floppy drives (and use their existing TV sets as monitors); to this end, IBM initially offered the 5150 in a basic configuration without any floppy drives or monitor at the price of $1,565, whereas they offered a system with a monitor and single floppy drive for an initial $3,005. Few if any users however bought IBM 5150 PCs without floppy drives.
- Scott Mueller, Upgrading and Repairing PCs, 2nd Ed, Que Books, 1992, ISBN 0-88022-856-3, page 94
- "Dual-Head operation on vintage PCs".
- Scott Mueller, Upgrading and Repairing PCs, Second Edition, Que Books, 1992, ISBN 0-88022-856-3, page 48
- "Whence Came the IBM PC", Test and Measurement World, retrieved March 2,
- Gene Smart and Andrew Reinhardt, 15 years of Bits, Bytes and Other Great Moments, BYTE Magazine, September 1990, p. 382
- Mueller, Guide to repairing and upgrading PCs, 6th edition
- i.e. 33% more speed, 50% more disk space
- PC Magazine, Sept. 30, 1986, pp. 179-184
- The opening sentence of an April 29, 1986 PC Magazine article reads "If you own an IBM PC AT and your hard disk hasn't crashed yet, don't worry -- it probably will." highbeam.com & encyclopedia.com (the latter a Chicago Sun-Times article citing the PC Magazine story). IBM recovered, although with mixed comments, as noted in the Sept. 30, 1986 PC Magazine article, "The Two Faces of IBM's 8-MHz AT," pp. 179 - 184.
- wustl.edu - ECE306 Lecture 16
- The DMA address register extends the 16-bit transfer memory address capacity of the 8237 to 24 bits
- illinois.edu - Real time clock plus RAM
- ctv.se - PC KITS-tutorial page (parallel port, joystick port)
- The IBM PC serial port is not strictly RS-232, since it uses TTL signal levels, whereas RS-232 requires signals of +/- 3 to 15 volts; some signal levels that are valid for a TTL high state, and all signal levels that represent a TTL low state, fall within the forbidden range of -3 to +3 volts for standard RS-232. (However, it is not difficult to design and construct a level converter that will convert between IBM serial port and standard RS-232 signals.)
- IBM (July 1982). Technical Reference: Personal Computer Hardware Reference Library (Revised ed.). IBM Corp. pp. 2–93. 6025008.
- Sometimes the tracks were also referred to as cylinders, which is technically correct and analogous to hard drive cylinders. One floppy disk track equaled one cylinder; however, with double-sided floppies, only the first side's cylinder numbers were identical to the track numbers: on the second side, the cylinders 1-40 corresponded to tracks 41-80 of the formatted floppy.
- 163,840 bytes, i.e. 512 bytes × 8 sectors × 40 tracks on the one side used
- 327,680 bytes, i.e. 512 bytes × 8 sectors × 40 tracks × 2 sides
- 184,320 bytes, i.e. 512 bytes × 9 sectors × 40 tracks on the one side used
- 368,640 bytes, i.e. 512 bytes × 9 sectors × 40 tracks × 2 sides
- Edlin, Jim (June–July 1982). "CP/M Arrives". PC Magazine. p. 43. Retrieved 21 October 2013.
- "PCommuniques". PC Magazine. February 1983. p. 53. Retrieved 21 October 2013.
- "Can You Do Real Work With the 30-Year-Old IBM 5150?".
- General references
- Norton, Peter (1986). Inside the IBM PC. Revised and enlarged. New York. Brady. ISBN 0-89303-583-1.
- August 12, 1981 press release announcing the IBM PC (PDF format).
- Mueller, Scott (1992). Upgrading and Repairing PCs, Second Edition, Que Books, ISBN 0-88022-856-3
- Chposky, James; Ted Leonsis (1988). Blue Magic - The People, Power and Politics Behind the IBM Personal Computer. Facts On File. ISBN 0-8160-1391-8.
- IBM (1983). Personal Computer Hardware Reference Library: Guide to Operations, Personal Computer XT. IBM Part Number 6936831.
- IBM (1984). Personal Computer Hardware Reference Library: Guide to Operations, Portable Personal Computer. IBM Part Numbers 6936571 and 1502332.
- IBM (1986). Personal Computer Hardware Reference Library: Guide to Operations, Personal Computer XT Model 286. IBM Part Number 68X2523.
- This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
- "Birth of the IBM PC", IBM Corporation History Archives website
|Wikimedia Commons has media related to IBM Personal Computer.|
- IBM SCAMP
- IBM 5150 information at www.minuszerodegrees.net
- IBM PC 5150 System Disks and ROMs
- IBM PC from IT Dictionary
- IBM PC history and technical information
- What a legacy! The IBM PC's 25 year legacy
- CNN.com - IBM PC turns 25
- IBM-5150 and collection of old digital and analog computers at oldcomputermuseum.com
- IBM PC images and information
- A brochure from November, 1982 advertising the IBM PC
- A Picture of the XT/370 cards, showing the dual 68000 processors
- The History Of The IBM Personal Computer
|IBM Personal Computers||Succeeded by: IBM Personal Computer XT, IBM Portable Personal Computer, IBM Personal Computer/AT, IBM PC Convertible|
ARM 1 (v1)
This was the very first ARM processor. Actually, when it was first manufactured in April 1985,
it was the very first commercial RISC processor. Ever.
As a testament to the design team, it was "working silicon" in its first incarnation,
it exceeded its design goals, and it used less than 25,000 transistors.
The ARM 1 was used in a few evaluation systems on the BBC micro (Brazil - BBC interfaced ARM),
and a PC machine (Springboard - PC interfaced ARM).
It is believed a large proportion of Arthur was developed on the Brazil hardware.
In essence, it is very similar to an ARM 2 - the differences being that R8 and R9 are not banked
in IRQ mode, there's no multiply instruction, no LDR/STR with register-specified shifts, and no
ARM evaluation system for BBC Master
(original picture source not known; this version edited by Rick Murray to include a zoomed-up view of the ARM)
ARM 2 (v2)
Experience with the ARM 1 suggested improvements that could be made. Such additions as the MUL
and MLA instructions allowed for real-time digital signal processing. Back then, it was to aid
in generating sounds. Who could have predicted exactly how suitable to DSP the ARM would be,
some fifteen years later?
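As a rough illustration (not code from any Acorn source), this is the sort of multiply-accumulate inner loop that turns up in audio and other DSP work; each acc += coeff * sample step is exactly what MLA performs in a single instruction. The function and names are purely illustrative.

```c
#include <stdint.h>

/* One output sample of a simple FIR filter: a sum of products, which an
   ARM compiler (or hand-written assembler) can map onto repeated MLA. */
int32_t fir_sample(const int16_t *coeff, const int16_t *sample, int taps)
{
    int32_t acc = 0;
    for (int i = 0; i < taps; i++)
        acc += (int32_t)coeff[i] * sample[i]; /* multiply-accumulate step */
    return acc;
}
```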
In 1985, Acorn hit hard times which led to it being taken over by Olivetti. It took two years
from the arrival of the ARM to the launch of a computer based upon it...
...those were the days my friend, we thought they'd never end.
When the first ARM-based machines rolled out, Acorn could gladly announce to the world that they
offered the fastest RISC processor around. Indeed, the ARM processor kicked ass across the
computing league tables, and for a long time was right up there in the 'fastest processors'
listings. But Acorn faced numerous challenges. The computer market was in disarray, with some
people backing IBM's PC, some the Amiga, and all sorts of little itty-bitty things. Then Acorn
go and launch a machine offering Arthur (which was about as nice as the first release of Windows)
which had no user base, precious little software, and not much third party support. But they
The ARM 2 processor was the first to be used within the RISC OS platform, in the A305, A310, and
A4x0 range, and on all of the early machines, including the A3000. It is clocked at 8MHz, which
translates to approximately four and a half million instructions per second (0.56 MIPS/MHz).
No current image - can you help?
ARM 3 (v2as)
Launched in 1989, this processor built on the ARM 2 by offering 4K of cache memory and the SWP
instruction. The desktop computers based upon it were launched in 1990.
Internally, via the dedicated co-processor interface, CP15 was 'created' to provide processor
control and identification.
Several speeds of ARM 3 were produced. The A540 runs a 26MHz version, and the A4 laptop runs a
24MHz version. By far the most common is the 25MHz version used in the A5000, though those with
the 'alpha variant' have a 33MHz version.
At 25MHz, with 12MHz memory (a la A5000), you can expect around 14 MIPS (0.56 MIPS/MHz).
It is interesting to note that the ARM3 doesn't 'perform' faster - both the ARM2 and the ARM3
average 0.56 MIPS/MHz. The speed boost comes from the higher clock speed, and the cache.
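The arithmetic behind these figures is simply the clock speed multiplied by the per-MHz rating; here is a small C sketch using the numbers quoted above (illustrative only).

```c
#include <stdio.h>

/* Approximate MIPS = clock (MHz) x rating (MIPS/MHz), per the figures above. */
static double approx_mips(double clock_mhz, double mips_per_mhz)
{
    return clock_mhz * mips_per_mhz;
}

int main(void)
{
    printf("ARM2 @  8MHz: ~%.1f MIPS\n", approx_mips(8.0, 0.56));  /* ~4.5 */
    printf("ARM3 @ 25MHz: ~%.1f MIPS\n", approx_mips(25.0, 0.56)); /* ~14  */
    return 0;
}
```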
Oh, and just to correct a common misunderstanding, the A4 is not a squashed down version of the
A5000. The A4 actually came first, and some of the design choices were reflected in the later
ARM3 with FPU
(original picture downloaded from Arcade BBS, archive had no attribution)
ARM 250 (v2as)
The 'Electron' of ARM processors, this is basically a second level revision of the ARM 3 design
which removes the cache, and combines the primary chipset (VIDC, IOC, and MEMC) into the one
piece of silicon, making the creation of a cheap'n'cheerful RISC OS computer a simple thing
indeed. This was clocked at 12MHz (the same as the main memory), and offers approximately 7 MIPS
This processor isn't as terrible as it might seem. That the A30x0 range was built with the ARM250
was probably more a cost-cutting exercise than intention. The ARM250 was designed for low power
consumption and low cost, both important factors in devices such as portables, PDAs, and
organisers - several of which were developed and, sadly, none of which actually made it to a
No current image - can you help?
ARM 250 mezzanine
This is not actually a processor. It is included here for historical interest. It seems the
machines that would use the ARM250 were ready before the processor, so early releases of the
machine contained a 'mezzanine' board which held the ARM 2, IOC, MEMC, and VIDC.
ARM 4 and ARM 5
These processors do not exist.
More and more people began to be interested in the RISC concept, as at the same sort of time
common Intel (and clone) processors showed a definite trend towards higher power consumption and
greater need for heat dissipation, neither of which are friendly to devices that are supposed to
be running off batteries.
The ARM design was seen by several important players as being the epitome of sleek, powerful RISC
It was at this time a deal was struck between Acorn, VLSI (long-time manufacturers of the ARM
chipset), and Apple. This led to the death of the Acorn RISC Microprocessor, as Advanced RISC
Machines Ltd was born. This new company was committed to design and support specifically for the
processor, without the hassle and baggage of RISC OS (the main operating system for the processor
and the desktop machines). Both of those would be left to Acorn.
In the change from being a part of Acorn to being ARM Ltd in its own right, the whole numbering
scheme for the processors was altered.
ARM 610 (v3)
This processor brought with it two important 'firsts'. The first 'first' was full 32 bit
addressing, and the second 'first' was the opening for a new generation of ARM based hardware.
Acorn responded by making the RiscPC. In the past, critics were none-too-keen on the idea of
slot-in cards for things like processors and memory (as used in the A540), and by this time many
people were getting extremely annoyed with the inherent memory limitations in the older hardware:
the MEMC can only address 4Mb of memory, and you can add more by daisy-chaining MEMCs - an idea
that not only sounds hairy, it is hairy!
The RiscPC brought back the slot-in processor with a vengeance. Future 'better' processors were
promised, and a second slot was provided for alien processors such as the 80486 to be plugged in.
As for memory, two SIMM slots were provided, and the memory was expandable to 256Mb. This does
not sound like much, as modern PCs come with half that as standard. However, you can get a lot of
mileage from a RiscPC fitted with a puny 16Mb of RAM.
But, always, we come back to the 32 bit. Because it has been with us and known about ever since
the first RiscPC rolled out, but few people noticed, or cared. Now as the new generation of ARM
processors drop the 26 bit 'emulation' modes, we RISC OS users are faced with the option of
getting ourselves sorted, or dying.
Ironically, the other mainstream operating systems for the RiscPC hardware - namely ARMLinux and
netbsd/arm32 are already fully 32 bit.
Several speeds were produced: 20MHz, 30MHz, and the 33MHz part used in the RiscPC.
The ARM610 processor features an on-board MMU to handle memory, a 4K cache, and it can even
switch itself from little-endian operation to big-endian operation. The 33MHz version offers
around 28MIPS (0.84 MIPS/MHz).
The RiscPC ARM610 processor card
(original picture by Rick Murray, © 2002)
ARM 710 (v3)
As an enhancement of the ARM610, the ARM 710 offers an increased cache size (8K rather than 4K),
clock frequency increased to 40MHz, improved write buffer and larger TLB in the MMU.
Additionally, it supports CMOS/TTL inputs, Fastbus, and 3.3V power but these features are not
used in the RiscPC.
Clocked at 40MHz, it offers about 36 MIPS (0.9 MIPS/MHz); this improved efficiency, combined with
the higher clock speed, means it runs an appreciable amount faster than the ARM 610.
ARM710 side by side with an 80486, the coin is a British 10 pence coin.
(original picture by Rick Murray, © 2001)
The ARM7500 is a RISC based single-chip computer with memory and I/O control on-chip to minimise
external components. The ARM7500 can drive LCD panels/VDUs if required, and it features power
management. The video controller can output up to a 120MHz pixel rate, there is 32bit sound, and there are
four A/D convertors on-chip for connection of joysticks etc.
The processor core is basically an ARM710 with a smaller (4K) cache.
The video core is a VIDC2.
The IO core is based upon the IOMD.
The memory/clock system is very flexible, designed for maximum use with minimum fuss. Setting up
a system based upon the ARM7500 should be fairly simple.
A version of the ARM 7500 with hardware floating point support.
ARM7500FE, as used in the Bush Internet box.
(original picture by Rick Murray, © 2002)
StrongARM / SA110 (v4)
The StrongARM took the RiscPC from around 40MHz to 200-300MHz and showed a speed boost that was
more than the hardware should have been able to support. Still severely bottlenecked by the
memory and I/O, the StrongARM made the RiscPC fly. The processor was the first to feature
different instruction and data caches, and this caused quite a lot of self-modifying code to
fail including, amusingly, Acorn's own runtime compression system. But on the whole, the
incompatibilities were not more painful than an OS upgrade (anybody remember the RISC OS 2 to
RISC OS 3 upgrade, when all the programs that used SYS OS_UpdateMEMC, 64, 64 for a speed boost
froze the machine solid!).
In instruction terms, the StrongARM can offer half-word loads and stores, and signed half-word
and byte loads and stores. Also provided are instructions for multiplying two 32 bit values
(signed or unsigned) and returning a 64 bit result. This is documented in the ARM assembler
user guide as only working in 32-bit mode; however, experimentation will show you that they work
in 26-bit mode as well. Later documentation confirms this.
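A minimal C sketch of what these long-multiply instructions correspond to: a 32×32→64-bit product, which a compiler targeting this core can implement with a single UMULL or SMULL rather than a sequence of 32-bit multiplies. The function names are illustrative only.

```c
#include <stdint.h>

/* 32 x 32 -> 64-bit multiplies; a compiler for this core can emit
   UMULL (unsigned) or SMULL (signed) for each of these. */
uint64_t mul64_unsigned(uint32_t a, uint32_t b)
{
    return (uint64_t)a * b;
}

int64_t mul64_signed(int32_t a, int32_t b)
{
    return (int64_t)a * b;
}
```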
The cache has been split into separate instruction and data cache (Harvard architecture), with
both of these caches being 16K, and the pipeline is now five stages instead of three.
In terms of performance... at 100MHz, it offers 114MIPS which doubles to 228MIPS at 200MHz
A StrongARM mounted on a LART board.
In order to squeeze the maximum from a RiscPC, the Kinetic includes fast RAM on the processor
card itself, as well as a version of RISC OS that installs itself on the card. Apparently it
flies due to removing the memory bottleneck, though this does cause 'issues' with DMA
A Kinetic processor card.
This is a version of the SA110 designed primarily for portable applications. I mention it here
as I am reliably informed that the SA1100 is the processor inside the 'faster' Panasonic
satellite digibox. It contains the StrongARM core, MMU, cache, PCMCIA, general I/O controller
(including two serial ports), and a colour/greyscale LCD controller. It runs at 133MHz or 200MHz
and it consumes less than half a watt of power.
The Thumb instruction set is a reworking of the ARM set, with a few things omitted. Thumb
instructions are 16 bits (instead of the usual 32 bit). This allows for greater code density in
places where memory is restricted. The Thumb set can only address the first eight registers, and
there are no conditional execution instructions. Also, the Thumb cannot do a number of things
required for low-level processor exceptions, so the Thumb instruction set will always come
alongside the full ARM instruction set. Exceptions and the like can be handled in ARM code, with
Thumb used for the more regular code.
These versions are afforded less coverage due, mainly, to my not owning nor having access to any
of these versions.
While my site started as a way to learn to program the ARM under RISC OS, the future is in
embedded devices using these new systems, rather than the old 26 bit mode required by RISC OS...
...and so, these processors are something I would like to detail, in time.
This is an extension of the version three design (ARM 6 and ARM 7) that provides the extended
64 bit multiply instructions.
These instructions became a main part of the instruction set in the ARM version 4 (StrongARM,
These processors include the Thumb instruction set (and, hence, no 26 bit mode).
These processors include a number of additional instructions which provide improved performance
in typical DSP applications. The 'E' stands for "Enhanced DSP".
The future is here. Newer ARM processors exist, but they are 32 bit devices.
This means, basically, that RISC OS won't run on them until all of RISC OS is modified to be
32 bit safe.
As long as BASIC is patched, a reasonable software base will exist. However all C programs will
need to be recompiled. All relocatable modules will need to be altered. And pretty much all
assembler code will need to be repaired. In cases where source isn't available (ie, anything
written by Computer Concepts), it will be a tedious slog.
It is truly one of the situations that could make or break the platform.
I feel, as long as a basic C compiler/linker is made FREELY available, then we should go for it.
It need not be a 'good' compiler, as long as it will be a drop-in replacement for Norcroft CC
version 4 or 5. Why this? Because RISC OS depends upon enthusiasts to create software, instead
of big corporations. And without inexpensive reasonable tools, they might decide it is too much
to bother with converting their software, so may decide to leave RISC OS and code for another
I, personally, would happily download a freebie compiler/linker and convert much of my own code.
It isn't plain sailing for us - think of all of the library code that needs to be checked. It
will be difficult enough to obtain a 32 bit machine to check the code works correctly, never
mind all the other pitfalls. Asking us for a grand to support the platform is only going to
turn us away in droves. Heck, I'm still using ARM 2 and ARM 3 systems. Some of us smaller coders
won't be able to afford such a radical upgrade. And that will be VERY BAD for the platform. Look
how many people use the FREE user-created Internet suite in preference to commercial
alternatives. Look at all of the support code available on
Arcade BBS. Much of that will probably go, yes. But
would a platform trying to re-establish itself really want to say goodbye to the rest?
I don't claim my code is wonderful, but if only one person besides myself makes good use of it -
then it has been worth it.
Copyright © 2004 Richard Murray
Life on Earth faces challenges from changing climate
The Central Coast Climate Science Education website provides interested citizens reliable and timely access to the rapidly expanding understanding of the Earth's climate system, a topic which will be of increasing importance to our nation and the world. Information presented on this website is constantly evolving, so please check back periodically for new material, using the Update Log page.
Links are provided to other websites and informative posts as well as responses to local inquiries about climate science, and information about upcoming local events bearing on this issue.
If you are new to this website, please visit the Site Guide for a brief explanation of the content of the pages on this website. And, if you are new to learning the basics of climate science, it is highly recommended that you first read the Climate Science Summary page, which summarizes the basic science and the impacts of climate science.
Important features of this website are the Tutorial and Essays pages consisting of step-by-step "lessons" and explanations of various climate science topics for those who do not have an extensive background in science.
In addition, if you are a member of any group that would like a free PowerPoint presentation on "Climate Science Basics", please send a request to ray.climate(@sign) charter.net and a time will be arranged. The presentation runs about 35 to 45 minutes, with time afterwards devoted to questions and comments.
Note: This website utilizes PDF files. For Microsoft Windows computer users, this requires the free Adobe Reader software. For Apple Mac computer users, the Preview application should work fine.
If you have questions, comments, observations or recommendations, please contact website owner and author Dr. Ray Weymann via email ray.climate(@sign) charter.net or use the Contact page.
This short video, “Science in America” by superb science communicator and Astrophysicist Neil deGrasse Tyson, was posted on many websites just prior to the worldwide March for Science. His message is powerful and more important than ever.
Alerts & Events
For expanded information and past postings, see
Alerts & Event Notices
On June 4th, Dr. Weymann gave a talk at the First Presbyterian Church in San Luis Obispo; the talk was video-taped. As it will take time before the video is posted, he is listing the links and book references mentioned during the talk so they can be accessed by those interested prior to the posting of the video:
“A Climate for Change: Global Warming Facts for Faith-Based Decisions"
by Katharine Hayhoe and Andrew Farley, published by FaithWords, Hachette Book Group.
ISBN: 978-0-446-54956-1. (The book is apparently out of print and is now very expensive.) The First Presbyterian Church in San Luis Obispo has a copy.
Katherine Hayhoe is a leading climate scientist and Evangelical Christian, and her husband, Andrew Farley, is an Evangelical Christian minister. This book is highly recommended for those who are not inclined to take action to deal with climate change due to their religious perspectives.
“Climate of Hope:How Cities, Businesses, and Citizens Can Save the Planet"
by Michael Bloomberg and Carl Pope. ISBN 978-1-250-14207-8 (also available as an e-book.) Michael Bloomberg is the founder of Bloomberg LP, a global media and financial services company, and former Mayor of New York City. Carl Pope is the former Executive Director of the Sierra Club. This book emphasizes the role of cities in leading the way to a low carbon emission world, the role of natural resources in helping to mitigate the effects of climate change, and the economic opportunities that are open in green energy technology.
Faith-based statements on climate change and stewardship of the Earth:
This website lists statements from every major religious faith in the world and all major Christian denominations.
Risky Business: The Economic Risks of Climate Change in the United States
This is the report of “Risky Business", an organization founded by Michael Bloomberg, along with former Treasury Secretary Hank Paulson and businessman and philanthropist Tom Steyer. It lays out both the national economic risks posed by climate change as well as a region-by-region analysis. An excellent resource written by people with deep knowledge of business and economics.
C40 Cities Climate Leadership Group
From their website: "The C40 Cities Climate Leadership Group, now in its 10th year, connects more than 80 of the world’s greatest cities, representing over 600 million people
and one quarter of the global economy. Created and led by cities, C40 is focused on tackling climate change and driving urban action that reduces greenhouse gas emissions and climate risks, while increasing the health,
wellbeing and economic opportunities of urban citizens."
Global Covenant of Mayors for Climate and Energy
From their website: "The Global Covenant of Mayors for Climate & Energy is an international alliance of cities and local governments with a shared long-term vision of promoting and supporting voluntary action to combat climate change and move to a low emission, resilient society. This is a historic and powerful response by the world’s cities to address the climate challenge. It is the broadest global alliance committed to local climate leadership, building on the commitment of over 7,449 cities, representing 674,484,562 people worldwide and 9.31% of the global population.”
(San Luis Obispo is a member.)
Recent Website Updates
The fifth in a series of updated tutorials is available in the updated tutorials page. It discusses the relative importance of the various greenhouse gases, two ways in which this importance is measured and the sources of these greenhouse gases.
02.22.17 Editorial by Rush Holt
There has been a lot of discussion lately about how active scientists should be politically in light of the
new Administration. In the January 10th 2017 edition of SCIENCE Magazine, there is an excellent
editorial by the new CEO of the American Association for the Advancement of Science, Rush Holt.
Holt is a scientist and served for 16 years in Congress.
Here is the editorial in PDF form which I downloaded from the Magazine.
For all website updates see the Update Log
Bühler, Gottlieb Friederick
1829 to 1865
Gottlieb Friedrich Bühler, a German missionary of the Church Missionary Society (CMS) who trained early Yoruba pastors and evangelists at the Abeokuta Training Institution, was born on July 3, 1829, into a large Christian family at Adelberg in Württemberg, Germany. Bühler trained first as a schoolmaster and had brief stints as a teacher in various institutions in his homeland before enrolling in the Missionary College at Basel in 1851. His decision to turn to mission might have been influenced by his eldest brother who had trained at the college and was serving in India at the time.
Preparing for the Missionary Vocation
The Missionary College at Basel was undergoing a period of ideological change when Bühler enrolled there. From the inception of the college in 1816, Christian Blumhardt's liberal ethos that affirmed cultural sensitivity towards indigenous peoples in the missionary encounter held sway. Blumhardt's philosophy of mission was a product of late eighteenth and early nineteenth century German thought on language and nationality, especially as posited by Johann Gottfried Herder and Friedrich Schleiermacher. Since, according to them, whatever is conceived in one language cannot be exactly duplicated in another, national identity is rooted in people's language, thought, and culture. They should, therefore, not only be preserved as their genius but should also be cultivated and nurtured in order to increase self understanding and national vocation.
As the nineteenth century wore on and Germany joined other European powers in exploring lands overseas in the spirit of the age, Blumhardt's mission philosophy came under attack from the growing rank of German colonists. Knowing that missionaries were in the vanguard of overseas enterprises, these adventurers sought to win them over. They achieved their quest for a new direction in mission philosophy when Blumhardt was succeeded in 1850 by Joseph Josenhans who led the Missionary College until 1879. Under the new leadership the underlying philosophy of missionary formation at Basel changed from facilitating beneficent civilization to propagating German-Swabian civilization. Economic aid and transfer of German material culture through trade became an integral part of mission. Moreover, Western civilization, particularly as represented by the German culture, was seen not only as a tool for communicating mission; to do mission was to civilize.
The new mission direction at Basel seminary could not have become fully operational in the years of Bühler's enrolment. And whatever accretion he brought to Islington of Basel's new way of seeing indigenous cultures in relation to those of Europeans could only have shrivelled under Henry Venn's pro-Blumhardt mission philosophy that affirmed indigenous cultures. Yet, as his training program at Abeokuta would unfold from 1858, Bühler had taken the best of both Blumhardt and Josenhans and would use them eclectically to shape the minds of his students in the mission field.
Mission to the Yoruba Country
On completing his three year training at the Missionary College, Bühler entered the service of the Church Missionary Society. In 1854, he was at the Church Missionary College, Islington, where he acquired proficiency in English. He was ordained a deacon by the bishop of London on June 3, 1855. Bühler departed for the Yoruba Mission of the CMS on October 24, 1855 and arrived in Lagos at the end of the year. He soon proceeded to Abeokuta where he was stationed until August 1856 when, in the company of Mr. Hoch, he went to Ibadan to relieve Mr. and Mrs Hinderer. The missionary couple had returned to England to recuperate their health.
Bühler moved to Lagos the following year after his stint at Ibadan to take charge of the chapel at Breadfruit. In 1858, he finally returned to Abeokuta to take charge of the Training Institution from Mr. Maser. The society had been struggling to establish the institution since 1853 but had had to contend with the twin forces of Mr. Henry Townsend's prejudice against liberal education for Africans and the untimely death of European agents sent out to establish it. Although Bühler reluctantly accepted the assignment, the need to train personnel for the mission was glaring. According to him, "I myself saw how necessary it was to instruct the young men who were waiting almost two years for regular instruction. We want more agents; there is no want of young men, they only want to be instructed." At Ake, Abeokuta, where he was stationed, he ran the institution and also assisted in church work. There he was also ordained into the priesthood on Sunday, March 20, 1859, by Bishop Bowen of Sierra Leone.
In 1860, Bühler married Sophia Mary Jay and returned with her to Abeokuta on December 8. She did not survive her seasoning fever, dying on January 4, 1861. He remarried in 1863, to Miss Annie Norris, who survived him.
Bühler's training program at Abeokuta evolved and expanded with years, and his thought shows that he appreciated the need to situate the minds of his students in both the immediate and the wider contexts of their vocation. Recognizing the immediate context of culture and ministry, and the fact that some of the students did not understand English, he began teaching Scripture history in the Yoruba language to the thirteen enrolled pupils in April 1858. To this end, among his early requests for teaching materials was Pinnock's Analysis of Scripture History in both the Old and the New Testaments. Six months into his new assignment he reported with satisfaction that, "In my teaching I laid particular stress upon Scripture History to give them a good & practical knowledge of it & they, to my great delight showed a great increasing interest." His emphasis on Scripture is a recognition of the pivotal nature of the Bible to their calling as ministers. But in teaching the subject in Yoruba, Bühler was meeting the need to situate learning also in the context of the pupils' culture. His regret was that much of the Bible had not yet been translated into the Yoruba language.
When in 1861 the supervision of the day schools in Abeokuta was added to his responsibility, Bühler was disappointed with several aspects of the method of instructing the children. He particularly considered it "a great disadvantage" in the mission that there was "too much teaching in the [English] language which retards the progress considerably it being for most of the children an unknown tongue." Moreover, he reported:
Reading & numbering in engl. confuses them & they [the monitors] read them what they do not understand. The consequence is that they also read Yoruba without thinking. But the worst is that it takes them an enormous time four, five & even 6 years [sic] before they can read their own tongue fluently. I have now commenced at working out a plan for our schools in which I intend to lay down as a rule not to teach engl. until they can read their own language...Scripture history should always be given in their own Mother Tongue, otherwise I am afraid the result will never be so satisfactory as might be expected.
In the meantime Bühler had been adding other branches of knowledge to his training programme at the institution. Within the first six months he added "Catechising, Reading (English and Yoruba) & Writing, Geography, especially biblical Geography, history, & the Art of teaching..." In another six months he had expanded the programme:
In general history we had the history of Rome to Constantin... for Geography, Europe, in Bibl. Geography Paul's Missionary journeys. In Natur. Philosophy-- the rudiments; In Natur. History--the animal kingdom. In Arithmetic, fractions & applications thereof. In reading translating of verses, portions or whole chapters from engl. into Yoruba & from Yoruba into engl. was frequently practiced...[sic].
Two years later, Bühler could still report that he had added the histories of Assyria, Babylonia, Persia, the Jews and Alexander the Great to his general history, and his geography lesson now incorporated Africa. He had also introduced new subjects in the sciences- elementary astronomy, electricity, "mammalia"--and in the arts--singing, calligraphy & orthography and "a small beginning...in playing the harmonium." He even ordered for a machine to demonstrate the principle of electricity to his pupils, who came from "this country where the god of thunder & lightening is worshipped."
If Bühler's teaching of Scripture history in the mother tongue could be said to be consistent with Blumhardt and the CMS's missionary ethos, his continuous introduction of liberal arts and aspects of the growing physical sciences of European civilization would qualify as the expression of the civilizing mission of Josenhans. In this interaction, the latter could not but restrain the romantic tendencies of the former while the former served as the control valve for the ideological pressure the latter might seek to exert.
But the introduction of the liberal arts, in particular, has a deeper significance. Bühler's 1859 introduction of these subjects into the training program of the institution was an implicit recognition of the wider context that shaped the faith now being bequeathed to the pupils. But more than this, and in the face of the growing expansion of Europe through the activities of the missionaries themselves and the colonists already at work on the coast, Bühler needed to help his pupils to appreciate the antecedents to the world presently encroaching on their primal society. As agents of change in the making, this was a necessary preparation for their service with the Society for in another two years the process of colonization would burst on them in the 1861 annexation of Lagos to the expanding British possessions in West Africa, with all its complexities and unsettling challenges. Perhaps more than the teacher himself realized, these future agents of the Society needed this enlargement of perspective to be able to function in the emerging cultural environment where they would carry out Christian ministry. In this light, the whole process of what Bühler was doing could only have been providential, for it was going to be a small window that would soon close in his premature exit from the scene.
Meanwhile, the seminary teacher was not unaware of the peril that accompanied his teaching as he acknowledged that "[t]here is much temptation for them [i.e. his pupils] to pride, on account of their acquiring more knowledge than many of their companions, & the other temptation is to leave missionary work & to engage in trade which seems to offer much more profits." Three years later, in 1861, experience at home continued to impress upon him the pitfalls inherent in his training program and the "much evil" that is inadvertently introduced with civilization:
Privileges which others do not enjoy, superior knowledge, the prospect of becoming the teachers and leaders of the people, to be looked upon as wiser, more pious, in more favorable circumstances...is quite sufficient to upset a Christian young man at home, why not much more here where they have fewer equals and where by far the majority are inferior, at least in knowledge. When I therefore rejoice I rejoice with trembling.
By implication, the dangers were not valid enough to keep the pupils ignorant of knowledge that could prove beneficial to their service. The prospects were more encouraging than gloomy. And what is more, there are means of grace available to the conscientious to withstand the temptations learning could bring; he wrote:
On the whole I am thankful to say that there is unmistakable evidence of progress in their studies as well as in their moral tone...In the majority an important work of the Holy Spirit is going on whilst some of them, I may confidently say, are pious young men who live in prayer & carry on a good warfare. This is a great encouragement for me as well as for the future prospect of our Mission.
In addition to this movement of the Spirit among his pupils, Bühler himself consciously exerted his influence on them through his "friendly, fatherly appeal to their conscience."
Contending with Opposition
Bühler's evident concern for the effect of his training program on his students is a reflection of the pressure he was going through in the Yoruba mission. Mr. Townsend, the most influential CMS missionary at Abeokuta, disapproved of the liberal content of the training. The Englishman from Exeter, as some other English missionaries of the CMS also demonstrated later, did not value much book learning for African converts and agents of mission. In Sierra Leone, he had seen the supposed baneful effects of book learning among the colony born young people and had, apparently, concluded that anything that exposed Africans to European values, made them proud.
When, early in 1862, a letter arrived from the Parent Committee in London transferring Bühler from Ake to Ikija, the seminary teacher pointed the finger at Townsend as the originator of the proposal. Townsend still ensured that Bühler did not relocate to Ikija but to Igbein by posting Rev. Jonathan Wood to Ikija before the Finance Committee deliberated on the instruction of the Parent Committee. Ikija having been thus occupied, a difficult station like Igbein remained the only vacant place for Bühler to occupy with his training institution. The pastorate there had become vacant as a result of the untimely death of its indigenous returnee pastor from Sierra Leone, Rev. Thomas King.
In a letter to the secretaries of the mission in London, Bühler both protested against the implicit undertone of the proposal and argued his case for a robust training program for the intending agents of the mission. In his words:
I entirely disagree with Mr. Townsend when he says too much instruction is given to the youths; we always differ on that point. Our work is progressing and we want more agents, but shall all our agents be good Christians only who can just read such portions of the Bible as are translated and nothing else? There can be no doubt such men are useful, but shall we set promising young men aside merely because the education of such young men has in other missions proved a failure? On several outstations our native agents cannot write and when they want to communicate with their superintendent they must come themselves.
He argued further that:
Among my pupils are several very promising--especially as regards their spiritual life--they are well gifted, but shall they be left in ignorance? Again, every missionary asks for a good schoolmaster, but how & where can they be obtained if we do not train them up for it? The constant complaint of our missionaries of the inability of their schoolmasters should have led Mr. Townsend to another conclusion.... To get better schoolmasters we must first give them a better education. However, what the first class of my pupils learn at present can scarcely be called a superior education; in most things they would not come up to a schoolmaster in England.
It was not enough for Bühler, who was still widowed at this time, to state his case. He made it known that he was grieved by the insinuation supposedly making the round in the missionary circle at Abeokuta. He felt he had his back on the wall as he wrote with melancholy:
It is just now 4 years since I have been chosen for this important and responsible post. I have laboured with joy and have devoted all my strength and energy to this work; I have done what I could do to stir up a missionary zeal among the young men and I fully believe that my work has not been in vain in the Lord. I have been most anxious to give a sound and practical knowledge of God's holy word which I trust will bear its fruit in due season....To be regarded by anyone of my Brethren as not doing my duty towards the work of the Lord in this land, or to be regarded as laying the foundation for the ruin of the young men by giving a somewhat superior education--and finally the ruin of the mission--would constantly prey on my mind, would make my life extremely unhappy and would surely undermine my health.
At root, Townsend's problem with Bühler's training program was not just the danger of giving the pupils too much knowledge. Ulrich Graf, with whom he worked briefly in Hastings, Sierra Leone, might have identified it when he was reflecting on his visit to the Yoruba mission in 1854. It was at a time when the Society was in need of expertise to translate the Bible into Yoruba. The process became mired in controversy as the missionaries could not agree on the convention to guide it. Graf observed Townsend's skill in the Yoruba language and said of him:
Mr. Townsend is decidedly...the best speaker of the native language, he possessing a native instinctive tact in finding out the genius of the language; but being unaccustomed to scientific researches he is incapable to point out the Rules and Principles... [He] knows not "why" or "wherefore."
This observation, by extension, put a finger on the source of Bühler's problem. He was carrying out his training program under the shadow of a man who had no aptitude for theorization. Apparently considering that his colleague was indulging in superfluities because he had not enough work to engage him, Townsend organized his posting and that of his institution to Igbein Church in January 1863. Igbein was a problematic congregation where, he must have thought, Bühler would not be wanting for quarrels to settle and would be better occupied. On the other hand, Bühler and his pupils would be far from Townsend's "philistinism" and would freely engage in their intellectual pursuits. But the teacher and his students had to put up the mission houses and the buildings to house the institution.
In spite of the criticism that trailed his work, Bühler continued in his conviction of liberal education for the intending agents of the mission. And being satisfied with the balanced spiritual and intellectual growth of his students, despite the additional demand of manual labor on them, Bühler introduced the classical languages of Greek and Latin to the training program of the institution in 1863. He reported:
As most of the pupils of the I Class [that is Year 1] were sons of Sierra Leone emigrants they had a good knowledge of english [sic], but to improve it and to lead them deeper into the English language I thought a little Latin would do no harm, but would have many advantages...I do not regret to have made a trial, some of the pupils have profited by it; they have certainly seen the great difficulties in acquiring such a language and I do not think their little Latin has made them proud. I think it has humbled them.
In September 1864, Bühler's health began to decline rapidly and he had to resort to Lagos for a change of environment. He improved a little, but it was evident that he could not continue to serve in the harsh tropical climate of West Africa. He finally left Lagos for Europe on February 7, 1865. He died six months later, on August 14, 1865, at Schondorf, Germany, being only 36.
Bühler's pain in carrying out his vision of the learning that should constitute theological training was another expression of the perennial conflict over whether Christian theological formation should go the way of Clement of Alexandria or of Tertullian. The problem with the latter is that it does not challenge the mind of the learner and often takes religious experience at face value. The risk here is that such theological formation often lacks the reflective capacity for creative response in times of rapid change. While the model of Clement tends to cultivate this capacity in its students, it also risks the dangers Bühler himself highlighted in fear and trembling. Yet, in a world of constant change, Clement's model remains the more viable option for the continuous relevance of the constant message of Christ. Bühler seems to have recognized this and therefore opted for it.
Of course, Bühler argued that the need for quality was the motive behind his adopting a liberal approach to his training. But in the typical fashion of nineteenth-century Protestant missionary method, he also saw value in the careful use of education informed by enlightenment thinking to break the hold of superstitions on his pupils. By this method, he sought to free their minds from scruples that could lead them back to heathenism. Although the danger of this leading to the secularization of the mind was a present concern, he was confident enough that true knowledge would produce the character needed for the work of the ministry. Ultimately, the outcome of his training can be judged by the quality of the agents he produced. Two such agents were Samuel Johnson and Andrew Hethersett Laniyonu, both of whom came back with him to Abeokuta to train at the institution when Bühler visited Ibadan in December 1862.
Johnson and Laniyonu lived with the Hinderers before resuming studies at Abeokuta in January 1863. On completing their studies in December 1865, they both returned to Ibadan and were recruited by the mission as schoolmasters. The two colleagues started well, Laniyonu at Ogunpa and Johnson at Kudeti. But Laniyonu had a character deficiency. He habitually indulged in adultery and was consequently dismissed in 1869. He later joined the colonial service in Lagos where he was involved in the clannish politics of how to end the wars in the interior in the 1880s.
Johnson, on the other hand, may be said to represent the best of Bühler's achievement. He confessed his teacher's strong influence on him when he was seeking ordination as a deacon in 1885. He wrote then that:
Separation from home, intercourse with students whose moral training was different from mine, the godly advices, warnings, and example of...Rev. G. F. Bühler...told much on me. The spirit was... powerfully at work...I spent hours in private prayers. As if not enough, I obtained the consent of a fellow student [Laniyonu]...to join me in these prayers, although I did not unburden my mind to him. It was then the Igbein Mission houses...were in building...here we have a private place within its bare walls, to retire for spiritual devotion. Not content with this I used to return sometimes quite alone. At this time I can date my real conversion.
Obviously, Bühler was not the only influence on Johnson, but it was under his training that all the inputs of his fellow Basel-trained missionaries in the CMS mission who had nurtured him came to final fruition. Unlike Laniyonu, who was expelled from missionary service, Johnson had a positive impact on the Yoruba mission and on Yoruba history that cannot be overstated. He brought his Christian witness into the volatile political environment of late nineteenth-century Yoruba country as he carried messages between the belligerents in the wars that devastated the country and the colonial authorities in Lagos. He therefore contributed significantly to bringing the wars to an end. More relevant still is the fact that, with the skills he gained under Bühler, he fulfilled the hope of his mentor, David Hinderer, by documenting the history of the Yoruba people from the earliest times until the declaration of the British protectorate over the country. Johnson's magnum opus redeemed the fading memory of his people but confounded the CMS leadership in London at its completion in 1897. In the second half of the twentieth century, his The History of the Yorubas became a classic in the study of Yoruba history and culture and has been referenced in standard academic works abroad. Johnson's missionary and cultural achievements are the ultimate vindication of Bühler's theological curriculum decades after he had left the scene.
1. G. Möricke, "Bühler's Biography", Church Missionary Society (CMS) Archives, University of Birmingham, Edgbaston, UK, C/A2/O24.
2. Klaus Fiedler, Christianity and African Culture: Conservative German Protestant Missionaries in Tanzania, 1900-1914 (Leiden: E. J. Brill, 1996), 14-17.
3. K. Rennstich, "The Understanding of Mission, Civilization and Colonialism in the Basel Mission," in Missionary Ideologies in the Imperialist Era, eds. T. Christensen and W.R. Hutchison (Aarhus C, Denmark: Forlaget Aros, 1982), 96.
4. K. Rennstich.
5. G. Bühler, journal entry, July 6, 1856, CMS C/A2/O24/37.
6. G. Bühler, journal entry, May 7, 1857, CMS C/A2/O24/37; G. Bühler to H. Venn, August 3, 1857, CMS C/A2/O24/3.
7. G. Bühler to H. Venn, May 1, 1858, CMS C/A2/O24/7.
8. J. F. A. Ajayi, Christian Missions in Nigeria 1841-1891--The Making of a New Elite (Essex: Longman, 1965), 150-151.
9. G. Bühler to H. Venn, May 1, 1858, CMS C/A2/O24/7.
10. G. Bühler to the Secretaries, April 2, 1859, CMS C/A2/O24/9; s.v. "Bühler, Gottlieb Friedrich," Church Missionary Society, Register of Missionaries, 100.
11. G. Bühler, Half yearly Report ending July 186, CMS C/A2/O24/44.
12. G. Bühler to H. Venn, September 30, 1858, CMS C/A2/O24/8.
13. G. Bühler, Report of Training Institution, September 30, 1858, CMS C/A2/O24/42.
14. G. Bühler, Half Yearly Report of Training Institution, January-July 1861, CMS C/A2/O24/44.
15. G. Bühler, Report of Training Institution, September 30, 1858, CMS C/A2/O24/42.
16. G. Bühler, Report of Training Institution, April 1859, CMS C/A2/O24/43.
17. G. Bühler, Report of Training Institution for Half Year ending December 31, 1861, CMS C/A2/O24/45.
18. G. Bühler, Report on Training Institution, September 30, 1858, CMS C/A2/O24/42.
19. G. Bühler, Report of Training Institution for Half Year ending December 31, 1861, CMS C/A2/O24/45.
20. G. Bühler, Report of Training Institution for Half Year ending December 31, 1861, CMS C/A2/O24/45.
21. G. Bühler, Half Yearly Report of Training Institution, January-July 1861, CMS C/A2/O24/44.
22. G. Bühler, Report of Training Institution for Half Year ending December 31, 1861, CMS C/A2/O24/45.
23. The Training Institution was later moved from Abeokuta to Lagos and then to Ọyọ in 1896. At Ọyọ, under Melville Jones, the institution's training program became activistic. The students spent more time on evangelistic tours than in acquiring spiritual and intellectual formation for ministry.
24. Townsend believed that even his "best behaved youth" would be lost to him if he sent him to London to learn printing. H. Townsend to H. Venn, February 28, 1860, CMS C/A2/O85/75.
25. G. Bühler to Secretaries, May 3, 1862, CMS C/A2/O85/16.
26. By giving him additional responsibility as a local church pastor, Bühler's critics were saying that he had not enough work to occupy him and that was why he could indulge in too much book work with the students.
27. G. Bühler to Secretaries, May 3, 1862, CMS C/A2/O85/16.
28. G. Bühler to Secretaries, May 3, 1862, CMS C/A2/O85/16.
29. G. Bühler to Secretaries, May 3, 1862, CMS C/A2/O85/16.
30. J. Graf, Report of Visit to the Yoruba Mission, C/A1/O105/63.
31. G. Bühler to H. Venn, December 2, 1862, CMS C/A2/O24/19; Annual Report, Igbein Station, December 1863, CMS C/A2/O24/47.
32. G. Bühler, Annual Report of Igbein Station, December 31, 1863, CMS C/A2/O24/47.
33. G. Bühler to I. Chapman, January 30, 1863, CMS C/A2/O24/21.
34. G. Bühler to Col. Dawes, November 7, 1864, CMS C/A2/O24/34.
35. G. Möricke, "Bühler's Biography", CMS C/A2/O24.
36. Tertullian himself fell victim to the Montanist heretical movement.
37. D. Olubi, journal entry, February 25, 1869, CMS C/A2/O75/23; D. Olubi to J. Maser, March 11, 1873, CMS C/A2/O75/11.
38. A. Hethersett to W. Griffith, November 30, 1881, NAL CO 147/47, Despatch 16(3434), Enclosure 6.
39. S. Johnson to Secretaries, January 16, 1885, CMS C/A2/O 1885/67.
40. Johnson was born in Hastings, Sierra Leone, in 1846 under the pastorate of Ulrich Graf and was there until 1857 when he came to Ibadan with his parents. In Ibadan he came under the influence of David Hinderer both before his training at Abeokuta and afterwards.
41. David Hinderer was the first missionary to express, in 1854, the wish that the history of the Yoruba wars be written. Johnson might have been fulfilling this dream, hence his dedicating it to "the revered memory of The Rev. David Hinderer." D. Hinderer, journal entry, December 15, 1854, CMS C/A2/O/49/110.
42. R. Cust to the Secretaries, January 3, 1899, CMS G3/A2/O(1899)/3.
Archives of the Church Missionary Society (CMS), University of Birmingham, Edgbaston, Birmingham, UK.
Colonial Office Records, National Archives London (NAL).
Ajayi, J. F. A. Christian Missions in Nigeria 1841-1891--The Making of a New Elite. Essex: Longman, 1965.
Church Missionary Society, Register of Missionaries.
Fiedler, Klaus. Christianity and African Culture: Conservative German Protestant Missionaries in Tanzania, 1900-1914. Leiden: E. J. Brill, 1996.
Rennstich, K. "The Understanding of Mission, Civilization and Colonialism in the Basel Mission." In Missionary Ideologies in the Imperialist Era, eds. T. Christensen and W.R. Hutchison, 94-103. Aarhus C, Denmark: Forlaget Aros, 1982.
This article, which was received in 2011, was written and researched by Dr. Kehinde Olabimtan, Coordinator of educational ministries, Good News Baptist Church and Adjunct Teacher, Akrofi-Christaller Institute, Ghana, and a recipient of the Project Luke Scholarship for 2010-2011.
How Are Arrhythmias Treated?
There are dozens of arrhythmias that differ vastly in their symptom severity, clinical prognosis, and available treatments. For each arrhythmia, there is frequently more than one therapeutic option. These factors can result in a bewildering array of treatment choices and professional opinions, creating a great deal of confusion for patients. This is especially true for patients who have sought multiple "second opinion" consultations. It is not unusual for these patients to become even more confused and more indecisive after each additional second-opinion consultation.
It is often said that medicine is an "art." In other words, diagnosis and therapy for many medical conditions, including arrhythmias, are not always "black and white." There are "judgment calls" and variation in professional opinions. Therefore, it is probably not possible that any two physicians would give identical opinions for any medical condition, except for a very select few conditions where it is clearly a "life and death" situation. For many conditions where treatments are elective or semi-elective, there can be many equally viable treatment options. The choice is often made by the patient in conjunction with their physicians.
This section on the treatments of arrhythmia is, therefore, not meant to provide an answer to every arrhythmia, but rather to discuss the general principles of treating arrhythmias. Certain myths in treating arrhythmias are also discussed. This is then followed by detailed discussion on specific treatment procedures in Cardiac Electrophysiology.
Frequently, there can be as many different opinions regarding treatment options for any given arrhythmia as there are physicians willing to provide them. However, there are some general principles and guidelines that most physicians do agree on.
1) Primum non nocere.
This Latin phrase translates to "First, do no harm," which is the first and foremost principle that all physicians agree on and adhere to. Whatever treatment a physician recommends, whether medical or surgical, the patient should not be harmed as a result. Medical treatments are meant to help, not worsen, the medical condition. However, one should keep in mind that there are practically no treatments that are completely harmless. There are risks in every medication and every surgery. (If someone tells you that a particular medication has "absolutely no side effect," he or she is not being truthful about it.) The important principle is that the treatment should not be worse than the disease itself. This is where "risk/benefit analysis" comes in.
2) Risk/Benefit Analysis.

This is a basic principle that applies not only to Cardiac Electrophysiology, but practically to every aspect of Medicine. The rule is very simple. Whenever one considers a treatment option, the benefits of the treatment must outweigh its potential risks (side effects, complications, etc.). A very common example is chemotherapy for cancers. This therapy is well known to be "toxic." Without the toxicity, it simply cannot kill the cancer cells. However, there are very few chemotherapies that are so specific as to kill only the cancer cells and not healthy normal cells. This "collateral damage" is why patients frequently lose their hair and have dangerously low white blood cell counts during chemotherapy. But most patients and physicians will happily embrace chemotherapy because without it the patient may die. In other words, one is willing to accept a "toxic" therapy because the disease is worse than the treatment. In the case where the treatment is worse than the disease, the treatment would not be a good one.
As the above example shows, the more serious a medical condition is, the more one would be willing to accept a treatment option with potentially higher risks. Unfortunately, as it turns out, the risks of treatments for more serious conditions are usually higher than those for less serious conditions. An example would be Tylenol for headache versus chemotherapy for leukemia. Again, one would accept a treatment with potentially higher risk if the potential benefit is higher.
This general principle applies to Cardiac Electrophysiology as well. An arrhythmia such as PAC carries a completely benign prognosis. Patients usually have no symptoms or have minimal symptoms. Therefore, the threshold for treating these patients with any medications that have serious side effects would be very high. Most patients would be treated with milder medications, which may be less effective, but less risky. Patients would have to be severely symptomatic in order to take on the risk of taking the more effective, but potentially more toxic, medications. On the other hand, an arrhythmia like ventricular tachycardia is life-threatening and carries an ominous prognosis. Therefore, one would be willing to accept more aggressive treatment options, even those with potentially higher risks, because the ultimate benefits outweigh the risks.
Similar to the example of Tylenol versus chemotherapy, the more effective a treatment is in Cardiac Electrophysiology, the more potential risk. This is easy to understand. An extreme example would be a patient with an arrhythmia who chooses to treat himself with nothing. This "nothing" will have less side effect than a medication like amiodarone, but would be far less effective in controlling the arrhythmia.
The above paragraphs discussed how we choose treatment options based on risk/benefit analysis. The following section discusses why we treat any medical condition. There are only two reasons to treat a patient, for any ailment, for any condition, in any specialty. The first is to alleviate symptoms (section 3). The second is to prevent morbidity and mortality (section 4), even in patients without symptoms. There are NO OTHER REASONS TO TREAT A PATIENT. A treatment is recommended to help a patient, either by reducing the symptoms or by improving the outcome (e.g., the patient's longevity). If the patient has no symptoms and the condition is not helped by the treatment, then there is no role for the treatment.
3) Treating Symptoms.
With few exceptions, patients come to a physician because of symptoms. In Cardiac Electrophysiology, the most common symptoms are palpitations, racing heartbeats, or fainting spells. The primary goal of any treatment, therefore, is the alleviation of symptoms, taking into consideration the "risk/benefit" ratio. Take PACs again, for example. This benign condition should be treated if patients have significant symptoms of palpitation, which may be extremely disabling and troublesome for some patients. In these cases, one may accept a treatment that may have potential side effects as long as the benefit outweighs the risks. Management of another condition, SVT, which can cause disabling symptoms, is also governed by the same principle. A patient with this condition who has frequent attacks and multiple ER visits would best be served with a treatment like radiofrequency ablation, which can eliminate the source of SVT and cure the condition. Both PACs and SVTs are considered "benign" conditions, as they usually do not lead to fatality. Therefore, the reason for treating these conditions is the alleviation of symptoms.
4) Preventing Morbidity and Mortality.
This second principle of treatment is just as important, if not more important, than treating symptoms. One of the central goals in medicine is the prevention of premature death, an example of which is the treatment of high cholesterol. Most patients with high cholesterol do not have any symptoms specific to the condition, because there are none. However, a high cholesterol level can cause hardening of the arteries and heart attack; most physicians would consider treatment of high cholesterol mandatory even though these patients have no symptoms whatsoever. Cancer screening (breast, colon, prostate, etc.) in asymptomatic high-risk patients is another example.
In Cardiac Electrophysiology, the same principle holds for the management of arrhythmias. Treatment with coumadin for prevention of stroke in atrial fibrillation is a classic example. Atrial fibrillation is one of the most common preventable causes of stroke and the treatment with coumadin, a blood thinner, can prevent stroke. In this case, Coumadin does not improve symptoms of patients with atrial fibrillation whatsoever, but it is one of the most important treatments because it prevents a devastating morbidity and potential mortality of atrial fibrillation.
Similarly, the recommendation for prophylactic (preventative) defibrillator implantation in high-risk patients who may not have any symptoms is another important example of this principle of treatment. Cardiac patients with severe dysfunction of their heart are at high risk for sudden cardiac death. They can be completely asymptomatic until they actually suffer an event, by which time it is too late. Ironically, when patients do suffer sudden death, they frequently have no symptoms because death is instant. Thus, it is obvious why one does not wait for symptoms of sudden death to occur before recommending a defibrillator.
5) Physician Experiences.
Even though every physician strives to provide the best treatment recommendation for his or her patients, there are many factors that influence the physician's opinion, one of which is his or her own anecdotal experience. A physician who recently referred a patient for surgery, only to see the patient suffer a complication, may be "gun-shy" the next time he or she considers referring a similar patient, even though the complication may have been a 1-in-1,000 occurrence. On the other hand, a physician who has never referred any patients for a certain procedure probably does not have sufficient experience to make any reasonable recommendation for or against the procedure.
The experience of the physician or surgeon performing a certain procedure is also critically important. A physician or surgeon who has performed a large number of a certain procedure would be more comfortable with that procedure, and consequently more inclined to recommend it, than another physician who has little or no experience with it. The latter physician may recommend an alternative procedure which he or she is more experienced with but which may or may not be superior to the first one. Asking your physician and surgeon about his or her clinical experience is a completely legitimate (though sometimes embarrassing) question to ask at the time of your consultation.
Common Myths in Treating Arrhythmias

1) My neighbor had it done. Why can't I?
Even though all men are created equal (not really), all arrhythmias are not. Just because your neighbor had a certain procedure done for his arrhythmia and he is feeling great, it does not mean that you have the same condition or that you will benefit from the same procedure. There are many different arrhythmias, and treatment options vary dramatically from one arrhythmia to another. Your physician, not your neighbor, is the better person to provide a medical recommendation.
2) My friend had a friend who died after a defibrillator. I will never have one myself.
This myth, unfortunately, is another one that influences patients' decisions on medical therapy, sometimes more so than any other factor. Patients sometimes trust the medical opinion of a friend or a neighbor more than that of their physician.
Every treatment, every medication, and every surgery has its inherent risks, but the mere presence of risks does not mean that one should not accept the treatment (see the risk/benefit analysis section). To decline a procedure simply because it has some risks is like giving up driving a car because there is a risk of a car accident (unless, of course, the risk is unusually high because of the driver or the car; in that case the problem is the driver or the car, not "driving" per se).
Furthermore, cardiac patients who undergo procedures are generally very sick and elderly. It is not unexpected that some patients can still die after a defibrillator is implanted (this does not mean that the defibrillator caused the death; the risk of death from defibrillator surgery is on the order of 1 in 1,000 or less). On the other hand, there are countless patients whose lives have been saved by defibrillators, and these stories do not always make the "headlines." Like every field in medicine, one must consider both the benefits and the risks, not just the risks, in deciding on any medical intervention.
3) I get bruises easily. I can't take coumadin.
Again, risk/benefit analysis should be considered in any medical decision, including that for medications like coumadin. Coumadin in high risk patients can prevent a devastating stroke. The risk of not taking coumadin (stroke) is significantly greater than the risk of taking it (bruises). For most patients, having a stroke is worse than having skin bruises or other bleeding complications, unless you have a modeling career.
4) I won't take that rat poison!
Coumadin is a blood thinner and is derived from rat poison. It is currently the standard of care for patients at high risk for stroke, such as those with atrial fibrillation or a mechanical valve. Its level must be carefully monitored and the dosage adjusted according to the level (see coumadin clinic). Although rats do die from rat poisoning, there are important differences between rat poison and coumadin. Rat poison is given to rats in toxic and lethal dosages to cause massive bleeding, whereas coumadin is administered in tightly titrated dosages. In carefully monitored patients, coumadin is extremely safe.
5) I don't want any machine in me!
Many therapies in Cardiac Electrophysiology consist of implanting high-tech medical devices, such as pacemakers or defibrillators. Patients sometimes jokingly refer to themselves as the "bionic man." Having a "machine" implanted in the body, however, is neither a new concept nor particularly unusual. Patients with broken bones or severe arthritis have had metal prostheses implanted for decades. We live with and around machines every day, including watches, cell phones, computers, automobiles, and hearing aids. Pacemakers and defibrillators are just sophisticated "machines" that are implanted inside the body to improve our health, not much different from those outside the body that improve our lives.
6) It's not "natural."
Many patients who subscribe to the "natural" theory of healing their medical conditions are strong opponents of many therapies in Cardiac Electrophysiology. Clearly, all medications are "synthetic" and no treatment is "natural." The only natural thing is to let the condition run its natural course (physicians call this the "natural history of disease"). Every human intervention to alter the natural history of a disease is, by definition, "unnatural." For argument's sake, here are a few examples of "natural" things in life: bacteria, viruses, pneumonia, and "natural disasters" like earthquakes and tsunamis. And here are some examples of "unnatural" or synthetic things: antibiotics, nurses, doctors, hospitals, cars, phones, computers, and houses. Your reading this paragraph on an LCD screen over the internet is not "natural."
Hospital Based Procedures for Treating Arrhythmias
The previous paragraphs discussed general treatment principles in dealing with arrhythmias. The following section goes into detail about some of the more commonly performed Electrophysiology procedures. Not every procedure is meant for every patient with arrhythmias, and not every patient will need a procedure. This section deals only with procedures themselves, most of which are invasive and hospital-based. For specific treatment options for each arrhythmia type, please refer to the section on "Different Types of Arrhythmias."
Electrophysiology Study. This invasive study is generally needed for patients whose causes for fainting or severe palpitation remain unknown despite extensive noninvasive evaluation. It is also useful to differentiate the various causes of a documented episode of arrhythmia. It can be used to risk-stratify certain patients with known or suspected arrhythmias. Lastly, it is performed in conjunction with radiofrequency ablation, as a means to confirm the mechanism of the arrhythmia before performing curative ablation.
The procedure is performed in a hospital setting in the cardiac catheterization laboratory, the same facility where coronary angiograms and angioplasty are performed. Under sedation, lidocaine (or an equivalent local anesthetic) is injected into the skin. Several catheters are then inserted into veins in the groin and advanced into the heart, after which electrical stimulation of the heart is performed through these catheters by the Electrophysiologist. This electrical stimulation can reveal an underlying electrical conduction problem such as slow heartbeat or heart block, as well as reproduce and confirm the cause of a rapid heartbeat. Patients with a rapid heartbeat problem do not necessarily have to be in their arrhythmia at the time of the procedure, since this test can "provoke" the dormant arrhythmia.
If a slow heartbeat is documented, one can prescribe the appropriate treatment, usually a pacemaker. If a fast heartbeat is confirmed, there are several treatment options, depending on the type of rapid heartbeat discovered. For some rapid heartbeats that are potentially life-threatening, such as ventricular tachycardia, an implantable defibrillator is required. On the other hand, many other forms of rapid heartbeat, such as SVT, can be "mapped" to determine the exact source of the problem, which is usually an "extra nerve" in the heart. In the majority of these cases, ablation can successfully eliminate the culprit of the arrhythmia, resulting in a long-term permanent cure for the patient.
Thus, an Electrophysiology study is a diagnostic study that helps the Electrophysiologist confirm the root of the suspected electrical problem of the heart. It serves as a gateway to other therapeutic modalities available to treat the arrhythmias.
Many patients who have serious symptoms from their rapid heartbeat, such as fainting or near-fainting, may be very reluctant to have a test which can provoke their arrhythmias, for fear of reproducing the frightening sensation. Reproducing the arrhythmia, however, may be the only way to confirm the causes of their conditions in most patients. Furthermore, there is no safer place to have an arrhythmia than in the cardiac catheterization laboratory, under the direct care of a Cardiac Electrophysiologist, and in the presence of an entire team of personnel specializing in the chronic as well as emergency treatment of arrhythmias. It is better to find it here than to have it occur "naturally" at home or while driving on the road.
In contrast to a coronary angiogram, which is a procedure designed to look for clotted arteries of the heart (coronary arteries), an Electrophysiology study is not meant to evaluate the patency of the patient's arteries. Rather, it focuses on the evaluation of the electrical health of the heart. One, therefore, cannot tell "if the arteries are blocked" by this test. That is the job of your general or interventional cardiologist.
Radiofrequency ablation (RFA). This is a cardiac procedure specifically designed to treat and cure certain types of arrhythmias (see sections on supraventricular tachycardia, Wolff-Parkinson-White Syndrome, and atrial flutter).
Ablation is a procedure of selectively destroying certain tissues of the body to cure or control a disease process. An ablation can be performed for a seizure focus in the brain, for certain masses in the liver, or for abnormal electrical activity in the heart. Cardiac ablation refers to ablation specific to heart rhythm problems. The most common source of energy for cardiac ablation is radiofrequency, and thus the most common term for this procedure is "radiofrequency ablation," although other sources of energy have been used.
For cardiac ablation, very thin catheters are placed into the heart via large veins in the groin and sometimes in the neck. This is why the procedure is also called "trans-catheter ablation," to distinguish it from open-heart surgical ablation. The procedure is done much like an Electrophysiology study, which is first performed to identify the source of the arrhythmia. "Mapping" is done to localize the source of the problem, after which ablation is performed, targeting and selectively destroying the areas that are responsible for the arrhythmia.
For the purpose of discussion on this website, the term "radiofrequency ablation" means cardiac ablation procedures performed "percutaneously," or "endocardially" through a catheter (trans-catheter). In other words, they are performed by a minimally invasive technique via a vein or artery through the skin (percutaneous), not by an open-chest or open-heart surgery. The approach is from inside the heart (endocardial), because the catheters enter the heart on the inside, as opposed to outside the heart (epicardial) as in open-heart surgery. In the latter case, the approach is through a surgical opening in the chest and these epicardial ablation procedures are done by cardiothoracic surgeons, not by Cardiac Electrophysiologists.
Cure rates for most forms of arrhythmias treated by radiofrequency ablation range from 80% to 98% (please see the sections on specific arrhythmias for individual discussion). Complication rates are low, with mortality less than 1 in several thousand and very small risks of bleeding and perforation.
For many types of arrhythmias, radiofrequency ablation is increasingly accepted as a preferred therapeutic alternative to chronic therapy with medications. It is considered first-line therapy for most curable arrhythmias such as supraventricular tachycardia, Wolff-Parkinson-White Syndrome, and atrial flutter.
3-Dimensional Mapping. This is a specialized mapping technique which utilizes a computer to delineate the source of complex arrhythmias. It works by projecting a virtual 3-dimensional image of the heart in the computer to help the Cardiac Electrophysiologist navigate the catheters, in ways very similar to what GPS does for driving a car or flying an airplane.
For many types of ablation, such as those for atrial fibrillation, 3-D mapping is essential to ensure optimal success rates and safety for the patient. For further discussion of this technology, see St. Jude Medical.
Cardioversion. This is a procedure used to electrically convert a sustained abnormal heart rhythm back to the regular normal rhythm (normal sinus rhythm). The most common arrhythmias that require cardioversion are atrial fibrillation and atrial flutter, although sometimes ventricular tachycardia may need to be treated with cardioversion on an emergency basis.
Under anesthesia, an external electrical shock is applied to the heart through the chest. An external defibrillator is used to deliver the shock through its "paddles." The electricity that is transmitted through the chest into the heart will instantly stop an arrhythmia and restore normal regular rhythm. The risk of the procedure is fairly low. Other than the risk of minor skin burn and some risks associated with light anesthesia, the procedure is very safe, effective, and easy to perform. One risk that deserves mention is that of blood clot and stroke in patients with atrial fibrillation or flutter who undergo cardioversion. The risk is negligible if patients with these conditions have previously been treated with a blood thinner, or coumadin. One should not proceed with cardioversion if one has not been therapeutically treated with coumadin for at least 3 weeks, unless an ultrasound of the heart done through the esophagus is first performed (trans-esophageal echocardiogram) to rule out the presence of a clot in the left atrium.
Pacemaker (PM). A pacemaker is a medical device used to regulate the heart rate and to keep it from beating too slowly. Therefore, the most common indication for a pacemaker is slow heartbeat or heart block. Patients with atrial fibrillation and fainting spells due to a slow heart rate are also candidates for pacemaker implantation. The latest indication for a pacemaker is cardiac resynchronization therapy pacing for congestive heart failure.
A pacemaker system consists of the "pulse generator" and the "lead." The pulse generator is where the battery and the electronics reside. It is the "brain" of the pacemaker. It is connected to a "lead," or a wire, through which the "brain" of the pacemaker communicates with the heart. The connection between the lead and the pulse generator is called the "header."
Most pacemakers in use today are "dual chamber" pacemakers because they utilize two electrodes, which are placed respectively in the atrial and ventricular chambers, thus "dual chamber." (See the anatomy and physiology section.) The advantage of such a system is that it preserves the normal physiology of the heart, i.e., the normal relationship between the upper chamber and lower chamber. A "single chamber" pacemaker uses only one electrode, which can be placed in either the atrium or the ventricle. A single chamber pacemaker is less frequently used in the U.S. because it does not preserve the normal relationship between the upper and lower chambers of the heart. A single chamber pacemaker is most commonly used when such a normal relationship is no longer present, as in patients with chronic atrial fibrillation.
During surgical implantation of the pacemaker system, the leads are inserted through a vein in the chest. They are subsequently placed permanently inside the chambers of the heart, whereas the "pulse generator" itself is implanted on the chest just under the skin (subcutaneously). Because the procedure is done transvenously (through the vein), it does not require open heart surgery. This surgery can be completed in as little as 20 minutes and is associated with reasonably low risks and rapid recovery (see also the frequently asked questions section).
Major complications are rare but may include cardiac perforation, pneumothorax (air leak in the lung), vascular injury, and hematoma (blood clot). Infection of the pacemaker may occur about 1 percent of the time and requires explantation of the entire pacemaker system.
While older generations of pacemakers had only one function, pacing the heart, newer generations of pacemakers have the added capability of cardiac resynchronization therapy (CRT). They can be used in patients without slow heartbeat who suffer from heart failure refractory to standard medical therapy.
Implantable Cardioverter Defibrillator (AICD or ICD). A defibrillator is a medical device whose primary function is to shock the heart when the heart has gone into a very rapid and life-threatening arrhythmia such as ventricular tachycardia. Its secondary function is to pace the heart when the heart rate is too slow.
A frequent question that comes up is whether a particular device is a "defibrillator" or a "pacemaker" or a "combination." A pacemaker simply paces the heart when it is too slow. It has no defibrillator function, i.e., it can not "shock" the heart in the case of an emergency. A defibrillator, on the other hand, can pace the heart when it is too slow, and shock the heart when it is too fast. All defibrillators today can also work as pacemakers, and therefore the concept of a "combination" pacemaker-defibrillator is no longer relevant. There are no defibrillators today that work only as a "shock box" without full pacemaker capability. The converse, however, is not true.
A defibrillator is used to treat patients with life-threatening arrhythmias. When first invented in the 1980s, defibrillators were reserved for patients who had already suffered a cardiac arrest or had documented serious arrhythmias. However, most defibrillators today are implanted on a prophylactic basis, i.e., preventatively. In other words, they are implanted in patients at high risk for a serious arrhythmia and cardiac arrest but who have not yet suffered such an event. While this idea may be difficult for some patients and even some physicians to accept, prophylactic defibrillator implantation is no different, conceptually, from treating hypertension or hypercholesterolemia for the prevention of heart attack. One does not wait for cardiac arrest to occur before implanting a defibrillator, just as one does not wait for a full-blown heart attack to take place before treating a patient's elevated blood pressure and cholesterol. The current recommendation is for defibrillator implantation in patients with an ejection fraction less than 35%.
The anatomy of a defibrillator is very similar to that of a pacemaker, except that the pulse generator and the electrodes are significantly bigger and the structures more complicated. This is because the defibrillator needs to deliver higher energy to shock the heart than what is required to pace the heart. The placement of the electrodes inside the heart is also more critical than that for the pacemaker, because the effectiveness of the "shock" function depends greatly on the location of the electrodes.
Similar to pacemakers, defibrillators are inserted transvenously (through the vein) and therefore do not require an open heart surgery. Surgical risks are similar to those with pacemaker (see also frequently asked questions section).
Once implanted, a defibrillator monitors every single one of the patient's heartbeats, day in and day out, 24/7, for any serious arrhythmia. The very second the heart slips into a dangerous rhythm like ventricular fibrillation, the defibrillator instantly recognizes the problem, charges up its capacitors, and delivers a high-voltage shock to the heart to restore regular rhythm.
Cardiac Resynchronization Therapy (CRT). This is a percutaneous (through the skin) surgical procedure specifically for the treatment of patients with severe congestive heart failure. In patients with heart failure, the left ventricle is enlarged and the time it takes to activate the entire heart may be significantly increased, leading to "dyssynchrony," or a lack of synchronized, coordinated contraction of the heart. This usually manifests itself as an abnormal EKG with either right bundle branch block or left bundle branch block. The larger the heart and the greater the degree of dyssynchrony (as assessed by echocardiogram and EKG), the more one would benefit from CRT. CRT works by pacing both the right and left sides of the heart simultaneously, shortening the time to activate the heart and restoring "synchrony" to the heart, thus the term "Cardiac Resynchronization Therapy (CRT)."
A CRT device can be a CRT pacemaker or a CRT defibrillator. Most CRT devices implanted in the U.S. are the defibrillator type because most patients with heart failure who need CRT will also need a defibrillator. A CRT device works by having a "third wire" capability to pace the left side of the heart.
Ordinary pacemakers and defibrillators come with two wires, one in the right atrium and one in the right ventricle (RV). CRT pacemakers and defibrillators have an extra wire which goes into the left ventricle (LV) via a vein in the back of the heart called the "coronary sinus." The branches of the coronary sinus are called "coronary veins," through which the "third wire" is placed in order to pace the left side of the heart. Simultaneous pacing of both the right and left ventricles can be performed through these wires in order to "resynchronize" the heart. This can result in dramatic improvement in the symptoms of heart failure for those patients with heart failure and dyssynchrony. Most patients with CRT implantation will experience improvement in their breathing, stamina, and exercise capacity. The ejection fraction and other important parameters of the heart may also improve.
For a CRT defibrillator, the CRT portion of the device is an added feature of the unit. In other words, the device can provide CRT while still functioning as a defibrillator. A standard two-wire defibrillator works as a defibrillator without CRT function.
Although CRT has been available since the late 1990s, it has only recently gained wide-spread acceptance and popularity following the publication of several large landmark clinical trials which demonstrated significant improvement in heart failure patients who have received CRT. Today, CRT is considered a standard of care for patients with heart failure and evidence of dyssynchrony, who continue to have refractory symptoms of heart failure despite optimal medical treatment.
Risks of the surgery are similar to those of pacemakers and standard defibrillators. The additional "third wire" placed in the left side of the heart used to be a critical step that was difficult to achieve and took many hours. Today, with improved technique and equipment, the deployment of the "third wire" for CRT may add as little as an extra 10 minutes compared to a standard pacemaker or defibrillator.
|Portable Network Graphics|
|Filename extension||.png|
|Internet media type||image/png|
|Type code||PNGf, 'PNG '|
|Uniform type identifier||public.png|
|Magic number||89 50 4E 47 0D 0A 1A 0A|
|Developed by||W3C|
|Type||lossless raster graphics format|
|Standard||ISO 15948, IETF RFC 2083|
Portable Network Graphics (PNG) is a bitmap image format that employs lossless data compression. PNG was created to improve upon and replace GIF (Graphics Interchange Format) as an image-file format not requiring a patent license. It is pronounced as ˈpɪŋ or spelled out as P-N-G.
PNG supports palette-based images (with palettes of 24-bit RGB colors), greyscale images, and RGB images. PNG was designed for transferring images on the Internet, not for professional graphics, and so does not support other color spaces such as CMYK.
PNG files nearly always use the file extension "PNG" or "png" and are assigned the Internet media type "image/png" (approved October 14, 1996).
History and development
The motivation for creating the PNG format arose in early 1995, when it came to light that the Lempel-Ziv-Welch (LZW) data compression algorithm, used in the GIF format, had been patented by Unisys. There were also other problems with the GIF format which made a replacement desirable, notably its limitation to 256 colors at a time when computers capable of displaying far more than 256 colors were becoming common. Although GIF allows for animation, it was decided that PNG should be a single-image format. A companion format called MNG (Multi-image Network Graphics) has been defined for animation.
A January 1995 precursory discussion thread on the Usenet newsgroup "comp.graphics", with the subject Thoughts on a GIF-replacement file format, had many propositions that would later become part of the PNG file format. In this thread, Oliver Fromme, author of the popular MS-DOS JPEG viewer QPEG, proposed the name PING, meaning "PING is not GIF", and also the PNG extension for the first time.
- October 1, 1996 – Version 1.0 of the PNG specification was released, and later appeared as RFC 2083. It became a W3C Recommendation on October 1, 1996.
- December 31, 1998 – Version 1.1, with some small changes and the addition of three new chunks, was released.
- August 11, 1999 – Version 1.2, adding one extra chunk, was released.
- November 10, 2003 – PNG is now an International Standard (ISO/IEC 15948:2003). This version of PNG differs only slightly from version 1.2 and adds no new chunks.
- March 3, 2004 – ISO/IEC 15948:2004.
Technical details

File header
|Bytes||Purpose|
|89||Has the high bit set to detect transmission systems that do not support 8 bit data and to reduce the chance that a text file is mistakenly interpreted as a PNG, or vice versa.|
|50 4E 47||In ASCII, the letters "PNG", allowing a person to identify the format easily if it is viewed in a text editor.|
|0D 0A||A DOS style line ending (CRLF) to detect DOS-UNIX line ending conversion of the data.|
|1A||A byte that stops display of the file under DOS when the command type has been used – the end-of-file character|
|0A||A UNIX style line ending (LF) to detect UNIX-DOS line ending conversion.|
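As a minimal illustration, the eight signature bytes can be checked before any further parsing. The sketch below is in Python, and the filename is a hypothetical placeholder rather than anything mandated by the format:

```python
# Sketch: verify the 8-byte PNG signature listed in the table above.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # 89 50 4E 47 0D 0A 1A 0A

def is_png(path):
    with open(path, "rb") as f:
        return f.read(8) == PNG_SIGNATURE

print(is_png("example.png"))  # hypothetical file name
```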
"Chunks" within the file Edit
After the header come a series of chunks, each of which conveys certain information about the image. Chunks declare themselves as critical or ancillary, and a program encountering an ancillary chunk that it does not understand can safely ignore it. This chunk-based structure is designed to allow the PNG format to be extended while maintaining compatibility with older versions.
Each chunk has a header specifying its size and type. This is immediately followed by the actual data, and then a checksum of the data. Chunks are given a four-letter, case-sensitive ASCII name. The case of the different letters in the name (bit 5 of the numeric value of the character) provides the decoder with some information about the nature of chunks it does not recognize.
The case of the first letter indicates if the chunk is critical or not. If the first letter is uppercase, the chunk is critical; if not, the chunk is ancillary. Critical chunks contain information that is necessary to read the file. If a decoder encounters a critical chunk it does not recognize, it must abort reading the file or supply the user with an appropriate warning.
The case of the second letter indicates if the chunk is "public" (either in the specification or the registry of special purpose public chunks) or "private" (not standardized). Uppercase is public and lowercase is private. This ensures that public and private chunk names can never conflict with each other (although two private chunk names could conflict).
The third letter must be uppercase to conform to the PNG specification. It is reserved for future expansion. Decoders should treat a chunk with a lower case third letter the same as any other unrecognized chunk.
The case of the fourth letter indicates if a chunk is safe to copy by editors that do not recognize it. If lowercase, the chunk may be safely copied regardless of the extent of modifications to the file. If uppercase, it may only be copied if the modifications have not touched any critical chunks.
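To make this layout concrete, here is a rough sketch (Python, standard library only; the filename is a placeholder) that walks the chunks of a file and reports the property bits described above. It assumes a well-formed file and does not validate the CRCs:

```python
import struct

def iter_chunks(data):
    """Yield (type, data, crc) for each chunk after the 8-byte signature."""
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        crc = data[pos + 8 + length:pos + 12 + length]
        yield ctype.decode("ascii"), body, crc
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)

def properties(ctype):
    # Bit 5 of each ASCII letter is simply its upper/lower case.
    return {
        "critical": ctype[0].isupper(),      # first letter
        "public": ctype[1].isupper(),        # second letter
        "reserved_ok": ctype[2].isupper(),   # third letter must be uppercase
        "safe_to_copy": ctype[3].islower(),  # fourth letter
    }

with open("example.png", "rb") as f:         # placeholder filename
    for name, body, _ in iter_chunks(f.read()):
        print(name, len(body), properties(name))
```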
Critical chunks
A decoder must be able to interpret these to read and render a PNG file.
- IHDR must be the first chunk; it contains the image header.
- PLTE contains the palette: the list of colors.
- IDAT contains the image data, which may be split among multiple IDAT chunks. Doing so increases file size slightly, but makes it possible to generate a PNG in a streaming manner.
- IEND marks the image end.
The PLTE chunk is essential for color type 3 (indexed color). It is optional for color types 2 and 6 (truecolor and truecolor with alpha) and it must not appear for color types 0 and 4 (greyscale and greyscale with alpha).
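As an illustration of how compact the critical header is, the sketch below (Python) decodes the 13-byte IHDR payload into named fields. It assumes the payload has already been extracted, for example with a chunk walker like the one sketched earlier:

```python
import struct

def parse_ihdr(payload):
    """Decode the 13-byte IHDR chunk payload."""
    width, height, depth, color_type, compression, filter_method, interlace = \
        struct.unpack(">IIBBBBB", payload)
    return {
        "width": width,
        "height": height,
        "bit_depth": depth,              # bits per channel (or per palette index)
        "color_type": color_type,        # 0, 2, 3, 4 or 6
        "compression": compression,      # 0 = DEFLATE, the only defined method
        "filter_method": filter_method,  # 0 = adaptive filtering, the only defined method
        "interlace": interlace,          # 0 = none, 1 = Adam7
    }

# Example: the header of a 2x1, 8-bit truecolor image.
print(parse_ihdr(struct.pack(">IIBBBBB", 2, 1, 8, 2, 0, 0, 0)))
```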
Ancillary chunks
Other image attributes that can be stored in PNG files include gamma values, background color, and textual metadata information. PNG also supports color management through the inclusion of ICC color space profiles.
- bKGD gives the default background color. It is intended for use when there is no better choice available, such as in standalone image viewers (but not web browsers, see below for more details).
- cHRM gives the white balance.
- gAMA specifies gamma.
- hIST can store the histogram, or total amount of each color in the image.
- iCCP is an ICC color profile.
- iTXt contains UTF-8 text, compressed or not, with an optional language tag.
- pHYs holds the intended pixel size and/or aspect ratio of the image.
- sBIT (significant bits) indicates the color-accuracy of the source data.
- sPLT suggests a palette to use if the full range of colors is unavailable.
- sRGB indicates that the standard sRGB color space is used.
- tEXt can store text that can be represented in ISO/IEC 8859-1, with one name=value pair for each chunk.
- tIME stores the time that the image was last changed.
- tRNS contains transparency information. For indexed images, it stores alpha channel values for one or more palette entries. For truecolor and greyscale images, it stores a single pixel value that is to be regarded as fully transparent.
- zTXt contains compressed text with the same limits as tEXt.
The lowercase first letter in these chunks indicates that they are not needed for the PNG specification. The lowercase last letter in some chunks indicates that they are safe to copy, even if the application concerned does not understand them.
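For instance, the tEXt chunk listed above stores a keyword and a value separated by a single zero byte, both in ISO/IEC 8859-1. A minimal sketch of splitting such a payload (Python, with a made-up example payload):

```python
def parse_text_chunk(payload):
    """Split a tEXt chunk payload into (keyword, value)."""
    keyword, _, value = payload.partition(b"\x00")
    return keyword.decode("latin-1"), value.decode("latin-1")

print(parse_text_chunk(b"Author\x00Jane Doe"))  # ('Author', 'Jane Doe')
```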
Color depth
PNG images can either use palette-indexed color or be made up of one or more channels (numerical values directly representing quantities about the pixels). When there is more than one channel in an image, all channels have the same number of bits allocated per pixel (known as the bit depth of the channel). Although the PNG specification always talks about the bit depth of channels, most software and users generally talk about the total number of bits per pixel (sometimes also referred to as bit depth or color depth). Since multiple channels can affect a single pixel, the number of bits per pixel is often higher than the number of bits per channel.
The number of channels will depend on whether the image is greyscale or color and whether it has an alpha channel. PNG allows the following combinations of channels:
- indexed (channel containing indexes into a palette of colors)
- greyscale and alpha (level of transparency for each pixel)
- red, green and blue (rgb/truecolor)
- red, green, blue and alpha
|Type (bit depth per channel)||1||2||4||8||16|
|indexed (color type 3)||Yes||Yes||Yes||Yes||No|
|greyscale (color type 0)||Yes||Yes||Yes||Yes||Yes|
|greyscale & alpha (color type 4)||No||No||No||Yes||Yes|
|truecolor (RGB: color type 2)||No||No||No||Yes||Yes|
|truecolor & alpha (RGBA: color type 6)||No||No||No||Yes||Yes|
With indexed color images, the palette is always stored in RGB at a depth of 8 bits per channel (24 bits per palette entry). The palette must not have more entries than the image bit depth allows for, but it may have fewer (so if, for example, an image only uses 90 colors, there is no need to have palette entries for all 256).
Indexed color PNGs are allowed to have 1, 2, 4 or 8 bits per pixel by the standard; greyscale images with no alpha channel allow for 1, 2, 4, 8 or 16 bits per pixel. Everything else uses a bit depth per channel of either 8 or 16. The combinations this allows are given in the table above. The standard requires that decoders can read all supported color formats but many image editors can only produce a small subset of them.
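A small sketch (Python) of the arithmetic implied by the table: the number of channels follows from the color type, and the bits per pixel are simply the channel count times the per-channel bit depth:

```python
# Channels per color type: 0 greyscale, 2 truecolor, 3 indexed,
# 4 greyscale + alpha, 6 truecolor + alpha.
CHANNELS = {0: 1, 2: 3, 3: 1, 4: 2, 6: 4}

def bits_per_pixel(color_type, bit_depth):
    return CHANNELS[color_type] * bit_depth

print(bits_per_pixel(2, 8))   # truecolor at 8 bits/channel  -> 24 bits per pixel
print(bits_per_pixel(6, 16))  # RGBA at 16 bits/channel      -> 64 bits per pixel
print(bits_per_pixel(3, 4))   # indexed, 4-bit palette index -> 4 bits per pixel
```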
Transparency of image
PNG offers a variety of transparency options. With truecolor and greyscale images either a single pixel value can be declared as transparent or an alpha channel can be added (enabling any percentage of partial transparency to be used). For paletted images, alpha values can be added to palette entries. The number of such values stored may be less than the total number of palette entries, in which case the remaining entries are considered fully opaque.
The scanning of pixel values for binary transparency is supposed to be performed before any color reduction to avoid pixels becoming unintentionally transparent. This is most likely to pose an issue for systems that can decode 16 bits per channel images (as they must be compliant with the specification) but only output at 8 bits per channel (the norm for all but the highest end systems).
PNG uses a non-patented lossless data compression method known as DEFLATE, which is the same algorithm used in the zlib compression library. This method is combined with prediction, where for each image line, a filter method is chosen that predicts the color of each pixel based on the colors of previous pixels and subtracts the predicted color of the pixel from the actual color. An image line filtered in this way is often more compressible than the raw image line would be, especially if it is similar to the line above (since DEFLATE has no understanding that an image is a 2D entity, and instead just sees the image data as a stream of bytes). Compression is further improved by choosing filter methods adaptively on a line-by-line basis. This improvement, and a heuristic method of implementing it commonly used by PNG-writing software, were created by Lee Daniel Crocker, who tested the methods on many images during the creation of the format.
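A rough sketch of what per-line filtering means in practice is shown below: the "Sub" filter (type 1) subtracts the byte one pixel to the left, and the Paeth predictor (used by filter type 4) picks whichever neighbour best predicts the current byte. This is an illustrative fragment with our own function names, not code taken from libpng:

```python
def paeth_predictor(a, b, c):
    """Paeth predictor: a = left, b = above, c = upper-left byte value."""
    p = a + b - c
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:
        return a
    return b if pb <= pc else c

def filter_sub(raw, bpp):
    """PNG filter type 1 (Sub): each byte minus the byte bpp positions to its left."""
    out = bytearray(len(raw))
    for i, byte in enumerate(raw):
        left = raw[i - bpp] if i >= bpp else 0
        out[i] = (byte - left) & 0xFF
    return bytes(out)

# A scanline of identical pixels filters to mostly zeros, which DEFLATE compresses very well.
line = bytes([200] * 30)          # ten 8-bit RGB pixels of the same colour
print(filter_sub(line, 3))        # b'\xc8\xc8\xc8\x00\x00\x00...'
```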
PNG offers an optional 2-dimensional, 7-pass interlacing scheme – the Adam7 algorithm. This is more sophisticated than GIF's 1-dimensional, 4-pass scheme, and allows a clearer low-resolution image to be visible earlier in the transfer. However, as a 7-pass scheme, it tends to reduce the data's compressibility more than simpler schemes.
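The Adam7 pass order over each 8×8 block of pixels can be written down directly, which may make the scheme easier to picture; the small helper below is only illustrative:

```python
# Pass (1-7) in which each pixel of an 8x8 tile is transmitted under Adam7.
ADAM7 = [
    [1, 6, 4, 6, 2, 6, 4, 6],
    [7, 7, 7, 7, 7, 7, 7, 7],
    [5, 6, 5, 6, 5, 6, 5, 6],
    [7, 7, 7, 7, 7, 7, 7, 7],
    [3, 6, 4, 6, 3, 6, 4, 6],
    [7, 7, 7, 7, 7, 7, 7, 7],
    [5, 6, 5, 6, 5, 6, 5, 6],
    [7, 7, 7, 7, 7, 7, 7, 7],
]

def fraction_after_pass(n):
    """Fraction of pixels available once passes 1..n have arrived (for large images)."""
    return sum(row.count(p) for row in ADAM7 for p in range(1, n + 1)) / 64.0

print(fraction_after_pass(1))  # 0.015625 -> a coarse 1/64 preview after the first pass
print(fraction_after_pass(6))  # 0.5      -> half the pixels before the final pass
```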
PNG itself does not support animation at all. MNG is an extension to PNG that does; it was designed by members of the PNG Group. MNG shares PNG's basic structure and chunks, but it is significantly more complex and has a different file signature, which automatically renders it incompatible with standard PNG decoders.
The complexity of MNG led to the proposal of APNG by developers of the Mozilla Foundation. It is based on PNG, supports animation and is simpler than MNG. APNG offers fallback to single-image display for PNG decoders that do not support APNG. However, neither of these formats is currently widely supported. APNG is supported in Mozilla Firefox 3.0 and Opera 9.5. The PNG Group decided in April 2007 not to embrace APNG. Several alternatives are under discussion: ANG, aNIM/mPNG, "PNG in GIF" and its subset "RGBA in GIF".
Comparison with other file formats
Comparison with Graphics Interchange Format (GIF)
- On most images, PNG can achieve greater compression than GIF (see the section on filesize, below).
- PNG gives a much wider range of transparency options than GIF, including alpha channel transparency.
- PNG gives a much wider range of color depths than GIF (truecolor up to 48-bit compared to 8-bit 256-color), allowing for greater color precision, smoother fades, etc.
- GIF intrinsically supports animated images. PNG only supports animation using an unofficial extension (see the section on animation, above).
- PNG images are widely supported (for instance, with modern web browsers and office software), but not as widely supported as GIF images.
Comparison with JPEG
JPEG (Joint Photographic Experts Group) can produce a smaller file than PNG for photographic (and photo-like) images, since JPEG uses a lossy encoding method specifically designed for photographic image data. Using PNG instead of a high-quality JPEG for such images would result in a large increase in filesize (often 5–10 times) with negligible gain in quality.
PNG is a better choice than JPEG for storing images that contain text, line art, or other images with sharp transitions. Where an image contains both sharp transitions and photographic parts a choice must be made between the large but sharp PNG and a small JPEG with artifacts around sharp transitions. JPEG also does not support transparency.
JPEG is a worse choice for storing images that require further editing as it suffers from generation loss, whereas lossless formats do not. This makes PNG useful for saving temporary photographs that require successive editing. When the photograph is ready to be distributed, it can then be saved as a JPEG, and this limits the information loss to just one generation. That said, PNG does not provide a standard means of embedding Exif image data from sources such as digital cameras, which makes it problematic for use amongst photographers, especially professionals. TIFF does support it as a lossless format.
JPEG has historically been the format of choice for exporting images containing gradients, as it could handle the color depth much better than the GIF format. However, any compression by the JPEG would cause the gradient to become blurry, but a 24-bit PNG export of a gradient image often comes out identical to the rasterization of the source vector image, and at a small file size. As such, the PNG format is the optimal choice for exporting small, repeating gradients for web usage.
Comparison with TIFF
Tagged Image File Format (TIFF) is a complicated format that incorporates an extremely wide range of options. While this makes TIFF useful as a generic format for interchange between professional image editing applications, it makes adding support for it to applications a much bigger task and so it has little support in applications not concerned with image manipulation (such as web browsers). It also means that many applications can read only a subset of TIFF types, creating more potential user confusion.
The most common general-purpose, lossless compression algorithm used with TIFF is Lempel-Ziv-Welch (LZW). This compression technique, also used in GIF, was covered by patents until 2003. There is a TIFF variant that uses the same compression algorithm as PNG uses, but it is not supported by many proprietary programs. TIFF also offers special-purpose lossless compression algorithms like CCITT Group IV, which can compress bilevel images (e.g., faxes or black-and-white text) better than PNG's compression algorithm.
Software support
Bitmap graphics editor support for PNG
Adobe Fireworks uses PNG as its native file format, allowing other image editors and preview utilities to view the flattened image. However, Fireworks by default also stores meta data for layers, animation, vector data, text and effects. Such files should not be distributed directly. Fireworks can instead export the image as an optimized PNG without the extra meta data for use on web pages, etc.
Other popular graphics programs which support the PNG format include Adobe Photoshop, Corel Photo-Paint, Corel Paint Shop Pro, The GIMP, GraphicConverter, Helicon Filter, Inkscape, Pixel image editor, Paint.NET and Xara. Some programs bundled with popular operating systems, which support PNG include Microsoft's Paint and Apple's iPhoto and Preview.
Some image processing programs have PNG compression problems, mainly related to lack of full implementation of the PNG compressor library. These include:
- Microsoft's Paint for Windows XP
- older versions of Adobe Photoshop.
Adobe's Fireworks is sometimes placed in this category, but its difficulties are less severe than the other entries. The confusion stems from a misunderstanding of the mechanics of its Save format: though PNGs, the intermediate images produced by that option include large, private chunks containing complete layer and vector information, which allows further, lossless editing. Properly saved with the Export option, Fireworks' PNGs are competitive with those produced by other image editors, but are no longer editable as anything but flattened bitmaps. Fireworks is unable to save size-optimized vector-editable PNGs.
Web browser support for PNG
Despite calls by the Free Software Foundation and the World Wide Web Consortium (W3C), tools such as gif2png, and campaigns such as burn all gifs, PNG adoption on websites has been fairly slow.
GIF remains in wider use than PNG for a few reasons:
- No support on old browsers (such as Internet Explorer below version 4).
- No animation, still images only (unlike GIF, though Mozilla's unofficial APNG format is a potential solution).
PNG compatible browsers include: Apple Safari, Google Chrome, Mozilla Firefox, Opera, Camino, Internet Explorer 7, and many others.
However, Internet Explorer (Windows), up to at least version 7, has a fair share of issues, which prevent it from using PNG to its full potential.
Operating systems support for PNG icons
PNG icons have been supported in most distributions of GNU/Linux since at least 1999, in desktop environments such as GNOME. In 2006, PNG icons were introduced into Microsoft Windows with the release of Windows Vista. PNG icons are supported in Mac OS X as well. Another operating system with third-party PNG icon support is AmigaOS 3/4 (and its clones, MorphOS and AROS Research Operating System).
File size and optimization software
Generally, PNG files without unnecessary metadata should have a smaller file size than the identical image encoded in GIF format. PNG gives the image creator far more flexibility than GIF, but care must be taken to avoid PNG files that are needlessly large.
As GIF is limited to 256 colors, many image editors will automatically reduce the color depth when saving an image in GIF format. Often when people save the same truecolor image as PNG and GIF, they see that the GIF is smaller, and do not realise it is possible to create a 256 color PNG that has identical quality to the GIF with a smaller file size. This leads to the misconception that PNG files are larger than equivalent GIF files.
Some versions of Adobe Photoshop, CorelDRAW and Paint provide poor PNG compression effort, further fueling the idea that PNG is larger than GIF. Many graphics programs (such as Apple's Preview software) save PNGs with large amounts of metadata and color-correction data that are generally unnecessary for Web viewing. Unoptimized PNG files from Adobe Fireworks are also notorious for this.
It should be noted that Adobe Photoshop's performance on PNG files has been much improved in the CS Suite when using the Save For Web feature (which also allows explicit PNG/8 use).
Various tools are available for optimizing PNG files. OptiPNG and pngcrush are both open source software optimizers that run from a Unix command line or a Windows Command Prompt, and effectively reduce the size of PNG files.
Other tools such as AdvanceCOMP and Ken Silverman's PNGOUT are capable of reducing the file size even further, giving the competent user the smallest file size possible for a given PNG image. The current version of IrfanView can use PNGOUT as an external plug-in.
pngcrush and PNGOUT have the extra ability to remove all color correction data from PNG files (gamma, white balance, ICC color profile, standard RGB color profile). This often results in much smaller file sizes. The following command line options achieve this with pngcrush:
pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB InputFile.png OutputFile.png
A GUI front-end for OptiPNG, pngcrush and advpng is also available that runs on Mac OS X.
Since Windows Vista icons may contain PNG subimages, the optimizations can be applied to them as well. At least one icon editor, Pixelformer, is able to perform a special optimization pass while saving ICO files, thereby reducing their sizes.
See also
- Image editing
- Graphics file formats
- PNG Home Site
- libpng Home Page
- The Story of PNG by Greg Roelofs
- PNG: The Definitive Guide (Online Version) by Greg Roelofs
Phonetic Matching is based on the principle that pronunciation depends on the language. So the first step is to determine the language from the spelling of the name. Then the name is converted into a sequence of phonetic tokens using pronunciation rules specific to that particular language. And, finally, names are compared based on their phonetic-token sequence.
Soundex is a phonetic algorithm for indexing names by sound, as pronounced in English. The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling. The algorithm mainly encodes consonants; a vowel will not be encoded unless it is the first letter. Soundex is the most widely known of all phonetic algorithms (in part because it is a standard feature of popular database software such as DB2, PostgreSQL, MySQL, Ingres, MS SQL Server and Oracle) and is often used (incorrectly) as a synonym for “phonetic algorithm”.
The Soundex code for a name consists of a letter followed by three numerical digits: the letter is the first letter of the name, and the digits encode the remaining consonants. Consonants at a similar place of articulation share the same digit so, for example, the labial consonants B, F, P, and V are each encoded as the number 1.
The correct value can be found as follows:
- Retain the first letter of the name and drop all other occurrences of a, e, i, o, u, y, h, w.
- Replace consonants with digits as follows (after the first letter):
- b, f, p, v → 1
- c, g, j, k, q, s, x, z → 2
- d, t → 3
- l → 4
- m, n → 5
- r → 6
- If two or more letters with the same number are adjacent in the original name (before step 1), only retain the first letter; also two letters with the same number separated by ‘h’ or ‘w’ are coded as a single number, whereas such letters separated by a vowel are coded twice. This rule also applies to the first letter.
- Iterate the previous step until you have one letter and three numbers. If the word has too few letters to assign three numbers, append zeros until there are three numbers. If you end up with more than three numbers, retain only the first three.
Using this algorithm, both “Robert” and “Rupert” return the same string “R163” while “Rubin” yields “R150”. “Ashcraft” and “Ashcroft” both yield “A261” and not “A226” (the chars ‘s’ and ‘c’ in the name would receive a single number of 2 and not 22 since an ‘h’ lies in between them). “Tymczak” yields “T522” not “T520” (the chars ‘z’ and ‘k’ in the name are coded as 2 twice since a vowel lies in between them). “Pfister” yields “P236” not “P123” (the first two letters have the same number and are coded once as ‘P’).
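The steps above translate fairly directly into code. The sketch below is a hypothetical helper (not taken from any particular library) that follows the rules as stated, including the special handling of 'h'/'w' versus vowels, and reproduces the examples just given:

```python
def soundex(name: str) -> str:
    """American Soundex as described above; returns a letter followed by three digits."""
    codes = {}
    for letters, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")):
        for ch in letters:
            codes[ch] = digit

    name = name.lower()
    first = name[0]
    coded = [codes.get(first, "")]            # code of the first letter, kept for de-duplication only
    for ch in name[1:]:
        if ch in "hw":
            continue                          # h/w do not break a run of identical codes
        digit = codes.get(ch)                 # vowels (and y) have no code
        if digit is None:
            coded.append("")                  # a vowel breaks the run
        elif digit != coded[-1]:
            coded.append(digit)
    digits = "".join(d for d in coded[1:] if d)
    return (first.upper() + digits + "000")[:4]

for n in ("Robert", "Rupert", "Rubin", "Ashcraft", "Tymczak", "Pfister"):
    print(n, soundex(n))   # R163 R163 R150 A261 T522 P236
```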
The New York State Identification and Intelligence System Phonetic Code, commonly known as NYSIIS, is a phonetic algorithm devised in 1970 as part of the New York State Identification and Intelligence System (now a part of the New York State Division of Criminal Justice Services). It features an accuracy increase of 2.7% over the traditional Soundex algorithm.
The algorithm, as described in Name Search Techniques, is:
- Translate first characters of name: MAC → MCC, KN → N, K → C, PH, PF → FF, SCH → SSS
- Translate last characters of name: EE → Y, IE → Y, DT, RT, RD, NT, ND → D
- First character of key = first character of name.
- Translate remaining characters by following rules, incrementing by one character each time:
- EV → AF else A, E, I, O, U → A
- Q → G, Z → S, M → N
- KN → N else K → C
- SCH → SSS, PH → FF
- H → If previous or next is non-vowel, previous.
- W → If previous is vowel, A.
- Add current to key if current is not same as the last key character.
- If last character is S, remove it.
- If last characters are AY, replace with Y.
- If last character is A, remove it.
- Append translated key to value from step 3 (removed first character)
- If longer than 6 characters, truncate to first 6 characters. (only needed for true NYSIIS, some versions use the full key)
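As an illustration of how such rule tables are applied, the fragment below implements only steps 1 and 2 of the NYSIIS procedure (the leading and trailing translations). It is a partial, hypothetical sketch rather than a complete NYSIIS encoder:

```python
PREFIXES = [("MAC", "MCC"), ("KN", "N"), ("K", "C"),
            ("PH", "FF"), ("PF", "FF"), ("SCH", "SSS")]
SUFFIXES = [("EE", "Y"), ("IE", "Y"),
            ("DT", "D"), ("RT", "D"), ("RD", "D"), ("NT", "D"), ("ND", "D")]

def nysiis_steps_1_and_2(name: str) -> str:
    """Apply only the first- and last-character translations of NYSIIS."""
    name = name.upper()
    for old, new in PREFIXES:           # step 1: translate the leading letters
        if name.startswith(old):
            name = new + name[len(old):]
            break
    for old, new in SUFFIXES:           # step 2: translate the trailing letters
        if name.endswith(old):
            name = name[:-len(old)] + new
            break
    return name

print(nysiis_steps_1_and_2("Schmidt"))   # SSSMID  (SCH -> SSS, then DT -> D)
print(nysiis_steps_1_and_2("Knight"))    # NIGHT   (KN -> N)
```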
Metaphone is a phonetic algorithm, published by Lawrence Philips in 1990, for indexing words by their English pronunciation. It fundamentally improves on the Soundex algorithm by using information about variations and inconsistencies in English spelling and pronunciation to produce a more accurate encoding, which does a better job of matching words and names which sound similar. As with Soundex, similar sounding words should share the same keys.
Original Metaphone codes use the 16 consonant symbols 0BFHJKLMNPRSTWXY. The ‘0’ represents “th” (as an ASCII approximation of Θ), ‘X’ represents “sh” or “ch“, and the others represent their usual English pronunciations. The vowels AEIOU are also used, but only at the beginning of the code. This table summarizes most of the rules in the original implementation:
- Drop duplicate adjacent letters, except for C.
- If the word begins with ‘KN’, ‘GN’, ‘PN’, ‘AE’, ‘WR’, drop the first letter.
- Drop ‘B’ if after ‘M’ at the end of the word.
- ‘C’ transforms to ‘X’ if followed by ‘IA’ or ‘H’ (unless in latter case, it is part of ‘-SCH-‘, in which case it transforms to ‘K’). ‘C’ transforms to ‘S’ if followed by ‘I’, ‘E’, or ‘Y’. Otherwise, ‘C’ transforms to ‘K’.
- ‘D’ transforms to ‘J’ if followed by ‘GE’, ‘GY’, or ‘GI’. Otherwise, ‘D’ transforms to ‘T’.
- Drop ‘G’ if followed by ‘H’ and ‘H’ is not at the end or before a vowel. Drop ‘G’ if followed by ‘N’ or ‘NED’ and is at the end.
- ‘G’ transforms to ‘J’ if before ‘I’, ‘E’, or ‘Y’, and it is not in ‘GG’. Otherwise, ‘G’ transforms to ‘K’.
- Drop ‘H’ if after vowel and not before a vowel.
- ‘CK’ transforms to ‘K’.
- ‘PH’ transforms to ‘F’.
- ‘Q’ transforms to ‘K’.
- ‘S’ transforms to ‘X’ if followed by ‘H’, ‘IO’, or ‘IA’.
- ‘T’ transforms to ‘X’ if followed by ‘IA’ or ‘IO’. ‘TH’ transforms to ‘0’. Drop ‘T’ if followed by ‘CH’.
- ‘V’ transforms to ‘F’.
- ‘WH’ transforms to ‘W’ if at the beginning. Drop ‘W’ if not followed by a vowel.
- ‘X’ transforms to ‘S’ if at the beginning. Otherwise, ‘X’ transforms to ‘KS’.
- Drop ‘Y’ if not followed by a vowel.
- ‘Z’ transforms to ‘S’.
- Drop all vowels unless it is the beginning.
It should be noted, however, that this table does not constitute a complete description of the original Metaphone algorithm, and the algorithm cannot be coded correctly from it. Original Metaphone contained many errors and was superseded by Double Metaphone, and in turn Double Metaphone and original Metaphone were superseded by Metaphone 3, which corrects thousands of miscodings that will be produced by the first two versions.
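Because the published rule lists are incomplete, in practice one usually calls an existing implementation rather than re-coding Metaphone from the summary above. A minimal usage sketch, assuming the third-party Python package jellyfish is installed and exposes soundex, nysiis and metaphone functions (check your version's documentation):

```python
import jellyfish  # third-party package; `pip install jellyfish` (assumed available)

for name in ("Robert", "Rupert", "Ashcraft", "Thompson"):
    print(name,
          jellyfish.soundex(name),    # Soundex code: a letter plus three digits
          jellyfish.nysiis(name),     # NYSIIS key
          jellyfish.metaphone(name))  # Metaphone key
```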
The Double Metaphone phonetic encoding algorithm is the second generation of this algorithm. Its implementation was described in the June 2000 issue of C/C++ Users Journal. It makes a number of fundamental design improvements over the original Metaphone algorithm.
It is called “Double” because it can return both a primary and a secondary code for a string; this accounts for some ambiguous cases as well as for multiple variants of surnames with common ancestry. For example, encoding the name “Smith” yields a primary code of SM0 and a secondary code of XMT, while the name “Schmidt” yields a primary code of XMT and a secondary code of SMT—both have XMT in common.
Double Metaphone tries to account for myriad irregularities in English of Slavic, Germanic, Celtic, Greek, French, Italian, Spanish, Chinese, and other origin. Thus it uses a much more complex ruleset for coding than its predecessor; for example, it tests for approximately 100 different contexts of the use of the letter C alone.
To implement Metaphone without purchasing a (source code) copy of Metaphone 3, the best guide would be the reference implementation of Double Metaphone.
- Metaphone 3
Metaphone 3 is the third generation of the Metaphone algorithm. It increases the accuracy of phonetic encoding from the 89% of Double Metaphone to 98%, as tested against a database of the most common English words, and names and non-English words familiar in North America. This produces an extremely reliable phonetic encoding for American pronunciations.
Metaphone 3 was designed and developed by Lawrence Philips, who designed and developed the original Metaphone and Double Metaphone algorithms.
A professional version was released in October 2009, developed by the same author, Lawrence Philips. It is a commercial product but is sold as source code. Metaphone 3 further improves phonetic encoding of words in the English language, non-English words familiar to Americans, and first names and family names commonly found in the United States. It improves encoding for proper names in particular to a considerable extent. The author claims that in general it improves accuracy for all words from the approximately 89% of Double Metaphone to 98%. Developers can also now set switches in the code to cause the algorithm to encode Metaphone keys 1) taking non-initial vowels into account, as well as 2) encoding voiced and unvoiced consonants differently. This allows the result set to be more closely focused if the developer finds that the search results include too many words that don't resemble the search term closely enough. Metaphone 3 is sold as C++, Java, C#, PHP, Perl, and PL/SQL source, as well as Metaphone 3 for Spanish and German available as Java source. The latest revision of the Metaphone 3 algorithm is v2.5.2, released February 2015.
Membranous glomerulonephritis (MGN) is a disease characterized by subepithelial immune deposits and by the formation of perpendicular projections of material similar to the glomerular basement membrane (GBM) on its outer aspect (between the podocyte cytoplasm and the GBM): "spikes". Because inflammatory cells are usually not detected in this glomerulopathy, and because in some or many cases there is no prominent local inflammation but rather trapping of immune complexes, some authors prefer the name membranous glomerulopathy or membranous glomerulonephropathy; nevertheless, the presence of immunoglobulins, complement, and the membrane attack complex (MAC, C5b-9) implies an inflammatory process (see below). Other terms that have been used are membranous nephropathy and epimembranous, perimembranous or extramembranous nephropathy (or glomerulopathy).
MGN is more commonly a primary or idiopathic disease, but it also appears as a disease secondary to other conditions, mainly infections, neoplasms and systemic lupus erythematosus (SLE). In approximately 25% of cases the disease is secondary, the percentage being greater in children and elderly patients (Glassock RJ, Nephrol Dial Transplant 7(S):64-71, 1992 [PubMed link]). Histopathologic findings do not allow a differentiation between primary and secondary forms; nevertheless, some microscopic characteristics (hypercellularity, crescents) and immunopathologic findings (complement deposits that indicate activation of the classic pathway: C1q, C4) allow secondary forms to be suspected.
MGN is the commonest cause of nephrotic syndrome (NS) in Caucasian adults (focal and segmental glomerulosclerosis being the commonest in Afro-American and Hispanic patients (Arias LF, et al. Glomerular diseases in a Hispanic population: review of a regional renal biopsy database. Sao Paulo Med J. 2009;127(3):140-4. [PubMed link][Free full text]). This disease is responsible for approximately 21-35% of NS cases in adults and 1.5-9% in children. Many series show a greater frequency of MGN in male patients, with a male:female ratio of 2:1.
The presence of immunoglobulins (Igs) and complement components in the capillary walls (subepithelial), and the morphologic and immunopathologic similarities between experimental MGN and immunologically mediated glomerular diseases, support the concept that MGN is an immune complex-mediated disease. The etiology and the origin of the antigens that cause MGN are not known. Some hypotheses suggest that the deposits come from circulating immune complexes, and others suggest in situ formation, with circulating antibodies recognizing native antigens in the capillary walls or foreign antigens previously deposited there.
Membranous nephropathy most likely is a heterogeneous disease, although a common denominator may be that podocytes provide antigenic targets for in-situ formation of glomerular immune deposits.
The morphologic similarities between MGN and Heymann's experimental nephritis have been well known for many years. In this experimental model, rats are immunized against antigens of the renal cortex; the animals develop a disease clinically and morphologically similar to MGN.
Initial studies of this model suggested that the subepithelial deposits resulted from glomerular trapping of circulating immune complexes formed by circulating brush-border related antigens and the corresponding antibodies. This hypothesis was based on the observation that the glomerular disease was induced by fractions of membrane prepared from rat renal brush-border, not from glomerular extracts. Subsequently, the development of the model of passive HN in rats that received an injection of rabbit anti-rat brush-border antibodies led to the suggestion that subepithelial immune deposits could be formed without the intervention of circulating immune complexes. Other authors demonstrated that anti-brush-border antibodies could bind glomeruli in the absence of circulating brush-border-related antigen, which provided the proof of principle that immune complex formation occurred in situ. Definitive evidence establishing the role of in situ immune complex formation in the glomerular capillary wall required identification of the antigen. The autoantigenic target in the rat disease was identified by Kerjaschki and Farquhar (Kerjaschki D, Farquhar MG: Proc Natl Acad Sci U S A 79 : 5557 –5561, 1982 [PubMed link] [Free Full Text] / Kerjaschki D, Farquhar MG. J Exp Med 157 : 667 –686, 1983 [PubMed link] [Free Full text]) in the early 1980s as the podocyte membrane protein now called megalin. The polyspecific receptor megalin, a member of the LDL-receptor superfamily, is expressed with clathrin at the sole of podocyte foot processes (where immune complexes are formed) (Ronco P, Debiec H. Molecular pathomechanisms of membranous nephropathy: from Heymann nephritis to alloimmunization. J Am Soc Nephrol. 2005;16(5):1205-13. [PubMed link][Full Text link])
In human MGN, antibodies against brush-border antigens have been demonstrated in very few cases. In the Seventies, a Japanese group demonstrated localization of tubular antigens in the immune deposits of patients with MGN (Naruse T et al, J Exp Med 144:1347-62, 1976 [PubMed link] [Free full text]); nevertheless, groups at other centers have not reproduced this finding (Collins AB, et al, Nephron 27:297-301, 1981 [PubMed link]; Thorpe LW and Cavallo T, J Clin Lab Immunol 3:125-127, 1980 [PubMed link]; Whitworth JA, et al, Clin Nephrol 5:159-162, 1976 [PubMed link]). At present the evidence suggests that the antigen-antibody complex of Heymann's nephritis has no role in human MGN. The variety of antigen-antibody complexes associated with secondary forms of MGN suggests that in idiopathic forms of the disease the morphologic presentation is common to many antigen-antibody complexes.
Neutral endopeptidase - a podocyte antigen that can digest biologically active peptides - was recently identified as the target antigen of antibodies deposited in the subepithelial space of glomeruli in a subset of patients with antenatal membranous nephropathy. The mothers became immunized because they are deficient in neutral endopeptidase due to truncating mutations in the gene (Ronco P, Debiec H. New insights into the pathogenesis of membranous glomerulonephritis. Curr Opin Nephrol Hypertens. 2006;15:258-63. [PubMed link])
More recently (in a paper published in July 2009), a group carried out Western blotting of protein extracts from normal human glomeruli with serum samples from patients with membranous nephropathy, and found that a majority of patients with idiopathic membranous nephropathy have antibodies against a conformation-dependent epitope in M-type phospholipase A2 receptor (PLA2R), indicating that PLA2R is a major antigen in this disease" (Beck LH, et al. M-Type Phospholipase A2 Receptor as Target Antigen in Idiopathic Membranous Nephropathy. N Engl J Med. 2009;361(1):11-21. [Extract link]). A recent work (2010) reported co-localization of specific anti-aldose reductase (AR) and anti-manganese superoxide dismutase (SOD2) with IgG4 and C5b-9 in electron-dense podocyte immune deposits. The data support AR and SOD2 as renal antigens of human MN and suggest that oxidative stress may drive glomerular SOD2 expression (Prunotto M, et al. Autoimmunity in membranous nephropathy targets aldose reductase and SOD2. J Am Soc Nephrol. 2010;21(3):507-19. [PubMed link]).
Neutral endopeptidase and PLA2R are two antigens that, in nonpathologic conditions, localize at the podocyte membrane. However, anti-NEP antibody levels in adults with membranous nephropathy have not been found to be different from apparently healthy controls, suggesting that they are not involved in adult idiopathic MN. In this light, anti-NEP determination in patients seems without practical value. Data for anti-PLA2R specificity seem solid and their assessment may be extremely useful for clinicians, helping to differentiate between idiopathic and secondary forms of the disease (Murtas C, et al. Circulating antipodocyte antibodies in membranous nephropathy: new findings. Am J Kidney Dis. 2013;62(1):12-5. [PubMed link]).
In 2014, Tomas NM et al published that approximately 2.5 to 5% of the patients with idiopathic membranous nephropathy evaluated had autoantibodies against thrombospondin type-1 domain-containing 7A (THSD7A), which corresponds to 8 to 14% of the patients who are seronegative for anti-PLA2R1 antibodies. THSD7A was initially characterized as an endothelial protein that is expressed in the placental vasculature. The authors found that THSD7A is concentrated at the basal aspect of the podocyte, colocalizing with nephrin, and they did not find any expression in glomerular endothelial cells. (Tomas NM, Beck LH, Meyer-Schwesinger C, et al. Thrombospondin Type-1 Domain-Containing 7A in Idiopathic Membranous Nephropathy. N Engl J Med 2014; 371:2277-87 [Article on NEJM - link])
In MGN, activation of the complement system, with formation of the MAC (C5b-9), is necessary for the development of NS (Groggel GC, et al, J Clin Invest 72:1948-1957, 1983 [PubMed link] [Free full text]), which supports a cytolytic role of complement in this disease.
Additionally, there is evidence suggesting that, at least in some cases, the immune complexes form elsewhere and are afterwards trapped in the subepithelial space. In favor of this last concept is the demonstration of circulating immune complexes (CICs) in a number of patients with MGN: 23-66% according to different series and detection techniques.
MGN and hepatitis B: The most frequent glomerulopathy in patients infected with hepatitis B virus is MGN, followed by membranoproliferative GN. The core (HBcAg) and e (HBeAg) antigens seem the most important in the pathogenesis of hepatitis B-associated MGN. In these cases the antigens, or their antibodies, are identified in the glomerular immune deposits. It is not clear what is deposited first: the antigen, the antibody, or a previously formed (circulating) antigen-antibody complex. The prevalence of MGN in this infection is not known, but in children with MGN the carrier state is detected in around 20% of cases, with higher rates in endemic countries. In adults the percentage of patients with MGN carrying hepatitis B virus is lower than in children. In MGN cases associated with this infection, mesangial hypercellularity, endocapillary proliferation, subendothelial immune deposits, and tubuloreticular endothelial structures (on electron microscopy) are more frequent. It frequently presents with hypocomplementemia. The prognosis of MGN in hepatitis B patients seems more favorable, with a higher frequency of remission and a lower probability of evolution to terminal renal damage.
MGN and hepatitis C: Secondary MGN can also appear in this infectious disease, although membranoproliferative GN is more frequent. In many studies, viral antigens, or antibodies against them, have not been identified in the glomerular deposits. Clinical expression can be similar to idiopathic MGN or it may appear as asymptomatic proteinuria.
Congenital syphilis: MGN is a rare complication of congenital syphilis, but it is a well-recognized cause of NS in children with this infection. Other glomerular diseases in congenital syphilis include nephritic syndrome and crescentic GN with rapidly progressive disease. We have seen cases with these types of glomerular disease, and there is a dramatic improvement with antibiotic treatment. Several studies have demonstrated the presence of Treponema pallidum antigens in the glomerular immune deposits.
In SLE the histopathologic presentation is very variable and there is combination of morphologic changes: MGN with subendothelial deposits, endocapillary and/or mesangial proliferation, crescents, combination with characteristics of membranoproliferative GN, and other patterns. In the most recent lupus nephritis classification, pure MGN (class V) is only diagnosed if there are no other active lesions; if there is combination with active lesions it is diagnosed as combination of class V and class III or IV only if there are lesions with membranous characteristics in more than 50% of the tuft in more than 50% of glomeruli. Occasional subepithelial deposits and “spikes” formation are very frequent in class III and IV lupus nephritis. In most of these cases we find C1q glomerular deposits.
MGN and neoplasms: The neoplasms most frequently associated with MGN are lung, breast, colon, stomach and kidney carcinomas, leukemia and lymphomas (Hodgkin's and non-Hodgkin's), but MGN has been reported in many other cancer types. The incidence of cancer in patients with MGN is approximately 1%. The histologic and immunopathologic findings and the clinical presentation are similar to those of idiopathic forms of MGN. The association between MGN and neoplasms is supported by the clinical course, the immune response of the host to the tumor and the glomerular pathology; nevertheless, in very few cases is a tumor antigen, or its antibody, documented in the glomerular deposits. It is possible that the immune response against the neoplasm, in a propitious genetic context, allows the development of MGN. The prognosis of the glomerulopathy depends on that of the neoplasm; if the tumor is treated and responds, the MGN tends to disappear.
Other conditions associated with MGN are: drugs (gold, penicillamine, mercury, captopril, etc.), other infections (parasitic, Streptococcus), other autoimmune diseases (rheumatoid arthritis, pemphigus, primary biliary cirrhosis, autoimmune enteropathy, Hashimoto's disease, Graves' disease, and so on), diabetes, renal vein thrombosis, sarcoidosis, cryoglobulinemia, and sickle cell disease.
Clinical features: The most frequent presentation is proteinuria in the nephrotic range, with or without the other findings of the complete NS. In a variable percentage of cases it presents as asymptomatic proteinuria. There is microscopic hematuria in most patients, but macrohematuria is rare. Exceptionally it can present with isolated hematuria. Renal function can be slightly altered at the time of diagnosis in many cases, but renal failure is unusual at presentation. In 25-33% of cases systemic hypertension is documented. MGN may appear at any age, with a predilection for the fourth and fifth decades of life.
The clinical course of MGN is very variable, and in many patients it is favorable; approximately 25% of patients will have partial or complete spontaneous remission, although up to 29% of them will present recurrence. Around 50% of patients will not present alteration of renal function. In a small number of cases there will be a rapid loss of renal function or death. This variable evolution makes interpretation of clinical trials and of treatment response difficult. Treatment with steroids, chlorambucil or other immunosuppressants has shown contradictory results; at present there is no universally accepted treatment.
Cases of post-transplant recurrent MGN have been reported, but there are no large series that allow determining with precision the percentage of post-transplant recurrence. Since renal transplant recipients are susceptible to many causes of secondary MGN, an underlying or associated cause must be looked for. Histologically it is not possible to differentiate between recurrent MGN and de novo MGN in a transplanted kidney; for this differentiation the histologic study of the native kidney (pre-transplant) is indispensable.
Laboratory findings: Proteinuria is generally in the nephrotic range and in many cases it is massive (>10 g/24h). In addition, in most cases, the other findings of the complete NS (hypoalbuminemia, hypercholesterolemia) are present. Proteinuria is usually nonselective. Microhematuria is common and macrohematuria is unusual. In many cases there is a slight increase of serum creatinine and BUN, and in up to 75% of cases there is a reduction of the glomerular filtration rate. There is no hypocomplementemia (at least in idiopathic forms) and in many cases increased levels of MAC (C5b-C9) are documented. As previously stated, circulating immune complexes are detected in a high percentage of cases, although, like the detection of increased MAC levels, this has little importance for the differential diagnosis, since these findings are not specific.
The characteristic changes in MGN are in the glomerular capillary walls. The initial phase of the glomerulopathy is marked by subepithelial granular deposits: in the external part of the GBM, between it and the podocyte cytoplasm. Initially these deposits do not generate a reaction of the GBM and therefore it is improbable that they are detected with light microscopy. In a few cases the deposits are large enough to be seen with light microscopy in a good section with a good trichrome stain; they appear fuchsinophilic (red), sometimes homogeneously spaced, and in the external part of the basement membrane (Figure 3); to see them, thin histologic sections and observation at high power with immersion oil are needed. In tangential sections of the GBM, with methenamine-silver stain, a mottled aspect or very small orifices ("holes") can be observed in some cases; these correspond to depressions produced by the subepithelial deposits (Figure 4), better observed with the scanning electron microscope. This phase of the evolution of MGN is called STAGE I; the diagnosis can be missed if we do not have immunofluorescence (IF) or electron microscopy (EM). These deposits are immune and will be positive for IgG and, in most cases, for C3; in addition, they are electron-dense. Without IF or EM it is very probable that we cannot diagnose this stage, and we will possibly think of minimal change disease.
In STAGE II the histologic findings with light microscopy allow an easier diagnosis. The glomerular architecture is preserved and the capillary walls appear thickened with routine stains (Figure 1). Cellularity is usually not increased (if present it suggests a secondary MGN) and the capillary lumina are ample. There is formation of material with an appearance similar to the GBM (although with a different composition) that projects perpendicularly from it: "spikes" (Figure 2). These spikes originate in reaction to the deposits and progressively surround them. This material is composed of type IV collagen and noncollagenous components (laminin, proteoglycans and vitronectin) and could be induced by mediators produced by the podocyte or by another mechanism that stimulates changes in the GBM. In some sections the spaces or holes produced by the immune deposits in the outer aspect of the GBM are well seen, with the center of the hole corresponding to the immune deposit and the periphery to the GBM-like material. In some cases these holes have an irregular form that gives a reticulated aspect to the GBM (Figure 4). In some cases we find segmental cellular proliferation (Figure 5), but in these cases we must consider the possibility of a secondary MGN; in other cases there are also segmental and focal sclerosing lesions (Figure 6).
Figure 1. In MGN the alterations are demonstrated mainly in the capillary walls; here they appear thickened and with a rigid aspect (green arrows). In some cases we find variable degrees of mesangial hypercellularity (blue arrows); in these cases we must consider the possibility of a secondary MGN. (H&E, X400).
Figure 2. The characteristic finding that allows the diagnosis in most cases is the perpendicular projections on the outer aspect of the GBM, seen with silver stain (arrows). In initial stages, when spikes have not yet formed, it may not be possible to make the diagnosis without immunofluorescence or electron microscopy. (Methenamine-silver, X1000).
Figure 3. In cases of large subepithelial deposits it is possible, in thin sections and with a good trichrome stain, to see the immune deposits with a characteristic red color (fuchsinophilic) (arrows). Even in initial stages, without "spikes", this finding allows us to make the diagnosis of MGN. Unfortunately it is not frequent to see these deposits with light microscopy. (Masson's trichrome, X1000).
Figure 4. In sections in which the GBM appears tangential it is possible to see spaces or holes (arrows); these holes are due to the presence of immune deposits, negative with silver, surrounded by GBM-like material (positive with silver). Many of these holes correspond to deposits completely surrounded by the "spikes". (Methenamine-silver, X1000).
Figure 5. With PAS stain the thickening of the capillary walls is evident; there is no noticeable mesangial widening. In this case there is also a segment with endocapillary proliferation (arrows); this finding must suggest the possibility of a secondary MGN. (PAS, X400).
Figure 6. In some cases there are segmental sclerosing lesions (arrows). This finding does not necessarily indicate a worse prognosis. (PAS, X400).
As the process advances, the material that forms the spikes increases and completely surrounds the deposits, thus forming new layers of GBM and leaving the deposits immersed in this material. The deposits are now seen as intramembranous, and with silver stain the capillary walls can take on a "chain" or "rosary" appearance. This point in the evolution of the glomerular changes is known as STAGE III. The deposits remain positive with immunostaining (IF), although they are progressively less electron-dense (Figures 7 and 8).
Figure 7. When advancing the process of the disease the parietal immune deposits are progressively surrounded by GBM-like material, which gives an irregular aspect in chain or “rosary” to the GBM (arrows); these findings characterize stage III of the MGN. According to the predominant alteration in the capillary we diagnose the lesions as stage I, II, III, or IV. (Methenamine-silver, X1000).
Figure 8. In this photomicrograph the capillary wall lesions of stage III MGN are better demonstrated; note how the material in the external part of the GBM (black with silver) forms circles or rings that completely surround the immune deposits (arrows). (Methenamine-silver, X1000).
In STAGE IV the GBM is irregularly thickened, without the presence of electron-dense deposits or holes. In this phase it is considered that the deposits have been resorbed, leaving this irregular aspect. In these cases the diagnosis is sustained by the presence of other areas with lesions in stage II or III.
In many cases there is a mixed appearance, with areas presenting several stages. To classify these cases, careful observation is required to determine the dominant pattern.
The histopathologic stages are progressive; nevertheless, although they show some correlation with the clinical evolution of the disease, there is no perfect correlation between stage and prognosis. Remission at any of these stages is possible, and progression to chronic renal failure has been described in stages I and II. It is not clear whether these stages evolve over more or less fixed periods of time. Figure 9 shows a scheme of the different stages of MGN.
Figure 9. This scheme represents the characteristics of the different MGN stages. In stage I (dark blue arrows) the deposits have not yet generated a reaction in the GBM and therefore they are not accompanied by spikes. In stage II the reaction produced in the outer aspect of the GBM is seen as perpendicular projections: "spikes" that try to surround the deposits (green arrows). In stage III the GBM-like material has completely surrounded the deposits (red arrows). And in stage IV the GBM is very thickened and irregular and the deposits have almost completely disappeared (light blue arrows). (Scheme on a microphotograph of a section stained with Masson's trichrome, X1000).
Other changes described in MGN are segmental sclerosis, lobulation of the tuft, mesangial hypercellularity, presence of inflammatory cells, and necrosis; nevertheless, in these cases we must suspect a secondary form. Some reports have documented coexistence of MGN and IgA nephropathy, MGN and diabetes, and MGN and crescentic GN. Occasionally there are cases of MGN with crescents; in these cases the course is severe with poor prognosis, and in several of these cases anti-GBM antibodies have been detected.
Spike formation in the GBM is "almost" diagnostic of MGN; nevertheless, I have seen some cases in which a similar aspect has generated confusion: amyloidosis with extensive formation of perpendicular projections on the outer aspect of the GBM, fibrillary and immunotactoid GN, and lecithin cholesterol acyl transferase deficiency.
The interstitium, tubules and vessels show nonspecific changes. Frequently, protein resorption droplets or a vacuolated aspect are observed in the cytoplasm of tubular cells. Interstitial fibrosis and tubular atrophy correlate with the severity of the chronic damage and they are good prognostic indicators, which is why they should be quantified or semiquantified (mild - moderate - severe). The causes of tubulointerstitial damage, as in many glomerulopathies, seem to be related to alteration of the glomerular circulation and secondary atrophy. Proteinuria can also play an important role in the tubular damage.
The characteristic immunopathologic picture is granular parietal IgG positivity, accompanied by C3 deposits in approximately 75% of cases. The IgG immunostaining is usually more intense than that of C3. The staining can be seen as large grains or as densely grouped fine grains that give a pseudolinear aspect. Observed in detail, it can be demonstrated that these deposits are located towards the external part of the GBM, or "transmembrane". Other immunoglobulins can also be identified in a minority of cases, especially IgM and IgA. As previously stated, C1q or C4 deposits should prompt a search for a secondary cause of MGN; the same possibility must be considered if there are mesangial deposits.
The IgG4 subclass is the most frequent; this subclass fixes complement poorly, which would explain the weaker C3 staining (Doi T et al, Clin Exp Immunol 58:57-62, 1984 [PubMed link]; Nöel LH, et al, Clin Immunol Immunopathol 46:186-194, 1988 [PubMed link]).
Figure 10. MGN is characterized, immunopathologically, by IgG granular capillary wall deposits and, in most of cases, C3 in the same location. In thin sections is possible, in many cases, to determine the location of the immune deposits in the external part of the GBM. If we also find subendothelial deposits we must think about the possibility of a secondary MGN. In both images we can see a characteristic appearance of the subepithelial deposits, which we sometimes describe as "reticular aspect". (Immunofluorescence with antibodies anti-IgG marked with fluorescein, X400).
Figure 11. In some cases of MGN the parietal deposits are granules that seem to protrude towards the outside of the capillaries (as in the photo above). Other times we may find that the deposits are very small grains, densely grouped, that give a pseudolinear aspect (as in the bottom photo). At high power it is possible to determine the granular nature of the deposits and does not confuse them with linear deposits. (Immunofluorescence with antibodies anti-IgG marked with fluorescein, X400).
There are electron-dense deposits on the epithelial (external) aspect of the GBM, between it and the epithelial cell: subepithelial or epimembranous. These deposits are usually diffuse and homogeneously distributed, but in some cases they can be irregularly distributed. Spikes are demonstrated as irregular projections of the GBM among the subepithelial deposits; with progression of the disease these projections become longer and surround the deposits, incorporating them into a thickened GBM. The deposits are amorphous; the presence of organized deposits must alert to a possible lupus nephritis. These deposits lose their electron density until they disappear in the advanced stages of the process. As in many other diseases with NS, there is a variable loss or effacement of podocyte foot processes. In some cases, more frequently secondary forms, there are dense deposits in the mesangium. (Image of MGN stage II (link) - More EM images (link)
Figure 12a. Left: electron-dense deposits on the outside of the basement membrane, without reaction of this around the deposits: Stage I. Right: deposits surrounded laterally by material similar to that of the basement membrane ("spikes"): Stage II. Original magnification, X4,000. (Images courtesy of Dr. Carlos A, Jiménez).
Figure 12b. Left: Electron-dense deposits are completely surrounded by basement membrane-like material, giving the appearance of being "embedded" within a very thick and irregular basement membrane: Stage III. Right: the glomerular basement membrane is thick and irregular and electron-dense deposits have almost disappeared: Stage IV. Original magnification: left, X4,000, right, X6,000. (Images courtesy of Dr. Carlos A, Jiménez). Note the extensive loss of pedicels in the four previous images.
As in most glomerulopathies, increased serum creatinine at diagnosis, severe proteinuria (>10 g/24h), arterial hypertension, and chronic tubulointerstitial damage have been related, to a greater or lesser degree, with a greater risk of evolution to terminal renal failure. Some works suggest a better prognosis if proteinuria is selective.
In secondary forms there is, in general, a better prognosis if the associated cause is successfully treated. In children the prognosis seems better. In some series there is a better prognosis for women. The histopathologic stages also show correlation with the evolution, although this correlation is not perfect, as previously stated.
Dolby n : United States electrical engineer who devised the Dolby system used to reduce background noise in tape recording [syn: Ray M. Dolby]
Dolby Laboratories, Inc. (Dolby Labs) (NYSE: DLB) is a USA-based company specializing in audio noise reduction and audio encoding/compression.
History
Dolby Labs was founded by Ray Dolby in England in 1965. He moved the company to the United States (San Francisco, California) in 1976. The first product he made was Type A Dolby Noise Reduction, a simple compander. One of the features that set Dolby's compander apart was that it treated only the quiet sounds that would be masked by tape noise. Dolby marketed the product to record companies.
Dolby was persuaded by Henry Kloss of KLH to manufacture a consumer version of his noise reduction. Dolby worked more on companding systems and introduced B-type in 1968.
Dolby did not manufacture consumer products outright; it licensed the technologies to consumer electronics manufacturers.
Dolby also sought to improve film sound. As the corporation's history explains:
- Upon investigation, Dolby found that many of the limitations in optical sound stemmed directly from its significantly high background noise. To filter this noise, the high-frequency response of theatre playback systems was deliberately curtailed… To make matters worse, to increase dialogue intelligibility over such systems, sound mixers were recording soundtracks with so much high-frequency pre-emphasis that high distortion resulted.
The first film with Dolby sound was A Clockwork Orange (1971), which used Dolby noise reduction on all pre-mixes and masters, but a conventional optical sound track on release prints. Callan (1974) was the first film with a Dolby-encoded optical soundtrack. In 1975 Dolby released Dolby Stereo, which included a noise reduction system in addition to more audio channels (Dolby Stereo could actually contain additional center and surround channels matrixed from the left and right). The first film with a Dolby-encoded stereo optical soundtrack was Lisztomania (1975), although this only used an LCR (Left-Center-Right) encoding technique. The first true LCRS (Left-Center-Right-Surround) soundtrack was encoded on the movie A Star Is Born in 1976. In less than ten years, 6,000 cinemas worldwide were equipped to use Dolby Stereo sound. Dolby reworked the system slightly for home use and introduced Dolby Surround, which only extracted a surround channel, and the more impressive Dolby Pro Logic, which was the domestic equivalent of the theatrical Dolby Stereo.
Dolby developed a digital surround sound compression scheme for the cinema. Dolby Stereo Digital (now simply called Dolby Digital) was first featured on the 1992 film Batman Returns. Introduced to the home theater market as Dolby AC-3 with the 1995 laserdisc release of Clear and Present Danger, the format did not become widespread in the consumer market, partly because of extra hardware that was necessary to make use of it, until it was adopted as part of the DVD specification. Dolby Digital is now found in the HDTV (ATSC) standard of the USA, DVD players, and many satellite-TV and cable-TV receivers.
On February 17, 2005, the company became public, offering stock for sale on the New York Stock Exchange under the symbol DLB.
On March 15, 2005, Dolby celebrated forty years of enhancing entertainment at the ShoWest 2005 Festival in San Francisco.
On January 8, 2007, Dolby announced the arrival of an entirely new product called Dolby Volume at the International Consumer Electronics Show (CES). This product enables users to maintain a steady volume while switching through channels or program elements (i.e., loud TV commercials).
Dolby Labs has been good to its founder: Ray Dolby is a member of the Forbes 400, with an estimated net worth of $2.7 billion in 2007.
Horrorween 2009 is to be released in Dolby 3-D.
Analog audio noise reduction
- Dolby SR (Spectral Recording): professional four-channel noise reduction system in use since 1986, which improves the dynamic range of analog recordings and transmissions by as much as 25 dB. Dolby SR is utilized by recording and post-production engineers, broadcasters, and other audio professionals. It is also the benchmark in analog film sound, being included today on nearly all 35 mm film prints. On films with digital soundtracks, the SR track is used in cinemas not equipped for digital playback, and it serves as a backup in case of problems with the digital track.
- Dolby FM: noise reduction system for FM broadcast radio. Dolby FM used Dolby B, combined with 25-microsecond pre-emphasis. This system was integrated into a small number of receivers and was used by a few radio stations in the late 1970s and early 1980s. The system is no longer used, however.
- Dolby HX Pro: single-ended system used on high-end tape recorders to increase headroom. The recording bias is varied with respect to the high frequency component of the signal being recorded. It does nothing to the actual audio that's being recorded, and doesn't require a special decoder. Any HX Pro recorded tape will have, in theory, better sound on any deck.
Digital audio
- Dolby Digital (also known as AC-3): a lossy audio compression format. It supports channel configurations from mono up to six discrete channels (referred to as "5.1"). This format first allowed and popularized surround sound. It was first developed for movie theater sound and spread to Laserdisc and DVD. It has been adopted in many broadcast formats, including all North American digital television (ATSC), DVB-T, direct broadcast satellite, cable television, DTMB, IPTV, and surround sound radio services. It is also part of the Blu-ray and HD DVD standards. Dolby Digital is used to enable surround sound output by game consoles, and several personal computers support converting all audio to Dolby Digital for output.
- Dolby Digital EX: introduces a matrix-encoded center rear surround channel to Dolby Digital for 6.1 channel output.
- Dolby Digital Plus: an audio codec based on Dolby Digital that is backward compatible but more advanced. The DVD Forum has selected Dolby Digital Plus as a standard audio format for HD DVD video. It supports data rates of up to 6 Mbit/s, an increase over Dolby Digital's 640 kbit/s maximum (a rough size comparison at these rates is sketched after this list). Dolby Digital Plus is also optimized for limited-data-rate environments such as digital broadcasting.
- Dolby TrueHD: Dolby's current lossless coding technology. It offers bit-for-bit sound reproduction identical to the studio master. Up to seven full-range 24-bit/96 kHz discrete channels are supported (plus an LFE channel, making it 7.1 surround), along with the HDMI interface. It has been selected as the mandatory format for HD DVD and as an optional format for Blu-ray Disc. Theoretically, Dolby TrueHD can support more channels, but this number has been limited to 8 for HD DVD and Blu-ray Disc.
- Dolby Headphone: simulates 5.1 surround sound in a standard pair of stereo headphones.
- Dolby Virtual Speaker: simulates 5.1 surround sound in a setup of two standard stereo speakers.
- Audistry: sound enhancement technologies
- Dolby Volume: reduces volume level changes
- Dolby Contrast provides enhanced image contrast to LCD screens with LED backlight units by means of local dimming.
- Dolby Vision
- Dolby Digital Cinema
- Dolby 3-D Digital Cinema
- Dolby Lake Processor
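As a back-of-the-envelope illustration of the data rates quoted above (the sketch is ours, not Dolby's; the bitrates are simply the maxima listed for Dolby Digital and Dolby Digital Plus, and the helper function name is made up), the following Python snippet converts a constant bitrate and running time into the storage a soundtrack would occupy.

```python
# Back-of-the-envelope comparison of soundtrack sizes at the maximum data
# rates quoted above (640 kbit/s for Dolby Digital, 6 Mbit/s for Dolby
# Digital Plus). Purely illustrative; not a description of either codec.

CODEC_MAX_KBPS = {
    "Dolby Digital (AC-3)": 640,          # kbit/s
    "Dolby Digital Plus (E-AC-3)": 6000,  # kbit/s
}

def track_size_mb(bitrate_kbps: float, minutes: float) -> float:
    """Size in megabytes of `minutes` of audio encoded at a constant bitrate."""
    total_bits = bitrate_kbps * 1000 * minutes * 60
    return total_bits / 8 / 1_000_000  # bits -> bytes -> megabytes

if __name__ == "__main__":
    feature_minutes = 120  # a two-hour feature film
    for codec, kbps in CODEC_MAX_KBPS.items():
        size = track_size_mb(kbps, feature_minutes)
        print(f"{codec}: ~{size:.0f} MB for {feature_minutes} min at {kbps} kbit/s")
```

At these maxima, a two-hour soundtrack works out to roughly 0.6 GB for Dolby Digital versus about 5.4 GB for Dolby Digital Plus.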
Pneumonia, the inflammatory state of lung tissue primarily due to microbial infection, claimed 52,306 lives in the United States in 2007 [1] and resulted in the hospitalization of 1.1 million patients [2]. With an average in-patient hospital stay of five days [2], pneumonia and influenza impose a significant financial burden, costing the United States $40.2 billion in 2005 [3]. Under the current Infectious Diseases Society of America/American Thoracic Society guidelines, standard-of-care recommendations include the rapid administration of an appropriate antibiotic regimen, fluid replacement, and ventilation (if necessary). Non-standard therapies include the use of corticosteroids and statins; however, these therapies lack conclusive supporting evidence [4]. (Figure 1)
Osteopathic Manipulative Treatment (OMT) is a cost-effective adjunctive treatment of pneumonia that has been shown to reduce patients' length of hospital stay, duration of intravenous antibiotics, and incidence of respiratory failure or death when compared to subjects who received conventional care alone [5]. The use of manual manipulation techniques for pneumonia was first recorded as early as the Spanish influenza pandemic of 1918, when patients treated with standard medical care had an estimated mortality rate of 33%, compared to a 10% mortality rate in patients treated by osteopathic physicians [6]. When applied to the management of pneumonia, manual manipulation techniques bolster lymphatic flow, respiratory function, and immunological defense by targeting anatomical structures involved in these systems [7-10].
The objective of this review video-article is three-fold: a) summarize the findings of randomized controlled studies on the efficacy of OMT in adult patients with diagnosed pneumonia, b) demonstrate established protocols utilized by osteopathic physicians treating pneumonia, and c) elucidate the physiological mechanisms behind manual manipulation of the respiratory and lymphatic systems. Specifically, we will discuss and demonstrate four routine techniques that address autonomics, lymph drainage, and rib cage mobility: 1) Rib Raising, 2) Thoracic Pump, 3) Doming of the Thoracic Diaphragm, and 4) Muscle Energy for Rib 1 [5,11].
Related JoVE Articles
Following in Real Time the Impact of Pneumococcal Virulence Factors in an Acute Mouse Pneumonia Model Using Bioluminescent Bacteria
Institutions: University of Greifswald.
Pneumonia is one of the major health care problems in developing and industrialized countries and is associated with considerable morbidity and mortality. Despite advances in knowledge of this illness, the availability of intensive care units (ICU), and the use of potent antimicrobial agents and effective vaccines, the mortality rates remain high [1]. Streptococcus pneumoniae is the leading pathogen of community-acquired pneumonia (CAP) and one of the most common causes of bacteremia in humans. This pathogen is equipped with an armamentarium of surface-exposed adhesins and virulence factors contributing to pneumonia and invasive pneumococcal disease (IPD). The assessment of the in vivo role of bacterial fitness or virulence factors is of utmost importance to unravel S. pneumoniae pathogenicity mechanisms. Murine models of pneumonia, bacteremia, and meningitis are being used to determine the impact of pneumococcal factors at different stages of the infection. Here we describe a protocol to monitor in real time pneumococcal dissemination in mice after intranasal or intraperitoneal infections with bioluminescent bacteria. The results show the multiplication and dissemination of pneumococci in the lower respiratory tract and blood, which can be visualized and evaluated using an imaging system and the accompanying analysis software.
Infection, Issue 84, Gram-Positive Bacteria, Streptococcus pneumoniae, Pneumonia, Bacterial, Respiratory Tract Infections, animal models, community-acquired pneumonia, invasive pneumococcal diseases, Pneumococci, bioimaging, virulence factor, dissemination, bioluminescence, IVIS Spectrum
A Novel Rescue Technique for Difficult Intubation and Difficult Ventilation
Institutions: Children’s Hospital of Michigan, St. Jude Children’s Research Hospital.
We describe a novel non-surgical technique to maintain oxygenation and ventilation in a case of difficult intubation and difficult ventilation, which works especially well with a poor mask fit.
"Cannot intubate, cannot ventilate" (CICV) is a potentially life-threatening situation. In this video we present a simulation of the technique we used in a case of CICV where oxygenation and ventilation were maintained by inserting an endotracheal tube (ETT) nasally down to the level of the nasopharynx while sealing the mouth and nares for successful positive pressure ventilation.
A 13-year-old patient was taken to the operating room for incision and drainage of a neck abscess and direct laryngobronchoscopy. After preoxygenation, anesthesia was induced intravenously. Mask ventilation was found to be extremely difficult because of the swelling of the soft tissue. The face mask could not fit properly on the face due to significant facial swelling as well. A direct laryngoscopy was attempted with no visualization of the larynx. Oxygen saturation was difficult to maintain, with saturations falling to 80%. In order to oxygenate and ventilate the patient, an endotracheal tube was then inserted nasally after nasal spray with nasal decongestant and lubricant. The tube was pushed gently and blindly into the hypopharynx. The mouth and nose of the patient were sealed by hand and positive pressure ventilation was possible with 100% O2, with good oxygen saturation during that period of time. Once the patient was stable and well sedated, a rigid bronchoscope was introduced by the otolaryngologist, showing extensive subglottic and epiglottic edema and a mass effect from the abscess, contributing to the airway compromise. The airway was secured with an ETT by the otolaryngologist. This video will show a simulation of the technique on a patient undergoing general anesthesia for dental restorations.
Medicine, Issue 47, difficult ventilation, difficult intubation, nasal, saturation
Isolation and Enrichment of Rat Mesenchymal Stem Cells (MSCs) and Separation of Single-colony Derived MSCs
Institutions: City of Hope Cancer Center.
MSCs are a population of adult stem cells that is a promising source for therapeutic applications. These cells can be isolated from the bone marrow and can be easily separated from the hematopoietic stem cells (HSCs) due to their plastic adherence. This protocol describes how to isolate MSCs from rat femurs and tibias. The isolated cells were further enriched for two MSC surface markers, CD54 and CD90, by magnetic cell sorting. Expression of the surface markers CD54 and CD90 was then confirmed by flow cytometry analysis. The HSC marker CD45 was also included to check whether the sorted MSCs were depleted of HSCs. MSCs are naturally quite heterogeneous. There are subpopulations of cells that have different shapes, proliferation and differentiation abilities. These subpopulations all express the known MSC markers and no unique marker has yet been identified for the different subpopulations. Therefore, an alternative approach to separate out the different subpopulations is using cloning cylinders to isolate single-colony derived cells. The cells derived from the single colonies can then be cultured and evaluated separately.
Cellular Biology, Issue 37, mesenchymal stem cells, magnetic cell sorting, flow cytometry, cloning cylinder
Guidelines for Elective Pediatric Fiberoptic Intubation
Institutions: St. Jude Children's Research Hospital, Children's Hospital of Michigan, Children's Hospital of Michigan.
Fiberoptic intubation in pediatric patients is often required, especially in the difficult airways of syndromic patients (e.g., Pierre Robin syndrome). Small babies will desaturate very quickly if ventilation is interrupted, mainly due to their high metabolic rate. We describe guidelines to perform a safe fiberoptic intubation while maintaining spontaneous breathing throughout the procedure. Steps requiring the use of a propofol pump, fentanyl, glycopyrrolate, a red rubber catheter, a metal insufflation hook, Afrin, lubricant and lidocaine spray are shown.
Medicine, Issue 47, Fiberoptic, Intubation, Pediatric, elective
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the Viroseq genotyping method. Limitations of the method described here include the fact that it is not automated and that it also failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3].

In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
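As an illustrative aside (not code from the article above), the minimum-norm estimation mentioned in its keywords reduces to a single regularized linear inverse. The sketch below uses a random matrix in place of a real MRI-derived head model, and the dimensions and regularization value are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch of an L2 minimum-norm estimate (MNE). In a real analysis the
# gain (lead-field) matrix G comes from a head model built from structural MRI;
# here it is random purely for illustration.

rng = np.random.default_rng(0)

n_sensors, n_sources, n_times = 128, 500, 200
G = rng.standard_normal((n_sensors, n_sources))   # lead-field (head model)
M = rng.standard_normal((n_sensors, n_times))     # sensor-space EEG data

def minimum_norm(G: np.ndarray, M: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Return source estimates S solving min ||M - G S||^2 + lam ||S||^2."""
    gram = G @ G.T + lam * np.eye(G.shape[0])      # regularized sensor-space Gram matrix
    return G.T @ np.linalg.solve(gram, M)          # S = G^T (G G^T + lam I)^-1 M

S = minimum_norm(G, M, lam=1.0)
print(S.shape)  # (n_sources, n_times): estimated current at each source over time
```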
Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae
Institutions: Rensselaer Polytechnic Institute.
Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age due to technical constraints. For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging.
Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging
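For readers unfamiliar with fluctuation tests, the sketch below shows one standard way a mutation rate can be estimated from such an experiment, the p0 (Luria-Delbrück) method. The culture counts and cell numbers are invented for illustration and are not data from the protocol above.

```python
import math

# Illustrative p0 (Luria-Delbruck) estimate of a mutation rate from a
# fluctuation test: grow many parallel cultures, select for the mutant
# phenotype, and use the fraction of cultures with zero mutants.
# All numbers below are invented.

mutant_counts = [0, 0, 3, 0, 12, 0, 0, 1, 0, 0, 7, 0, 0, 0, 2, 0, 0, 0, 5, 0]
cells_per_culture = 2e7   # final number of cells plated from each culture

p0 = mutant_counts.count(0) / len(mutant_counts)           # fraction with no mutants
mutations_per_culture = -math.log(p0)                      # expected mutation events, m
mutation_rate = mutations_per_culture / cells_per_culture  # approx. rate per cell per division

print(f"p0 = {p0:.2f}, m = {mutations_per_culture:.2f}, "
      f"rate ~ {mutation_rate:.2e} per cell")
```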
Characterization of Inflammatory Responses During Intranasal Colonization with Streptococcus pneumoniae
Institutions: McMaster University .
Nasopharyngeal colonization by Streptococcus pneumoniae is a prerequisite to invasion of the lungs or bloodstream [1]. This organism is capable of colonizing the mucosal surface of the nasopharynx, where it can reside, multiply and eventually overcome host defences to invade other tissues of the host. Establishment of an infection in the normally sterile lower respiratory tract results in pneumonia. Alternatively, the bacteria can disseminate into the bloodstream causing bacteraemia, which is associated with high mortality rates [2], or else lead directly to the development of pneumococcal meningitis. Understanding the kinetics of, and immune responses to, nasopharyngeal colonization is an important aspect of S. pneumoniae research.

Our mouse model of intranasal colonization is adapted from human models [3] and has been used by multiple research groups in the study of host-pathogen responses in the nasopharynx [4-7]. In the first part of the model, we use a clinical isolate of S. pneumoniae to establish a self-limiting bacterial colonization that is similar to carriage events in human adults. The procedure detailed herein involves preparation of a bacterial inoculum, followed by the establishment of a colonization event through delivery of the inoculum via an intranasal route of administration. Resident macrophages are the predominant cell type in the nasopharynx during the steady state. Typically, there are few lymphocytes present in uninfected mice [8]; however, mucosal colonization will lead to low- to high-grade inflammation (depending on the virulence of the bacterial species and strain) that will result in an immune response and the subsequent recruitment of host immune cells. These cells can be isolated by a lavage of the tracheal contents through the nares, and correlated to the density of colonizing bacteria to better understand the kinetics of the infection.
Immunology, Issue 83, Streptococcus pneumoniae, Nasal lavage, nasopharynx, murine, flow cytometry, RNA, Quantitative PCR, recruited macrophages, neutrophils, T-cells, effector cells, intranasal colonization
Isolation of Myeloid Dendritic Cells and Epithelial Cells from Human Thymus
Institutions: Hertie Institute for Clinical Brain Research, University of Bern, University Medical Center Hamburg-Eppendorf, University Clinic Tuebingen, University Hospital Erlangen.
In this protocol we provide a method to isolate dendritic cells (DC) and epithelial cells (TEC) from the human thymus. DC and TEC are the major antigen presenting cell (APC) types found in a normal thymus and it is well established that they play distinct roles during thymic selection. These cells are localized in distinct microenvironments in the thymus and each APC type makes up only a minor population of cells. To further understand the biology of these cell types, characterization of these cell populations is highly desirable but due to their low frequency, isolation of any of these cell types requires an efficient and reproducible procedure. This protocol details a method to obtain cells suitable for characterization of diverse cellular properties. Thymic tissue is mechanically disrupted and after different steps of enzymatic digestion, the resulting cell suspension is enriched using a Percoll density centrifugation step. For isolation of myeloid DC (CD11c+), cells from the low-density fraction (LDF) are immunoselected by magnetic cell sorting. Enrichment of TEC populations (mTEC, cTEC) is achieved by depletion of hematopoietic (CD45hi) cells from the low-density Percoll cell fraction, allowing their subsequent isolation via fluorescence activated cell sorting (FACS) using specific cell markers. The isolated cells can be used for different downstream applications.
Immunology, Issue 79, Immune System Processes, Biological Processes, immunology, Immune System Diseases, Immune System Phenomena, Life Sciences (General), immunology, human thymus, isolation, dendritic cells, mTEC, cTEC
Homing of Hematopoietic Cells to the Bone Marrow
Institutions: MGH - Massachusetts General Hospital.
Homing is the phenomenon whereby transplanted hematopoietic cells are able to travel to and engraft or establish residence in the bone marrow. Various chemokines and receptors are involved in the homing of hematopoietic stem cells [1, 2].
This paper outlines the classic homing protocol used in hematopoietic stem cell studies. In general this involves isolating the cell population whose homing needs to be investigated, staining this population with a dye of interest and injecting these cells into the blood stream of a recipient animal. The recipient animal is then sacrificed at a pre-determined time after injection and the bone marrow evaluated for the percentage or absolute number of cells which are positive for the dye of interest. In one of the most common experimental schemes, the homing efficiency of hematopoietic cells from two genetically distinct animals (a wild type animal and the corresponding knock-out) is compared. This article describes the hematopoietic cell homing protocol in the framework of such an experiment.
Immunology, Issue 25, HSC, homing, engraftment, transplantation
EEG Mu Rhythm in Typical and Atypical Development
Institutions: University of Washington, University of Washington.
Electroencephalography (EEG) is an effective, efficient, and noninvasive method of assessing and recording brain activity. Given the excellent temporal resolution, EEG can be used to examine the neural response related to specific behaviors, states, or external stimuli. An example of this utility is the assessment of the mirror neuron system (MNS) in humans through the examination of the EEG mu rhythm. The EEG mu rhythm, oscillatory activity in the 8-12 Hz frequency range recorded from centrally located electrodes, is suppressed when an individual executes, or simply observes, goal directed actions. As such, it has been proposed to reflect activity of the MNS. It has been theorized that dysfunction in the mirror neuron system (MNS) plays a contributing role in the social deficits of autism spectrum disorder (ASD). The MNS can then be noninvasively examined in clinical populations by using EEG mu rhythm attenuation as an index for its activity. The described protocol provides an avenue to examine social cognitive functions theoretically linked to the MNS in individuals with typical and atypical development, such as ASD.
Medicine, Issue 86, Electroencephalography (EEG), mu rhythm, imitation, autism spectrum disorder, social cognition, mirror neuron system
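As a hypothetical illustration of the kind of quantification this approach implies (not code from the article), mu suppression is often expressed as the log ratio of 8-12 Hz power during a condition of interest relative to baseline. The sketch below computes that index for synthetic signals; the sampling rate, electrode, and signal shapes are assumptions.

```python
import numpy as np
from scipy.signal import welch

# Illustrative mu-suppression index: log ratio of 8-12 Hz power in an
# experimental condition (e.g. action observation) relative to baseline,
# as would be computed from a central electrode. Signals here are synthetic.

fs = 250  # sampling rate in Hz
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)

baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
observation = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(signal: np.ndarray, fs: float, lo: float = 8.0, hi: float = 12.0) -> float:
    """Average spectral power in the [lo, hi] Hz band (Welch periodogram)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

suppression_index = np.log(band_power(observation, fs) / band_power(baseline, fs))
print(f"mu suppression index: {suppression_index:.2f}")  # negative values indicate suppression
```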
Investigating the Effects of Probiotics on Pneumococcal Colonization Using an In Vitro Adherence Assay
Institutions: Murdoch Childrens Research Institute, Murdoch Childrens Research Institute, The University of Melbourne, The University of Melbourne.
Adherence of Streptococcus pneumoniae (the pneumococcus) to the epithelial lining of the nasopharynx can result in colonization and is considered a prerequisite for pneumococcal infections such as pneumonia and otitis media. In vitro adherence assays can be used to study the attachment of pneumococci to epithelial cell monolayers and to investigate potential interventions, such as the use of probiotics, to inhibit pneumococcal colonization. The protocol described here is used to investigate the effects of the probiotic Streptococcus salivarius on the adherence of pneumococci to the human epithelial cell line CCL-23 (sometimes referred to as HEp-2 cells). The assay involves three main steps: 1) preparation of epithelial and bacterial cells, 2) addition of bacteria to epithelial cell monolayers, and 3) detection of adherent pneumococci by viable counts (serial dilution and plating) or quantitative real-time PCR (qPCR). This technique is relatively straightforward and does not require specialized equipment other than a tissue culture setup. The assay can be used to test other probiotic species and/or potential inhibitors of pneumococcal colonization and can be easily modified to address other scientific questions regarding pneumococcal adherence and invasion.
Immunology, Issue 86, Gram-Positive Bacterial Infections, Pneumonia, Bacterial, Lung Diseases, Respiratory Tract Infections, Streptococcus pneumoniae, adherence, colonization, probiotics, Streptococcus salivarius, In Vitro assays
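As a small worked example of the viable-count readout mentioned in step 3 above (the plate counts, volumes, and helper function are hypothetical, not taken from the protocol), colony counts from a dilution series can be converted into the number of adherent CFU recovered per well.

```python
# Illustrative conversion of colony counts from a serial dilution into CFU.
# All values are invented; the calculation is the standard one for viable counts.

plated_volume_ml = 0.1   # volume spread on each agar plate
lysate_volume_ml = 1.0   # volume the adherent bacteria were recovered in

# (dilution factor, colonies counted) for plates in the countable range
plates = [(1e-3, 212), (1e-4, 25)]

def cfu_per_ml(dilution: float, colonies: int, plated_ml: float) -> float:
    """CFU/ml of the undiluted sample implied by one plate."""
    return colonies / (dilution * plated_ml)

estimates = [cfu_per_ml(d, c, plated_volume_ml) for d, c in plates]
mean_cfu_per_ml = sum(estimates) / len(estimates)
cfu_per_well = mean_cfu_per_ml * lysate_volume_ml

print(f"~{cfu_per_well:.2e} adherent CFU recovered per well")
```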
Feeder-free Derivation of Neural Crest Progenitor Cells from Human Pluripotent Stem Cells
Institutions: Sloan-Kettering Institute for Cancer Research, The Rockefeller University.
Human pluripotent stem cells (hPSCs) have great potential for studying human embryonic development, for modeling human diseases in the dish and as a source of transplantable cells for regenerative applications after disease or accidents. Neural crest (NC) cells are the precursors for a large variety of adult somatic cells, such as cells from the peripheral nervous system and glia, melanocytes and mesenchymal cells. They are a valuable source of cells to study aspects of human embryonic development, including cell fate specification and migration. Further differentiation of NC progenitor cells into terminally differentiated cell types offers the possibility to model human diseases in vitro, investigate disease mechanisms and generate cells for regenerative medicine. This article presents the adaptation of a currently available in vitro differentiation protocol for the derivation of NC cells from hPSCs. This new protocol requires 18 days of differentiation, is feeder-free, easily scalable and highly reproducible among human embryonic stem cell (hESC) lines as well as human induced pluripotent stem cell (hiPSC) lines. Both old and new protocols yield NC cells of equal identity.
Neuroscience, Issue 87, Embryonic Stem Cells (ESCs), Pluripotent Stem Cells, Induced Pluripotent Stem Cells (iPSCs), Neural Crest, Peripheral Nervous System (PNS), pluripotent stem cells, neural crest cells, in vitro differentiation, disease modeling, differentiation protocol, human embryonic stem cells, human pluripotent stem cells
Discovery of New Intracellular Pathogens by Amoebal Coculture and Amoebal Enrichment Approaches
Institutions: University Hospital Center and University of Lausanne.
Intracellular pathogens such as Legionella, mycobacteria and Chlamydia-like organisms are difficult to isolate because they often grow poorly or not at all on the selective media that are usually used to cultivate bacteria. For this reason, many of these pathogens were discovered only recently or following important outbreaks. These pathogens are often associated with amoebae, which serve as host cells and allow the survival and growth of the bacteria. We intend here to provide a demonstration of two techniques that allow isolation and characterization of intracellular pathogens present in clinical or environmental samples: amoebal coculture and amoebal enrichment. Amoebal coculture allows recovery of intracellular bacteria by inoculating the investigated sample onto an amoebal lawn that can be infected and lysed by the intracellular bacteria present in the sample. Amoebal enrichment allows recovery of amoebae present in a clinical or environmental sample. This can lead to the discovery of new amoebal species but also of new intracellular bacteria growing specifically in these amoebae. Together, these two techniques help to discover new intracellular bacteria able to grow in amoebae. Because of their ability to infect amoebae and resist phagocytosis, these intracellular bacteria might also escape phagocytosis by macrophages and thus be pathogenic for higher eukaryotes.
Immunology, Issue 80, Environmental Microbiology, Soil Microbiology, Water Microbiology, Amoebae, microorganisms, coculture, obligate intracellular bacteria
Voluntary Breath-hold Technique for Reducing Heart Dose in Left Breast Radiotherapy
Institutions: Royal Marsden NHS Foundation Trust, University of Surrey, Institute of Cancer Research, Sutton, UK, Institute of Cancer Research, Sutton, UK.
Breath-holding techniques reduce the amount of radiation received by cardiac structures during tangential-field left breast radiotherapy. With these techniques, patients hold their breath while radiotherapy is delivered, pushing the heart down and away from the radiotherapy field. Despite clear dosimetric benefits, these techniques are not yet in widespread use. One reason for this is that commercially available solutions require specialist equipment, necessitating not only significant capital investment, but often also incurring ongoing costs such as a need for daily disposable mouthpieces. The voluntary breath-hold technique described here does not require any additional specialist equipment. All breath-holding techniques require a surrogate to monitor breath-hold consistency and whether breath-hold is maintained. Voluntary breath-hold uses the distance moved by the anterior and lateral reference marks (tattoos) away from the treatment room lasers in breath-hold to monitor consistency at CT-planning and treatment setup. Light fields are then used to monitor breath-hold consistency prior to and during radiotherapy delivery.
Medicine, Issue 89, breast, radiotherapy, heart, cardiac dose, breath-hold
In vitro Coculture Assay to Assess Pathogen Induced Neutrophil Trans-epithelial Migration
Institutions: Harvard Medical School, MGH for Children, Massachusetts General Hospital.
Mucosal surfaces serve as protective barriers against pathogenic organisms. Innate immune responses are activated upon sensing pathogen, leading to the infiltration of tissues with migrating inflammatory cells, primarily neutrophils. This process has the potential to be destructive to tissues if excessive or held in an unresolved state. Cocultured in vitro models can be utilized to study the unique molecular mechanisms involved in pathogen induced neutrophil trans-epithelial migration. This type of model provides versatility in experimental design with opportunity for controlled manipulation of the pathogen, epithelial barrier, or neutrophil. Pathogenic infection of the apical surface of polarized epithelial monolayers grown on permeable transwell filters instigates physiologically relevant basolateral to apical trans-epithelial migration of neutrophils applied to the basolateral surface. The in vitro model described herein demonstrates the multiple steps necessary for demonstrating neutrophil migration across a polarized lung epithelial monolayer that has been infected with pathogenic P. aeruginosa (PAO1). Seeding and culturing of permeable transwells with human derived lung epithelial cells is described, along with isolation of neutrophils from whole human blood and culturing of PAO1 and nonpathogenic K12 E. coli (MC1000). The emigrational process and quantitative analysis of successfully migrated neutrophils that have been mobilized in response to pathogenic infection is shown with representative data, including positive and negative controls. This in vitro model system can be manipulated and applied to other mucosal surfaces. Inflammatory responses that involve excessive neutrophil infiltration can be destructive to host tissues and can occur in the absence of pathogenic infections. A better understanding of the molecular mechanisms that promote neutrophil trans-epithelial migration through experimental manipulation of the in vitro coculture assay system described herein has significant potential to identify novel therapeutic targets for a range of mucosal infectious as well as inflammatory diseases.
Infection, Issue 83, Cellular Biology, Epithelium, Neutrophils, Pseudomonas aeruginosa, Respiratory Tract Diseases, Neutrophils, epithelial barriers, pathogens, transmigration
Bronchial Thermoplasty: A Novel Therapeutic Approach to Severe Asthma
Institutions: Virginia Hospital Center, Virginia Hospital Center.
Bronchial thermoplasty is a non-drug procedure for severe persistent asthma that delivers thermal energy to the airway wall in a precisely controlled manner to reduce excessive airway smooth muscle. Reducing airway smooth muscle decreases the ability of the airways to constrict, thereby reducing the frequency of asthma attacks. Bronchial thermoplasty is delivered by the Alair System and is performed in three outpatient procedure visits, each scheduled approximately three weeks apart. The first procedure treats the airways of the right lower lobe, the second treats the airways of the left lower lobe and the third and final procedure treats the airways in both upper lobes. After all three procedures are performed the bronchial thermoplasty treatment is complete.
Bronchial thermoplasty is performed during bronchoscopy with the patient under moderate sedation. All accessible airways distal to the mainstem bronchi between 3 and 10 mm in diameter, with the exception of the right middle lobe, are treated under bronchoscopic visualization. Contiguous and non-overlapping activations of the device are used, moving from distal to proximal along the length of the airway, and systematically from airway to airway as described previously. Although conceptually straightforward, the actual execution of bronchial thermoplasty is quite intricate and procedural duration for the treatment of a single lobe is often substantially longer than encountered during routine bronchoscopy. As such, bronchial thermoplasty should be considered a complex interventional bronchoscopy and is intended for the experienced bronchoscopist. Optimal patient management is critical in any such complex and longer duration bronchoscopic procedure. This article discusses the importance of careful patient selection, patient preparation, patient management, procedure duration, postoperative care and follow-up to ensure that bronchial thermoplasty is performed safely.
Bronchial thermoplasty is expected to complement asthma maintenance medications by providing long-lasting asthma control and improving the asthma-related quality of life of patients with severe asthma. In addition, bronchial thermoplasty has been demonstrated to reduce severe exacerbations (asthma attacks), emergency room visits for respiratory symptoms, and time lost from work, school and other daily activities due to asthma.
Medicine, Issue 45, bronchial thermoplasty, severe asthma, airway smooth muscle, bronchoscopy, radiofrequency energy, patient management, moderate sedation
Sublingual Immunotherapy as an Alternative to Induce Protection Against Acute Respiratory Infections
Institutions: Universidad de la República, Trinity College Dublin.
The sublingual route has been widely used to deliver small molecules into the bloodstream and to modulate the immune response at different sites. It has been shown to effectively induce humoral and cellular responses at systemic and mucosal sites, namely the lungs and urogenital tract. Sublingual vaccination can promote protection against infections of the lower and upper respiratory tract; it can also promote tolerance to allergens and ameliorate asthma symptoms. Modulation of the lungs' immune response by sublingual immunotherapy (SLIT) is safer than direct administration of formulations by the intranasal route because it does not require delivery of potentially harmful molecules directly into the airways. In contrast to intranasal delivery, side effects involving brain toxicity or facial paralysis are not promoted by SLIT. The immune mechanisms underlying SLIT remain elusive and its use for the treatment of acute lung infections has not yet been explored. Thus, development of appropriate animal models of SLIT is needed to further explore its potential advantages.
This work shows how to perform sublingual administration of therapeutic agents in mice to evaluate their ability to protect against acute pneumococcal pneumonia. Technical aspects of mouse handling during sublingual inoculation, precise identification of sublingual mucosa, draining lymph nodes and isolation of tissues, bronchoalveolar lavage and lungs are illustrated. Protocols for single cell suspension preparation for FACS analysis are described in detail. Other downstream applications for the analysis of the immune response are discussed. Technical aspects of the preparation of the Streptococcus pneumoniae inoculum and intranasal challenge of mice are also explained.
SLIT is a simple technique that allows screening of candidate molecules to modulate lungs’ immune response. Parameters affecting the success of SLIT are related to molecular size, susceptibility to degradation and stability of highly concentrated formulations.
Medicine, Issue 90, Sublingual immunotherapy, Pneumonia, Streptococcus pneumoniae, Lungs, Flagellin, TLR5, NLRC4
Induction of Alloantigen-specific Anergy in Human Peripheral Blood Mononuclear Cells by Alloantigen Stimulation with Co-stimulatory Signal Blockade
Institutions: Dana Farber Cancer Institute, Brigham and Womens Hospital, Dana Farber Cancer Institute, Children’s Hospital Boston.
Allogeneic hematopoietic stem cell transplantation (AHSCT) offers the best chance of cure for many patients with congenital and acquired hematologic diseases. Unfortunately, transplantation of alloreactive donor T cells which recognize and damage healthy patient tissues can result in Graft-versus-Host Disease (GvHD) [1]. One challenge to successful AHSCT is the prevention of GvHD without associated impairment of the beneficial effects of donor T cells, particularly immune reconstitution and prevention of relapse. GvHD can be prevented by non-specific depletion of donor T cells from stem cell grafts or by administration of pharmacological immunosuppression. Unfortunately these approaches increase infection and disease relapse [2-4]. An alternative strategy is to selectively deplete alloreactive donor T cells after allostimulation by recipient antigen presenting cells (APC) before transplant. Early clinical trials of these allodepletion strategies improved immune reconstitution after HLA-mismatched HSCT without excess GvHD [5, 6]. However, some allodepletion techniques require specialized recipient APC production [6, 7] and some approaches may have off-target effects, including depletion of donor pathogen-specific T cells [8] and CD4 T regulatory cells [9].

One alternative approach is the inactivation of alloreactive donor T cells via induction of alloantigen-specific hyporesponsiveness. This is achieved by stimulating donor cells with recipient APC while providing blockade of CD28-mediated co-stimulation signals [10]. This "alloanergization" approach reduces alloreactivity by 1-2 logs while preserving pathogen- and tumor-associated antigen T cell responses in vitro [11]. The strategy has been successfully employed in 2 completed and 1 ongoing clinical pilot studies in which alloanergized donor T cells were infused during or after HLA-mismatched HSCT, resulting in rapid immune reconstitution, few infections, and less severe acute and chronic GvHD than historical control recipients of unmanipulated HLA-mismatched transplantation [12]. Here we describe our current protocol for the generation of peripheral blood mononuclear cells (PBMC) which have been alloanergized to HLA-mismatched unrelated stimulator PBMC. Alloanergization is achieved by allostimulation in the presence of monoclonal antibodies to the ligands B7.1 and B7.2 to block CD28-mediated costimulation. This technique does not require the production of specialized stimulator APC and is simple to perform, requiring only a single and relatively brief ex vivo incubation step. As such, the approach can be easily standardized for clinical use to generate donor T cells with reduced alloreactivity but retaining pathogen-specific immunity for adoptive transfer in the setting of AHSCT to improve immune reconstitution without excessive GvHD.
Immunology, Issue 49, Allogeneic stem cell transplantation, alloreactivity, Graft-versus-Host Disease, T cell costimulation, anergy, mixed lymphocyte reaction.
The Preparation of Primary Hematopoietic Cell Cultures From Murine Bone Marrow for Electroporation
Institutions: Bio-Rad Laboratories, Inc.
It is becoming increasingly apparent that electroporation is the most effective way to introduce plasmid DNA or siRNA into primary cells. The Gene Pulser MXcell electroporation system and Gene Pulser electroporation buffer were specifically developed to transfect nucleic acids into mammalian cells and difficult-to-transfect cells, such as primary and stem cells. This video demonstrates how to establish primary hematopoietic cell cultures from murine bone marrow and then prepare them for electroporation in the MXcell system. We begin by isolating the femur and tibia. Bone marrow from both femur and tibia is then harvested and cultures are established. Cultured bone marrow cells are then transfected and analyzed.
Immunology, Issue 23, Primary Hematopoietic Cell Culture, Bone Marrow, Transfection, Electroporation, BioRad, IL-3
Freezing Human ES Cells
Here we demonstrate how our lab freezes HuES human embryonic stem cell lines. A healthy, exponentially expanding culture is washed with PBS to remove residual media that could otherwise quench the trypsin reaction. Warmed 0.05% Trypsin-EDTA is then added to cover the cells, and the plate is allowed to incubate for up to 5 min at room temperature. During this time cells can be observed rounding, and colonies lifting off the plate surface. Gentle repeated pipetting will remove cells and colonies from the plate surface. Trypsinized cells are placed in a standard conical tube containing pre-warmed hES cell media to quench remaining trypsin, and then spun. Cells are resuspended in growth media at a concentration of approximately one million cells per mL of media, a concentration such that one frozen aliquot is sufficient to resurrect a culture on a 10 cm plate. After cells are adequately resuspended, ice-cold freezing media is added at an equal volume. Cell suspensions are mixed thoroughly, aliquoted into freezing vials, and allowed to slowly freeze to -80 °C over 24 hours. Frozen cells can then be moved to the vapor phase of liquid nitrogen for long-term storage, or remain at -80 °C for approximately six months.
Cellular Biology, Issue 1, Embryonic Stem Cells, ES, Tissue Culture, Freezing
The neural crest (NC) is a major contributor to the vertebrate craniofacial skeleton, detailed in model organisms through embryological and genetic approaches, most notably in chick and mouse. Despite many similarities between these rather distant species, there are also distinct differences in the contribution of the NC, particularly to the calvariae of the skull. Lack of information about other vertebrate groups precludes an understanding of the evolutionary significance of these differences. Study of zebrafish craniofacial development has contributed substantially to understanding of cartilage and bone formation in teleosts, but there is currently little information on NC contribution to the zebrafish skeleton. Here, we employ a two-transgene system based on Cre recombinase to genetically label NC in the zebrafish. We demonstrate NC contribution to cells in the cranial ganglia and peripheral nervous system known to be NC-derived, as well as to a subset of myocardial cells. The indelible labeling also enables us to determine NC contribution to late-forming bones, including the calvariae. We confirm suspected NC origin of cartilage and bones of the viscerocranium, including cartilages such as the hyosymplectic and its replacement bones (hyomandibula and symplectic) and membranous bones such as the opercle. The cleithrum develops at the border of NC and mesoderm, and as an ancestral component of the pectoral girdle was predicted to be a hybrid bone composed of both NC and mesoderm tissues. However, we find no evidence of a NC contribution to the cleithrum. Similarly, in the vault of the skull, the parietal bones and the caudal portion of the frontal bones show no evidence of NC contribution. We also determine a NC origin for caudal fin lepidotrichia; the presumption is that these are derived from trunk NC, demonstrating that these cells have the ability to form bone during normal vertebrate development.
Citation: Kague E, Gallagher M, Burke S, Parsons M, Franz-Odendaal T, Fisher S (2012) Skeletogenic Fate of Zebrafish Cranial and Trunk Neural Crest. PLoS ONE 7(11): e47394. https://doi.org/10.1371/journal.pone.0047394
Editor: Henry H. Roehl, University of Sheffield, United Kingdom
Received: August 21, 2012; Accepted: September 13, 2012; Published: November 14, 2012
Copyright: © 2012 Kague et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: Funding was provided by the United States National Institutes of Health/NIDCR Grant R21DE021509-01 (www.nih.gov), and Nova Scotia Health Research Foundation MED-Project-2009-5769 (www.nshrf.ca). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
The evolution of vertebrates is concomitant with the evolution of the multi-potent neural crest (NC), which contributes to much of the vertebrate craniofacial skeleton. Therefore, an understanding of the evolution of the NC and in particular its contribution to the skeleton in different vertebrates lends insight into much broader questions of the origin of vertebrates. The current knowledge of the skeletogenic potential of the NC comes largely from studies of chicken and mouse development, with some key additional studies on other model organisms such as zebrafish and frog, and from these a broad consensus has emerged on several points. First, it is generally accepted that the cartilages of the pharyngeal arches are NC-derived. In the case of the mouse, long-term genetic lineage labeling has shown that the osteoblasts that replace these cartilages with bone, either directly (through endochondral ossification) or indirectly as adjacently forming membranous bones, are also derived from NC. Second, it is clear that the bones in the vault of the skull are of mixed origin, with some being of NC origin and others deriving from head mesoderm. The exact boundaries are still somewhat uncertain, particularly in the avian embryo. Interestingly, data in the frog suggests that the entire vault of the skull contains NC-derived cells, unlike the situation in mouse or that supported by some of the data in chick. Finally, in neither mouse nor chick is there any evidence that trunk NC cells give rise to cartilage or bone during normal development. Transplantation studies in chick have shown that trunk neural crest cells have skeletogenic potential; however, this potential is not realized until these cells are put into the appropriate in vitro or in vivo environment. In zebrafish, Smith and colleagues demonstrated migration of trunk NC into the caudal fin mesenchyme. The authors speculated these cells might contribute to the bony lepidotrichia, but lacked the lineage data to demonstrate that.
Aside from these areas of broad agreement, there are significant unresolved issues. Perhaps most importantly, thorough lineage studies with long-term labeling methods have only been performed in two species, the mouse and the chicken. It is likely misleading to extrapolate and assume the NC origin of specific aspects of the craniofacial skeleton in humans or other species. There also may be important contributions from NC populations that are either transient or small, and require more careful investigation. For example, it has been suggested that small populations of NC cells are present in all sutures during formation of the mouse skull, and may even be required for proper suture patterning. And while it seems clear that normally NC does not contribute to the skeleton caudal to the pectoral girdle in mouse or chicken, recent studies on the formation of the turtle carapace have challenged the assertion that trunk NC is not capable of forming bone and cartilage.
While some studies on NC development in the zebrafish are in agreement with the broad consensus outlined above, there is currently no data from longer-term lineage studies that address the important issues of the origin of bones (as opposed to cartilages) in the craniofacial skeleton, or the skeletogenic potential of the trunk NC. Therefore, we have developed an approach to indelibly label NC cells and their descendants, using a two-transgene system based on Cre recombinase. We can confirm results of previous lineage studies in the zebrafish that demonstrated derivation of pharyngeal arch cartilages from NC. In addition, we demonstrate NC origin for some later developing cartilage elements, and for many bones of the craniofacial skeleton. Interestingly, we find that only the most anterior portion of the vault of the skull is derived from NC, and that the posterior boundary falls within the frontal bone. Previous analyses of the pectoral girdle in other species had suggested that the cleithrum would be a bone of mixed origin, derived partly from mesoderm and partly from NC; however, we find no evidence of NC contribution to the cleithrum in zebrafish. Finally, we show conclusively that the lepidotrichia in the caudal fin are derived from the NC, demonstrating that the trunk NC cells in zebrafish have realized their capacity to differentiate into osteoblasts (unlike in other model vertebrates). Most previous lineage studies have been carried out in amniotes; our results are critical in defining the characteristics of NC development that are particular to these groups, which characteristics are common to all vertebrates, which are unique to teleosts, and which may have been present in ancestral vertebrates. Furthermore, this study provides valuable insight into the study of neural crest evolution, providing support for the current thinking that fossil and extant lower vertebrates, unlike amniotes, utilized trunk neural crest cells in the exoskeletal body coverings (dermal bone and dentine).
Materials and Methods
Fish were maintained according to standard protocols. Studies were conducted in strict accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol (#803318) was approved by the University of Pennsylvania Institutional Animal Care and Use Committee.
Two Cre reporter constructs were used in the work reported here. 1) The ef1a:loxP-dsRed-loxP-egfp (egfp reporter) construct was generated by cloning in a BamHI fragment (loxP-dsRed-polyA-loxP) of CMV:LoxP-dsRed-loxP-eGFP (gift from Thomas Look) upstream of the egfp gene in the T2KXIGΔIN vector (gift from Koichi Kawakami) at the available unique BamHI site. In the resulting transgenic fish the gene encoding fluorescent dsRed is expressed from the ef1a promoter/enhancer. Upon cre activation, expression is indelibly changed to egfp.
2) The bactin:loxP-mcfp-loxP-hmgb1-mCherry (nucCh reporter) construct was generated by cloning the promoter/enhancer region (5304 bp proximal to the ATG) from the p5e-bactin plasmid (gift from Chi-Bin Chien) upstream of a floxed cassette encoding a membrane-tagged CFP (mcfp). Downstream of this cassette was cloned a nuclear-tagged mCherry (nucCh). Upon cre activation, expression is indelibly changed to nucCh.
The -28.5Sox10:cre construct was generated by cloning a previously described enhancer from upstream of the mouse Sox10 gene in front of the cFos minimal promoter and the cre coding sequence. Entry vector clones were constructed for the three components using the Tol2kit, based on multi-site Gateway technology.
-210RUNX2:egfp: In a screen for cis-regulatory elements associated with RUNX2, we identified a conserved sequence from the last intron of the gene that acts to direct expression to early osteoblasts. The enhancer was cloned upstream of the cFos minimal promoter and egfp in a Tol2 backbone to generate the -210RUNX2:egfp construct.
-1.4col1a1:egfp: A 1.4 kb proximal promoter fragment of the zebrafish col1a1 gene was cloned upstream of egfp, and the first intron of the gene cloned downstream, in a Tol2 vector backbone. Several independent transgenic lines demonstrated strong GFP expression in all cartilages, persisting into adult fish.
Transgenic lines were generated via Tol2-mediated transgenesis, as previously described. The Cre responder fish, carrying the egfp and nucCh reporters, are maintained as intercross stocks with multiple insertions. For the Sox10:cre, -210RUNX2:egfp, and -1.4col1a1 fish, multiple independent lines were examined for each and showed similar patterns of expression (data not shown).
Fish were fixed in 4% paraformaldehyde overnight at 4°C and stored in 0.01 M phosphate buffered saline, pH 7.4 (PBS) at 4°C until required. For whole-mount staining, larvae were washed in PBS plus 0.1% Tween-20 (PBST), transferred to methanol, and stored at −20°C at least overnight. After transfer back to PBST, larvae were digested briefly in Proteinase K and refixed in 4% PFA. After PBST washes and blocking in 10% goat serum, larvae were incubated with the 1° and 2° antibodies. For double staining, the 1° antibodies used were anti-GFP, 1:500 (Invitrogen A11122) and anti-HuC, 1:500 (Santa Cruz Biotechnology sc-56707), and the 2° antibodies were goat anti-rabbit IgG Alexa Fluor 488 conjugate and goat anti-mouse IgG Alexa Fluor 594 conjugate, both 1:500.
For immunohistochemistry on sectioned tissue, frozen sections were cut at 15–20 µm and mounted on APTES (3-aminopropyltriethoxysilane) coated slides. Tissue was incubated for one hour at room temperature in 10% bovine serum in PBS with 0.5% TritonX-100. The primary antibody used was anti-GFP (ABCAM AB6662) at a 1:500 dilution. After incubation overnight at 4°C, tissues were mounted in a DAPI mountant (Vectashield sc24941).
For epifluorescence, live fish were anesthetized with Tricaine and observed and imaged on an Olympus MVX10 macroscope with mercury light source and filter sets for GFP and rhodamine. For examination of freshly dissected tissue, fish were euthanized by rapid immersion in ice water, immediately dissected, and tissue observed within 4 hours.
For confocal microscopy, live fish were anesthetized with Tricaine and mounted in glass bottom dishes with low melting point agarose in embryo medium. Samples were imaged on a Yokogawa spinning disc confocal; images were captured using Slidebook and processed with Image J.
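As a hypothetical sketch of the projection step behind the figures below (the published images were prepared with Slidebook and ImageJ; the array shapes, grouping factor, and function name here are our own assumptions), a confocal Z-stack can be collapsed into grouped maximum-intensity projections, for example one projection per five optical sections as in Figure 2.

```python
import numpy as np

# Illustrative grouped maximum-intensity projection of a confocal Z-stack:
# collapse every `group` consecutive optical sections into one projection.
# The stack here is random data standing in for a real confocal channel.

def grouped_max_projection(stack: np.ndarray, group: int = 5) -> np.ndarray:
    """stack: (z, y, x) array; returns (z // group, y, x) max projections."""
    z = (stack.shape[0] // group) * group          # drop any incomplete trailing group
    trimmed = stack[:z].reshape(-1, group, *stack.shape[1:])
    return trimmed.max(axis=1)

rng = np.random.default_rng(2)
gfp_stack = rng.random((55, 512, 512))             # 55 optical sections
projections = grouped_max_projection(gfp_stack, group=5)
print(projections.shape)                           # (11, 512, 512)
```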
Cre expression in sox10 domain efficiently activates egfp reporter
Our goal was to express Cre recombinase broadly in the NC. We used a recently described enhancer associated with the mouse Sox10 gene, which has been analyzed through transgenesis in mouse and zebrafish. The enhancer is located 28.5 kb upstream of Sox10, and in conjunction with a heterologous minimal promoter drives expression in NC and many known derivatives, including craniofacial cartilage, sympathetic ganglia, and enteric neurons. However, the egfp expression does not persist strongly in the embryo past 2 days post fertilization (dpf).
We constructed a transgene in which the same Sox10 enhancer and minimal promoter are controlling cre expression (-28.5Sox10:cre). In preliminary experiments, we introduced the transgene into embryos also carrying a reporter transgene for Cre activity, in which the ubiquitous ef1a promoter is driving expression of dsRed flanked by LoxP sites, followed by egfp (egfp reporter). In the absence of an exogenous Cre transgene, dsRed is expressed strongly throughout the embryo, and no GFP+ cells are observed (data not shown). In many injected embryos, GFP was expressed mosaically, in cells apparently distributed as NC in the early embryo (data not shown). Injected fish were raised to adulthood and screened for germline transmission of both transgenes (-28.5Sox10:cre and the egfp reporter), as evidenced by embryos with GFP+ cells. Several independent lines were examined, and all yielded very similar patterns of expression.
We examined cre expression directly by in situ hybridization, and find broad expression in cranial and trunk neural crest during somitogenesis (Fig. 1). At 24 hours post fertilization (hpf), GFP expression is seen in a distribution similar to the expression pattern regulated by the enhancer (Fig. 1), as previously described. Cells are seen in the pharyngeal arches, and in the trunk, migrating streams of NC cells are labeled by GFP expression. Although some of the founders are transmitting multiple copies of the egfp reporter transgene (data not shown), the conversion from dsRed to GFP expression seems nonetheless to be complete; we do not detect dsRed expression in GFP+ cells by confocal microscopy (e.g. Figs. 2, 3).
A) Diagram of transgenes used to genetically mark neural crest descendants; Cre activity under control of the Sox10 enhancer results in excision of the floxed first coding sequence in each reporter. In the first, dsRed is excised, leading to persistent expression of egfp under control of the ubiquitous ef1a promoter. In the second, cyan fluorescent protein (cfp) excision leads to persistent expression of nuclear mCherry (nucCh). B–D) At 24 hours post fertilization, egfp expression resulting from Cre activation (B, C) shows the same pattern as the expression under direct control of the Sox10 enhancer (D). E, F) Expression of cre is shown by in situ hybridization of a Sox10:egfp transgenic embryo. E) Early expression of cre is seen to the anterior extent of NC (arrow) flanking the neural keel at 6 somites. F) Expression persists in the mandibular (m), hyoid (h), and branchial (b) clusters of NC at 14 somites. G) At 30 hours, doubly transgenic embryos show robust expression of egfp in cells known to be derived from neural crest, including in the branchial arches (br), and in the otic vesicle (ov). B–D, G are side views with anterior to the left; E and F are dorsal views with anterior to the left.
A–K represent successive Z-stack projections of five confocal sections each, moving from ventral to dorsal through the head of a 10 dpf doubly transgenic embryo. Images have been colored so that green represents GFP+ cells (NC derivatives) and magenta represents dsRed+ cells (non-NC). Note that throughout the remaining figures, the label associated with NC (GFP or nucCherry) is always shown as green in the two-color overlays. Cartilages known to be NC-derived, including Meckel's cartilage (B), the ethmoid plate (F), and palatoquadrate (G), are labeled. Also GFP+ are cells in specific areas of ossification, including the dentary (A) and the anguloarticular (E) surrounding Meckel's cartilage, and the maxilla and premaxilla (J) of the upper jaw. Note also the GFP+ nerve plexus in the lip taste buds (arrows in C), representing their innervation by NC-derived cells of the facial ganglia. Non-NC derivatives, such as the intermandibularis anterior (ima) and interhyoideus (ih) muscle masses, remain dsRed+. Abbreviations for skeletal structures are listed in Table 3.
A–I) Transgenics carrying a reporter that activates nuclear-Cherry expression following Cre activation (A, D, G) were crossed to -1.4col1a1:egfp transgenics, in which all cartilage cells are GFP+ (B, E, H). At 4 dpf, cells within the ceratohyal (A–C), hyosymplectic (D–F) and Meckel's (G–I) cartilages have nucCh+ nuclei, indicating they are NC-derived. The GFP− cells surrounding the cartilages, largely representing perichondral cells or osteoblast precursors, are also NC-derived. J–L) The reporter transgene switches from dsRed to GFP expression following Cre activation. At 10 dpf (J, K), cartilage cells of the ceratohyal (J) and Meckel's (K) cartilages are GFP+, as are the cells surrounding them, indicating that the bone replacing the cartilages is also NC-derived. Bones forming via membranous ossification, such as the opercle (L), are also NC-derived.
As an alternative reporter of Cre activity, we used a transgene with the β-actin promoter driving expression of cyan fluorescent protein (cfp) flanked by LoxP sites, followed by nuclear-localized mCherry (nucCh reporter). In fish doubly transgenic for the nucCh reporter and the Sox10:cre transgenes, we observed the same pattern of nucCh+ cells as described above for the GFP+ cells (data not shown), indicating that both reporters accurately reflect Cre activity in the early embryo.
Cells in peripheral nervous system are NC–derived
The neurons of the dorsal root ganglia (DRGs) are known to be NC-derived in zebrafish, as in other organisms. We find GFP+ cells in the DRGs, confirming that our genetic labeling includes these NC derivatives (Fig. S1A). Similarly, we find GFP+ cells in the hindgut (Fig. S1C), consistent with the known NC origin of the enteric neurons. We performed double antibody staining for GFP and HuC, and in both cases we find the cells to be co-labeled, confirming their neuronal identity (Fig. S1B, D, E). There are additional cells in the DRGs, GFP+ but HuC−, which we presume are the NC-derived Schwann cells. Similarly, there are GFP+/HuC− cells in the intestine with the morphology of intestinal glial cells, also known to be NC-derived. Within the cranial sensory ganglia, we find abundant GFP+/HuC+ neurons in the trigeminal, facial, and anterior and posterior lateral line ganglia (Fig. S1G–I, K). Although there are a few GFP+ cells in the vagal ganglia, these are not neurons, as evidenced by their failure to stain with anti-HuC (Fig. S1J, K). This is consistent with literature reporting NC contribution to neurons of the trigeminal, facial, and lateral line ganglia, but not to the vagal ganglia. We also see a prominent GFP+ nerve plexus in the taste buds of the lip (Fig. 4C), presumably reflecting innervation by the facial nerve.
A–R) Immunohistochemistry for GFP shows the composition of cartilages with cellular resolution. In each set of three images, the first shows the DAPI counterstain (A, D, G, etc.), and the second (B, E, H, etc.) the GFP immunoreactivity. The third image in each group, the overlay, is pseudocolored with green representing GFP immunoreactivity and magenta the DAPI counterstain. The most anterior cartilages in the base of the skull, such as the ethmoid plate (A–C), trabeculae cranii (D–F), and taeniae marginalis anterior (G–I), are NC-derived. More posterior cartilages, like the basioccipital (J–L), contain no NC. M–O) A horizontal section at 14 dpf illustrates a more anterior NC-derived cartilage (arrow), the trabeculae cranii, and a more posterior negative cartilage around the ear (arrowhead). P–X) Successive sections through a single fish at 44 dpf show that cartilage at intermediate locations, such as around the ear, is composed of a mix of NC (arrows) and non-NC cells in more ventral sections (P–R), and shows no NC-derived cells more dorsally (V–X).
Additional GFP+ progeny in double transgenic fish
In the hearts of our doubly transgenic fish, we find GFP+ cells within the myocardium, primarily in the region of the atrioventricular (AV) valve (data not shown). This is consistent with previous lineage data, and also with the recently reported phenotype of a mutant in leo1, which has deficits in several NC lineages and a specific defect in cardiomyocyte differentiation in the AV valve region. In the central nervous system, we also find that oligodendrocytes are GFP+ (data not shown), consistent with known expression of sox10 in those cells. Melanoblasts are known to be NC-derived, and should also be GFP+ in the doubly transgenic fish. In early generations, in fish carrying multiple insertions of the egfp reporter transgene, we observed GFP+ melanoblasts (data not shown). However, in subsequent generations, as the transgenes have been bred to single insertions, the melanoblasts are no longer labeled. This is consistent with the ef1a promoter displaying variable expression in specific differentiated cell types (M.P. and S.F., unpublished observations).
NC contribution to the skeleton
We have classified the skeletal elements of the craniofacial skeleton and pectoral girdle with respect to their embryological origin (NC-derived or not) through a combination of observations of live transgenics via epifluorescence or confocal microscopy; immunohistochemistry for GFP on sectioned tissue; and freshly dissected tissue imaged via epifluorescence. Below we discuss specific examples, with data shown in Figures 2, 3, 4, and 5; our overall results are summarized in Figure 6, and in Tables 1 and 2.
A) Bones derived by ossification of the pharyngeal arch cartilages are also NC-derived, seen in a horizontal section through a 44 dpf fish stained for GFP immunoreactivity. B) The odontoblasts of the pharyngeal teeth on the fifth ceratobranchial express the RUNX2:egfp transgene (B′) and are also NC-derived, seen by nucCh+ nuclei (B″). C, D) The scleral cartilages are NC-derived, shown by GFP immunohistochemistry (C), as are the ossicles derived by their ossification, seen as GFP+ in freshly dissected tissue (D). E) The parasphenoid shows no NC contribution, as seen in freshly dissected tissue. The scattered nucCh+ cells (E′, E″) are in associated soft tissue. The kinethmoid (F) also shows no NC contribution, although some of the associated soft tissue is NC-derived (nucCh+ in F′, F″). G-G″) Sections through the kinethmoid cartilage at 7 weeks show no GFP expression (G′) by immunohistochemistry (G is DAPI counterstain, G″ shows overlay). H, I) Dissections were used to determine the status of unresolved bones throughout the skull, e.g. the pterosphenoid is nucCh+ in freshly dissected tissue (I) and the supratemporal is not. J, K) The anterior portion of the frontal bones are NC-derived, seen as GFP+ cells in a live, 6-week-old fish (J), and also as nucCh+ cells in freshly dissected tissue (K). Dotted lines indicate the location of the coronal suture. The more posterior portion of the frontal bones, and the other flat bones of the skull, show no evidence of NC contribution. L–M″) Sections through the anterior frontal bone of a 7-week fish (L-L″) show GFP+ osteoblasts by immunohistochemistry (arrowheads in L′) aligned under the acellular bone matrix (bracket in L), as well as GFP+ cartilage cells in the underlying epiphyseal bar. A similar section through the posterior frontal bone (M-M″) shows no GFP expression in the osteoblasts (M′). In each set of panels, the first is the DAPI counterstain, the second the GFP immunohistochemistry, and the third the overlay.
Diagrams depict the cartilage elements and bones that are NC-derived (green), and those that show no evidence of NC contribution and are presumably derived from mesoderm (magenta). The diagram in A shows a dorsal view of the chondrocranium from an approximately 12 dpf larva. B is a side view of the bones of an adult skull, with some elements of the pectoral girdle also shown. C is a dorsal view of the adult skull. In D, the view is of the base of the neurocranium, with the pharyngeal skeleton removed. Skeletal elements are labeled according to the abbreviations in Table 3. Note that in all diagrams, some elements are omitted for the sake of clarity; drawings were modified from Cubbage and Mabee (1996).
Cartilage and bones of the viscerocranium are NC–derived
Known derivatives of NC are GFP+ in the doubly transgenic embryos, including cartilages of the viscerocranium, derived from the pharyngeal arches (Fig. 2, 3). The cartilages of the pharyngeal arches are the earliest craniofacial skeletal elements to form, visible morphologically beginning at 2 dpf, and they have been previously shown to be entirely derived from NC of the mesencephalon and hindbrain rhombomeres. We also find that these cartilages are entirely NC-derived (Figs. 2, 3). Notably, we do not see evidence of mosaic activation of the reporter transgene in cartilage; because of its distinctive morphology and large cell size, it is quite easy to see unlabeled cells. In the few fish where GFP expression was mosaic, the expression of the dsRed reporter transgene was also mosaic (data not shown), suggesting that Cre activity from the -28.5Sox10:cre transgene is quite robust.
Many of the cartilages of the viscerocranium are converted to bone through perichondral ossification, over a period of many weeks. The mineralized bone begins to accumulate at 5–6 dpf, visible by staining with calcium chelators, in collars around the cartilage elements. We find that the perichondral cells surrounding cartilages that ossify in this manner, such as the ceratohyal, are NC–derived (Fig. 3). Some other bones, notably the dentary and anguloarticular of the lower jaw, and the maxilla and premaxilla of the upper jaw, form via intramembranous ossification. The cells adjacent to these cartilages in the early larva, prior to ossification but in the locations where ossification will later take place, are also NC–derived (Fig. 3), as are the specific ossifications at later larval stages, when they can be distinguished (Fig. 4).
The opercle develops by intramembranous ossification, in close apposition to the hyosymplectic cartilage. Although the opercle was presumed also to be derived from NC of the second pharyngeal arch, this had not been directly demonstrated by lineage analysis. We find that the opercle is NC-derived (Fig. 3L), as are the branchiostegal rays (data not shown); these are also membranous bones, likely to be derived from cells of the branchial arches based on their position.
The neurocranium is of mixed origin
Some elements that contribute to the base of the skull begin to form quite early, within the first week, including the parachordal cartilages and parasphenoid bone. The parachordals form by condensation around the anterior tip of the notochord, and do not show evidence of NC contribution (data not shown). The more anterior ethmoid plate and attached trabeculae cranii are NC-derived (Figs. 2, 4A–F), consistent with previous lineage studies. Eventually, the trabeculae join the parachordals posteriorly, and are contiguous with them as they expand to form the basal plates. At these later stages, we find that the cartilage in the more posterior regions of the ventral neurocranium is a mixture of NC and non-NC cells (Fig. 4P–X). Surprisingly, the more anterior parasphenoid is not NC-derived (Fig. 5E-E″). The most anterior bone that is not NC-derived is the kinethmoid (Fig. 5F-F″), a small midline bone at the anterior tip of the upper jaw that forms as a sesamoid bone within the intermaxillary ligament; the cartilage within which it ossifies is also not labeled (Fig. 5G-G″).
In the zebrafish, the flat bones of the skull (frontal, parietal, and exoccipital bones) form several weeks after the facial bones, relatively late compared to the same process in the mouse. In live transgenic fish at 6 weeks, shortly after the flat bones have met at the coronal and sagittal sutures, GFP expression can be seen in the vault of the skull in the anterior portion of the frontal bones, with its posterior border at the position of the underlying epiphyseal bar cartilage (Fig. 5J). The posterior portion of the frontal bones, as well as the parietal and occipital bones, is GFP−. We verified this finding through dissection of fresh, unfixed tissue from a fish carrying the nucCh reporter transgene, and again observed that nucCh+ cells were confined to the anterior portion of the frontal bone (Fig. 5K). Immunohistochemistry for GFP on sectioned material confirms that the osteoblasts of the anterior frontal bone are labeled, while those in more posterior regions are not (Fig. 5L–M″).
The cleithrum does not contain neural crest–derived cells
The pectoral girdle represents a transition area in vertebrates between the portion of the skeleton derived from NC and that from mesoderm. In particular, the cleithrum had been predicted previously to be of mixed origin (i.e. partially NC-derived), much as the clavicle is in mammals, based on the embryological origins of the associated muscle attachments. In the juvenile fish at six weeks, we find GFP+ cells associated with the most dorsal tip of the cleithrum, visible when the bone is dissected (data not shown). However, they are not in the bone, but in the associated soft tissue. We examined the cleithrum more closely during its formation, by confocal microscopy. At stages from 16 to 21 dpf, we observe no NC cells associated with the dorsal tip of this bone (Fig. 7A–D). The osteoblasts associated with the bone at this stage are difficult to identify by morphology and position alone. Therefore, we also examined the cleithrum in a RUNX2:egfp transgenic line, in which early osteoblasts are GFP+. At 21 dpf, the osteoblasts are clustered around the tip of the bone (Fig. 7E, F); they do not have nucCh+ nuclei, which mark the NC derivatives in the same fish.
A–D) At 10 dpf (A, B) and 16 dpf (C, D), Z-stack projections of confocal sections through the area surrounding the dorsal end of the cleithrum (arrowheads) reveal no GFP+ cells. The NC-derived glia of the lateral line ganglia are clearly visible in the same fields of view (arrows). E, F) To localize the osteoblasts, fish carrying the nucCh reporter were crossed with RUNX2:egfp transgenics, in which the osteoblasts are GFP+. The dorsal tip of the cleithrum (arrowhead in E, F) is surrounded by osteoblasts (arrow in F), which do not have nucCh+ nuclei, indicating they are not NC-derived.
Neural crest contributes to the post-cranial skeleton
The Sox10 enhancer regulating expression of cre is also active in the trunk neural crest, allowing us to examine its contribution to tissues at later stages. In the caudal fin fold, we observed GFP+ cells clustered around the tip of the notochord as early as 2 dpf (data not shown); by 8 dpf, the accumulation is more striking, and comprises a group of ∼200 cells (Fig. 8A); the nearby hypural cartilages do not contain labeled cells. By 16 dpf, the forming lepidotrichia, or bony fin rays, are visible, and by 21 dpf a pattern essentially similar to the adult fin is formed. At these stages, the NC-derived cells associate with the lepidotrichia, and based on position and morphology appear to be osteoblasts (Fig. 8B–E); we have observed similarly positioned labeled cells in the dorsal fin, although the pectoral fin lepidotrichia are not labeled (data not shown). To confirm their identity, we again examined the RUNX2:egfp transgenic fish. At 21 dpf, we observed GFP+ cells both within the hollow lepidotrichia and closely associated with the outside surface, where osteoblasts are known to be located. These cells also have nucCh+ nuclei, indicating their NC origin (Fig. 8F–H).
A) At 8 dpf, NC-derived cells (GFP+; arrow) can be seen clustered around the tip of the notochord (nc). B, C) By 16 dpf, there are more GFP+ cells; some are located more distally in the fin, although many are still close to the notochord. D–H) At 21 dpf, the caudal fin contains well-formed lepidotrichia (le in D), which are associated with GFP+ cells (E). F–H) To confirm the identity of the cells as osteoblasts, fish carrying the nucCh reporter were crossed with RUNX2:egfp transgenics, in which the osteoblasts are GFP+. The osteoblasts have nucCh+ nuclei, indicating they are NC-derived (G), and they are located both within (arrowhead) and immediately outside (arrow) the lepidotrichia (H).
Several reports have demonstrated the usefulness of Cre recombinase in zebrafish for activating transgenes in specific cell types, both as a means of misexpression, for example of oncogenes, and of lineage tracing. In this study, we have primarily addressed the question of the embryological origins of skeletal elements in the zebrafish. Because of the prolonged phase of pre-metamorphic development and late formation of the adult body form, diverse elements of the zebrafish skeleton are formed over a period of approximately six weeks, far longer than the period encompassing similar events in the mouse or chicken. While the lineage of some elements, such as those in the viscerocranium, can be determined by non-genetic methods, others require an indelible method of lineage tracing. Therefore, we developed a two-transgene system based on Cre recombinase, and used it here to examine the derivation of skeletal elements into the adult, as well as other cell types derived from NC.
In non-skeletal tissues, our results are largely consistent both with what is known in zebrafish and with data from other model organisms. We find many cells of the peripheral nervous system are NC-derived, including Schwann cells and neurons of the DRGs, enteric neurons, and neurons of some cranial sensory ganglia. In the mouse and chick, cardiac NC is important for proper patterning of the aortic arches, and directly contributes to the septum dividing the right and left outflow tracts. Since zebrafish has a two-chambered heart, it was unclear what role NC would play in heart development. Indeed, previous reports of NC contribution to the zebrafish heart suggested that it differentiated into myocardium. Consistent with these findings, we find NC contribution to cells within the myocardium, primarily in the AV valve region. Our finding is also consistent with the reported phenotype of a mutant in leo1, encoding a member of a complex of proteins active in chromatin remodeling, and supports a mechanism for the cardiac defects arising directly from NC deficits.
In other vertebrates for which there is lineage evidence available, the cartilage elements of the pharyngeal arches are NC-derived. In the zebrafish, these cartilages develop within the first several days post fertilization, making them amenable to classic lineage tracing via dye injection. These earlier lineage studies, performed at the level of single cells, demonstrated that all cartilages comprising the pharyngeal arch skeleton are NC-derived. Interestingly, they demonstrated that most premigratory NC cells were already lineage restricted, and gave rise to clones of a single cell type. They did not analyze these cells with markers for specific cell types, or at a late enough stage to examine osteoblast lineage; osteoblasts would presumably have fallen into their “unidentified” cell pool. Together with our data that perichondral cells and osteoblasts are NC-derived, these earlier lineage studies might suggest that while chondrocytes and osteoblasts share NC lineage, they are derived from separate precursor cells, which are specified prior to NC migration.
We confirm the general vertebrate pattern in formation of the chondrocranium in zebrafish, in which more anterior cartilage elements are NC-derived, while more posterior elements have a mixed origin, presumably with a mesodermal contribution. A particularly thorough study of this issue has been carried out in the mouse; these authors identify a few cartilages with a mixed origin. However, in contrast to our finding of substantial mixing of NC and non-NC cells, they report distinct boundaries between the NC- and mesoderm-derived portions of these elements. We speculate that this is because the separate centers of chondrification in the zebrafish fuse relatively early, and grow substantially after fusion.
A number of zebrafish mutants with craniofacial abnormalities have been identified in large-scale genetic screens; based on our lineage results, we would predict that a mutant either lacking neural crest, or with a failure of neural crest development into cartilage, would show defects in only the ventral and anterior portions of the chondrocranium. While no such single mutant has been described, in fish deficient for both foxd3 and tfap2a, the neural crest apparently fails to differentiate into cartilage. These larvae retain the most posterior portion of the neurocranium, and lack all other craniofacial cartilages, consistent with our results.
We find that the bones in the base of the skull are of mixed origin; most surprisingly, we find no evidence of NC contribution to the parasphenoid, which in the adult extends to the rostral border of the eye. The literature is somewhat unclear about the homologies between bones in the base of the neurocranium in zebrafish and in other vertebrates, although the midline bone in the equivalent position in the mouse (referred to variously as the “presphenoid” and the “parasphenoid” by different authors) is neural crest derived. The parasphenoid in chicken has been described as mixed in origin, with NC contribution anteriorly, while the posterior is derived from somitic mesoderm. In Xenopus laevis, the parasphenoid is at least partially NC-derived, although the nature of the labeling procedure makes it impossible to rule out a mesodermal contribution.
We also find that another very anterior bone, the kinethmoid, is not of NC origin. The ontogeny of the kinethmoid is atypical; it forms, relatively late, as a sesamoid cartilage element, embedded entirely within the intermaxillary ligament, similar to the patella in humans. It is also unique to cypriniforms and lacking in other fishes. We speculate that because of its late and atypical ontogeny, it is derived from a pool of precursor cells distinct from those that give rise to the earlier patterned cartilages of the pharyngeal arches and neurocranium.
The flat bones of the skull develop relatively late in the zebrafish; the frontal and parietal bones first become visible as small areas of ossification around 3–4 weeks post-fertilization, and grow to meet at the sutures around 6 weeks in wild-type fish (and in our own observations). Given this lag in formation, only a genetic method of lineage tracing would allow determination of the embryological origin of the bones. Consistent with results from mouse, we find that the more posterior bones, the exoccipital and the paired parietal bones, do not have an NC contribution. Additionally, we find that only the anterior portions of the paired frontal bones are derived from NC, and there is a clear anterior-posterior boundary where the epiphyseal bar cartilage passes underneath the frontal bones. In Xenopus laevis, the best evidence is that NC contributes to the entire anterior-posterior extent of the frontoparietal bones, although it is possible that there is also a contribution from mesoderm. The derivation of the frontal bones in the chicken has been disputed in the literature, although recent retroviral-based lineage studies suggest that, as in the zebrafish, the frontal bone is of mixed origin, with only the anterior portion derived from NC. Given these species differences, it is difficult to reconstruct what might have been the ancestral vertebrate derivation of the skull vault. Even disregarding the conflicting data in chicken, the remaining evidence suggests two extremes, with amphibians having an entirely NC-derived skull vault, while zebrafish have only a small anterior NC contribution.
The pectoral girdle is a complex area of the skeleton, with contributions from mesoderm and neural crest described in different vertebrates. Bones form by a mixture of membranous and endochondral ossification, and what are thought to be analogous bones in different species sometimes ossify via different modes. Finally, there is at least one bone, the clavicle in the mouse, that both forms by a combination of membranous and endochondral ossification and is derived partially from NC and partially from mesoderm. It is difficult in some cases to assign direct analogies between bones in different species, and different elements of the ancestral pectoral girdle complex have been preserved in different lineages through evolution. It has been argued that the pattern of muscles in the region, and their associated attachments to skeletal elements, are more highly conserved, although that assertion has been disputed. Furthermore, there is evidence in the mouse of a correlation between the embryological origin of the attachments themselves, whether from NC or mesoderm, and the origin of the associated bones. Based on these observations, it was predicted that the cleithrum in bony fish would be a bone of mixed origin, similar to the clavicle. However, we examined the cleithrum through the first three weeks of development, at the resolution of single cells through confocal microscopy, and failed to see any contribution of NC cells; indeed, we find that the entire chain of bones connecting the pectoral girdle to the skull is non-NC in origin. We cannot completely rule out a later contribution, but at the gross level in dissected tissue, there did not appear to be any contribution at six weeks (data not shown).
The membranous bones of the fins, the lepidotrichia, develop relatively late in the zebrafish. Early fin folds have collagenous actinotrichia, arranged radially, which provide structure to the fin fold and likely serve in some way as a scaffold for later formation of the bones. In the caudal fin, the first appearance of the bones is at ∼4.9 mm standard length, or approximately 10 dpf in normal fish. The bones are made by scleroblasts (the functional equivalent of osteoblasts in the fin), located both outside (in less mature bone) and within the bone matrix of the hollow lepidotrichia. The derivation of these cells during fin ontogeny has not been described previously; we show here that they are derived from NC. We see NC-derived cells located near the tip of the notochord during late larval and early juvenile development, but few of these appear to enter the fin fold. At 16 dpf, we see substantial evidence of these cells invading the fin, coincident with extensive bone formation. Finally, at 21 dpf, when essentially the adult pattern of lepidotrichia is established, the NC progeny are associated with the bones and expressing a marker of osteoblasts. Based on experimental evidence in chicken and mouse, it has long been a tenet that post-cranial NC does not contribute to the skeleton during normal development, although several studies have suggested that trunk NC cells have skeletogenic potential which is only realized when they are placed in a permissive environment. Interestingly, recent studies in a non-traditional model organism, the turtle, suggest that the plastron bones in the carapace are derived from a late-emerging population of trunk neural crest. Together with our own results, this lends support to a model where the ability of the trunk NC to form skeletogenic tissues was the ancestral condition; this ability was lost in disparate lineages concomitant with the loss of exoskeletal body armor and other intramembranous bones of the post-cranial skeleton.
In several instances, our data point to a non-NC origin for bones that appear to be NC-derived in other vertebrates. For example, we find that the parasphenoid in the base of the neurocranium is not NC-derived, although the homologous bones in mouse, chicken, and amphibians appear to be, at least partially. We also find that the frontal bones in zebrafish are of mixed origin, although they are entirely NC-derived in the mouse and possibly also in amphibians, while the situation is less clear in the chicken. And although the cleithrum is not directly homologous to any bone in mammals, it was predicted that it would be of mixed origin; however, we find no NC contribution. While overall we largely find conservation in the composition of craniofacial skeletal elements between fish and amniotes, our results also suggest that in some regions the specific origin of bones in the skull is fluid, where there are two populations of cells with the potential to form bone or cartilage, and the composition of homologous bones in different species can depend on fairly subtle variations in cell number or the exact location and strength of inducing signals. A similar idea has been proposed based on heterotopic avian NC grafts, in which transplanted NC cells in sufficient numbers were capable of participating in the formation of morphologically normal cartilages which would normally be mesodermal in origin. It is interesting to speculate how such a situation could have evolved, since the embryological origin and development of the neural crest and the mesoderm are so dramatically different.
Cells of the peripheral nervous system are NC-derived. A, B) Combined GFP/HuC immunostaining reveals that neurons of the DRG are GFP+ (A) and HuC+ (B); there are also GFP+/HuC− cells visible in some ganglia (arrows), presumably Schwann cells. C–E) Enteric neurons are GFP+ (C) and HuC+ (E); in all panels with merged images (D, G–K), GFP is shown in green and HuC in magenta. There are also some GFP+/HuC− cells, which may represent NC-derived enteric glial cells (arrowheads). F) Antibody staining for HuC reveals neurons of the cranial sensory ganglia in a 4 dpf larva. In the trigeminal (G), facial (H), anterior lateral line (I), acoustic (I) and posterior lateral line (K) ganglia, there are numerous doubly positive neurons, indicating substantial NC contribution. In contrast, in the vagal ganglia, there are only a few GFP+ cells, which are not HuC+ (J, K). All images in D and G–K are single confocal slices. Abbreviations: a (acoustic ganglion); all (anterior lateral line ganglion); f (facial ganglion); pll (posterior lateral line ganglion); tg (trigeminal ganglion); v (vagal ganglia).
The authors thank Andy McCallion for sharing reagents and data on the Sox10 enhancer element prior to publication, Seneca Bessling for invaluable technical assistance, and Paula Roy and Liping Sun for expert fish care.
Conceived and designed the experiments: SF MP. Performed the experiments: EK MG SB MP TFO SF. Analyzed the data: SF TFO. Contributed reagents/materials/analysis tools: MP MG. Wrote the paper: SF TFO.
- 1. Chai Y, Jiang X, Ito Y, Bringas P, Han J, et al. (2000) Fate of the mammalian cranial neural crest during tooth and mandibular morphogenesis. Development 127: 1671–1679.
- 2. Jiang X, Iseki S, Maxson RE, Sucov HM, Morriss-Kay GM (2002) Tissue origins and interactions in the mammalian skull vault. Dev Biol 241: 106–116.
- 3. Couly GF, Coltey PM, Le Douarin NM (1993) The triple origin of skull in higher vertebrates: a study in quail-chick chimeras. Development 117: 409–429.
- 4. Evans DJ, Noden DM (2006) Spatial relations between avian craniofacial neural crest and paraxial mesoderm cells. Dev Dyn 235: 1310–1325.
- 5. Gross JB, Hanken J (2005) Cranial neural crest contributes to the bony skull vault in adult Xenopus laevis: insights from cell labeling studies. J Exp Zool B Mol Dev Evol 304: 169–176.
- 6. Nakamura H, Ayer-le Lievre CS (1982) Mesectodermal capabilities of the trunk neural crest of birds. J Embryol Exp Morphol 70: 1–18.
- 7. McGonnell IM, Graham A (2002) Trunk neural crest has skeletogenic potential. Curr Biol 12: 767–771.
- 8. Smith M, Hickmann A, Amanze D, Lumsden A, Thorogood P (1994) Trunk neural crest origin of caudal fin mesenchyme in the zebrafish Brachydanio rerio. Proceedings of the Royal Society of London B 256: 137–145.
- 9. Morriss-Kay GM, Wilkie AO (2005) Growth of the normal skull vault and its alteration in craniosynostosis: insights from human genetics and experimental studies. J Anat 207: 637–653.
- 10. Gagan JR, Tholpady SS, Ogle RC (2007) Cellular dynamics and tissue interactions of the dura mater during head development. Birth Defects Res C Embryo Today 81: 297–304.
- 11. Opperman LA (2000) Cranial sutures as intramembranous bone growth sites. Dev Dyn 219: 472–485.
- 12. Cebra-Thomas JA, Betters E, Yin M, Plafkin C, McDow K, et al. (2007) Evidence that a late-emerging population of trunk neural crest cells forms the plastron bones in the turtle Trachemys scripta. Evol Dev 9: 267–277.
- 13. Clark K, Bender G, Murray BP, Panfilio K, Cook S, et al. (2001) Evidence for the neural crest origin of turtle plastron bones. Genesis 31: 111–117.
- 14. Schilling TF, Kimmel CB (1994) Segment and cell type lineage restrictions during pharyngeal arch development in the zebrafish embryo. Development 120: 483–494.
- 15. Matsuoka T, Ahlberg PE, Kessaris N, Iannarelli P, Dennehy U, et al. (2005) Neural crest origins of the neck and shoulder. Nature 436: 347–355.
- 16. Westerfield M, editor (1995) The Zebrafish Book. 3 ed. Eugene, OR: University of Oregon Press.
- 17. Langenau DM, Feng H, Berghmans S, Kanki JP, Kutok JL, et al. (2005) Cre/lox-regulated transgenic zebrafish model with conditional myc-induced T cell acute lymphoblastic leukemia. Proc Natl Acad Sci U S A 102: 6068–6073.
- 18. Antonellis A, Huynh JL, Lee-Lin SQ, Vinton RM, Renaud G, et al. (2008) Identification of neural crest and glial enhancers at the mouse Sox10 locus through transgenesis in zebrafish. PLoS Genet 4: e1000174.
- 19. Andreeva V, Connolly MH, Stewart-Swift C, Fraher D, Burt J, et al. (2011) Identification of adult mineralized tissue zebrafish mutants. Genesis 49: 360–366.
- 20. Knopf F, Hammond C, Chekuru A, Kurth T, Hans S, et al. (2011) Bone Regenerates via Dedifferentiation of Osteoblasts in the Zebrafish Fin. Dev Cell 20: 713–724.
- 21. Kague E, Bessling SL, Lee J, Hu G, Passos-Bueno MR, et al. (2010) Functionally conserved cis-regulatory elements of COL18A1 identified through zebrafish transgenesis. Dev Biol 337: 496–505.
- 22. Fisher S, Grice EA, Vinton RM, Bessling SL, McCallion AS (2006) Conservation of RET regulatory function from human to zebrafish without sequence similarity. Science 312: 276–279.
- 23. Fisher S, Grice EA, Vinton RM, Bessling SL, Urasaki A, et al. (2006) Evaluating the biological relevance of putative enhancers using Tol2 transposon-mediated transgenesis in zebrafish. Nat Protoc 1: 1297–1305.
- 24. Stine ZE, Huynh JL, Loftus SK, Gorkin DU, Salmasi AH, et al. (2009) Oligodendroglial and pan-neural crest expression of Cre recombinase directed by Sox10 enhancer. Genesis 47: 765–770.
- 25. Raible DW, Eisen JS (1994) Restriction of neural crest cell fate in the trunk of the embryonic zebrafish. Development 120: 495–503.
- 26. Shepherd I, Eisen J (2011) Development of the zebrafish enteric nervous system. Methods Cell Biol 101: 143–160.
- 27. Kelsh RN, Eisen JS (2000) The zebrafish colourless gene regulates development of non-ectomesenchymal neural crest derivatives. Development 127: 515–525.
- 28. Culbertson MD, Lewis ZR, Nechiporuk AV (2011) Chondrogenic and Gliogenic Subpopulations of Neural Crest Play Distinct Roles during the Assembly of Epibranchial Ganglia. PLoS One 6: e24443.
- 29. Collazo A, Fraser SE, Mabee PM (1994) A dual embryonic origin for vertebrate mechanoreceptors. Science 264: 426–430.
- 30. Hansen A, Reutter K, Zeiske E (2002) Taste bud development in the zebrafish, Danio rerio. Dev Dyn 223: 483–496.
- 31. Li YX, Zdanowicz M, Young L, Kumiski D, Leatherbury L, et al. (2003) Cardiac neural crest in zebrafish embryos contributes to myocardial cell lineage and early heart function. Dev Dyn 226: 540–550.
- 32. Sato M, Yost HJ (2003) Cardiac neural crest contributes to cardiomyogenesis in zebrafish. Dev Biol 257: 127–139.
- 33. Nguyen CT, Langenbacher A, Hsieh M, Chen JN (2010) The PAF1 complex component Leo1 is essential for cardiac and neural crest development in zebrafish. Dev Biol 341: 167–175.
- 34. Wada N, Javidan Y, Nelson S, Carney TJ, Kelsh RN, et al. (2005) Hedgehog signaling is required for cranial neural crest morphogenesis and chondrogenesis at the midline in the zebrafish skull. Development 132: 3977–3988.
- 35. Staab KL, Hernandez LP (2010) Development of the cypriniform protrusible jaw complex in Danio rerio: constructional insights for evolution. J Morphol 271: 814–825.
- 36. Seok SH, Na YR, Han JH, Kim TH, Jung H, et al. (2010) Cre/loxP-regulated transgenic zebrafish model for neural progenitor-specific oncogenic Kras expression. Cancer Sci 101: 149–154.
- 37. Feng H, Langenau DM, Madge JA, Quinkertz A, Gutierrez A, et al. (2007) Heat-shock induction of T-cell lymphoma/leukaemia in conditional Cre/lox-regulated transgenic zebrafish. Br J Haematol 138: 169–175.
- 38. Wang Y, Rovira M, Yusuff S, Parsons MJ (2011) Genetic inducible fate mapping in larval zebrafish reveals origins of adult insulin-producing beta-cells. Development 138: 609–617.
- 39. Jopling C, Sleep E, Raya M, Marti M, Raya A, et al. (2010) Zebrafish heart regeneration occurs by cardiomyocyte dedifferentiation and proliferation. Nature 464: 606–609.
- 40. Kikuchi K, Holdway JE, Werdich AA, Anderson RM, Fang Y, et al. (2010) Primary contribution to zebrafish heart regeneration by gata4(+) cardiomyocytes. Nature 464: 601–605.
- 41. Hesselson D, Anderson RM, Beinat M, Stainier DY (2009) Distinct populations of quiescent and proliferative pancreatic beta-cells identified by HOTcre mediated labeling. Proc Natl Acad Sci U S A 106: 14896–14901.
- 42. Hutson MR, Kirby ML (2007) Model systems for the study of heart development and disease. Cardiac neural crest and conotruncal malformations. Semin Cell Dev Biol 18: 101–110.
- 43. Cubbage CC, Mabee PM (1996) Development of the cranium and paired fins in the zebrafish Danio rerio (Ostariophysi, Cyprinidae). Journal of Morphology 229: 121–160.
- 44. McBratney-Owen B, Iseki S, Bamforth SD, Olsen BR, Morriss-Kay GM (2008) Development and tissue origins of the mammalian cranial base. Dev Biol 322: 121–132.
- 45. Schilling TF, Piotrowski T, Grandel H, Brand M, Heisenberg CP, et al. (1996) Jaw and branchial arch mutants in zebrafish I: branchial arches. Development 123: 329–344.
- 46. Piotrowski T, Schilling TF, Brand M, Jiang YJ, Heisenberg CP, et al. (1996) Jaw and branchial arch mutants in zebrafish II: anterior arches and cartilage differentiation. Development 123: 345–356.
- 47. Neuhauss SC, Solnica-Krezel L, Schier AF, Zwartkruis F, Stemple DL, et al. (1996) Mutations affecting craniofacial development in zebrafish. Development 123: 357–367.
- 48. Arduini BL, Bosse KM, Henion PD (2009) Genetic ablation of neural crest cell diversification. Development 136: 1987–1994.
- 49. Gross JB, Hanken J (2004) Use of fluorescent dextran conjugates as a long-term marker of osteogenic neural crest in frogs. Dev Dyn 230: 100–106.
- 50. Fink SV, Fink WL (1981) Interrelationships of the ostariophysan fishes (Teleostei). Zoological Journal of the Linnean Society 72: 297–353.
- 51. Quarto N, Longaker MT (2005) The zebrafish (Danio rerio): a model system for cranial suture patterning. Cells Tissues Organs 181: 109–118.
- 52. Yoshida T, Vivatbutsiri P, Morriss-Kay G, Saga Y, Iseki S (2008) Cell lineage in mammalian craniofacial mesenchyme. Mech Dev 125: 797–808.
- 53. Gross JB, Hanken J (2008) Review of fate-mapping studies of osteogenic cranial neural crest in vertebrates. Dev Biol 317: 389–400.
- 54. Fisher S, Jagadeeswaran P, Halpern ME (2003) Radiographic analysis of zebrafish skeletal defects. Dev Biol 264: 64–76.
- 55. Harris MP, Rohner N, Schwarz H, Perathoner S, Konstantinidis P, et al. (2008) Zebrafish eda and edar mutants reveal conserved and ancestral roles of ectodysplasin signaling in vertebrates. PLoS Genet 4: e1000206.
- 56. Ahlberg PE, Koentges G (2006) Homologies and cell populations: a response to Sanchez-Villagra and Maier. Evol Dev 8: 116–118.
- 57. Sanchez-Villagra MR, Maier W (2006) Homologies of the mammalian shoulder girdle: a response to Matsuoka et al. (2005). Evol Dev 8: 113–115.
- 58. Parichy DM, Elizondo MR, Mills MG, Gordon TN, Engeszer RE (2009) Normal table of postembryonic zebrafish development: staging by externally visible anatomy of the living fish. Dev Dyn 238: 2975–3015.
- 59. Brown AM, Fisher S, Iovine MK (2009) Osteoblast maturation occurs in overlapping proximal-distal compartments during fin regeneration in zebrafish. Dev Dyn 238: 2922–2928.
- 60. Gilbert SF, Bender G, Betters E, Yin M, Cebra-Thomas JA (2007) The contribution of neural crest cells to the nuchal bone and plastron of the turtle shell. Integr Comp Biol 47: 401–408.
- 61. Schneider RA (1999) Neural crest can form cartilages normally derived from mesoderm during development of the avian head skeleton. Developmental Biology 208: 441–455.
- 62. Grandel H, Schulte-Merker S (1998) The development of the paired fins in the zebrafish (Danio rerio). Mech Dev 79: 99–120.
Human Protein C Receptor Is Present Primarily on Endothelium of Large Blood Vessels
Implications for the Control of the Protein C Pathway
Background The protein C anticoagulant pathway is critical to the control of hemostasis. Thrombomodulin and a newly identified receptor for protein C/activated protein C, EPCR, are both present on endothelium. EPCR augments activation of protein C by the thrombin-thrombomodulin complex.
Methods and Results To gain a better understanding of the relationship between thrombomodulin and EPCR, we compared the cellular specificity and tissue distributions of these two receptors by using immunohistochemistry. EPCR expression was detected almost exclusively on endothelium in human and baboon tissues. In most organs, EPCR was expressed relatively intensely on the endothelium of all arteries and veins, most arterioles, and some postcapillary venules. EPCR staining was usually negative on capillary endothelial cells. In contrast, thrombomodulin was detected at high concentrations in both large vessels and capillary endothelium. Both thrombomodulin and EPCR were expressed poorly on brain capillaries. The liver sinusoids were the only capillaries in which EPCR was expressed at moderate levels and thrombomodulin was low. EPCR and thrombomodulin were both expressed on the endothelium of vasa recta in the renal medulla, the lymph node subcapsular and medullary sinuses, and some capillaries within the adrenal gland. Even in these organs the majority of capillaries were EPCR negative or stained weakly.
Conclusions These studies suggest that EPCR may be important in enhancing protein C activation on large vessels. The presence of high levels of EPCR on arterial vessels may help explain why partial protein C deficiency is a weak risk factor for arterial thrombosis.
Protein C is a critical negative regulatory protein of the coagulation cascade, as evidenced by the fact that total deficiencies of protein C lead to life-threatening thrombotic complications in neonates that can be corrected by protein C supplementation1 (also reviewed in Reference 22). The protein C zymogen is activated to the anticoagulant APC by a complex between thrombin and the endothelial cell receptor thrombomodulin.3 (For reviews of the overall pathway, see References 4 through 6.) In addition to promoting thrombin activation of protein C, thrombomodulin blocks thrombin-dependent fibrinogen clotting and platelet activation. APC functions as an anticoagulant in plasma by inactivating factors Va and VIIIa on membrane surfaces, a process that is potentiated by the plasma vitamin K–dependent factor, protein S.7 8 The importance of factor Va inactivation is illustrated by the clinical observation that the most common form of familial thrombophilia is caused by a polymorphism at residue 506,9 10 11 12 one of the cleavage sites involved in factor Va inactivation by APC.13
With the recent identification14 15 and cloning15 of the EPCR, this pathway has become more complex than previously appreciated. EPCR is a type 1 transmembrane protein16 that is constitutively expressed on cultured human umbilical cord endothelium and bovine aortic endothelium.15 Preliminary surveys of cultured cell lines indicated that EPCR is expressed at high levels only on endothelium.15 EPCR binds specifically to either protein C or APC.15 Binding of protein C to EPCR promotes protein C activation17 and blocks APC anticoagulant activity. It is unlikely that the physiological function of EPCR is to inhibit APC anticoagulant activity, but it may reflect a general change in enzyme specificity toward a new, as yet unidentified substrate. This possibility is supported by the observation that inhibition of APC anticoagulant activity is not due to masking the active site of APC because the EPCR-APC complex reacts normally with the macromolecular proteinase inhibitors α1-antitrypsin and protein C inhibitor.18 Blocking access of normal substrates would prevent substrates from competing with the putative new substrate. This situation is reminiscent of the change in specificity of thrombin that accompanies thrombomodulin binding. In this case, the clot-promoting activities of thrombin are blocked, whereas protein C activation is favored.4
The vascular location of thrombomodulin and EPCR has important ramifications in terms of the mechanisms of protein C activation. In the microvasculature, the endothelial surface area exposed to blood is much greater relative to blood volume than in large vessels, and hence the same thrombomodulin density per cell results in more than a 100-fold increase in the thrombomodulin/blood ratio.3 4 This predicts that protein C activation should occur primarily in the microcirculation and raises the question of whether mechanisms may exist to promote protein C activation selectively within the larger vessels.
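The surface-to-volume argument can be made concrete with a rough calculation. Treating a vessel as a cylinder, the endothelial surface area per unit blood volume is 2/r, so the ratio scales inversely with vessel radius. The sketch below is only an illustration of that scaling; the radii are approximate order-of-magnitude values chosen for the example, not measurements from this study.

```python
# Rough illustration of why the endothelium/blood ratio is far higher in capillaries:
# for a cylindrical vessel of radius r, wall surface area per unit blood volume = 2/r.
# The radii below are approximate, commonly cited values used only for illustration.

def surface_to_volume_ratio(radius_um: float) -> float:
    """Endothelial surface area per unit blood volume (1/um) for a cylindrical vessel."""
    return 2.0 / radius_um

capillary_radius_um = 4.0        # ~8 um diameter capillary (assumed example value)
large_vessel_radius_um = 1500.0  # ~3 mm diameter artery (assumed example value)

fold_increase = (surface_to_volume_ratio(capillary_radius_um)
                 / surface_to_volume_ratio(large_vessel_radius_um))
print(f"Capillary vs large-vessel surface/volume ratio: ~{fold_increase:.0f}-fold higher")
# ~375-fold with these values, consistent with the >100-fold figure cited in the text.
```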
Given a prominent role of EPCR in protein C activation and the demonstrated ability to modulate APC function, we believed that it was important to analyze the cellular specificity and tissue distribution of this newly identified member of the protein C anticoagulant pathway, with particular emphasis on the comparison with the distribution of thrombomodulin. In this article we demonstrate that EPCR expression appears to be quite endothelial cell specific, and that unlike thrombomodulin, the expression of EPCR is largely restricted to veins and arteries, with little expression in the capillaries of most organs.
Baboon tissues were obtained from two animals given a lethal infusion of sodium pentobarbital; organs were harvested immediately thereafter. The baboons (Papio c anubis) were from a breeding colony maintained at the University of Oklahoma Health Sciences Center. The animals were healthy adults with peripheral blood leukocyte counts of 6000 and 7000/μL, respectively.
Human tissues were collected from nonpathological portions of surgical and autopsy specimens from the University of Oklahoma Health Sciences Medical Center. Surgical tissue samples were obtained within 2 hours of surgical removal. The autopsy tissues were obtained within 12 hours of death: case 1 from a 62-year-old man with acute anterior wall myocardial infarction (6 hour duration), case 2 from a 1-month-old infant with bronchopulmonary dysplasia, and case 3 from a 36-year-old woman with acute pancreatitis. Human biopsy specimens of various tissues (kidney, liver, heart, stomach, skin, lymph node, and striated muscle) that did not show pathological changes in routine histological examination were also selected from the surgical pathology files (paraffin-embedded archival material).
For cryostat sectioning, tissue samples (approximately 6×6×3 mm) were immersed in OCT compound (Miles, Inc) in cryomolds, snap-frozen in liquid nitrogen, and stored at −70°C. For paraffin embedding, the baboon tissues were fixed in 4% phosphate-buffered paraformaldehyde for 18 hours, and the human tissues were fixed in 10% phosphate-buffered formaldehyde for 2 to 12 hours. Specimens submitted for cryostat sectioning and paraffin embedding included representative portions of all of the organs reported in this study.
Three murine mAbs reactive with human and baboon EPCR (1462, an IgG2Bk; 1489 and 1495, both IgG1k) were prepared by immunization with a recombinant soluble form of EPCR essentially as described17 and isolated from ascites on a HiTrap Protein G column (Pharmacia Biotech) according to the manufacturer's directions. Antibodies 1462 and 1489 stain EPCR in paraffin-embedded tissues. The working concentrations of the 1489, 1495, and 1462 mAbs were 4.5, 9, and 1.5 μg/mL, respectively.
For detection of thrombomodulin expression, a murine mAb (1009, an IgG1k)17 and a goat polyclonal antibody prepared against recombinant soluble human thrombomodulin19 were used. The polyclonal antibody reacts with thrombomodulin in paraffin-embedded human tissues. The working concentration of the 1009 mAb was 15 μg/mL.
For detecting protein C associated with the blood vessel, the murine mAb to human protein C (C8, an IgG1k) was chosen because it cross-reacts with baboon protein C and can be used on frozen sections or paraffin-embedded tissues. The working concentration of the C8 mAb was 1.7 μg/mL.
Immunohistochemical stainings for EPCR were performed on freshly prepared cryostat sections (5 μm) and also on paraffin sections (4 μm) of human and baboon tissues. Cryostat sections were fixed for 10 minutes in cold acetone (−20°C) and air dried. Paraffin sections were deparaffinized and rehydrated. All subsequent incubations and rinses were performed at room temperature. Optimal conditions for staining were determined in preliminary experiments. All stainings of frozen and paraffin material were performed in duplicate, using mAbs 1489 and 1495 for frozen material and mAbs 1489 and 1462 for paraffin material. At least three specimens (from different individuals) of each of the human tissues studied were examined for EPCR expression and, except where indicated, gave similar staining patterns.
Before primary antibody incubation, endogenous peroxidase activity was blocked with 1.25% hydrogen peroxide in methanol for 30 minutes. For antigen retrieval, the microwave pretreatment method suggested by Shi et al20 was used with modifications for all EPCR antibodies. Briefly, paraffin sections in 10 mmol/L citric acid buffer (pH 6.0) were heated in a microwave oven for 5 minutes with a 700-W oven set at 50% power. The slides were allowed to cool for 20 minutes at room temperature in the buffer. After preincubation with 10% normal horse serum or 10% normal rabbit serum for 20 minutes, sections were incubated with primary antibodies for 60 minutes, followed sequentially with biotinylated horse anti-mouse (Vector Laboratories) or biotinylated rabbit anti-goat (Dako) antibodies for 20 minutes, and streptavidin-biotin peroxidase complex (Vector Laboratories) for 20 minutes. The reaction was developed with diamino-benzidine (Sigma Chemical Co), and the sections were counterstained with Mayer’s hematoxylin. All antibodies were diluted in phosphate-buffered saline (PBS, pH 7.4) containing 1% bovine serum albumin. Between the incubation steps, the slides were washed twice in PBS for 10 minutes.
Thrombomodulin expression was demonstrated on cryostat (5 μm) and paraffin (4 μm) sections of human tissues. The immunohistochemical staining for thrombomodulin was similar to that described above for EPCR except that (1) the sections were not treated in the microwave, (2) the blocking serum was either 10% normal horse (for antibody 1009) or rabbit serum (for polyclonal goat anti-human thrombomodulin antibody), and (3) the secondary antibody was either biotinylated horse anti-mouse (for antibody 1009) or biotinylated rabbit anti-goat (for polyclonal goat anti-human thrombomodulin antibody).
Protein C staining was performed on paraffin-embedded human lung and various baboon tissues. The staining procedure was similar to that described above for the mAbs against EPCR, which included microwave treatment for antigen retrieval. The tissues were fixed before paraffin embedding in an effort to minimize protein C dissociation in subsequent steps of tissue processing and staining.
As a negative control, stainings on sequential sections of each tissue were performed with substitution of the primary antibody with either mouse monoclonal IgG1 standard (Bethyl Labs) or preimmune goat serum with appropriate dilution. The nonspecific staining was negligible.
Evaluation of the immunohistochemical stains was conducted in a semiquantitative manner. All stainings were evaluated as signal/noise (image/background) intensity. Staining intensity was scored on a scale from (−) to 4+. The positive range of scores was assessed as follows: 1+ (weak but well recognizable staining); 2+ (moderate staining); 3+ (strong staining); and 4+ (very strong staining).
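As an illustration of how such an ordinal scale can be encoded for tabulation, the snippet below maps the published score labels to integer ranks. The labels and their descriptions come from the scoring scheme above; the integer encoding, helper names, and example observations are our own illustrative assumptions.

```python
# Encode the semiquantitative staining scale described above for tabulation.
# Labels and descriptions follow the text; the integer ranks and the example
# observations are illustrative only.

STAIN_SCORES = {
    "-":  (0, "negative"),
    "1+": (1, "weak but well recognizable staining"),
    "2+": (2, "moderate staining"),
    "3+": (3, "strong staining"),
    "4+": (4, "very strong staining"),
}

def score_rank(label: str) -> int:
    """Return the ordinal rank for a staining score label such as '3+'."""
    return STAIN_SCORES[label][0]

# Hypothetical example entries, mirroring the kind of comparisons reported below.
observations = {"coronary artery endothelium": "3+", "myocardial capillaries": "-"}
for site, label in observations.items():
    print(f"{site}: {label} (rank {score_rank(label)})")
```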
Evaluation of EPCR distribution in various human and baboon tissues with three EPCR mAbs recognizing different epitopes yielded identical results, indicating that the staining was due to the presence of EPCR and not to cross-reacting material. No differences were noted in the staining patterns between frozen and paraffin material.
To gain an initial appreciation of the potential interactions of EPCR and thrombomodulin in the regulation of the protein C anticoagulant pathway, human tissues were evaluated for EPCR (Fig 1, left) and thrombomodulin (Fig 1, right) expression. For purposes of comparison, to avoid potential disease influences on EPCR expression, and to minimize the potential for post mortem changes in EPCR expression, we also examined the EPCR distribution and vascular location in baboon tissues (Fig 2). With few exceptions, the EPCR distribution was the same in baboon and human autopsy and surgical tissues, and the small differences are discussed under the description of individual organs. For clarity of discussion, the pictures in Fig 1 and Fig 2 are of human and baboon tissues, respectively. At least three (usually more) representative sections were examined for each organ. These were analyzed at both high and low magnifications. The photographs in Figs 2 through 4 are representative. In all tissues, EPCR expression was essentially endothelial cell specific. Lung and heart were representative of most tissues. EPCR was expressed most strongly and consistently in large arteries, with the large veins expressing similar or perhaps slightly lower levels of EPCR (Fig 1 (left), A [heart] and B [lung]). Postcapillary venule staining was variable, ranging from negative to relatively intense (see below). In contrast, most capillary endothelial cells were EPCR negative in heart and lung.
Heart. In the heart, in addition to the staining described above, EPCR staining was strongly positive (3+) on the endocardium including both ventricles, atria, appendages, and valves (Fig 2A). The staining of some postcapillary venules can be seen in Fig 2A and more clearly at higher magnification (Fig 2B). At higher magnification, the intense staining of all of the endothelial cells in a large coronary artery (Fig 2C) is in stark contrast to the generally negative staining of the capillaries as seen in Fig 2B. Similar EPCR staining to that seen in baboons was observed in human surgical and autopsy tissues, suggesting that post mortem changes, tissue harvesting, and possible disease processes in the humans probably did not dramatically influence the expression of EPCR. The capillary endothelial cells were usually negative, with ≈5% scattered endothelial cells (referred to as “patchy” in the Table) staining weakly for EPCR in both baboon and human heart tissue. One exception was the capillary endothelium from the infant with bronchopulmonary dysplasia (autopsy case 2), in which weak to moderate (1 to 2+) EPCR staining was observed in ≈50% of the capillaries (data not shown). Although no histopathological changes in the myocardium were apparent in this case, the possibility that the widespread EPCR staining of the capillaries was due to the disease process or young age of the individual cannot be excluded.
As opposed to EPCR, thrombomodulin was widely expressed in the endothelial cells of the human heart including the coronary arteries, veins, postcapillary venules, and capillaries (Fig 1A, right).
Lung. The arteries, including the small intraacinar and intra-alveolar arteries, and veins were strongly EPCR positive (3+) (Fig 1B, left). There were very few (≤1%) scattered endothelial cells in the alveolar walls that were EPCR positive. In contrast, the alveolar endothelial cells stained intensely for thrombomodulin (Fig 1B, right). The staining pattern for EPCR was similar in baboons and humans.
Skin. The dermal arteries, arterioles, and veins were strongly (3+) EPCR positive; some of the dermal capillaries revealed weak to moderate (1 to 2+) staining (Fig 1C, left). The epidermis was negative with only a slight (≤1+) staining of the intercellular bridges. In contrast, as reported previously [21], thrombomodulin expression was observed not only in all of the endothelial cells of the skin but also in the squamous epithelium (Fig 1C, right). Thus in the skin, EPCR is more endothelial cell specific than thrombomodulin, as well as being more restricted to larger vessels. The baboon and human skin had a similar EPCR staining pattern.
Liver. Endothelial cells of the liver revealed strong (3+) EPCR staining in the central veins, portal vein, and hepatic arteries and moderate (2+) staining in the sinusoidal capillaries (Fig 1D, left). The staining of these sinusoidal capillaries for EPCR was more intense than in the capillaries of other tissues examined. In contrast, thrombomodulin stained intensely in the larger vessels within the liver, but thrombomodulin staining of the sinusoidal capillary endothelium was less intense (negative to 1+) than EPCR (2+) (compare Fig 1D, left and right). The staining pattern for EPCR was similar for human and baboon liver.
Brain. Strong (3+) EPCR expression was detected in the arteries and veins of the subarachnoid space of the brain (Fig 2D). Many of the arteries, veins, and postcapillary venules of the white matter but only some in the gray matter were moderately to strongly (2 to 3+) positive; the majority of capillaries in the white and gray matter were negative, with the others staining weakly (1+) for EPCR. In addition, all of the epithelial and most of the endothelial cells of the choroid plexus of the fourth ventricle showed 1+ and 2+ staining, respectively. Staining patterns for human and baboon brain were similar. We confirmed previous observations [22] that thrombomodulin is expressed at high levels on large vessels in the human brain and at low levels, similar to EPCR, in the brain capillaries (data not shown).
Kidney. Moderate to strong EPCR staining (2 to 3+) was seen in the interlobular and arcuate arteries, veins, and in some of the arterioles. The glomerular capillary endothelial cells and the great majority of the cortical tubulointerstitial capillary endothelial cells were negative (Fig 2E). The few cortical tubulointerstitial capillary endothelial cells that were EPCR positive were most conspicuous around the tubules in close vicinity to large veins. The medullary vasa recta and many of the tubulointerstitial capillary endothelial cells at the corticomedullary junction stained positively (2+) in both humans and baboons (Fig 2F). In baboons, there were occasional glomeruli, with a few weakly positive (1+) glomerular capillary endothelial cells. Although not visible in Fig 2E, these positive glomerular endothelial cells were located primarily at the glomerular stalk. In human kidneys with no associated histopathological changes, the cortical tubulointerstitial capillary staining for EPCR was even more sparse and weaker than in baboons (data not shown).
Adrenal. Weak to moderate staining of EPCR (1 to 2+) was detected in the endothelial cells of the zona reticularis and zona fasciculata in baboon (Fig 2G) and human adrenal cortex. In addition, there was moderate EPCR staining in the small veins and postcapillary venules of the adrenal medulla.
Uterus. The arteries, veins, and many of the venules in the myometrium were strongly EPCR positive. There was strong (3+) EPCR staining in the spiral arteries of the endometrium and weak (1+) staining in some of the endometrial capillaries (Fig 2H). The staining pattern was similar in baboons and humans.
Lymph nodes. There was strong (3+) EPCR staining in the subcapsular and medullary sinuses in both baboons and humans (Fig 3A and B). For comparison in Fig 3A, the staining of a small artery is only slightly more intense than subcapsular sinusoidal endothelial cells. The high endothelial venules were negative.
Spleen. The trabecular veins and arteries and the follicular arterioles were strongly EPCR positive (3+). The venous sinusoids of the red pulp were either negative or slightly focally positive in the baboon and in two of the three human autopsy specimens (1+). In one of the human autopsy cases (case 3), the endothelial lining of the venous sinusoids of the red pulp was diffusely moderately positive (2+) (Fig 4A).
Aorta, large muscular arteries, and veins. A very strong staining (4+) was limited to the endothelial cells in these vessels (Fig 4B). The EPCR distribution in other tissues is summarized in the Table.
Protein C binding to endothelium. EPCR was originally identified on the basis of protein C binding. If EPCR served as a protein C binding protein in vivo, then the vessels expressing the highest levels of EPCR should also bind protein C most intensely. Consistent with this possibility, protein C immunoreactivity (Fig 5A) in baboons and humans mirrored that of EPCR (Fig 5B), with protein C immunoreactivity detected on arteries, veins, and venules of various organs and negative in capillaries (Fig 5A). The lone exception was the endocardium, in which EPCR was strongly positive but protein C was consistently negative. The basis for the discrepancy is not known. It is possible that EPCR in the endocardium exists in an inactive form on the cell surface or is largely intracellular, that protein C dissociated during processing or, less likely, that EPCR is not a major protein C–binding protein.
The current study demonstrates that EPCR expression in vivo is restricted primarily to the endothelium. In contrast to thrombomodulin, which is abundant both in large vessels and most capillaries, in most organs EPCR expression is restricted primarily to veins and arteries, with most capillary endothelial cells expressing little if any EPCR. Capillary expression of EPCR is detectable, however, in some specialized capillary beds. For example, medullary capillaries in the kidney and the subcapsular and medullary sinuses of the lymph nodes stained positively for EPCR. Liver sinusoidal endothelium is a rare exception in that EPCR expression is positive and thrombomodulin negative to weak. Expression within postcapillary venules varies from moderate to undetectable within the same region of the same organ. The arteries consistently showed somewhat higher levels of expression than veins of similar size. As a general observation, EPCR staining intensity increases with increasing vessel size.
From our current understanding of protein C activation, the differences in distribution between thrombomodulin and EPCR suggest major differences in the properties of the protein C activation complex on large vessels compared with capillaries. First consider the effective thrombomodulin concentrations in these two vessel types. Because of geometric considerations (endothelial cell surface to blood volume ratios) discussed previously [4,23] and with the assumption that there are ≈50 000 thrombomodulin molecules per endothelial cell [24] on both large and small vessels, then the effective thrombomodulin concentration in the large vessels would be >100-fold lower than in the capillaries (≈100 to 500 nmol/L). By augmenting protein C activation [17], EPCR may help to compensate for the relatively low concentration of thrombomodulin in large vessels. Second, since the Km of the activation complex for protein C is reduced by EPCR [17], the activation complexes on the arteries containing EPCR would be assumed to be less sensitive to changes in protein C concentration than the activation complexes within the capillaries having little or no EPCR. These properties of the activation complex may help explain the observation that heterozygous protein C deficiency is at most a weak risk factor for arterial thrombosis [25]. The vascular distribution and properties of EPCR lead to the prediction that deficiencies of EPCR might constitute a risk factor for arterial thrombosis, especially if concomitant protein C deficiency were present.
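The surface-to-volume argument above can be made concrete with a rough calculation. The Python sketch below is a back-of-envelope estimate only: the ≈50 000 thrombomodulin molecules per endothelial cell comes from the text, but the vessel radii and the ≈1000 µm² luminal area per endothelial cell are illustrative assumptions. With those assumptions the capillary estimate lands in the tens of nanomolar range and the large-vessel estimate more than 100-fold lower, broadly in line with the figures quoted above.

```python
# Back-of-envelope estimate of the "effective" thrombomodulin (TM) concentration
# seen by blood in vessels of different calibre, following the surface-to-volume
# argument above. TM_PER_CELL is taken from the text; the vessel radii and the
# luminal area per endothelial cell are assumptions made for illustration.

AVOGADRO = 6.022e23        # molecules per mole
TM_PER_CELL = 5.0e4        # ~50,000 thrombomodulin molecules per endothelial cell
CELL_AREA_M2 = 1.0e-9      # assumed luminal area of one endothelial cell (~1000 square micrometres)

def effective_tm_nmol_per_litre(vessel_radius_m: float) -> float:
    """Effective TM concentration (nmol/L) in a cylindrical vessel of the given radius."""
    surface_density = TM_PER_CELL / CELL_AREA_M2                 # molecules per m^2 of endothelium
    molecules_per_m3 = surface_density * 2.0 / vessel_radius_m   # cylinder surface/volume = 2/r
    mol_per_litre = molecules_per_m3 / AVOGADRO / 1000.0         # 1 m^3 = 1000 L
    return mol_per_litre * 1e9                                   # convert mol/L to nmol/L

for label, radius_m in [("capillary, ~4 um radius", 4e-6),
                        ("small artery, ~0.5 mm radius", 5e-4),
                        ("large artery, ~5 mm radius", 5e-3)]:
    print(f"{label}: ~{effective_tm_nmol_per_litre(radius_m):.2f} nmol/L")
```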
One obvious question to arise from these studies is why EPCR expression is so low within most capillaries. Clearly the answer is not known, but some predictions can be made about the biological importance of this observation in terms of known functions for EPCR and thrombomodulin. For instance, thrombomodulin has at least two functions: to augment the thrombin-dependent activation of protein C and to enhance the reactivity of thrombin with antithrombin [26] and the protein C inhibitor [27]. When EPCR is present, each thrombin-thrombomodulin complex is a more efficient protein C activator. Thus EPCR enhances protein C activation without any known influence on thrombin clearance. In the capillaries, protein C activation is presumably less efficient because of the paucity of EPCR. Thus each thrombin-thrombomodulin complex activates less protein C before being inactivated by antithrombin or protein C inhibitor. This would result in a net shift in thrombomodulin function toward thrombin clearance. The high thrombomodulin concentration in the microcirculation allows for high thrombin binding capacity, potentially allowing this mechanism to play a major role in thrombin clearance.
Protein C and APC have been implicated as important factors in blocking tissue injury in gram-negative sepsis. In experimental animals, the evidence in favor of this concept is that APC can protect rodents [28] and primates [29] from lethal levels of Escherichia coli, and blocking the protein C pathway exacerbates the response to low-level bacterial infusion and elevates the cytokine response [29,30]. Clinically, the extent of protein C consumption correlates well with negative clinical outcomes in meningococcemia [31]. Preliminary clinical results have suggested that replacement therapy with protein C results in blocking the progression of the disease process in most patients [32,33] and a rapid improvement in organ function. For these reasons, we had considered EPCR a candidate for modulating the inflammatory response and had hypothesized that EPCR might be colocalized with key leukocyte adhesion receptors. Many of the receptors involved in leukocyte–endothelial cell interaction are located in the postcapillary venules, where leukocyte trafficking occurs [34], and not normally on large vessels. The presence of EPCR on some but not all postcapillary venules would be consistent with this potential function in at least some vascular beds. Whether EPCR can be induced in other postcapillary venules in response to inflammatory stimuli is currently under investigation. Consistent with the possibility, preliminary analysis at the whole organ level suggests that EPCR message rises as an early immediate response in rodents challenged with endotoxin [35]. Inflammatory mediators also may be responsible for some of the relatively minor differences in EPCR expression, especially in capillaries, noted in some of the tissues. Alternatively, the possibility that these differences are related to different species cannot be excluded.
Selected Abbreviations and Acronyms
APC = activated protein C
EPCR = endothelial cell protein C receptor
These studies were supported by grants awarded from the National Institutes of Health, Grant Nos. PO1-HL-54804 and R37-HL-30340 (to C.T.E.) and R01-GM37704 (to F.B.T.). Dr Esmon is an investigator of the Howard Hughes Medical Institute. The authors would like to thank Drs Naomi Esmon, Debbie Stearns-Kurosawa, and Shinichiro Kurosawa for their helpful suggestions and Julie Wiseman for help in preparation of the final manuscript.
- Received April 24, 1997.
- Revision received July 9, 1997.
- Accepted July 15, 1997.
- Copyright © 1997 by American Heart Association
Esmon CT, Schwarz HP. An update on clinical and basic aspects of the protein C anticoagulant pathway. Trends Cardiovasc Med.. 1995;5:141-148.
Esmon CT, Owen WG. Identification of an endothelial cell cofactor for thrombin-catalyzed activation of protein C. Proc Natl Acad Sci U S A.. 1981;78:2249-2252.
Esmon CT. The roles of protein C and thrombomodulin in the regulation of blood coagulation. J Biol Chem.. 1989;264:4743-4746.
Walker FJ, Fay PJ. Regulation of blood coagulation by the protein C system. FASEB J.. 1992;6:2561-2567.
Castellino FJ. Human protein C and activated protein C. Trends Cardiovasc Med.. 1995;5:55-62.
Walker FJ. Regulation of activated protein C by a new protein: a role for bovine protein S. J Biol Chem.. 1980;255:5521-5524.
Regan LM, Lamphear BJ, Huggins CF, Walker FJ, Fay PJ. Factor IXa protects factor VIIIa from activated protein C. J Biol Chem.. 1994;269:9445-9452.
Zoller B, Svensson PJ, He X, Dahlbäck B. Identification of the same factor V gene mutation in 47 out of 50 thrombosis-prone families with inherited resistance to activated protein C. J Clin Invest.. 1994;94:2521-2524.
Dahlbäck B. Physiological anticoagulation: resistance to activated protein C and venous thromboembolism. J Clin Invest.. 1994;94:923-927.
Sun X, Evatt B, Griffin JH. Blood coagulation factor Va abnormality associated with resistance to activated protein C in venous thrombophilia. Blood.. 1994;83:3120-3125.
Kalafatis M, Bertina RM, Rand MD, Mann KG. Characterization of the molecular defect in factor VR506Q. J Biol Chem.. 1995;270:4053-4057.
Fukudome K, Esmon CT. Identification, cloning and regulation of a novel endothelial cell protein C/activated protein C receptor. J Biol Chem.. 1994;269:26486-26491.
Fukudome K, Kurosawa S, Stearns-Kurosawa DJ, He X, Rezaie AR, Esmon CT. The endothelial cell protein C receptor: cell surface expression and direct ligand binding by the soluble receptor. J Biol Chem.. 1996;271:17491-17498.
Stearns-Kurosawa DJ, Kurosawa S, Mollica JS, Ferrell GL, Esmon CT. The endothelial cell protein C receptor augments protein C activation by the thrombin-thrombomodulin complex. Proc Natl Acad Sci U S A.. 1996;93:10212-10216.
Regan LM, Stearns-Kurosawa DJ, Kurosawa S, Mollica J, Fukudome K, Esmon CT. The endothelial cell protein C receptor: inhibition of activated protein C anticoagulant function without modulation of reaction with proteinase inhibitors. J Biol Chem.. 1996;271:17499-17503.
Liu L, Rezaie AR, Carson CW, Esmon NL, Esmon CT. Occupancy of anion binding exosite 2 on thrombin determines Ca2+ dependence of protein C activation. J Biol Chem.. 1994;269:11807-11812.
Shi SR, Key ME, Kalra KL. Antigen retrieval in formalin-fixed paraffin-embedded tissues: an enhancement method for immunohistochemical staining based on microwave oven heating of tissue sections. J Histochem Cytochem.. 1991;39:741-748.
Raife TJ, Lager DJ, Madison KC, Piette WW, Howard EJ, Sturm MT, Chen Y, Lentz SR. Thrombomodulin expression by human keratinocytes: induction of cofactor activity during epidermal differentiation. J Clin Invest.. 1994;93:1846-1851.
Ishii H, Salem HH, Bell CE, Laposata EA, Majerus PW. Thrombomodulin, an endothelial anticoagulant protein, is absent from the human brain. Blood.. 1986;67:362-365.
Maruyama I, Majerus PW. The turnover of thrombin-thrombomodulin complex in cultured human umbilical vein endothelial cells and A549 lung cancer cells: endocytosis and degradation of thrombin. J Biol Chem.. 1985;260:15432-15438.
Cortellaro M, Boschetti C, Cofrancesco E, Zanussi C, Catalano M, de Gaetano G, Gabrielli L, Lombardi B, Specchia G, Tavazzi L, Tremoli E, della Volpe A, Polli E, PLAT Study Group. The PLAT Study: hemostatic function in relation to atherothrombotic ischemic events in vascular disease patients: principal results. Arterioscler Thromb.. 1992;12:1063-1070.
Bourin MC, Lindahl U. Glycosaminoglycans and the regulation of blood coagulation. Biochem J.. 1993;289:313-330.
Rezaie AR, Cooper ST, Church FC, Esmon CT. Protein C inhibitor is a potent inhibitor of the thrombin-thrombomodulin complex. J Biol Chem.. 1995;270:25336-25339.
Taylor FB Jr, Chang A, Esmon CT, D’Angelo A, Vigano-D’Angelo S, Blick KE. Protein C prevents the coagulopathic and lethal effects of E coli infusion in the baboon. J Clin Invest.. 1987;79:918-925.
Taylor F, Chang A, Ferrell G, Mather T, Catlett R, Blick K, Esmon CT. C4b-binding protein exacerbates the host response to Escherichia coli. Blood.. 1991;78:357-363.
Powars D, Larsen R, Johnson J, Hulbert T, Sun T, Patch MJ, Francis R, Chan L. Epidemic meningococcemia and purpura fulminans with induced protein C deficiency. Clin Infect Dis.. 1993;17:254-261.
Gerson WT, Dickerman JD, Bovill EG, Golden E. Severe acquired protein C deficiency in purpura fulminans associated with disseminated intravascular coagulation: treatment with protein C concentrate. Pediatrics.. 1993;91:418-422.
Ding W, Gu JM, Fukudome K, Laszik Z, Grammas P, Esmon CT. Upregulation of the message for rodent endothelial cell protein C receptor (EPCR) by endotoxin and thrombin. Circulation. 1996;94(suppl I):I-694. Abstract.
Levenhuk microscopes for school research are quite serious optical instruments, many of which are able to compete with professional models. With these microscopes, laboratory and independent research in various fields of natural science can be carried out: medicine, botany, biology, archeology, and mineralogy.
Monocular. Magnification: 200x. Illumination: mirror.
Monocular. Magnification: 200x. Illumination: mirror. Also included: Levenhuk K50 Experiment Kit
USB digital microscope. Magnification: 20-230x. Digital camera: 2Mpx.
USB digital microscope with tripod. Magnification: 20-400x. Digital camera: 1.3Mpx.
Sturdy and easy-to-use educational microscope with experiment kit. Magnification: 40-400x.
Two microscopes in one. Advanced NG configuration. Color: azure.
Two built-in LED illuminations allow observing non-transparent objects in transmitted and reflected light.
High quality and reliable microscope for school and university students. Experiment kit included. Magnification: 64–640x.
Reliable educational microscope. The kit includes everything needed for first biological experiments. Magnification: 40-800x.
USB digital microscope with professional tripod. Magnification: 10-300x. Digital camera: 5Mpx.
Excellent optics, modern design, robust body. Experiment kit included. Magnification: 64–1280x.
Teaching digital microscope for the beginners, resolution 350 000 pixels
Modern microscope for beginning microcosm explorers. Digital camera and experiment kit included. Magnification: 40-400x.
Portable USB digital microscope with LCD display. Magnification: 20x, 200x, 500x. Digital camera: 5Mpx.
Compact. Capable. Wireless. Magnification: 10-200x. Digital camera: 1Mpx. Wireless connection.
USB digital microscope with LCD display. Magnification: 20x, 200x, 500x. Digital camera: 5Mpx.
Modern technologies, wide possibilities, user-friendly design. Digital camera and experiment kit included. Magnification: 64–1280x.
Digital microscope with large color display and connectivity to PC
If you have a child who studies natural sciences at school, then you should take a look at our school microscopes! One of them could become a loyal assistant in your kid's research. Isn’t it wonderful when you can not only read boring textbooks but also learn something by conducting your own experiments at home! It’s much more interesting and exciting!
When choosing a microscope for primary school children you should focus on the most significant points. Children’s microscopes should be very simple to use, so a kid can manage the instrument without (or with just a little of) your help. The body construction should be simple, with no unnecessary extra parts, sturdy and reliable. Levenhuk Rainbow microscopes meet all those requirements. And, as a bonus, these compound microscopes are available in 5 eye-catching vivid colors – the reason they are so loved by the youngest explorers!
Buy a quality high school microscope in Levenhuk optical equipment store
If you have an older kid and are looking for a quality middle- or high school microscope, our store is just the place. High school programs include serious microscopic research - that is why a simple microscope may not be enough. The best choice for older students is a more powerful biological microscope allowing for observations of a wider range of microscopic samples. In our range of microscopes for school you can also find more complex models indispensable for middle and high school students: digital and stereo microscopes. Digital microscopes allow observing microscope samples in real time on a PC monitor, and making high-quality photos and videos, which can be easily used in school projects. Stereo microscopes allow you to observe small volume samples and measure their dimensions with unprecedented accuracy. Get one and you will be surprised by the unsurpassed quality of Levenhuk high school microscopes.
Let your child always be at the top of the class with Levenhuk school microscopes!
Initiate some stuff:
Before starting to develop any Android app, we need an IDE (Integrated Development Environment) such as Eclipse or Android Studio. Then we need the Android SDK to start writing our own Android applications.
Versions of the SDK and IDEs are available for Windows, Mac OS and Linux, so you can explore Android from the comfort of whatever OS you favor. The SDK tools and emulator work on all three OS environments, and because Android applications are run on a virtual machine, there’s no advantage to developing from any particular operating system.
What You Need to Begin:
Because Android applications run within the Dalvik virtual machine, you can write them on any platform that supports the developer tools. This currently includes the following:
- Microsoft Windows XP or later
- Mac OS X 10.4.8 or later
- Linux
To get started, you will need to download and install the following:
- Java Development Kit (JDK) 6 or later
- Android SDK
Downloading and Installing JDK :
The following steps work for Windows machines, but the steps are similar for Macs or Linux machines. Follow these steps to install the JDK:
1. Go to www.oracle.com/technetwork/java/javase/downloads/index.html. The Java SE downloads page appears.
2. Click the Download button for the Java Platform (JDK). A new Java SE Downloads page appears, asking you to specify which platform (Windows, Linux, or Mac) you’re using for your development work.
The web page shown in Figure may look different in the future. To ensure that you’re visiting the correct page, visit the Android SDK System Requirements page in the online Android documentation for a direct link to the Java SDK download page. View the requirements page at http://developer.android.com/sdk/requirements.html.
3. Click the Download link for the particular operating system you’re using. On Windows, choose the 32-bit install. If you’re on a 64-bit machine, you can install both the 32-bit (x86) and 64-bit (x64) JDKs if you like, but you must install the 32-bit to develop with Android. Windows may open a message box with a security warning.
4. In the Save As dialog box, select the location where you want to save the file, and click Save.
5. When the download is complete, double-click the file to install the JDK. A dialog box asks whether you want to allow the program to make changes to your computer.
6. Click the Yes button. If you click the No button, the installation stops.
7. When you’re prompted to do so, read and accept the license agreement. That’s all there is to it.
You have the JDK installed and are ready to move to the next phase.
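If you like, you can confirm the installation from the command line. The small Python script below is an optional sketch that simply looks for the standard JDK tools (java and javac) on your PATH and prints the versions they report; it is not part of the official installation steps, and it works the same way on Windows, Mac, and Linux.

```python
# Optional sanity check: confirm that the JDK tools are reachable from your PATH.
import shutil
import subprocess

for tool in ("java", "javac"):
    path = shutil.which(tool)
    if path is None:
        print(f"{tool}: NOT FOUND - check your JDK installation and PATH")
        continue
    # '-version' prints to stderr on most JDKs, so capture both streams.
    result = subprocess.run([tool, "-version"], capture_output=True, text=True)
    print(f"{tool} found at {path}:")
    print((result.stderr or result.stdout).strip())
```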
Downloading and Installing the SDK :
The Android SDK is completely open. There’s no cost to download or use the API. You can download the latest version of the SDK for your development platform from the official Android site: http://developer.android.com/sdk/index.html.
To download the Android SDK, follow these steps:
1. Go to http://developer.android.com/sdk/index.html.
2. Choose the latest version of the SDK starter package for your platform to download the SDK. You’ve just downloaded the Android SDK.
3. Open SDK Manager.
- Windows: Run the SDK Installer and install the SDK to the default location. When finished, check the Start SDK Manager check box and click Finish. If you’re prompted to accept the authenticity of the file, click Yes. The Android SDK Manager dialog box opens.
- Mac: Double-click the SDK file to unzip it. Move the resulting android-sdk-mac_x86 directory to a safe place, such as your Applications directory. Open the Terminal and enter cd to go to the android-sdk-mac_x86 directory, and then run tools/android. You may be prompted to install Java if it is not already on your Mac.
4. Select the SDK Platform Android 4.4 or latest check box.
Every time a new version of the Android operating system is released, Google also releases an SDK that contains access to the added functionality in that version. If you want to include Bluetooth functionality in your app, for example, make sure that you have Android SDK version 2.0 or later because this functionality isn’t available in earlier versions.
5. Click Install packages. The Choose Packages to Install dialog box opens.
6. Select the Accept radio button to accept the license, and then click Install, as shown in Figure .
The Installing Archives dialog box opens, displaying a progress bar.
7. When the archives installation is complete, click the Close button.
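To double-check what the SDK Manager actually installed, the optional sketch below lists the key SDK subdirectories. The SDK_ROOT path is an assumption (a common Windows default); point it at wherever you installed the SDK — on a Mac, that is the android-sdk-mac_x86 folder you moved earlier.

```python
# Optional check: list what the SDK Manager has installed so far.
# Adjust SDK_ROOT to your own install location; the value below is only an assumption.
import os

SDK_ROOT = r"C:\Program Files (x86)\Android\android-sdk"  # assumption - change if needed

for subdir in ("platforms", "platform-tools", "tools"):
    full = os.path.join(SDK_ROOT, subdir)
    if os.path.isdir(full):
        entries = sorted(os.listdir(full))
        print(f"{subdir}: {len(entries)} item(s) -> {entries[:5]}")
    else:
        print(f"{subdir}: missing - re-run the SDK Manager and install the packages")
```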
Getting the Tool:
After you have the SDK, you need an integrated development environment (IDE) to use it. It’s time to download Eclipse!
Downloading Eclipse :
Download the Eclipse IDE for Java Developers package from www.eclipse.org/downloads. To install Eclipse, extract the contents of the Eclipse .zip file to the location of your choice, such as C:\Program Files\Eclipse on Windows or your Applications folder on a Mac.
On Windows, once you unzip Eclipse, pin a shortcut to your Start menu so that Eclipse is easy to find when you need it.
To start Eclipse, follow these steps:
1. To run Eclipse, double-click the Eclipse icon. If you’re running a recent version of Windows, the first time you run Eclipse, a Security Warning dialog box may appear. It tells you that the publisher hasn’t been verified and asks whether you still want to run the software. Clear the Always Ask Before Opening This File check box, and click the Run button.
2. When Eclipse starts, the first thing you see is the Workspace Launcher dialog box, as shown in Figure. You can modify your workspace
there, if you want, but you can also stick with the default:
on Windows, or
on a Mac
Leave the Use This As the Default and Do Not Ask Again check box deselected, and click the OK button.
3. Close welcome screen in eclipse.
Configuring Eclipse :
The Android Development Tools (ADT) plug-in adds functionality to Eclipse to do a lot of the work for you.
To set up Eclipse with the ADT, follow these steps:
1. Start Eclipse, if it’s not already running.
2. Choose Help➪Install New Software. The Install window opens. You use this window to install new plug-ins in Eclipse.
3. Click the Add button to add a new site. A site is a web address where software is hosted on the Internet. Adding a site to Eclipse makes it easier for you to update the software when a new version is released.
The Add Repository window opens, as shown in Figure.
4. Type a name for the repository in the Name field. This name can be anything you choose, but an easy one to remember is Android ADT.
5. Type https://dl-ssl.google.com/android/eclipse/ in the Location field.
6. Click OK.
Android ADT is selected in the Work With drop-down menu, and the available options are displayed in the Name and Version window of the
Install Details dialog box, as shown in Figure.
7. Select the check box next to Developer Tools, and click the Next button. The Install Details dialog box should list both the Android DDMS (see “Getting physical with a real Android device”) and the ADT.
8. Click the Next button to review the software licenses.
9. Click the Finish button.
10. When you’re prompted to do so, click the Restart Now button to restart Eclipse.
The ADT plug-in is installed.
Setting the location of the SDK :
You’re almost done, and you have to do this work only once. Follow these steps:
1. Choose Window➪Preferences.
The Preferences dialog box opens, as shown in Figure
2. Select Android from the list on the left.
3. Set the SDK Location to the folder to which you saved the Android SDK.
If you saved the Android SDK to c:\android on your computer, the location is c:\android.
4. Click OK.
Eclipse is configured, and you’re ready to start developing Android apps.
Congenital Myasthenic Syndromes
National Organization for Rare Disorders, Inc.
It is possible that the main title of the report Congenital Myasthenic Syndromes is not the name you expected.
The congenital myasthenic syndromes (CMS) are a diverse group of disorders that have an underlying defect in the transmission of signals from nerve cells to muscles. These disorders are characterized by muscle weakness, which is worsened upon exertion. The age of onset, severity of presenting symptoms, and distribution of muscle weakness can vary from one patient to another. A variety of additional symptoms affecting other organ systems can be present in specific subtypes. Severity can range from minor symptoms such as mild exercise intolerance to severe, disabling ones. Most CMS are transmitted by autosomal recessive inheritance; a few specific subtypes are transmitted by autosomal dominant inheritance. Genetic diagnosis of these disorders is important because therapy that benefits one type of CMS can worsen another type.
The CMS involve the neuromuscular junction, which is a synapse where signals from motor nerves are passed to muscle fibers and tell the muscle fibers when to contract.
The normal neuromuscular junction consists of a presynaptic region, a synaptic space, and a postsynaptic region. The presynaptic region contains the end of a motor nerve cell called the motor nerve terminal. The motor nerve terminal overlies a specialized region of the muscle fiber called the postsynaptic region. The space between the motor nerve terminal and the postsynaptic region is called the synaptic space or synaptic cleft. The postsynaptic region displays multiple folds, known as junctional folds. The motor nerve terminal contains small vesicles that are filled with the neurotransmitter acetylcholine, or ACh for short, which acts as a chemical ‘messenger’ with instructions for the muscles to contract.
The membrane covering the motor nerve terminal and facing the synaptic space is known as the presynaptic membrane. The membrane covering the postsynaptic region is known as the postsynaptic membrane. The segment of the postsynaptic membrane that covers the tips of junctional folds is lined by molecules of the acetylcholine receptor, or AChR for short. The synaptic space is lined by a membrane known as the synaptic basement membrane. This membrane anchors molecules of acetylcholinesterase, or AChE for short, an enzyme that converts ACh to acetate and choline.
The way the motor nerve endings communicate with the muscle fibers is a highly specialized process, and a genetic defect that impairs that communication can result in a congenital myasthenic syndrome. Understanding this process helps in understanding myasthenic disorders.
When muscles are in the resting state, there is a randomly occurring release of acetylcholine from single synaptic vesicles in the motor nerve terminal. This release is known as exocytosis. The amount of ACh released from a single synaptic vesicle is referred to as quantum of ACh.
ACh released from a synaptic vesicle travels through the synaptic space and binds to the AChRs that are concentrated on the tips of the junctional folds. When this binding occurs, it causes a channel in the center of the AChR to open and allows positively charged sodium and lesser amounts of calcium ions to enter the muscle fiber. This process briefly changes the electric charge across the postsynaptic membrane from negative to positive (a small postsynaptic depolarization), which is referred to as a miniature endplate potential (MEPP).
When a person wants to perform a voluntary action (e.g., raising one's hand, dancing, kicking a ball, etc.), a series of successive nerve impulses are sent to the motor nerve terminal where they depolarize the presynaptic membrane, causing structures called voltage-gated calcium channels to open, which allows calcium to enter the motor nerve terminal. This calcium influx results in a nearly synchronous release of the contents of several synaptic vesicles, which results in a larger depolarization of the postsynaptic membrane, known as the endplate potential (EPP). When the EPP reaches a certain threshold, it opens voltage-gated sodium channels found along the entire muscle fiber outside of the motor endplate area and this triggers a propagated muscle fiber action potential which causes the muscle fiber to contract.
The difference between the endplate potential and the depolarization required to activate the voltage-gated sodium channels is known as the safety margin of neuromuscular transmission. In healthy individuals, the amplitude of the EPP is quite large. With continued activity the EPP begins to decrease but still remains large enough to trigger a muscle fiber action potential.
After the muscle contracts, ACh is released from the AChRs into the synaptic space, where it is broken down (hydrolyzed) by AChE into two molecules, acetate and choline. Choline is transported back into the nerve terminal where it recombines with acetate under the influence of an enzyme known as choline acetyltransferase to be stored once again within the synaptic vesicles.
The factors governing the safety margin of neuromuscular transmission fall into four major categories: (1) factors that affect the number of ACh molecules in the synaptic vesicle; (2) factors that affect quantal release mechanisms; (3) the density of AChE in the synaptic space; and (4) factors that affect the efficacy of individual quanta. The efficacy of individual quanta depends on the endplate geometry, the packing density of AChRs on the tips of the junctional folds, the affinity of these AChRs for ACh, and the kinetic properties of the AChR ion channel.
Congenital myasthenic syndromes are caused when there is an alteration (mutation) in a specific gene. This results in an abnormal protein or even loss of a protein that impairs some part of the process described above. The abnormal protein (disease protein) can reside in the motor nerve terminal, or the synaptic space, or in the postsynaptic region that underlies the nerve terminal, but in some patients the disease protein is also present in other tissues or organs, causing not only CMS but also a variety of other symptoms.
European Alliance of Neuromuscular Disorders Associations
- Linhartova 1
- SI-1000 Ljubljana
- Slovenia, GAR 04
- Tel: 386 (0)1 47 20 500
- Email: [email protected]
- Website: http://www.eamda.eu/
Muscular Dystrophy Association
- 3300 East Sunrise Drive
- Tucson, AZ 85718-3208
- Tel: (520)529-2000
- Fax: (520)529-5300
- Tel: (800)572-1717
- Email: [email protected]
- Website: http://www.mda.org/
Muscular Dystrophy Campaign
- 61 Southwark Street
- London, SE1 0HL
- United Kingdom
- Tel: 2078034800
- Email: [email protected]
- Website: http://www.muscular-dystrophy.org
Myasthenia Gravis Foundation of America, Inc.
- 355 Lexington Ave 15th Floor
- New York, NY 10017-6603
- Tel: (212)297-2156
- Fax: (212)370-9047
- Tel: (800)541-5454
- Email: [email protected]
- Website: http://www.myasthenia.org
NIH/National Institute of Arthritis and Musculoskeletal and Skin Diseases
- Information Clearinghouse
- One AMS Circle
- Bethesda, MD 20892-3675
- Tel: (301)495-4484
- Fax: (301)718-6366
- Tel: (877)226-4267
- Email: [email protected]
- Website: http://www.niams.nih.gov/
NIH/National Institute of Neurological Disorders and Stroke
- P.O. Box 5801
- Bethesda, MD 20824
- Tel: (301)496-5751
- Fax: (301)402-2186
- Tel: (800)352-9424
- Website: http://www.ninds.nih.gov/
For a Complete Report
This is an abstract of a report from the National Organization for Rare Disorders (NORD). For a full-text version of this report, go to www.rarediseases.org and click on Rare Disease Database under "Rare Disease Information".
The information provided in this report is not intended for diagnostic purposes. It is provided for informational purposes only.
It is possible that the title of this topic is not the name you selected. Please check the Synonyms listing to find the alternate name(s) and Disorder Subdivision(s) covered by this report.
This disease entry is based upon medical information available through the date at the end of the topic. Since NORD's resources are limited, it is not possible to keep every entry in the Rare Disease Database completely current and accurate. Please check with the agencies listed in the Resources section for the most current information about this disorder.
Last Updated: 3/21/2016
Copyright 2016 National Organization for Rare Disorders, Inc.
Stats and Facts SAWEA
SOUTH AFRICA’S WIND INDUSTRY
Background, statistics and facts on wind energy and the renewables programme in South Africa
- Today in South Africa there are 19 wind energy developments, with more than 600 wind turbines equalling 1,471MW
- 3.365 GW of wind energy have been procured through the Department of Energy’s Renewable Energy Independent Power Producer’s Procurement Programme (REIPPPP) to date.
- 36 separate wind farm developments have been selected by the REIPPPP.
To date there are 55 renewable energy projects that are fully operational, of which 19 are wind farms, adding 2,942 Megawatts (MW) to the grid. Details are:
- 19 wind energy developments equalling 1,471MW
- 31 solar photovoltaic developments equalling 1,344MW
- 3 Concentrated Solar Power plant at 200MW
- 2 hydroelectric power plants totalling 14.3MW
- Unlike many energy projects in Africa, 98% of those selected under the REIPPPP have reached commercial operation on time.
- A total of 14725 MW of renewable energy have been allocated to the REIPPPP.
- So far 6377MW (43%) of that has been procured over 6 bidding rounds and 3,029MW is operational.
- The REIPPPP is bringing renewable energy to the national grid faster and more cheaply than new-build coal. Construction times for projects average less than two years, and the electricity price paid to projects has declined 68% within three years.
- REIPPPP Rounds 1-3.5 provide 11,784 Gigawatts of electricity
- The price of wind energy in the last Round 4 expedited was R0.62/kWh more than 40% less than forecast prices for Eskom’s new-build coal plants Kusile and Medupi.
Wind energy is water saving
- For each kilowatt hour of wind that displaces fossil fuels in our national grid, 1.2 litres of water will be saved. The entire portfolio the REIPPPP programme will save 52 million litres of water each year, equal to 371 428 standard sized bathtubs.
South Africa has outstanding conditions for generating wind energy
- More than 80% of SA’s land mass has the wind conditions to produce high load factors (more than 30%).
Wind energy is cheap, clean and green
- Wind is a clean, sustainable ‘renewable’ source of energy. Using wind energy helps combat climate change by reducing pollutants from fossil fuel.
- Local communities (within a 50 kilometre radius of developments) are already substantial beneficiaries of renewables, with an average shareholding of 10.5% in renewable projects. This constitutes more than four times obligated minimum of 2.5% which forms part of the criteria of the REIPPPP.
- Host communities will have billions of Rand invested in socioeconomic development from funds provided by these developments.
- The total projected value of goods and services to be procured from broad-based black economic empowerment suppliers is more than R101bn.
- Across the 6 bid windows, a total of R19.3 billion (2.2% of revenue) has been committed by the industry for socio-economic development, i.e., 120% more than the minimum requirement (the November report indicates this at 127%). Of this, R15.2 billion is specifically allocated for local communities where the IPPs operate.
- As part of bid obligations, Independent Power Producers (IPPs) must commit a share of their revenue over 20 years to community needs (SED). The minimum threshold is 1% of revenue.
- R91.1 billion is committed to various development initiatives under the REIPPPP.
- The total commitment from the REIPPPP over the life-cycle of the projects is R23.1 billion to local community development, which includes job-creating initiatives.
- The REIPPPP has provided 127% of planned employment during construction (26 207 actual vs 20 688 planned job years) with 125% more local community members employed than was contractually required.
- Since 2013, the construction and operation of renewable energy projects has created 111,835 job years for South African citizens (DoE, 2016).
- Moody’s rated South Africa’s 2015 renewable energy market as the fastest growing, year-on-year in the world.
- Investment in renewables grew 20,500% in one year between 2011 and 2012 – the first year of the REIPPPP.
- 92 projects have been selected as part of the REIPPPP, attracting R198bn in private sector investment totalling a contribution of 6 376 MW of capacity to the national grid. 28% of this total comes from foreign investment – R53.2bn.
- The REIPPPP has attracted R135.6 Billion since inception. R35 Billion of this is from foreign investment.
- South Africa has a world-first wind atlas – a high-definition map which shows Independent Power Producers the best sites for wind energy development, allowing them to short-cut access to data to help them identify potential wind farm sites. The map was produced by South African National Energy Development Institute (SANEDI) and is available here: www.wasaproject.info
- Renewable energy production so far has cut the equivalent of 4.4million tonnes of carbon dioxide equivalent (CO2e).
- The REIPPPP is also stimulating local manufacturing and creating sustainable jobs. By March 2016 over R30 billion had been spent on local content and a further R65.7 billion is expected to be spent by projects that have yet to commence construction. Twelve new industrial facilities have been established as a direct result of the programme.
- According to a report by the Council for Scientific and Industrial Research, wind energy produced net savings of R1.8 billion in the first half of 2015 and was also cash positive for Eskom by R300 million. During the highest periods of load-shedding, collectively wind energy and solar power (photovoltaic) saved R4 billion from January to June in 2015.
- South Africa is the largest wind energy producer in Africa.
Background to the REIPPPP
- In 2010, the Department of Energy, the Treasury and the Development Bank of Southern Africa collaborated to set up the Independent Power Producer (IPP) office and designed the Renewable Energy Independent Power Producers Procurement Programme (REIPPPP). At the heart of the programme was the provision that Eskom enter Power Purchase Agreements (PPAs), ensuring that investors could forecast accurately their profits and bankability – which is enhanced by having payment risk mitigated by government guarantee.
- The Department of Energy has committed to 13,225MW of renewable energy generation by 2025. This will be secured under the Renewable Energy Independent Power Producer’s Procurement Programme (REIPPPP), which has been running since 2011 and has already completed 3 successful bidding rounds – The final sign off for Round 4 is being hindered by utility Eskom delaying the signing of Power Purchase Agreements.
- SA’s 2010 Integrated Resource Plan (2010-2030) calls for 17,800 MW of renewable energy to be in place by 2030. That equals more than one fifth of the country’s predicted demand. Commentators and industry experts expect the IRP 2016 to significantly increase the contribution allocated to wind power, leading to an ultimate industry of up to 40,000 MW installed.
- The overarching National Development Plan calls for a ‘greater mix of energy sources and a greater diversity of independent power producers (IPPs) in the energy industry’, acknowledging that energy market must look very different in years to come that how it appears now.
- In 2009 President Jacob Zuma committed South Africa to take mitigating action that would reduce emissions by 34% by 2020, and 42% by 2025 below “business as usual”, provided the international community supported the country with financial aid and the transfer of technology. In a short space of time these goals are being realised and we now have a flourishing and expanding renewable energy sector.
*This calculation is based on household consumption of 6,000 kilowatt-hours per year (supplied by the Council for Scientific and Industrial Research (CSIR)) and an average capacity factor of 35% from the wind turbines (this is a conservative figure, as many turbines are performing closer to 40% or more).
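For readers who want to reproduce the arithmetic behind this footnote, the short Python sketch below applies the stated 35% capacity factor and 6,000 kWh-per-household figure. The 1,471 MW and 3,365 MW inputs are simply the operational and procured wind-capacity figures quoted earlier in this fact sheet; everything else follows from the footnote's own assumptions.

```python
# Reproduce the footnote's rule of thumb: how many average households a given
# amount of installed wind capacity can supply over a year.

HOURS_PER_YEAR = 8760
CAPACITY_FACTOR = 0.35            # conservative figure quoted in the footnote
HOUSEHOLD_KWH_PER_YEAR = 6000     # CSIR figure quoted in the footnote

def households_supplied(installed_mw: float) -> float:
    annual_kwh = installed_mw * 1000 * HOURS_PER_YEAR * CAPACITY_FACTOR
    return annual_kwh / HOUSEHOLD_KWH_PER_YEAR

for mw in (1, 1471, 3365):        # 1 MW, operational wind, total wind procured
    print(f"{mw:>5} MW -> roughly {households_supplied(mw):,.0f} households")
```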
Toxins 2013, 5(12), 2434-2455; doi:10.3390/toxins5122434
Abstract: Blooms of toxic cyanobacteria are well-known phenomena in many regions of the world. Microcystin (MC), the most frequent cyanobacterial toxin, is produced by entirely different cyanobacteria, including unicellular, multicellular filamentous, heterocytic, and non-heterocytic bloom-forming species. Planktothrix is one of the most important MC-producing genera in temperate lakes. The reddish color of cyanobacterial blooms viewed in a gravel pit pond with the appearance of a dense 3 cm thick layer (biovolume: 28.4 mm3 L−1) was an unexpected observation in the shallow lake-dominated alluvial region of the Carpathian Basin. [d-Asp3, Mdha7]MC–RR was identified from the blooms sample by MALDI-TOF and NMR. Concentrations of [d-Asp3, Mdha7]MC–RR were measured by capillary electrophoresis to compare the microcystin content of the field samples and the isolated, laboratory-maintained P. rubescens strain. In analyzing the MC gene cluster of the isolated P. rubescens strain, a deletion in the spacer region between mcyE and mcyG and an insertion were located in the spacer region between mcyT and mcyD. The insertion elements were sequenced and partly identified. Although some invasive tropical cyanobacterial species have been given a great deal of attention in many recent studies, our results draw attention to the spread of the alpine organism P. rubescens as a MC-producing, bloom-forming species.
Blooms of photoautotrophic organisms, like algae and cyanobacteria, are well-known phenomena that have been found in many types of fresh and marine waters over the past few decades [1,2]. Near to the spectacular discoloration of the habitats, several unpleasant accompanying incidences were detected with health and economic consequences, such as human and animal poisonings, fish-kills, and decline in quality of drinking water . Many cyanobacterial and algal strains can produce several toxic metabolites with diverse chemistry and bioactivity which may cause these problems [4,5].
While the harmful algal blooms (HAB) are mainly dominated by eukaryotic algal species (Dinophyceae, Bacillariophyceae) in marine waters, cyanobacteria occur much more frequently in freshwaters and cause these phenomena [6,7].
Microcystin (MC) as the most frequent cyanobacterial toxin is produced by entirely different cyanobacteria, including unicellular, multicellular filamentous, heterocytic, and non-heterocytic bloom-forming species. MCs are synthesized via non-ribosomal peptide synthetases (NRPS) and polyketide synthases (PKS) assembled into large multifunctional proteins encoded by the mcy gene cluster . The general chemical structure of MC is cyclo (d-Ala1,X2,d-MeAsp3,Z4,Adda5, d-Glu6,Mdha7), where d-MeAsp is the non-proteinogenic amino acid d-erythro-iso-aspartic acid (methyl aspartate), Mdha is N-methyl-dehydroalanine and Adda is an amino acid with a C10-chain: (2S,3S,8S,9S)-3-amino-9-methoxy-2,6,8-trimethyl-10-phenyldeca-4,6-dienoic acid. X and Z represent variable L-amino acids in positions 2 and 4, respectively .
Recently, progress has been made in the elucidation of the genetic basis of MC synthesis for all three main MC producers occurring in freshwater, i.e., Anabaena, Microcystis and Planktothrix. Three gene clusters responsible for the biosynthesis of MCs, containing 9 or 10 genes (depending on the genus) and spanning 55 kb, have been sequenced. The corresponding genes of Microcystis aeruginosa K-139 and PCC 7806, Planktothrix agardhii CYA 126, and Anabaena sp. strain 90 have been completely sequenced [9,10,11].
Planktothrix is one of the most important MC-producing genera in temperate lakes . Of the MC-producing genotypes within this genus, the red-pigmented phycoerythrin (PE)-rich genotypes are assigned to Planktothrix rubescens, while the green-pigmented phycocyanin (PC)-rich genotypes are frequently assigned to Planktothrix agardhii . Generally, Planktothrix rubescens is found in deep, stratified and oligo- to mesotrophic waters in which metalimnetic layers can be built up. Planktothrix agardhii has a broader distribution and inhabit shallow, polymictic water bodies in the mesotrophic to hypertrophic nutrient range .
P. rubescens was reported in the following European subalpine lakes: Zurich (Switzerland), Garda (Italy), Mondsee (Austria), Nantua (France) and Bourget (France) [14,15,16,17,18]. Various chemical, physical, and biological parameters are known to contribute to the developmental and spatial distribution of cyanobacterial populations , but the determinism of cyanobacterial blooms and their impact at the lake scale are not clearly understood.
Planktothrix spp. differ in their cellular MC contents as well as the production of MC variants [12,19]. Different MC structural variants were characterized for Planktothrix strains isolated from lakes in the Alps: the methyl-dehydro-alanine residue (Mdha) genotype, which was found to synthesize structural variants containing only Mdha in position 7; the butyric acid (Dhb) genotype, which was found to contain Dhb instead of Mdha in the same position; and the homotyrosine (Hty) genotype, which was found to contain Hty and Leu in position 2 but never Arg. The Hty variant has always been found to co-occur with Dhb in position 7 of the molecule [20,21].
Numerous papers have already investigated the impact of various biotic and abiotic environmental factors on MC production by various cyanobacterial strains. These studies demonstrated that MC production can be influenced by temperature, light, nutrients such as nitrogen and phosphorus, pH, iron, xenobiotics, and predators [7,22]. Despite inconsistent results, the production of MCs by the cells seems to be linked to their growth rate, which is itself affected by environmental conditions. On the other hand, several studies on variations in the proportions of MC-producing cells demonstrated the potential influence of nutrient concentrations, light and temperature, suggesting that there is a negative correlation between the proportions of MC-producing cells and the abundance of cyanobacterial cells .
During the last decade, genetic methods have significantly contributed to our understanding of the distribution of genes that are involved in the production of MCs in cyanobacteria causing cyanobacterial HABs.
The occurrence of inactive mcy genotypes (i.e. genotypes possessing the mcy genes but lacking MC production) of Planktothrix spp. and Microcystis spp. in nature might be understood as support for the mcy gene loss hypothesis. Moreover, inactivation of the mcy gene cluster by transposable elements or point mutations might be seen as an intermediate step in reorganization of the mcy gene cluster towards cell types with modified MC synthesis [24,25].
In this study we report the presence of P. rubescens bloom in a wind-sheltered, stably stratified shallow lake. Based on the unusual finding, we claim that P. rubescens can occur and build toxic blooms in waters which functionally mimic the deep alpine lakes. The morphometric features of the pond and the relevant physical and chemical variables were studied in order to understand the appearance of this alpine cyanobacterial species in the shallow lake-dominated alluvial region of the Carpathian Basin. In addition to the morphological and molecular identification of the species, we intended to study the toxicity of the species and to analyze the toxin profile by MALDI–TOF and NMR analyses. The mcy gene cluster of the isolated strain of the unusual bloom causing P. rubescens was also investigated and compared to the sequenced mcy gene cluster of strain CYA126/8.
2.1. Physicochemical Parameters of the Study Site
Analyses of water samples revealed the high conductivity and alkaline character of the pond where the water bloom occurred (Figure 1). Physicochemical parameters in the pond during the algal bloom are summarized in Table 1. Due to the pond’s small size and leeward location, this type of standing water is stratified during the vegetation period, with a metalimnion depth of 3 m. Concentrations of nutrients (Table 1) indicate a meso-eutrophic character, and in this range nutrient limitation does not develop.
| Parameter | Value | Unit |
| --- | --- | --- |
| Lake volume | 1.6 × 10⁵ | m³ |
| Specific electrical conductivity | 820 | µS cm⁻¹ |
| Inorganic Nitrogen (IN) | 1953 | µg L⁻¹ |
| Soluble Reactive Phosphorus (SRP) | 3 | µg L⁻¹ |
| Total Nitrogen (TN) | 3125 | µg L⁻¹ |
| Total Phosphorus (TP) | 370 | µg L⁻¹ |
2.2. Morphology-Based Identification of the HAB Causing Organism
Prior to the molecular analyses, the collected bloom samples were investigated by light microscope (Figure 1). Trichomes were straight, solitary without sheath, and pale purple in color. Cells were cylindrical, not constricted at cross-walls, and mostly isodiametric with a diameter of 6–8 (8) µm. Cells after division were considerably shorter (3–4 µm). All the cells had numerous aerotopes and seemed densely granulated. Most of the filaments had widely rounded terminal cells, the wall of the distal end of these cells were not thickened. Occasionally, some filaments attenuated to the ends and had slightly conical terminal cells with thickened outer cell wall. These morphological features are identical with those characteristic of Planktothrix rubescens (DeCandolle ex Gomont) .
2.3. Molecular Phylogenetic Analyses
Sequence analysis of regions covering the almost complete 16S rRNA gene and the cpcBA-IGS of strain BGSD-500 resulted in 1387 and 527 nt, respectively. Based on the 16S rRNA, BGSD-500 showed high pairwise similarity values (99.9%–100%) to the sequence group containing the type strain P. rubescens NIVA-CYA 18 (=PCC 7821)T and was separated from the cluster harboring the type strain of P. agardhii, NIES 204T (Figure 2A). The analysis performed with cpcBA-IGS sequences showed similar results; BGSD-500 showed 100% pairwise similarity values to the cluster that contained mostly P. rubescens isolates (Figure 2B). Unfortunately, no type strain sequences are available currently in databases covering this region, only a shorter fragment with 217 nt from P. rubescens NIVA-CYA 18 (=PCC 7821)T (GenBank Acc. No. AJ558154), which was identical with sequences from the aforementioned cluster and showed ≤98.2% pairwise similarity values with the members of the other cpcBA-IGS cluster.
2.4. Identification of MC and Comparative Analysis of Bloom Sample and the Isolated P. rubescens Strain
During the purification procedure, the toxic fractions were detected by the mustard test (Figure 3).
The main toxic fractions after DEAE cellulose chromatography were combined and further purified by HPLC-DAD. The major toxin was identified as [d-Asp3, Mdha7]MC–RR (Figure 4) on the basis of the following studies.
The purified MC had an absorption maximum at 239 nm in methanol and exhibited a molecular ion at m/z 1024.6 [M + H]+ in MALDI-TOF. The amino acid constitution (Ala1, Arg2, Asp3, Arg4, Adda5, Glu6, Mdha7) was confirmed by MALDI post-source decay. Characteristic fragments were: m/z 754 ([Arg4-ADDA5-Glu6-DHB7-Ala1+H+] or [Arg4-ADDA5-Glu6-MDHA7-Ala1+H+]), 714 ([H-Arg2-Asp3-Arg4-ADDA5]+, lack of Me-Asp3), 216 ([Glu6-DHB7+H+] or [Glu6-MDHA7+H+]), and 155 ([MDHA7-Ala1+H+] or [DHB7-Ala1+H+]), among others.
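As a simple, hedged arithmetic check (not part of the original analysis), the observed ion sits roughly 14 Da below the [M + H]+ of MC-RR, which is what a single demethylation (loss of one CH2) would predict; the sketch below uses approximate mass values for illustration only.

```python
# Quick arithmetic check (illustration only): the observed [M + H]+ at
# m/z 1024.6 is ~14 Da below the [M + H]+ of MC-RR, consistent with a single
# demethylation (loss of CH2), as in [d-Asp3]- or [Dha7]-type MC-RR variants.
# Values are approximate figures used for illustration.

MC_RR_MH = 1038.6        # approximate [M + H]+ of MC-RR
CH2 = 14.0               # mass of one methylene unit
observed = 1024.6        # [M + H]+ reported for the purified toxin

expected_demethylated = MC_RR_MH - CH2
print(f"expected demethylated [M+H]+ ~ {expected_demethylated:.1f}")
print(f"difference from observed: {abs(expected_demethylated - observed):.1f} Da")
```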
The connectivity and configuration of N-methyldehydroalanine could be determined from the TOCSY and NOESY spectra. The Asp3 residue showed no methyl group at the C(β) position, but rather two H–C(β) resonances. This also allowed an assignment of the 1D 1H NMR spectrum. 2D HSQC spectra were also recorded. In our sample, an H–C correlation was identified in the HSQC spectrum between a carbon at 38.0 ppm and a 1H at 3.32 ppm, indicating the presence of the N-methyl group. The =CH2 group was also found, as a pair of 1H resonances at 5.56 ppm and 5.88 ppm attached to a 13C at 116.0 ppm.
Two anabaenopeptin (B, m/z: 837 and F, m/z: 851) congeners were also identified from the P. rubescens by MALDI-TOF post-source decay.
The lyophilized samples were tested by the mustard test and their toxicity was calculated. The IC50 value of the bloom sample was 0.97, while that of the BGSD-500 strain was 2.47 (Figure 5).
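For readers unfamiliar with how IC50 values of this kind are typically obtained, the following sketch fits a four-parameter logistic curve to hypothetical dose-response data with SciPy; it is a generic illustration, not the actual Sinapis test evaluation used in the study, and all numbers are invented.

```python
# Generic sketch of estimating an IC50 from dose-response data with a
# four-parameter logistic fit. The concentrations and responses below are
# hypothetical; the study's IC50 values came from the Sinapis mustard test.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0])          # extract dose (arbitrary units)
response = np.array([95.0, 82.0, 55.0, 25.0, 8.0])   # % growth relative to control

p0 = [0.0, 100.0, 1.0, 1.0]                          # rough initial guesses
params, _ = curve_fit(logistic4, dose, response, p0=p0, maxfev=10000)
print(f"estimated IC50 ~ {params[2]:.2f} (same units as dose)")
```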
To compare the MC content of the samples, the concentration of [d-Asp3, Mdha7]MC–RR was measured by capillary electrophoresis. The calculated MC content was 8.57 mg g−1 for the bloom sample and 1.85 mg g−1 for the isolated P. rubescens strain (Figure 6).
2.5. Analysis of the mcy Gene Cluster
Deletions were indicated by shorter-than-expected PCR amplicons at one site, where PCR amplification consistently failed to give amplicons with the corresponding primer pairs (position: 23,612–24,003 nt). This deletion was located in the spacer region between mcyE and mcyG and should therefore not disturb the translation process.
The amplification of the MC synthesis gene cluster yielded an unusually long PCR product (around 1.6 kb) when using primer pair mcy3 (position: 925–1399 nt); this insertion was located in the spacer region between mcyT and mcyD. Sequencing of this amplicon yielded 1509 nt and 1387 nt long sequences for the forward and reverse reads, respectively, which made it possible to assemble a 1606 nt long contig sequence of the region. A standard nucleotide BLAST search in the nucleotide collection of GenBank, conducted on 22 June 2013 for highly similar sequences ("megablast"), showed 99% and 98% sequence identity over 17% and 25% of the query length with the MC synthetase-associated thioesterase (mcyT) gene of Planktothrix rubescens and P. agardhii, respectively. When compared to the reference sequence of the Planktothrix agardhii MC synthesis gene cluster (GenBank accession nr. AJ441056), the query sequence showed 98% identity over 334 nt from the 960th to the 1293rd position; then, after a ca. 1.2 kb unalignable gap, another 98% identical stretch of 80 nt followed from the 1299th to the 1378th position of the same mcyT gene. The unalignable region was found to be an insertion of 1194 nt length into this gene (Figure 7).
When searching for highly similar sequences in BLAST ("megablast"), no significant similarity was found for this insertion. Therefore, we repeated the BLAST search for somewhat similar sequences ("blastn"). This second search found two somewhat similar sequences in GenBank: the first was a hypothetical protein of a Synechococcus sp. (strain PCC 7002; GenBank accession nr. CP000951), which showed 77% identity over 87% of the length of the insertion region; the second showed 74% similarity over 75% of the length in two parts, the first part being similar to a signal transduction histidine kinase and the second to the tRNA(Ile)-lysidine synthetase of a Synechococcus sp. (strain PCC 6312; GenBank accession nr. CP003558). No further similarity was found for the inserted element.
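For illustration, a remote BLAST search of this kind could be scripted with Biopython as sketched below; the query string is a placeholder rather than the real 1606 nt contig, and the original searches were run through the GenBank web interface (setting megablast=False would correspond to the more permissive "blastn" search).

```python
# Minimal sketch of submitting a megablast search over NCBI's nucleotide
# collection ("nt") with Biopython, similar in spirit to the searches
# described above. The query sequence is a placeholder only.
from Bio.Blast import NCBIWWW, NCBIXML

query = "ATGGCTAGCTAGCTACGATCGATCGTACGATCGATCGATCGATCGTAGCTAGC"  # placeholder

result_handle = NCBIWWW.qblast("blastn", "nt", query, megablast=True)
record = NCBIXML.read(result_handle)

for alignment in record.alignments[:3]:          # top three hits
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  identity={identity:.0f}%  length={hsp.align_length}")
```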
When we compared the sequence of the insert to the whole genome of Synechococcus sp. (strain PCC 7002; GenBank accession nr. CP000951) using the LAGAN algorithm in the web-based version of mVISTA , it identified a similar part between positions 865,594 and 867,245 of the reference genome, which is potentially homologous to the insert. This region contains 65 nt at the 3'-end of the icd gene for the product isocitrate dehydrogenase, NADP-dependent; two hypothetical proteins (corresponding to locus tags SYNPCC7002_A0839 and SYNPCC7002_A0840) in the whole length; and 44 nt at the 5'-end of the petD gene for the product cytb6/f complex subunit IV.
3. Discussion

The reddish color of the cyanobacterial bloom in the Kocka pond (Figure 1), which appeared as a dense, 3 cm thick surface layer (biovolume: 28.4 mm3/L), was an unexpected observation in our region. The identification of Planktothrix rubescens as the dominant bloom-forming species was surprising, because this species had not previously been recorded in our region.
This species is characteristic of deep lakes located in Central and Northern Europe, including the lakes Zurich, Garda, Mondsee, Geneva, Nantua, Steinsfjorden and Bourget [16,24,33,34,35]. Occasionally the "Burgundy-blood phenomenon" may also occur.
The appearance of a P. rubescens surface mass in November is a common phenomenon, because during the mixing period P. rubescens is spread throughout the entire water column but is usually more concentrated in the upper part of the euphotic zone. During summer stratification, the metalimnetic position is maintained by relatively slow buoyancy regulation [32,36]. Buoyancy is provided by the production of gas vesicles, which is higher when photosynthetic activity is low. Vertical migration can be stimulated by both light intensity and nutrient distribution.
In our case, P. rubescens populations thrived under high nitrogen concentrations (Table 1). This interpretation is supported by the observation that P. rubescens mass occurrences primarily arise in lakes where Zeu reaches the more stable metalimnetic zone. These lakes are frequently characterized by low phosphate and high nitrogen loads, as observed in many oligo- and mesotrophic pre-alpine lakes. Most of Hungary's territory belongs to alluvial plains where the characteristic lake types are small, shallow sandhill lakes and oxbows. Natural deep lakes cannot be found in the region, although the deeper oxbows can be stably stratified in the growing season [26,38], in which case characteristic vertical distributions of phytoplankton can frequently develop. Nevertheless, gravel and sand mining on the alluvial fans created several pit lakes with maximum depths of 10–40 m. These lakes are stratified, usually mesotrophic, and can be characterized by small vertical light attenuation coefficients. These characteristics favor the development of deep chlorophyll maxima (DCM) in the metalimnia by species capable of effective buoyancy regulation and chromatic adaptation. In the alpine region, DCM is established primarily by the cyanobacterium Planktothrix rubescens, but the occurrence of this taxon in a mesotrophic pit pond is unique in this region.
While stratification in deep lakes is a well-known and well-studied limnological phenomenon [42,43,44], there is currently debate about the development and stability of stratification in shallow lakes [38,45,46]. This is partly due to the lack of a clear definition of shallow and deep lakes. Scheffer, from a practical point of view, proposed the term shallow lakes for lakes with a depth of less than 3 m. Padisák and Reynolds presented a functional approach to delineate a clear difference between shallow and deep lakes. They emphasized that absolute depth alone is not a sound criterion for defining shallow or deep lakes; moreover, stratification is also not decisive, because various stratification patterns can develop in shallow lakes both in space and time. Shallow lakes are generally considered polymictic [47,48,49] because wind-induced mixing continuously sets back the thermal stratification of the shallow water column [50,51]. However, recent studies suggest that even in shallow lakes periods with stratification can occasionally be observed [26,52]. Intensive study of shallow lakes' stratification started only in the last few decades [38,45]. Pithart and Pechar found weak stratification in floodplain pools of the Lužnice River. Fonseca and Bicudo described persistent stratification lasting a few weeks in a tropical shallow reservoir. Folkard et al. demonstrated that under sheltered conditions even a small shallow pond can be stratified.
The most successful bloom-forming organisms in a shallow-lake-dominated area such as ours are filamentous cyanobacteria: the shade-tolerant species Planktothrix agardhii and the invasive Cylindrospermopsis raciborskii, as well as the colonial genus Microcystis [2,46]. These cyanobacterial genera usually build water blooms in the summer period, when the water temperature reaches 20–25 °C.
Although Planktothrix rubescens is a cold-water stenotherm species, it is widely distributed in central European [1,36] and southern sub-alpine lakes. Considering our report, it cannot be excluded that the organism could appear in any waters that functionally mimic the alpine deep lakes.
Sequence analysis of the 16S rRNA gene and the phycocyanin operon has revealed that the closest relatives of the bloom-forming strain BGSD-500 are P. rubescens and P. agardhii isolates. These two species could not be separated based on the 16S rRNA gene or cell morphology, but could be distinguished based on phycobilin pigment composition [29,57,58]. The main diagnostic feature for differentiation is the high phycoerythrin content that gives a reddish purple or reddish brown color to P. rubescens contrary to the blue-green or yellow-green color of P. agardhii or P. suspensa trichomes [13,29,59]. On the basis of the results of microscopic and nucleotide sequence analyses, strain BGSD-500 was identified as P. rubescens.
The results obtained in the Kocka pond thus confirm that the Planktothrix bloom sample contained comparably high amounts of MC [12,60]. The MALDI-TOF and CE analyses demonstrated that the P. rubescens bloom sample and the isolated strain (BGSD-500) primarily contain one main congener, a demethylated variant of MC-RR. This is consistent with other water blooms of Planktothrix rubescens reported in the literature [12,24]. After the purification procedure, the pure major component of the microcystin exhibited a molecular ion at m/z 1024.6 [M + H]+ by MALDI-TOF. This is consistent with three previously isolated microcystins: [d-Asp3]MC–RR (also reported as [d-Asp3,Mdha7]MC–RR) [61,62,63], [Dha7]MC–RR [62,63] and [d-Asp3,(E)-Dhb7]MC–RR [64,65,66]. The isolation of a sufficient amount of the pure compound enabled extensive NMR spectroscopic analyses to differentiate between these derivatives. Analysis of 1D and 2D HSQC NMR spectra provided evidence that the molecule is identical to [d-Asp3,Mdha7]MC–RR described by Meriluoto et al., isolated from Oscillatoria (Planktothrix) agardhii.
This observation concurs with previous studies, describing demethylated variants of MC-RR to be the predominant MC congeners of P. rubescens [12,20,63,64], accompanied by a varying number of characterized MC variants, such as [Asp3]-MC-LR, [Asp3]-MC-HtyR and [Asp3]-MC-YR, and as yet uncharacterized congeners [12,20].
Freshwater cyanobacterial toxic blooms are a common problem in many Hungarian ponds and lakes. This phenomenon, which has been widely reported in the literature during the last decades, involves many species, including Microcystis aeruginosa, Cylindrospermopsis raciborskii, Chrysosporum (Aphanizomenon) ovalisporum and Planktothrix agardhii [67,68,69,70,71,72]. Thus far, the presence of MCs in Hungarian shallow lakes has been attributed primarily to Microcystis aeruginosa and occasionally to Planktothrix agardhii. Although almost every M. aeruginosa population investigated so far has been an MC producer and contained variable amounts of this heptapeptide, P. agardhii-dominated blooms have lower MC concentrations due to the patchy distribution of mcy genes in P. agardhii populations. The present report is the first unambiguous evidence of MC production by P. rubescens in our region and reveals that MC production might be more widespread within the cyanobacterial taxa found in Hungarian freshwaters than was previously assumed.
During the purification procedure, the MC was detected by a Sinapis plant test developed by us for the detection of cyanobacterial toxins. The detected MC congener was less toxic than the previously investigated MC-LR and MC-YR. The impact of structural differences on acute toxicity has also been observed with the mouse bioassay and the T. platyurus bioassay, and these results reflected organism-specific sensitivities. The lack of congruence between protein phosphatase (PP) inhibition and the toxicity of different MC congeners indicates that other mechanisms, such as uptake, transport, detoxification and other target sites, may have a strong modulating effect on the overall toxicity for an animal and can offset or even reverse the specific PP inhibitory activity.
The isolated P. rubescens culture was shown to contain MC corresponding to 1.85 mg MC-LR equiv. g−1 dry weight by CE analysis. Comparing the concentration of the MC congener in the bloom sample and in the isolated strain, it can be clearly seen that the bloom sample contained roughly five times more MC than the isolated strain. This difference may be due to specific environmental conditions or to the fact that natural populations are mixtures of different strains with different toxic potentials.
Sequence analysis of the mcy region by Christiansen et al. revealed a 55 kb cluster of nine genes presumably involved in MC biosynthesis in P. agardhii CYA 126/8. It showed both remarkable similarity to, and differences from, the completely sequenced mcy gene cluster of M. aeruginosa. Eight of these genes (mcyA, -B, -C, -D, -E, -G, -H, and -J) showed significant similarity to the mcy genes from M. aeruginosa encoding peptide synthetases, polyketide synthases and modifying enzymes. One of the main differences between the mcy gene clusters of the two genera is the general arrangement and transcriptional orientation of the mcy genes, which could be explained by the deletion or rearrangement of several genes. This is supported by the fact that mcyF and mcyI are lacking in the Planktothrix cluster, while mcyT is missing in the Microcystis cluster. With the help of PCR products of 28 primer pairs covering the whole mcy gene cluster of Planktothrix, we confirmed the presence of the mcy gene cluster in our isolate. There were no striking size differences in the PCR products on agarose gels compared to the corresponding PCR products obtained from strain CYA126/8, except at two positions.
A deletion was indicated by shorter-than-expected PCR amplicons at one site, since PCR amplification consistently failed to give amplicons with the primer pairs corresponding to positions 23,612 to 24,003 nt within the mcy cluster of P. agardhii CYA 126/8. This deletion was located in the spacer region between mcyE and mcyG; therefore, it should not disturb the translation process.
Insertions were detected at one site by significantly larger-than-expected PCR products, using the primer pair amplifying the region between positions 925 and 1399 nt, binding to the spacer region between mcyT and mcyD.
Several Planktothrix strains that were inactive in MC synthesis have been investigated, and a few were found to contain mutations within the mcy gene cluster in the form of deletion(s) and insertion(s) [24,25]. However, some of the investigated strains without detectable MC did not reveal insertions or deletions and consequently may have acquired point mutations within the mcy gene cluster, as suggested by the authors [74,75]. That study was the first to show that mutations occur frequently within the mcy gene cluster and that a large proportion of them are caused by the insertion of an IS element at different sites [74,75].
Although we detected an insertion element in our isolate, its position is in an intergenic region and is not likely to disturb the translation process. The detected IS element was close to the mcyT region, which is unique to the mcy cluster of the genus Planktothrix. In Planktothrix, the mcyT gene is located at the 5'-end of the mcy gene cluster but has not been found in the mcy gene clusters of other MC-producing cyanobacteria [10,76]. To demonstrate the role of mcyT in MC synthesis, the mcyT gene was inactivated by experimental mutagenesis in P. agardhii strain CYA126/8. The insertional inactivation of mcyT resulted in a reduction of MC synthesis by 94% ± 2% (1 SD) compared with the wild type. In contrast, the proportion of MC variants, cellular growth rates and the transcriptional rates of other mcy genes were not altered. According to the data of Mbedi et al., mcyT and the mcyT–mcyD spacer region are inadequate regions for the detection of MC-producing Planktothrix in field samples, since they also occur in non-producers.
Recombination has been recognized as a general feature in the formation of mcy gene clusters, giving rise to new structural variants of MC, and it can also modify the net MC amount [74,75]. For the identification of the IS element we searched for similar sequences in GenBank. The first hit was a hypothetical protein of a Synechococcus sp. The second showed 74% similarity over 75% of the length in two parts: the first part was similar to a signal transduction histidine kinase, while the second was similar to the tRNA(Ile)-lysidine synthetase of a Synechococcus sp.
Although our element probably has no influence on MC synthesis, the functions of the products of the partly similar sequences make this possibility worth discussing. Lysidine is an essential modification that determines both the codon and amino acid specificities of tRNA(Ile), and signal transduction histidine kinases play a role in signal transduction across the cellular membrane. Both functions could be associated with the regulation of metabolite production.
4. Experimental Section
4.1. Site Description and Sampling
The Kocka pond (Figure 1) is a small, shallow, well-sheltered gravel pit pond with a maximum depth of 7 m, situated in the northeastern part of Hungary (48°08′38.72″; 20°48′06.63″) at 111 m a.s.l. The nutrient concentrations (Table 1) indicate a meso-eutrophic character, and in this range nutrient limitation does not develop. The pond is used for angling with regular fish stocking. A strong red-colored water bloom was observed, and field samples were collected on 19 November 2006.
Samples were taken from the water surface at the center of the pond, where the filaments had aggregated into a mass covering the water surface in a 1–2 cm thick layer. Five liters of net sample were collected for the toxin analyses and 0.1 L for the isolation of the bloom-causing species. The isolated strain was identified as Planktothrix rubescens (see below), coded BGSD-500 and cultivated in BG11 medium at 22 °C under continuous irradiation (50 µmol photons m−2 s−1). The strain was harvested by centrifugation at 13,000 rpm for 10 min, and the pellet was lyophilized.
4.2. Physical and Chemical Variables
Water temperature was measured in the field using a mercury bulb thermometer. The other variables were determined in the laboratory. Samples were kept at 4 °C in darkness until the start of the measurements. A pH meter with a glass electrode (WTW pH 539) was used to measure pH according to the MSZ 1484-22:2009 Hungarian Standard. The specific electrical conductivity was determined according to the MSZ EN 27888:1998 Hungarian Standard using a WTW LF539 conductivity meter. Both variables were temperature corrected (20 °C). Dissolved oxygen sampling, preservation and analysis were carried out on the basis of the Hungarian Standard MSZ ISO 5813:1992; this standard is equivalent in technical content and fully corresponds in presentation to the International Standard ISO 5813:1983 and to the European Standard EN 25813:1992. For measuring chemical oxygen demand (COD), potassium permanganate was used as the oxidizing agent (MSZ 448-20:1990 Hungarian Standard). Ammonium concentration was determined by a manual spectrophotometric method based on the MSZ ISO 7150-1:1992 Hungarian Standard, which is equivalent in technical content and fully corresponds in presentation to the International Standard ISO 7150-1:1984. Nitrate concentration was determined colorimetrically with salicylic acid (MSZ 1484-13:2009). Colorimetry was also used to determine nitrite concentration, applying sulfanilic acid and aminonaphthalene reagents (MSZ 1484-13:2009). Inorganic nitrogen was calculated as the sum of these three forms. Total organic carbon (TOC) measurements were performed with an Elementar High TOC analyzer according to the combustion-infrared method described in the MSZ EN 1484:1998 standard. Dissolved organic carbon (DOC) measurements were made on filtered water samples (0.45 µm pore diameter). Inorganic phosphorus concentrations were measured by the acid molybdate method (MSZ 448-20:1990 Hungarian Standard).
4.3. Identification of HAB Species
The HAB species (Figure 1) was identified based on its morphological characteristics. Phytoplankton samples (50 mL) were preserved with acidic Lugol's solution, and filaments were counted using a particle counter (HIAC/ROYCO 9064) calibrated by manual counting (of at least 400 cells) under an inverted microscope (LEICA DMIL research microscope equipped with DIC and phase contrast). Identification based on external characteristics was also performed on both preserved and unpreserved samples.
4.4. Molecular Phylogenetic Analyses
Total genomic DNA was isolated from lyophilized cells according to the liquid nitrogen cell disruption protocol described by Somogyi et al. PCR amplification of the almost full-length 16S rRNA gene was conducted as given in Lamprinou et al., while a region within the phycocyanin operon (cpcBA-IGS) was amplified as described in Felföldi et al. Sequencing reactions and capillary electrophoresis were performed by Biomi Ltd. (Gödöllő, Hungary). Manual correction of automatic base calling on the chromatograms and removal of primer sequences were conducted with the Chromas software v1.45 (Technelysium, Brisbane, QLD, Australia). Sequence alignment (containing sequences obtained from the GenBank database) was performed with SINA in the case of the 16S rRNA gene and with the built-in Clustal W module of the MEGA5 software in the case of the cpcBA-IGS sequences. Phylogenetic analyses (including the search for the best-fit models) were performed with MEGA5. The obtained sequences are available under accession numbers KC510416 (16S rRNA gene) and KC510417 (cpcBA-IGS).
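As a rough illustration of distance-based tree building (the study itself used MEGA5 with model selection, which this sketch does not reproduce), a neighbor-joining tree could be derived from an aligned FASTA file with Biopython as follows; the input file name is hypothetical.

```python
# Illustrative sketch: neighbor-joining tree from an existing alignment with
# Biopython. The FASTA file name is hypothetical, and the simple identity
# (p-distance) model stands in for the model selection done in MEGA5.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("planktothrix_16S_aligned.fasta", "fasta")  # hypothetical file
calculator = DistanceCalculator("identity")          # simple p-distance model
constructor = DistanceTreeConstructor(calculator, method="nj")
tree = constructor.build_tree(alignment)
Phylo.draw_ascii(tree)                               # quick text rendering
```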
4.5. Purification of MC from Field Samples
Filaments of P. rubescens harvested by centrifugation at laboratory temperature (10,000× g, 10 min, Beckman Avanti J-25) were used for MC isolation. The frozen cell pellet (25–30 g) was thawed and frozen again, and this procedure was repeated twice. The final thawed cell suspension was lyophilized and extracted overnight with 80% methanol (1:3 mass:volume) at 4 °C with continuous stirring. After centrifugation, the pellet was washed twice with 100 mL of 80% methanol. The combined supernatants were concentrated in a rotary evaporator at 40 °C (Büchi Rotavapor-R).
The residue was dissolved in 5 mM Tris-HCl, pH 7.5, and after centrifugation the solution was loaded onto a DEAE column (3 × 15 cm, DE-52, Whatman) equilibrated with 5 mM Tris-HCl, pH 7.5. The column was washed with this buffer and eluted with a gradient of 0 to 0.2 M NaCl in 5 mM Tris-HCl buffer, pH 7.5. The absorbance of the fractions (5–6 mL) was measured at 239 nm in a Shimadzu 1601A spectrophotometer. The plant growth inhibitory effect of the fractions was measured with the Sinapis test on microtiter plates: aliquots (20 µL) of the fractions were evaporated to dryness in the wells of microtiter plates, and 100 µL of plant growth medium containing 1% agar was pipetted into those wells.
Based on the mustard growth inhibition, the cyanotoxin-containing fractions were combined, lyophilized, dissolved in methanol and loaded onto a semipreparative C-18 HPLC column (Supelcosil™ SPLC-18, 25 cm × 10 mm, 5 µm); the separation of the compounds was followed at 239 nm via their characteristic absorption spectra, using the gradient according to Chorus and Bartam.
The distinctive peaks (with characteristic UV spectra) of the chromatogram were tested with the Sinapis test and collected for further analysis. The bloom sample and the isolated P. rubescens strain were also tested with the Sinapis test.
4.6. Identification of MC Congener
4.6.1. MALDI-TOF MS Analysis
Although HPLC-MS analysis is the most common and sensitive method for microcystin measurements, MALDI-TOF analysis is also a well-established method for the identification of MC congeners.
The purified MC was examined in positive-ion mode using a Bruker Biflex MALDI-TOF mass spectrometer equipped with delayed-ion extraction. A 337 nm nitrogen laser was used for desorption/ionization of the sample molecules. Spectra from multiple (at least 100) laser shots were summed using 19 kV accelerating and 20 kV reflectron voltage. External calibration was applied using the [M + Na]+ peaks of malto-oligosaccharides dp 3–7, with m/z values of 527.15, 689.21, 851.26, 1013.31 and 1175.36, respectively. The measurement was performed in 2,5-dihydroxybenzoic acid (DHB) matrix, by mixing 0.5 µL of matrix solution with 0.5 µL of sample on the sample target and allowing it to dry at room temperature. The DHB matrix solution was prepared by dissolving DHB (10 mg) in a mixture (0.5 mL) of ethanol and water (1:1, V:V). The compounds were identified on the basis of the mass of the [M + H]+ peak. After determination of the mass values, post-source decay (PSD) measurements were performed directly from the same sample on the target, and MC and other peptides were identified by PSD fragment structure analysis.
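A quick way to sanity-check such an external calibration is to confirm that the listed malto-oligosaccharide [M + Na]+ peaks are spaced by one anhydroglucose unit (about 162.05 Da); the small sketch below just performs that arithmetic on the values quoted above.

```python
# Small check (illustration only): the malto-oligosaccharide [M + Na]+
# calibrant peaks listed above should be evenly spaced by one anhydroglucose
# unit (~162.05 Da), a quick sanity test for the external calibration.
calibrants = [527.15, 689.21, 851.26, 1013.31, 1175.36]   # dp 3-7, [M + Na]+
spacings = [round(b - a, 2) for a, b in zip(calibrants, calibrants[1:])]
print("spacings between consecutive calibrants:", spacings)
# expected: each spacing close to 162.05 (C6H10O5, anhydroglucose)
```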
4.6.2. NMR Analysis

NMR spectra were acquired on a Bruker DRX-500 spectrometer operating at 500.13/125.79 MHz for 1H and 13C, respectively. Standard one-dimensional 1H NMR spectra were obtained in D2O at 298 K. The residual HDO signal was saturated. A two-dimensional 1H–13C HSQC experiment yielded a 1H/13C assignment identical to that published by Meriluoto et al. (data not shown).
4.7. Capillary Electrophoresis of Field Samples and Isolated P. rubescens Laboratory Strain Samples
Microcystin variants in the whole extracts of the samples were analyzed by a micellar electrokinetic chromatography method developed in our laboratory [40,41] (separation conditions: capillary, 64.5 cm, 50 µm i.d.; buffer electrolyte, 25 mM borate and 75 mM SDS, pH 9.3; applied voltage, +25 kV; detection, UV absorption at 238 nm).
4.8. Genetic Analysis of the mcy Gene Cluster
DNA extraction from strains and field samples was performed by a standard phenol-chloroform procedure. PCR amplifications were performed in 20 µL reaction mixtures as published by Kurmayer et al. and Christiansen et al. In order to screen the complete Planktothrix mcy gene cluster, 28 primer pairs covering the whole cluster were used to amplify fragments of 2 kb without interruption. DNA mutations were detected via differences in PCR product sizes on agarose gels compared to the corresponding PCR products obtained from strain CYA126/8, whose mcy gene cluster has been sequenced. The PCR thermal cycling protocol included an initial denaturation step at 94 °C for 3 min, followed by 35 cycles of denaturation at 94 °C for 30 s, annealing at 60 °C for 30 s and elongation at 72 °C for 2 min. The primer pairs of Christiansen et al. [10,25,86], amplifying fragments of ca. 500 bp, were used to detect size differences in the cluster. PCR products with possible inserted or deleted elements were sequenced directly from the same PCR products (sequencing followed the procedure described for the phylogenetic analyses).
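The thermal cycling protocol can be written down as a small data structure, which also makes it easy to estimate the programmed hold time; the sketch below does exactly that for the protocol described above (ramp rates and any final hold are not included, and the representation is ours, not the instrument's programming format).

```python
# Sketch: the thermal cycling protocol described above as a small data
# structure, with a rough estimate of total hold time (ramp times and any
# final extension/hold steps are not included).
initial_denaturation = 3 * 60                    # 94 °C for 3 min, in seconds
cycle = [("denaturation", 94, 30),               # (step, °C, seconds)
         ("annealing",    60, 30),
         ("elongation",   72, 120)]
n_cycles = 35

per_cycle = sum(sec for _, _, sec in cycle)
total_s = initial_denaturation + n_cycles * per_cycle
print(f"total programmed hold time ~ {total_s / 60:.0f} min "
      f"({n_cycles} cycles, {per_cycle} s of holds per cycle)")
```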
5. Conclusions

In this multidisciplinary study we reported the presence of a P. rubescens bloom in a wind-sheltered, stably stratified shallow lake with low phosphate and high nitrogen loads, where the Secchi transparency was 1.2 m. The reddish color of the cyanobacterial bloom was an unexpected observation in our region, and the causative organism was identified by classic morphological markers as well as by the 16S rRNA gene and the phycocyanin operon (cpcBA-IGS) as molecular markers.
The results obtained in the Kocka pond thus confirm that the Planktothrix bloom sample contained comparably high amounts of MC. The MALDI-TOF and the CE-analyses demonstrated that the P. rubescens bloom sample and the isolated strain (BGSD-500) primarily contain one main MC congener, a demethylated variant of MC-RR. Analysis of MALDI-TOF spectra and 2D HSQC NMR spectra provided evidence that the molecule is identical to [d-Asp3,Mdha7]MC–RR.
Comparing the concentration of the MC congener in the bloom sample and in the isolated strain, it can be clearly seen that the bloom sample contained about five times more MC than the isolated strain. This difference may be due to specific environmental conditions, but it is also important to note that a deletion was detected in the spacer region between mcyE and mcyG and an insertion in the spacer region between mcyT and mcyD. Although our inserted element probably has no influence on MC synthesis, the functions of the products of the partly similar sequences make this possibility worth discussing.
Although some invasive tropical cyanobacterial species have recently come to the fore in many studies, in our paper, we draw the attention to the possible spread of an alpine harmful organism, P. rubescens.
Acknowledgments

This work has been supported by Hungarian National Research Foundation Grants OTKA K81370, F046493 and K105459, GVOP-3.2.1.-2004-04-0110/3.0 and GVOP-TST 3.3.1-05/1-2005-05-0004/3.0. The work/publication is supported by the TÁMOP-4.2.2/B-10/1-2010-0024 project. Tamás Felföldi and István Bácsi were supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. The work of Gábor Sramkó and Gyula Batta (personal funds) was supported by grant no. TÁMOP 4.2.4.A/2-11-1-2012-0001 in the frame of the "National Excellence Program" of Hungary, co-funded by the European Social Fund.
Conflicts of Interest
The authors declare no conflict of interest.
- Reynolds, C.S.; Walsby, A.E. Water blooms. Biol. Rev. 1975, 50, 437–481. [Google Scholar] [CrossRef]
- Paerl, H.W. Combating the global proliferation of harmful cyanobacterial blooms by integrating conceptual and technological advances in an accessible water management toolbox. Environ. Microbiol. Rep. 2013, 5, 12–14. [Google Scholar]
- Paerl, H.W.; Huisman, J. Climate change: A catalyst for global expansion of harmful cyanobacterial blooms. Environ. Microbiol. Rep. 2009, 1, 27–37. [Google Scholar] [CrossRef]
- Carmichael, W.W. Freshwater cyanobacteria (blue-green algae) toxins. In Natural Toxins; Owby, C.L., Odell, G.V., Eds.; Pergamon Press: Oxford, UK, 1989; pp. 47–82. [Google Scholar]
- Carmichael, W.W. The toxins of cyanobacteria. Sci. Am. 1994, 270, 78–86. [Google Scholar] [CrossRef]
- Paerl, H.W.; Huisman, J. Blooms like it hot. Science 2008, 320, 57–58. [Google Scholar] [CrossRef]
- Chorus, I.; Bartam, J. Toxic Cyanobacteria in Water—A Guide to Their Public Health Consequences, Monitoring and Management; E & FN Spon: London, UK, 1999; pp. 41–111. [Google Scholar]
- Tillett, D.; Dittmann, E.; Erhard, M.; Döhren, H.; Borner, T.; Neilan, B.A. Structural organization of microcystin biosynthesis in Microcystis aeruginosa PCC7806: An integrated peptide-polyketide synthetase system. Chem. Biol. 2000, 7, 753–764. [Google Scholar] [CrossRef]
- Nishizawa, T.; Asayama, M.; Fujii, K.; Harada, K.I.; Shirai, M. Genetic analysis of the peptide synthetase genes for a cyclic heptapeptide microcystin in Microcystis spp. J. Biochem. 1999, 126, 520–529. [Google Scholar] [CrossRef]
- Christiansen, G.; Fastner, J.; Erhard, M.; Börner, T.; Dittmann, E. Microcystin biosynthesis in Planktothrix: Genes, evolution, and manipulation. J. Bacteriol. 2003, 185, 564–572. [Google Scholar] [CrossRef]
- Rouhiainen, L.; Vakkilainen, T.; Siemer, B.L.; Buikema, W.; Haselkorn, R.; Sivonen, K. Genes coding for hepatotoxic heptapeptides (microcystins) in the cyanobacterium Anabaena strain 90. Appl. Environ. Microb. 2004, 70, 686–692. [Google Scholar] [CrossRef]
- Fastner, J.; Erhard, M.; Carmichael, W.W.; Sun, F.; Rinehart, K.L.; Rönicke, H.; Chorus, I. Characterization and diversity of microcystins in natural blooms and strains of the genera Microcystis and Planktothrix from German freshwaters. Arch. Hydrobiol. 1999, 145, 147–163. [Google Scholar]
- Komárek, J.; Komárková, J. Taxonomic review of the cyanoprokaryotic genera Planktothrix and Planktothricoides. Czech. Phycol. Olomouc. 2004, 4, 1–18. [Google Scholar]
- Jann-Para, G.; Schwob, I.; Feuillade, M. Occurrence of toxic Planktothrix rubescens blooms in lake Nantua, France. Toxicon 2004, 43, 279–285. [Google Scholar] [CrossRef]
- Barco, M.; Flores, C.; Rivera, J.; Caixach, J. Determination of microcystin variants and related peptides present in a water bloom of Planktothrix (Oscillatoria) rubescens in a Spanish drinking water reservoir by LC/ESI-MS. Toxicon 2004, 44, 881–886. [Google Scholar] [CrossRef]
- Jacquet, S.; Briand, J.-F.; Leboulanger, C.; Avois-Jacquet, C.; Oberhaus, L.; Tassin, B.; Vincon-Leite, B.; Paolini, G.; Druart, J.-C.; Anneville, O.; et al. The proliferation of the toxic cyanobacterium Planktothrix rubescens following restoration of the largest natural French lake (Lac du Bourget). Harmful Algae 2005, 4, 651–672. [Google Scholar] [CrossRef]
- Legnani, E.; Copetti, D.; Oggioni, A.; Tartari, G.; Palumbo, M.-T.; Morabito, G. Planktothrix rubescens’ seasonal dynamics and vertical distribution in Lake Pusiano North Italy. J. Limnol. 2005, 64, 61–73. [Google Scholar]
- Ernst, B.; Hoeger, S.J.; O’Brien, E.; Dietrich, D.R. Abundance and toxicity of Planktothrix rubescens in the pre-Alpine Lake Ammersee, Germany. Harmful Algae 2009, 8, 329–342. [Google Scholar] [CrossRef]
- Paulino, S.; Valério, E.; Faria, N.; Fastner, J.; Welker, M.; Tenreiro, R.; Pereira, P. Detection of Planktothrix rubescens (Cyanobacteria) associated with microcystin production in a freshwater reservoir. Hydrobiologia 2009, 621, 207–211. [Google Scholar] [CrossRef]
- Kurmayer, R.; Christiansen, G.; Gumpenberger, M.; Fastner, J. Genetic identification of microcystin ecotypes in toxic cyanobacteria of the genus Planktothrix. Microbiology 2005, 151, 1525–1533. [Google Scholar] [CrossRef]
- Kurmayer, R.; Gumpenberger, M. Diversity of microcystin genotypes among populations of the filamentous cyanobacteria Planktothrix rubescens and Planktothrix agardhii. Mol. Ecol. 2006, 15, 3849–3861. [Google Scholar] [CrossRef]
- Bácsi, I.; Vasas, G.; Surányi, G.; M-Hamvas, M.; Máthé, C.; Tóth, E.; Grigorszky, I.; Gáspár, A.; Tóth, S.; Borbely, G. Alteration of cylindrospermopsin production in sulfate- or phosphate-starved cyanobacterium Aphanizomenon ovalisporum. FEMS Microbiol. Lett. 2006, 259, 303–310. [Google Scholar] [CrossRef]
- Neilan, B.A.; Pearson, L.A.; Muenchhoff, J.; Moffitt, M.C.; Dittmann, E. Environmental conditions that influence toxin biosynthesis in cyanobacteria. Environ. Microbiol. 2013, 15, 1239–1253. [Google Scholar] [CrossRef]
- Kurmayer, R.; Christiansen, G.; Fastner, J.; Börner, T. Abundance of active and inactive microcystin genotypes in populations of the toxic cyanobacterium Planktothrix spp. Environ. Microbiol. 2004, 6, 831–841. [Google Scholar]
- Christiansen, G.; Kurmayer, R.; Liu, Q.; Börner, T. Transposons inactivate the biosynthesis of the nonribosomal peptide microcystin in naturally occurring Planktothrix spp. Appl. Environ. Microbiol. 2006, 72, 117–123. [Google Scholar] [CrossRef]
- Borics, G.; Abonyi, A.; Krasznai, E.; Várbíró, G.; Grigorszky, I.; Szabó, S.; Deák, C.; Tóthmérész, B. Small-scale patchiness of the phytoplankton in a lentic oxbow. J. Plankton Res. 2011, 33, 973–981. [Google Scholar] [CrossRef]
- Reynolds, C.S. Ecology of Phytoplankton; Cambridge University Press: Cambridge, UK, 2006; pp. 42–74. [Google Scholar]
- Komárek, J.; Anagnostidis, K. Cyanoprokaryota, part 2. Oscillatoriales. In Süsswasser Flora von Mitteleuropa Band 19/2; Büdel, B., Gärtner, G., Krienitz, L., Schagerl, M., Eds.; Gustav Fischer: Jena, Germany, 2005; p. 759. [Google Scholar]
- Suda, S.; Watanabe, M.M.; Otsuka, S.; Mahakahant, A.; Yongmanitchai, W.; Nopartnaraporn, N.; Liu, Y.; Day, J.G. Taxonomic revision of water-bloom-forming species of oscillatorioid cyanobacteria. Int. J. Syst. Evol. Micr. 2002, 52, 1577–1595. [Google Scholar] [CrossRef]
- Brudno, M.; Do, C.B.; Cooper, G.M.; Kim, M.F.; Davydov, E.; Green, E.D.; Sidow, A.; Batzoglou, S. NISC comparative sequencing program. LAGAN and Multi-LAGAN: Efficient tools for large-scale multiple alignment of genomic DNA. Genome Res. 2003, 13, 721–731. [Google Scholar] [CrossRef]
- Mayor, C.; Brudno, M.; Schwartz, J.R.; Poliakov, A.; Rubin, E.M.; Frazer, K.A.; Pachter, L.S.; Dubchak, I. VISTA: Visualizing global DNA sequence alignments of arbitrary length. Bioinformatics 2000, 16, 1046–1047. [Google Scholar] [CrossRef]
- Walsby, A.E.; Schanz, F.; Schmid, M. The Burgundy-blood phenomenon: A model of buoyancy change explains autumnal waterblooms by Planktothrix rubescens in Lake Zürich. New Phytol. 2005, 169, 109–122. [Google Scholar] [CrossRef]
- Salmaso, N. Factors affecting the seasonality and distribution of cyanobacteria and chlorophytes: A case study from the large lakes south of the Alps, with special reference to Lake Garda. Hydrobiologia 2000, 438, 43–63. [Google Scholar] [CrossRef]
- Messineo, V.; Mattei, D.; Melchiorre, S.; Salvatore, G.; Bogialli, S.; Salzano, R.; Mazza, R.; Capelli, G.; Bruno, M. Microcystin diversity in a Planktothrix rubescens population from Lake Albano (Central Italy). Toxicon 2006, 48, 160–174. [Google Scholar]
- Halstvedt, C.B.; Rohrlack, T.; Andersen, T.; Skulberg, O.; Edvardsen, B. Seasonal dynamics and depth distribution of Planktothrix spp. in Lake Steinsfjorden (Norway) related to environmental factors. J. Plankton Res. 2007, 29, 471–482. [Google Scholar] [CrossRef]
- Reynolds, C.S. The ecology of the planktonic blue-green algae in the North Shropshire meres. Field Stud. 1971, 3, 409–432. [Google Scholar]
- Borics, G.; Tóthmérész, B.; Lukács, B.A.; Várbíró, G. Functional groups of phytoplankton shaping diversity of shallow lake ecosystems. Hydrobiologia 2012, 698, 251–262. [Google Scholar] [CrossRef]
- Teszárné, N.M.; Márialigeti, K.; Végvári, P.; Csépes, E.; Bancsi, I. Stratification analysis of the Óhalász Oxbow of the River Tisza (Kisköre Reservoir, Hungary). Hydrobiologia 2003, 506–509, 37–44. [Google Scholar] [CrossRef]
- V.-Balogh, K.; Németh, B.; Vörös, L. Specific attenuation coefficients of optically active substances and their contribution to the underwater ultraviolet and visible light climate in shallow lakes and ponds. Hydrobiologia 2009, 632, 91–105. [Google Scholar] [CrossRef]
- Reynolds, C.S.; Huszar, V.; Kruk, C.; Naselli-Flores, L.; Melo, S. Towards a functional classification of the freshwater phytoplankton. J. Plankton Res. 2002, 24, 417–428. [Google Scholar] [CrossRef]
- Dokulil, M.T.; Teubner, K. Deep living Planktothrix rubescens modulated by environmental constraints and climate forcing. Hydrobiologia 2012, 698, 29–46. [Google Scholar] [CrossRef]
- Sundaram, T.R.; Rehm, R.G. The seasonal thermal structure of deep temperature lakes. Tellus 1973, 25, 157–167. [Google Scholar] [CrossRef]
- Berman, T.; Pollinger, U. Annual and seasonal variations of phytoplankton chlorophyll and photosynthesis in Lake Kinneret. Limnol. Oceanogr. 1974, 19, 31–55. [Google Scholar] [CrossRef]
- Salmaso, N. Ecological patterns of phytoplankton assemblages in Lake Garda: Seasonal, spatial and historical features. J. Limnol. 2002, 61, 95–115. [Google Scholar]
- Branco, B.F.; Torgersen, T. Predicting the onset of thermal stratification in shallow inland waterbodies. Aquat. Sci. 2009, 71, 65–79. [Google Scholar] [CrossRef]
- Padisák, J.; Reynolds, C.S. Shallow lakes: The absolute, the relative, the functional and the pragmatic. Hydrobiologia 2003, 506–509, 1–11. [Google Scholar] [CrossRef]
- Scheffer, M.; Nes, E.H. Shallow lakes theory revisited: Various alternative regimes driven by climate, nutrients, depth and lake size. In Shallow Lakes in a Changing World. Developments in Hydrobiology; Gulati, R.D., Lammens, E., Pauw, N., Donk, E., Eds.; Springer: Dordrecht, The Netherlands, 2007; Volume 196, pp. 455–466. [Google Scholar]
- Hutchinson, G.E.; Löffler, H. The thermal classification of lakes. Proc. Natl. Acad. Sci. USA. 1956, 42, 84–86. [Google Scholar] [CrossRef]
- Lewis, W.M., Jr. Tropical limnology. Annu. Rev. Ecol. Syst. 1987, 18, 159–184. [Google Scholar]
- Padisák, J.; G.-Tóth, L.; Rajczy, M. Stir-up effect of wind on a more-or-less stratified shallow lake phytoplankton community, Lake Balaton, Hungary. Hydrobiologia 1990, 191, 249–254. [Google Scholar] [CrossRef]
- Pithart, D.; Pechar, L. The stratification of pools in the alluvium of the river Lužnice. Int. Rev. Gesamten Hydrobiol. Hydrogr. 1995, 80, 61–75. [Google Scholar] [CrossRef]
- Mischke, U. Cyanobacteria associations in shallow polytrophic lakes: Influence of environmental factors. Acta Oecol. 2003, 24, 11–23. [Google Scholar] [CrossRef]
- Fonseca, B.M.; Bicudo, C.E.M. Phytoplankton seasonal variation in a shallow stratified eutrophic reservoir (Garcas Pond, Brazil). Hydrobiologia 2008, 600, 267–282. [Google Scholar] [CrossRef]
- Folkard, A.M.; Sherborne, A.J.; Coates, M.J. Turbulence and stratification in Priest Pot, a productive pond in a sheltered environment. Limnology 2007, 8, 113–120. [Google Scholar] [CrossRef]
- Vareli, K.; Briasoulis, E.; Pilidis, G.; Sainis, I. Molecular confirmation of Planktothrix rubescens as the cause of intense, microcystin-synthesizing cyanobacterial bloom in Lake Ziros, Greece. Harmful Algae 2009, 8, 447–453. [Google Scholar] [CrossRef]
- Micheletti, S.; Schanz, F.; Walsby, A.E. The daily integral of photosynthesis by Planktothrix rubescens during summer stratification and autumnal mixing in Lake Zürich. New Phytol. 1998, 139, 233–246. [Google Scholar]
- Komárek, J. Recent changes (2008) in cyanobacteria taxonomy based on a combination of molecular background with phenotype and ecological consequences (genus and species concept). Hydrobiologia. 2010, 639, 245–259. [Google Scholar] [CrossRef]
- Lin, S.; Wu, Z.; Yu, G.; Zhu, M.; Yu, B.; Li, R. Genetic diversity and molecular phylogeny of Planktothrix (Oscillatoriales, cyanobacteria) strains from China. Harmful Algae 2010, 9, 87–97. [Google Scholar] [CrossRef]
- Konopka, A. Influence of temperature, oxygen, and pH on a metalimnetic population of Oscillatoria rubescens. Appl. Environ. Microbiol. 1981, 42, 102–108. [Google Scholar]
- Akcaalan, R.; Young, F.M.; Metcalf, J.S.; Morrison, L.F.; Albay, M.; Codd, G.A. Microcystin analysis in single filaments of Planktothrix spp. in laboratory cultures and environmental blooms. Water Res. 2006, 40, 1583–1590. [Google Scholar] [CrossRef]
- Meriluoto, J.A.O.; Sandström, A.; Eriksson, J.E.; Remaud, G.; Craig, A.G.; Chattopadhyaya, J. Structure and toxicity of a peptide hepatotoxin from the cyanobacterium Oscillatoria agardhii. Toxicon 1989, 27, 1024–1034. [Google Scholar]
- Sivonen, K.; Namikoshi, M.; Evans, W.R.; Carmichael, W.W.; Sun, F.; Rouhiainen, L.; Luukkainen, R.; Rinehart, K.L. Isolation and characterization of a variety of microcystins from seven strains of the cyanobacterial genus Anabaena. Appl. Environ. Microb. 1992, 58, 2495–2500. [Google Scholar]
- Luukkainen, R.; Sivonen, K.; Namikoshi, M.; Färdig, M.; Rinehart, K.L.; Niemelä, S.I. Isolation and identification of eight microcystins from thirteen Oscillatoria agardhii strains and structure of a new microcystin. Appl. Environ. Microb. 1993, 59, 2204–2209. [Google Scholar]
- Blom, J.F.; Robinson, J.A.; Jüttner, F. High grazer toxicity of [d-Asp3, (E)-Dhb7]microcystin-RR of Planktothrix rubescens as compared to different microcystins. Toxicon 2001, 39, 1923–1932. [Google Scholar] [CrossRef]
- Blom, J.F.; Jüttner, F. High crustacean toxicity of microcystin congeners does not correlate with high protein phosphatase inhibitory activity. Toxicon 2005, 46, 465–470. [Google Scholar] [CrossRef]
- Sano, T.; Takagi, H.; Kaya, K. A Dhb-microcystin from the filamentous cyanobacterium Planktothrix rubescens. Phytochemistry 2004, 65, 2159–2162. [Google Scholar] [CrossRef]
- Vasas, G.; Gáspár, A.; Surányi, G.; Batta, G.; Gyémánt, G.; M-Hamvas, M.; Máthé, C.; Grigorszky, I.; Molnár, E.; Borbély, G. Capillary electrophoretic assay and purification of cylindrospermopsin, a cyanobacterial toxin from Aphanizomenon ovalisporum by plant test (Blue-Green Sinapis Test). Anal. Biochem. 2002, 302, 95–103. [Google Scholar] [CrossRef]
- Vasas, G.; Gáspár, A.; Páger, C.; Surányi, G.; M-Hamvas, M.; Máthé, C.; Borbély, G. Analysis of cyanobacterial toxins (anatoxin-a, cylindrospermopsin, microcystin-LR) by capillary electrophoresis. Electrophoresis 2004, 25, 108–115. [Google Scholar] [CrossRef]
- Vasas, G.; Szydlowska, D.; Gáspár, A.; Welker, M.; Trojanowicz, M.; Borbély, G. Determination of microcystins in environmental samples using capillary electrophoresis. J. Biochem. Biophys. Methods 2006, 66, 87–97. [Google Scholar] [CrossRef]
- Borics, G.; Grigorszky, I.; Szabó, S.; Padisák, J. Phytoplankton associations under changing pattern of bottom-up vs. top-down control in a small hypertrophic fishpond in East Hungary. Hydrobiologia 2000, 424, 79–90. [Google Scholar] [CrossRef]
- Krasznai, E.; Borics, G.; Várbíró, G.; Abonyi, A.; Padisák, J.; Deák, C.; Tóthmérész, B. Characteristics of the pelagic phytoplankton in shallow oxbows. Hydrobiologia 2010, 639, 173–184. [Google Scholar] [CrossRef]
- Vasas, G.; Bacsi, I.; Suranyi, G.; M Hamvas, M.; Mathe, C.; Nagy, S.A.; Borbely, G. Isolation of viable cell mass from frozen Microcystis viridis bloom containing microcystin-RR. Hydrobiologia 2010, 639, 147–151. [Google Scholar] [CrossRef]
- Farkas, O.; Gyémant, G.; Hajdú, G.; Gonda, S.; Parizsa, P.; Horgos, T.; Mosolygó, Á.; Vasas, G. Variability of microcystins and its synthetase gene cluster in Microcystis and Planktothrix waterblooms in shallow lakes of Hungary. Acta Biol. Hung. 2014, 65, 5–23. [Google Scholar]
- Kurmayer, R.; Christiansen, G. The genetic basis of toxin production in Cyanobacteria. Freshw. Rev. 2009, 2, 31–50. [Google Scholar]
- Kurmayer, R.; Schober, E.; Tonk, L.; Visser, P.; Christiansen, G. Spatial divergence in the proportions of genes encoding toxic peptide synthesis among populations of the cyanobacterium Planktothrix in European lakes. FEMS Microbiol. Lett. 2011, 317, 127–137. [Google Scholar] [CrossRef]
- Rounge, T.B.; Rohrlack, T.; Nederbragt, A.J.; Kristensen, T.; Jakobsen, K.S. A genome-wide analysis of nonribosomal peptide synthetase gene clusters and their peptides in a Planktothrix rubescens strain. BMC Genomics 2009, 10, 396–406. [Google Scholar] [CrossRef]
- Mbedi, S.; Welker, M.; Fastner, J.; Wiedner, C. Variability of the microcystin synthetase gene cluster in the genus Planktothrix (Oscillatoriales, Cyanobacteria). FEMS Microbiol. Lett. 2005, 245, 299–306. [Google Scholar] [CrossRef]
- Suzuki, T.; Miyauchi, K. Discovery and characterization of tRNAIle lysidine synthetase (TilS). FEBS Lett. 2010, 584, 272–277. [Google Scholar] [CrossRef]
- Loomis, W.F.; Shaulsky, G.; Wang, N. Histidine kinases in signal transduction pathways of eukaryotes. J. Cell Sci. 1997, 110, 1141–1145. [Google Scholar]
- Somogyi, B.; Felföldi, T.; Vanyovszki, J.; Ágyi, Á.; Márialigeti, K.; Vörös, L. Winter bloom of picoeukaryotes in Hungarian shallow turbid soda pans and the role of light and temperature. Aquat. Ecol. 2009, 43, 735–744. [Google Scholar] [CrossRef]
- Lamprinou, V.; Skaraki, K.; Kotoulas, G.; Economou-Amilli, A.; Pantazidou, A. Toxopsis calypsus gen. nov., sp. nov. (Cyanobacteria, Nostocales) from cave ‘Francthi’, Peloponnese, Greece: A morphological and molecular evaluation. Int. J. Syst. Evol. Micr. 2012, 62, 2870–2877. [Google Scholar] [CrossRef]
- Felföldi, T.; Duleba, M.; Somogyi, B.; Vajna, B.; Nikolausz, M.; Présing, M.; Márialigeti, K.; Vörös, L. Diversity and seasonal dynamics of the photoautotrophic picoplankton in Lake Balaton (Hungary). Aquat. Microb. Ecol. 2011, 63, 273–287. [Google Scholar] [CrossRef]
- Pruesse, E.; Peplies, J.; Glöckner, F.O. SINA: Accurate high throughput multiple sequence alignment of ribosomal RNA genes. Bioinformatics 2012, 28, 1823–1829. [Google Scholar] [CrossRef]
- Tamura, K.; Peterson, D.; Peterson, N.; Stecher, G.; Nei, M.; Kumar, S. MEGA5: Molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol. Biol. Evol. 2011, 28, 2731–2739. [Google Scholar] [CrossRef]
- Welker, M.; Fastner, J.; Erhard, M.; Döhren, H. Application of MALDI-TOF MS in cyanotoxin research. Environ. Toxicol. 2002, 17, 367–374. [Google Scholar] [CrossRef]
- Christiansen, G.; Molitor, C.; Philmus, B.; Kurmayer, R. Non-toxic strains of cyanobacteria are the result of major gene deletion events induced by a transposable element. Mol. Biol. Evol. 2008, 25, 1695–1704. [Google Scholar] [CrossRef]
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
How Do Automatic Faucets Work? Tips on the Basic Principles of Kitchen and Bathroom Automatic Faucets. An automatic faucet combines a solenoid valve, sensor and control electronics, a power source, and a faucet spout. Automatic faucets are also called electronic taps, sensor faucets, touchless faucets, and hands-free faucets.
Overview of the Working Principle
Automatic faucets are created by combining four key components: a solenoid valve, sensor and control electronics, a power source, and a faucet spout. Although there are variations on this theme, these are the key building blocks, each with a distinct function, that once combined constitute an automatic faucet. Here's an overview breakdown of these components:
- Solenoid-operated diaphragm valve, which is entrusted with the task of physically starting and stopping the water flow. A small number of foreign manufacturers use geared motors to achieve valve opening and closing.
- Sensor and control electronics, whose combined mission is to sense the presence of an object in front of the faucet (automatic faucets employ presence sensors, not motion sensors) and order the solenoid valve to initiate the flow of water. Then, when the object is no longer present, the sensor and control electronics order the solenoid valve to terminate the flow of water, but only after a predetermined time has passed. This "off delay" time is generally measured in seconds (a minimal control-loop sketch of this behavior follows this list).
- Power source, generally batteries or an AC transformer. Since both the solenoid valve and the sensor and control electronics require a power source, this readily available component is crucial to ensure faucet operation. Commonly used batteries are C, AA, 6-volt and 9-volt lithium batteries. Automatic faucets using an AC transformer as the power source are generally inexpensive to produce and are priced accordingly in the marketplace. A notable exception to this cost-basis detail is the type of automatic faucet specifically designed for either power source (MAC Faucets FA400 and FA444 series are clear examples).
- Faucet spout, for water delivery. Most automatic faucet spouts are designed to house the sensor capsule within them, or, in the case of a notable competitor, the faucet spout houses fiber optic cables that carry the infrared signal from the sensor to the spout. Some spouts house within them the whole "enchilada": sensor, control electronics, solenoid valve, and even batteries.
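The following minimal Python sketch illustrates the sense-and-delay behavior described in the list above: open on presence, close only after the object has been absent for the off-delay period. The sensor and valve functions and the delay value are hypothetical placeholders, not any manufacturer's firmware.

```python
# Minimal control-loop sketch: open the valve while an object is sensed, and
# close it only after the object has been absent for a preset "off delay".
import time

OFF_DELAY_S = 2.0          # seconds the object must be absent before closing

def object_present() -> bool:
    """Placeholder for the IR presence sensor reading."""
    return False

def set_valve(open_: bool) -> None:
    """Placeholder for pulsing the solenoid valve open or closed."""
    print("valve", "OPEN" if open_ else "CLOSED")

def control_loop():
    valve_open = False
    last_seen = None
    while True:
        if object_present():
            last_seen = time.monotonic()
            if not valve_open:
                set_valve(True)
                valve_open = True
        elif valve_open and last_seen is not None:
            if time.monotonic() - last_seen >= OFF_DELAY_S:
                set_valve(False)
                valve_open = False
        time.sleep(0.05)   # poll the sensor ~20 times per second
```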
Automatic Faucet Parts and Main Working Principle
No credible explanation of the solenoid valve is complete without discussing the two hybrid technologies that, when combined, form a solenoid valve:
- Electromagnetism (solenoid)
- Fluid dynamics (diaphragm valve, AKA poppet valve). I will start by breaking the solenoid valve discussion into two sections, one that addresses the "solenoid" portion of the solenoid valve and one that addresses the "valve" portion of the solenoid valve.
A) Solenoids are electrical components that transform electrical energy into mechanical energy ("motion"). When energized, a solenoid creates a magnetic field which exerts a linear force on an object called a plunger or actuator. That's why solenoids are called "linear motors". Automatic faucets powered by batteries employ a type of solenoid called a "magnet latching" or bi-stable solenoid. These solenoids operate at low voltage, usually 6 volt DC, with some solenoids operating at 9 volt DC. The reason these solenoids are called "magnet latching" is that as the solenoid is initially energized to start the water flow, the plunger is driven into the range of a permanent magnet, which in turn holds the plunger in the "open" position. This initial energizing of the solenoid is called "pulsing" or "inrushing" and takes place within a fraction of a second. In order to return the plunger to its original "closed" position, the solenoid is once again "pulsed", but this time with reversed polarity (remember, we're working with DC voltage here). The reason behind this complex operation: conserving battery power.
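To make the pulse-open / reverse-pulse-close idea concrete, here is a hedged sketch of how a magnet-latching coil might be driven from an H-bridge; the interface functions and pulse length are assumptions for illustration, not a real faucet's control code.

```python
# Sketch of driving a magnet-latching (bi-stable) solenoid as described above:
# a brief pulse of one polarity latches the plunger open, and a brief pulse of
# reversed polarity returns it to the closed position. The H-bridge interface
# and pulse length are hypothetical placeholders.
import time

PULSE_S = 0.03   # a pulse of a few tens of milliseconds, assumed for illustration

def drive_coil(forward: bool, reverse: bool) -> None:
    """Placeholder for setting the two H-bridge legs driving the coil."""
    print(f"coil: forward={forward} reverse={reverse}")

def pulse_open():
    drive_coil(True, False)    # forward-polarity pulse latches the plunger open
    time.sleep(PULSE_S)
    drive_coil(False, False)   # de-energize; the permanent magnet holds it open

def pulse_closed():
    drive_coil(False, True)    # reversed-polarity pulse releases the latch
    time.sleep(PULSE_S)
    drive_coil(False, False)   # de-energize; the valve stays closed

pulse_open()    # start water flow
pulse_closed()  # stop water flow
```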
Faucets powered by AC transformers utilize standard, non-latching solenoids. This type of solenoid requires constant energy to hold the plunger in place; when de-energized, the plunger naturally returns to its normal "closed" position with the help of a biasing spring.
I won't go into dizzying details here about the solenoid (or any of the four components discussed earlier); for that you have to stick around for act three.
In addition to a voltage specification, solenoids have a milliwatt specification. This last specification is especially important when dealing with battery-powered automatic faucets, since solenoids are by far the largest consumers of battery power in an automatic faucet. The milliwatt specification gives us a glance at the solenoid's efficiency and the amount of current it needs to do its job. It's a sort of "miles per gallon" measurement: the more miles per gallon, given a specific size gas tank, the fewer gas stops we have to make on our journey. The same holds true in an automatic faucet: the more efficient the solenoid, the less often we have to replace the batteries.
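To see why the milliwatt rating matters, here is a back-of-the-envelope sketch relating per-actuation energy and standby drain to battery life; every number in it is a hypothetical illustration rather than a specification of any particular faucet.

```python
# Back-of-the-envelope sketch of the "miles per gallon" point above: how a
# solenoid's per-actuation energy and a battery pack's capacity bound battery
# life. Every number here is a hypothetical illustration.
pack_voltage = 6.0            # V (e.g., four C cells in series)
pack_capacity_mah = 7000.0    # mAh, hypothetical C-cell pack
pulse_power_mw = 1200.0       # mW drawn while the latching coil is pulsed
pulse_time_s = 0.03           # s per pulse; open + close = 2 pulses per use
standby_ma = 0.05             # mA average drain of sensor/control electronics

pack_energy_mwh = pack_voltage * pack_capacity_mah
energy_per_use_mwh = 2 * pulse_power_mw * pulse_time_s / 3600.0
standby_mwh_per_day = pack_voltage * standby_ma * 24.0

uses_per_day = 200
daily_mwh = uses_per_day * energy_per_use_mwh + standby_mwh_per_day
print(f"estimated battery life ~ {pack_energy_mwh / daily_mwh:.0f} days")
```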
B) Diaphragm valves are often misunderstood and much maligned devices; however, they are commonly used in many household appliances such as toilet tanks, sprinkler valves, washers, dishwashers, ice makers and many more. I don't know who invented the first diaphragm valve, but the guy (or it could as easily have been gal) is a genius. The basic premise behind the diaphragm valve is to control the flow of a large volume of water with a smaller, more manageable water volume. The way in which this task is accomplished is nothing short of brilliant (like I said earlier, the original inventor must have been a genius). I want to ask you to suspend your disbelief for a moment and read this paragraph through once; that'll make it easier to understand the next time around. The percentage values are used strictly for illustration purposes.

We begin with a malleable rubber disc (today's diaphragms are much more sophisticated than that, of course) which itself acts as a water valve as it seats itself against a solid valve seat, no different from a standard faucet valve seat. This disc is located in an environment punctuated by three pressure zones locked in a constant battle to get the upper hand, like my dear nieces and nephews, AKA The Trio of Terror. On one end we have supply pressure acting on 70% of the diaphragm area (the portion may be lesser or greater depending on the specific valve design), pushing the diaphragm up and away from the valve seat, an act that would result in opening the diaphragm valve. Also on the same end we have atmospheric pressure which, for the purpose of this discussion, equals "0" psig and is unable to impact the diaphragm movement in either direction. This "0" psig zone represents the area inside the valve seat and occupies the other 30%. Why "0" psig, you ask? The valve seat is plumbed directly into the faucet spout, and "0" psig is the pressure in the faucet spout when no water is going through it. On the other end of this tug of war we have pressure that equals supply pressure (because it is supplied by supply pressure through a very small diameter hole often built into the diaphragm itself) acting on 100% of the diaphragm area, pushing the diaphragm the other way and firmly seating it against the valve seat, an act that would result in closing the diaphragm valve.

Think of it this way: we have two arm wrestlers, equally strong in every way, yet one of them ate only 70% of his wheaties and the other one finished the bowl. Who do you think is going to win the arm wrestling competition? If you guessed the guy who finished his bowl, you guessed right. In this case the guy who finished the bowl represents the pressure acting on 100% of the diaphragm area, pushing the diaphragm tight against the valve seat.
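A tiny worked example of the area argument, with illustrative numbers only (the 70%/100% split and the line pressure are assumptions), is sketched below.

```python
# Worked illustration of the pressure-zone argument above: the same supply
# pressure acting on 100% of the diaphragm area (back side) beats the supply
# pressure acting on only ~70% of the area (front side), so the net force
# holds the diaphragm against the valve seat. Numbers are illustrative only.
supply_psi = 60.0          # hypothetical line pressure
area_in2 = 1.0             # hypothetical total diaphragm area (square inches)

closing_force = supply_psi * area_in2 * 1.00   # back side: 100% of the area
opening_force = supply_psi * area_in2 * 0.70   # front side: ~70% of the area
# the remaining 30% faces the spout at ~0 psig, contributing no opening force

net = closing_force - opening_force
print(f"net closing force ~ {net:.0f} lbf toward the seat")
# Venting the back chamber (the solenoid's job) removes the 100% term,
# so the 70% term wins and the diaphragm lifts off the seat.
```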
In order to open the diaphragm valve, all we need to do is release the pressure that is pinning the diaphragm against the valve seat. This is a job assigned to the solenoid valve. Remember the solenoid’s plunger we spoke about earlier in this discussion? The plunger, which is driven by the solenoid, is yet another miniature water valve whose sole mission is to open and release the pressure that is pinning the diaphragm against the valve seat (the pressure acting on 100% of the diaphragm area), and to close and allow the pressure supplied from the supply line through that very small-diameter hole (we spoke about it earlier) to build up “behind the diaphragm”, pinning the diaphragm back against the valve seat.
While on the subject of diaphragm valves, it would be wrong to move on to the next subject without briefly discussing “particle filters” and the type of relationship they have with diaphragm valves. There are two realities in plumbing systems that we all have to live with: 1. Water flowing in plumbing supply lines contains loose particles. 2. Diaphragm valves are defenseless against them. If allowed to enter a given diaphragm valve (regardless of whether the valve is involved in a faucet application or any of the numerous applications outlined above), particles present in water supply lines will damage the valve. Did you notice that I didn’t say “if”? This damage often takes place immediately after installation or within a few days afterwards. Regardless of the time lapse, allowing loose particles to enter diaphragm valves will cause valve damage; I cannot stress that enough. The damage mostly manifests itself in a leaky faucet or one that simply does not shut off, even though the solenoid did its job by driving the plunger into the closed position. This last condition, called a “runaway faucet”, is an especially menacing one that could possibly cause severe flood damage to the area surrounding the faucet. Particle filters are an integral component of an automatic faucet. All automatic faucets are shipped with particle filters; these filters must not be removed and discarded.
Sensor and Control Electronics
If the diaphragm is misunderstood and maligned, the electronic sensor component of automatic faucets is, by comparison, mired in the hate and loathing of many in the plumbing industry. There seems to be an aura of mystery surrounding these sensors, somewhat reminiscent of that exhibited towards personal computers and cellular phones when they first came into use. Well, I’d like to do my part in dispelling this mystery, first by taking you through a long, hopefully entertaining, explanation, and then by bringing it all home to you. Feel free to email me and let me know if I did a good job; no hate letters, please.
Automatic faucets are presence sensors and not motion sensors. They employ active infrared technology, which senses “presence” and not “movement” of objects. Active infrared technology, as the name implies, actively emits infrared light and actively waits for this light to come back to it. On the frequency spectrum, infrared light lies between radio waves and the light waves that are visible to the human eye. To achieve the task of emitting and receiving, faucet sensors employ two key components: an emitter (AKA transmitter) and a collector (AKA receiver), each about 1/4″ in diameter and 5/16″ long or smaller. These components are housed within the sensor capsule that is located either at the neck of the faucet spout, in a separate sink hole to the side of the faucet spout, or in a special compartment up next to the aerator. The emitter constantly emits infrared light in a blinking pattern; that is, it blinks in the same way that turn signals on automobiles blink when the turn signal lever is engaged. The collector, on the other hand, is always ready to receive (collect) this blinking light, and when it does, the control electronics take a factory-preset action: in the case of battery-powered faucets, the control electronics send an electrical pulse to the solenoid valve asking it to open. When the collector no longer receives the blinking light, the control electronics send yet another electrical pulse to the solenoid, this time asking the solenoid to close. Since the emitter emits infrared light in a narrow and focused beam (imagine the focusing apparatus on a common household flashlight), since the collector also receives infrared light in a straight and narrow beam, and since both emitter and collector point in the same direction, never in plain view of one another, the only way the collector can receive the blinking light emitted by the emitter is for a reflective object to be placed in the path of the beam, in most cases human hands.
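To make the emitter-and-collector dance a little more concrete, here is a bare-bones sketch of the on/off decision loop in code. The hardware functions are stubs invented for illustration; no real faucet controller exposes an interface like this, and the real electronics do all of this in firmware.

```python
# Minimal sketch of the presence-detection loop described above.
# The hardware functions are stand-in stubs so the demo runs as-is;
# they are not the API of any real faucet controller.

import random
import time

def read_collector():
    """Pretend reading of reflected IR; a real sensor would sample its photodiode."""
    return random.random() < 0.5

def matches_blink_pattern(samples):
    """Crude 'saw the blink most of the time' test on a burst of samples."""
    return sum(samples) >= 6

def pulse_solenoid(direction):
    print(f"solenoid pulse: {direction}")

def reflection_detected():
    samples = [read_collector() for _ in range(8)]   # sample during the known blink window
    return matches_blink_pattern(samples)

def control_loop(cycles=50):
    valve_open = False
    for _ in range(cycles):                          # bounded loop so the demo terminates
        present = reflection_detected()
        if present and not valve_open:
            pulse_solenoid("open")                   # one pulse; a latching solenoid holds the state
            valve_open = True
        elif not present and valve_open:
            pulse_solenoid("close")
            valve_open = False
        time.sleep(0.1)                              # poll a few times per second to conserve power

if __name__ == "__main__":
    control_loop()
```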
I promised you earlier that I would bring it all home to you. The reason I want to do so is that some may read the earlier paragraph and immediately conclude that this automatic faucet “thing” is advanced technology, too complicated for plumbing purposes, maybe even extreme, and should never be used in any consumer-level products. This assessment is simply unfair and, more notably, far from the truth. In fact this technology has been in the palm of your hand, literally, for well over twenty years. I am talking about the TV remote control, man’s new best friend: we all have them, we all love them, and some of us can’t live without them. Your TV remote control employs the same active infrared technology that your automatic faucet does, sometimes the same exact emitter and collector components. In the case of a remote control the emitter is located inside the remote control itself, at the end that points at the TV, whereas the collector is located inside the TV, generally hidden behind an amber-colored transparent plastic guard. If you think that faucet sensor and control electronics are complex... think again. Whereas faucet sensor and control electronics act as an on/off switch, a remote control designed for an entertainment center, for example, sends coded messages that are 100,000 to 1,000,000 times more complex than the simple on/off function that faucet electronics perform. Aren’t some of us lucky we’re in the plumbing business and not in the TV business?
Batteries and/or Wall A.C.
Automatic faucets draw power from two popular sources: batteries and wall AC. Six volts seems to be the standard voltage for battery-powered faucets (for now), although 9 volts is not uncommon. Battery-powered faucets generally employ the services of AA batteries, C batteries, standard 9-volt alkaline batteries, or lithium batteries. These batteries have storage capacities measured in milliamp-hours (mAh); 1000 mAh is equivalent to 1 amp-hour, which is enough charge to run a 1 amp load for 1 hour. Larger batteries have greater capacity than smaller ones, similar to an automobile’s gas tank: the larger the tank, the more fuel you can put in it, but only efficient cars get the longer haul. Which brings us back to automatic faucets: no discussion of battery-powered faucets is complete without briefly touching on faucet efficiency. How often you need to replace the batteries depends largely on how fast or slow the faucet consumes the energy stored within its batteries. This is principally a non-issue for AC-powered faucets, since wall AC is an inexhaustible power source, until the power goes out of course. AC-powered faucets will be discussed at length later in this chapter. It is worth mentioning here that the Faucet Automator model FA100sca is the only faucet automation device, to the best of our knowledge, with a true AC/DC automatic switchover feature. With the batteries installed and the AC transformer plugged in, the device automatically switches over from battery power to AC power; however, should the power in the wall go out, the device switches back over to batteries for continuous, uninterrupted service. If anyone out there is aware of another faucet automation device that has this feature, please let me know and I will be glad to revise this writing to include the brand name of that faucet.
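As a back-of-the-envelope illustration of how capacity, standby draw and daily use interact, here is a small calculation. Every number in it is an assumption chosen for the example; real faucets will differ.

```python
# Back-of-the-envelope battery-life estimate for a battery-powered faucet.
# All draw figures are assumptions for illustration, not measured values.

capacity_mah = 2500.0   # assumed AA alkaline capacity; cells in series keep the same mAh
standby_ma = 0.05       # assumed average sensor standby draw (50 microamps)
pulse_ma = 300.0        # assumed solenoid pulse current
pulse_s = 0.1           # assumed pulse length per open or close
uses_per_day = 200      # assumed activations per day (each use = one open + one close pulse)

daily_mah = standby_ma * 24.0 + uses_per_day * 2 * pulse_ma * (pulse_s / 3600.0)
print(f"Average daily draw: {daily_mah:.2f} mAh")
print(f"Estimated battery life: {capacity_mah / daily_mah / 365.0:.1f} years")
```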
AC-powered faucets employ transformers and switching adapters (more on those later) that plug into, or are hard-wired to, the wall. Transformers and switching adapters convert wall AC into 24, 12, 9, or 6 volt AC or DC depending on the application. These are generally the four voltage ratings that automatic faucets operate on, at least here in the USA. Transformers and switching adapters rate their output capacity in volt-amperes (VA); higher-rated transformers and switching adapters are capable of delivering steady current at the rated voltage to loads that require more “juice”, in this case faucet electronics and solenoids. To better understand the relationship between the VA rating and the power consumed by the load, one only needs to look at automobile engines: the larger the engine, the larger the carburetor and fuel line that feed it. The larger the solenoid and electronics, the larger the VA rating should be.
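Sticking with the engine analogy, a quick sizing check looks like the sketch below. The 12 V output, load current and 25% headroom factor are assumptions for illustration only, not recommendations for any particular faucet.

```python
# Simple adapter sizing check: does a given volt-ampere (VA) rating cover the load?
# Output voltage, load current and headroom factor are illustrative assumptions.

def min_va(voltage_v, load_current_a, headroom=1.25):
    """Smallest VA rating that covers the load with ~25% headroom."""
    return voltage_v * load_current_a * headroom

faucet_load_a = 0.6          # assumed combined draw of electronics plus solenoid
for rating in (6, 10, 20):   # candidate adapters rated in VA at a 12 V output
    needed = min_va(12.0, faucet_load_a)
    verdict = "OK" if rating >= needed else "undersized"
    print(f"{rating:>2} VA adapter: need ~{needed:.1f} VA -> {verdict}")
```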
I mentioned switching adapters earlier; they are commonly employed to supply power to electronic equipment and are generally found alongside laptops and cell phones. There are key reasons why switching adapters are the power supply of choice for electronic equipment manufacturers, and automatic faucets are no exception. Using standard transformers to supply power to modern electronic equipment can lead to power-quality degradation and heating problems; here are some of the reasons why:
- Single phase electronic loads can cause excessive transformer heating.
- Electronic loads draw “non-linear” currents, resulting in momentary low voltage supply and output voltage distortion.
- Oversizing for impedance and thermal performance can result in a transformer with a significantly larger footprint and weight.
Switching adapters, by contrast, are specifically designed for non-linear loads and incorporate substantial design improvements that address both thermal and power-quality concerns. Such devices are low impedance, compact, and have better high-frequency performance than standard transformers.
Automatic faucet spouts are not dissimilar to standard faucet spouts. They’re mainly constructed of brass or, in some cases, zinc. It is proper to state a disclaimer here that all MAC faucets are constructed of brass; MAC Faucets does not build faucets or faucet components out of zinc. Some automatic faucets are machined, for example European-style faucets (the MAC 200spl); some are die cast with the water passage made of copper tubing; and some are made of sand-cast brass, a type generally referred to as solid brass because of its high brass content. We will discuss the various forms of casting in Part III.
Automatic faucet spouts are designed for several applications: standard bathroom fixtures, lab or bar sink faucets, and splash-mount faucets chiefly for food service applications. Beyond delivering water, spouts mainly serve an aesthetic purpose. Style and finish combined are the number one reason why buyers choose one faucet over the next, so much attention is focused on creating a faucet fixture that is aesthetically pleasing.
Very little is known of the early history of mankind, since most of the records were destroyed during the Great Jihad. It seems that man had reached a technologically advanced and prosperous society somewhere around AD2000, succeeded in mastering inter-planetary space travel by about AD2090, and began to form colonies around the Solar System by AD2200. By AD2500, colonies had been established on Mars, many of the moons of Jupiter, all the larger moons of Saturn, and on Pluto. In addition, man had finally developed ships capable of crossing the vast gaps between stars. By 2630, a colony had been established on the largest world in neighbouring Alpha Centauri, on the newly named planet of Antan (after the name of the colony ship).
Rise of the Monarchy
As the colony on Antan developed, a leader was born of the tough life led by the population: Cyrus Kaey. Cyrus oversaw the expansion of the Antan and was posthumously crowned King; his descendants continued on as the monarchy took hold. During the centuries that followed, the expansion of the Antan spread to other planets in the system; the first to be colonised was the planet Pörst. Named after the first man to set foot on its surface, Pörst was a perilous place to land; ion storms in the upper atmosphere and storms reaching almost to the surface nearly destroyed the small colony ship on landing. The atmosphere, though hostile, provided the Antan with a powerful power source: the electric storms could provide huge amounts of power to the planet, and this power could be beamed across the system to supply fleets and developing colonies on other planets. Pörst soon became an industrial centre; its mineral- and ore-rich composition provided materials for the Great Industrial Revolution that built the great fleets of the Kingdom.
Along with this great expansion and industry came dissidence and distrust of the Monarchy; the workers saw the power and riches of the Royal Family and their close friends and compared it to their own lifestyle. Strikes and revolts stirred up around the system; King Henning II saw the dangers and set about building a parliament that would run alongside the monarchy. His quick thinking calmed the population, and the system settled down. Over time the monarchy lost power and eventually faded into obscurity.
When planet Granam (one of the outer planets of the system) was still developing, the government was coming under fire from those it represented; corruption was rife, surveillance left the population paranoid about their security, and in a time of desperation they searched for any sign of relief. The relief came in the form of Kroega Kaey: an alleged descendant of the Monarchs, she had risen to high authority on Granam and vowed to restore the Antan to their former glory. All that was needed to ignite the situation was a spark.
The government was too heavy-handed with Kroega, declaring her an enemy of the state and causing uproar amongst much of the population. The armed forces split in two, the Democrats and the Royalists, and so the Ascension War began.
Much of planet Antan’s defence fleet joined forces with the Royalists; the Democrat forces put up a brave fight in orbit, weakening the enemy fleet enough to allow government troops to dig in around Antan. Kroega herself moved to take personal command of the invasion force on the ground and, with troops she had gathered from planets loyal to her, tore her way through the enemy towards the Capitol. Government members fled the planet; some were killed while trying to escape. Kroega declared victory, and she was crowned Queen by her supporters. The government forces fled to the outer planets, where they live as exiles.
Those who support the Monarchy are fiercely loyal; the citizens, whatever they think of their Queen, will defend their homeland until the end.
The Antan Armed Forces (AAF) use a mix of energy- and projectile-based weapons. Different regiments prefer different weapon designs; usually this is influenced by the resources available on their planet of origin.
- Pörst Storm Troopers prefer to use Gauss rifles in combat; their specialist soldiers will occasionally carry Tesla Arch weapons.
- The Antan Royal Guard equip their troopers with Plasma guns.
The AAF uses a variety of vehicles for combat maneuvers depending on the battlefield situation.
- Tr5M6 'Tortoise' armoured transport is a quad-tracked APC designed to provide reliable transport, with enough firepower to defend itself from the enemy.
- R2M2 Scout Car is an armoured six-wheeled radar reconnaissance vehicle used to detect enemy troops and to spot targets for artillery.
- O3M2 Mobile Artillery is a large self-propelled gun capable of firing over great distances. The artillery piece, however, is extremely slow and has difficulty crossing rough terrain.
Bipedal Assault Vehicles (BAVs)
Designed on the swamp planet of Hola, where tanks would easily get bogged down by the terrain, these walkers are used extensively for a variety of roles (depending on the type of BAV).
- B5 'Hessian' is armed with two Type4 AP cannons. Designed for tank hunting, this BAV has jump jets that allow it to keep out of harm's way, making up for its relatively light armour.
- B6 'Stinger' is armed with HE rockets and heavy machine guns making it a highly effective anti infantry weapon.
- B10 'Hellcat' is heavily armoured and is armed with rockets, heavy machine guns and antimatter cannons. This slow BAV is designed to be an all-purpose vehicle, but if isolated it becomes vulnerable.
The AAF aircraft range from small recon UAVs to giant battleships and space stations.
These vehicles are the Antan craft that require a suitable planetary atmosphere to operate in.
- ATr7 'Hopper': this twin-rotored transport craft is used to carry single squads around and to the battlefield.
- The Raptor gunship is used to provide air support to the ground troops when fleet fire support is unavailable.
This term is used to describe any vehicles capable of operating both in space and in an atmosphere.
- B90 Gotha bomber. Armed with 4 turrets for defense, this heavily armoured torpedo bomber is designed for both anti-fleet and ground-attack missions.
- I12 Interceptor, this small agile craft is used to defend the fleets, or bombers from small craft or to bring down enemy air support.
- ATr60 Landing craft is used for boarding actions in space or for landing troops on a planet's surface.
- Granam class Gunboat: These craft are smaller than the rest of the capital ships, but they are faster and are armed with torpedoes.
- Kaey Class Gunboat: Similar in looks to Granams, but armed with bombardment weapons to provide fire support to ground troops from within the atmosphere.
Most space-only craft are simply too massive to pilot safely at atmospheric distances from a planet; therefore most are not designed to withstand re-entry.
- Hammer Orbital Bombardment Craft - Designed to give support fire to multiple battles simultaneously, armed with hundreds of bombardment lasers and plasma rockets. Due to the comparatively short range of their weapons, OBCs cannot maintain a geostationary orbit around most planets and still be in range; this means that they can only give coverage to certain sectors at certain times.
- Spectre Class Cruiser - Equipped with advanced sensors, radar-jamming equipment, and advanced countermeasures, these cruisers are effectively invisible; even the glow from their engines is masked by blinker-like plates. Several of these craft can be used in careful formation to hide a fleet in a similar way. Armed mainly with disruption weapons, EMP cannons and engine disruptors, these craft are perfect for disabling a craft ready for boarding.
The molecular graphics software called Chimera, written and supported by a team of scientists in Tom Ferrin’s lab at the University of California, San Francisco (UCSF), has been cited over 7000 times and helps biologists and drug developers visualize molecules and biological structures in 3D at various resolutions. The tool has a personal history that traces back to 1994, and an ancestral history that stretches nearly four full decades earlier, to a London lab in 1955 and a man named Robert Langridge, also known as the pioneer of molecular graphics.
Bob Langridge. Photo by Christopher Springmann, 1985.
Though computation has changed almost indescribably since the middle of the 20th century, Ferrin faces some of the same challenges Langridge faced, such as how to work with data sets that far exceed the memory capacity of his computational tools. For Ferrin, Director of the UCSF Resource for Biocomputing, Visualization, and Informatics (RBVI), this problem has most recently presented itself in a collaboration with 2014 Nobel prize winner Eric Betzig, who with others developed super-resolution light microscopy, a technique that produces terabytes of data and presents Ferrin with the task of visualizing that data. “It’s an exciting problem to have,” says Ferrin. “Technology is always marching onward.”
The Beginning: Molecular Virtual Reality
It was 1955 when Robert Langridge published his first important paper, in Nature, detailing the molecular structure of DNA based on X-ray diffraction data. Langridge was working with Maurice Wilkins, who along with James Watson and Francis Crick was awarded the Nobel Prize for their discoveries concerning the molecular structure of nucleic acids in 1962. Back then, there were two approaches to visualizing molecular structures. Scientists built wire models of them or sketched them. “The picture of DNA that’s in that article, I drew with a pen,” says Langridge, who is now retired and living in Berkeley CA, a professor emeritus at UCSF.
Doing the calculations that defined the structure was even more cumbersome. Langridge used a Marchant Calculator, a 1911 invention that looks like a cross between a typewriter and a cash register. It took two weeks to calculate a Fourier transform. “At the end of that two weeks, after modifying the model, you had to do it all again,” says Langridge.
Around this time, Langridge got access to an IBM 650 computer, released in 1953, (a bi-quinary system) and learned to program it. Despite the fact that computers were so new that Fortran hadn’t even emerged as a language yet (the first FORTRAN compiler came out in 1957), and that he had only 2000 words of memory to work with, he loved programming and reduced the computation time of his calculations from two weeks to thirty minutes.
In 1964, Langridge, who had moved to Harvard University, received a phone call from his friend Cy Levinthal at MIT. Levinthal had come across a new computer system developed at MIT’s Project MAC, a defense-department funded effort to improve computation and, in particular, time sharing and computer graphics. The system had an advanced display that could show pictures in three dimensions. The displays were still rudimentary cathode ray tube consoles limited to simple line drawings, but the system also had a “crystal ball,” a bit like a track ball, that could be used to rotate an image on the screen.
Project MAC display system circa 1965.
Langridge crossed the river into Cambridge immediately and spent many nights programming the computer to display molecules and spin them in virtual space.
Langridge and Levinthal had created what was among the earliest virtual reality experiences. Prior to their graphics, the wire models that scientists built by hand were room-sized. The model of hemoglobin dwarfed the men that built it. The models were impossible to turn, they were brittle, they were frustratingly inaccurate, and you couldn’t get at their insides once they were constructed. “Also,” says Langridge, “pieces fell off.”
Yet when Langridge and Levinthal showed their molecular graphics to other scientists, their reaction was, essentially, Meh. “They asked, ‘Why do we need this? We have our wire models,’” says Langridge.
The Middle: Software Advances
The real barrier wasn’t the utility of the new tool; it was the practicality. The computer Langridge was working on was a two million dollar system that very few institutions had access to. But one group did see the utility. The National Institutes of Health Division of Research Resources, which supported technology development, granted Langridge funding to advance molecular graphics. In 1969, with this funding, he launched his Computer Graphics Laboratory at Princeton University.
In the meantime, computation was advancing on its own. Moore’s Law, the doubling of the number of transistors within an integrated circuit every two years, had already established itself as a trend in 1965, and costs were falling similarly. Still, computer graphics hardware was rare and expensive and would remain so for another two decades. “There were different flavors from one machine to another,” recalls Langridge. “It was very hard to share.”
In 1976, Langridge moved the lab to UCSF, where he met and hired Ferrin. They adopted UNIX, giving the tools some portability, though every computer system still had its own display language. Over the coming two decades, Ferrin and Langridge would design and redesign a series of molecular graphics systems, each one advancing with computing platforms, operating systems, and graphics capabilities. The first, called MIDS, was designed to support color graphics capabilities. In 1980, they developed the next-generation system called MIDAS, a complete rewrite, with an attempt to create a portable graphics language with documented source code to allow others to see under the hood.
A solvent accessible surface model of actinomycin intercalated with DNA created with dots on a Picture System 2 color display circa 1980.
The next incarnation, MidasPlus, was the first package written with an open architecture designed to allow others to contribute functions and extensions to the system. However, since code sharing and portability were still challenging and graphics hardware still costly, many research teams came to visit Langridge’s lab to use his systems.
Other teams were also at work developing molecular graphics. In particular, Alwyn Jones at Uppsala University in Sweden took on the task of answering the needs of crystallographers for electron density fitting tools. Langridge’s team was more focused on drug design. One of the most striking examples of how graphics aids drug developers searching for small molecule drugs is the discovery of buckminsterfullerene as an HIV drug. Based on the graphical display of HIV protease, a graduate student made an off-handed remark that “anything might fit into the protease, even buckminsterfullerene,” says Langridge. “It did.”
The Future: Technology Marches Onward
In the mid-1990s, Langridge retired and Ferrin took over as Principal Investigator. The lab took on its current name, replacing the original “Computer Graphics Laboratory” moniker. “Times changed,” says Ferrin.
Tom Ferrin. Photo by Majed.
Indeed, computer graphics was in the midst of a transformation. During the 1980s, the team had taken advantage of SGI (Silicon Graphics, Inc.) workstations with onboard graphics hardware. But in the 1990s, demand from the computer gaming community drove the rapid development of very large scale integration (VLSI) graphics. Graphics hardware was no longer rare and expensive. “Now graphics chips come for free on virtually every laptop and desktop on the market,” says Ferrin.
To keep up with these changes, Ferrin decided to replace MidasPlus and create Chimera in 1994. Today the tool is used to visualize molecular models made using crystallography, electron microscopy, NMR, and also theoretical models made using hybrid methods that combine experimental data and theoretical computational methods.
A recent Chimera image: model of an HIV virus particle displayed using Chimera showing 12,000 proteins and protein fragments.
Work is also underway to support super-resolution light microscopy in a collaboration between Ferrin, Betzig of the Howard Hughes Medical Institute’s Janelia Farm Research Campus, and Dyche Mullins of UCSF. The project began when a post-doc in the Mullins lab, Lillian Fritz-Laylin, collected a time-series of images of macrophage-like immune cells in motion using Betzig’s new microscope. The microscope captures 3D images about once per second at resolutions of about 200 to 250 nanometers, not atomic resolution, but still at a dynamic level never before seen. The data — 40 terabytes in all — came home with Fritz-Laylin, but was left untouched for months because she had no tools to use to visualize it.
For the past year or so, Ferrin’s lab has been working to enhance Chimera to read, process and visualize the data. Simply reading a terabyte off a disk takes three hours and, since no computer has a TB of real memory, he also faced the same limitations Langridge faced back when bytes came at a premium. While the new microscopy has challenged Ferrin, he and the other collaborators have also provided Betzig with feedback that has helped him improve his instrument. The collaborators hope to publish soon. “These are some of the first 3D images of cells in motion,” he says. “It’s just fantastic.”
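The article does not say how Chimera attacks this problem internally, but the general out-of-core trick is easy to illustrate: map the file into virtual memory and stream it in slabs small enough to fit in RAM. The file name, data type and volume shape below are hypothetical placeholders, and this is only a generic sketch under those assumptions, not Chimera's actual implementation.

```python
# Generic out-of-core sketch: stream a volume far larger than RAM in small slabs
# via a memory map. File name, dtype and shape are hypothetical placeholders, and
# the raw file is assumed to already exist on disk; this is not Chimera's code.

import numpy as np

shape = (512, 2048, 2048)   # one hypothetical 3D time point (z, y, x), ~4.3 GB as uint16
vol = np.memmap("timepoint_0001.raw", dtype=np.uint16, mode="r", shape=shape)

# Work on 4 z-planes at a time so only ~34 MB is resident in memory at once.
projection = np.zeros(shape[1:], dtype=np.uint16)
for z0 in range(0, shape[0], 4):
    slab = np.asarray(vol[z0:z0 + 4])                      # materialize just this slab
    projection = np.maximum(projection, slab.max(axis=0))  # fold it into the running maximum

# 'projection' is a maximum-intensity projection computed without loading the whole volume.
print(projection.shape, projection.dtype)
```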
-- Elizabeth Dougherty
Jackson Mac Low: Lines-Letters-Words
by Rabia Ashfaque
The Drawing Center | January 20 – March 19, 2017
While the beginning of the First World War is perhaps the most politically significant event associated with the year 1914, an intriguing and rather unconventional book of poems called Tender Buttons by Gertrude Stein began making a lot of noise around the same time. An unorthodox investigation into the way in which language constructs a view of the world, Stein’s experimental work received mixed reviews; while the implications of her exploration regarding the limitations of language did not register with some, others lauded Stein for the inquisitive de- (or rather re-) construction of the (hitherto) conventional vernacular. Jackson Mac Low’s (1922 – 2004) visual works in Lines-Letters-Words at the Drawing Center offer traces of Stein’s inquiries that informed his own work throughout his career. Works on display are categorized by various phases of evolution in his practice, and explore the visual dynamics of language and its inherent multi-functionality.
Like many of his peers, Mac Low was using mixed media to work out his creative processes; while he is primarily recognized as a poet, it is often overlooked that his first language experiments were conducted through drawings. Starting in his early teens, Mac Low continued to produce drawings in tandem with his poetry throughout the span of his career to produce more contemplative ways of seeing. The pieces on display in his first solo museum exhibition of visual works reveal the poet’s absorption with the meditative practice of drawing.
Early doodle-like works from the late 1940s evolved in the early ’50s to take the shape of fragmented writings such as H (1953)—a lone letter, trembling with emotion like the “single hurt color” from “A Carafe, That is a Blind Glass” in Stein’s Tender Buttons, a poem which, according to Mac Low, was “pointing” at a new way of seeing something that is ordinary and finding in it something “not ordinary.”1 It would be accurate to say that Mac Low’s work was building up to a similar kind of “pointing” by investigating the nature of language and reconfiguring it to discover new meaning.
Interested in science from a young age, Mac Low kept up with the latest scientific developments, events, and technologies throughout his life. Given the subject’s analytical core, it is unsurprising to find its influence across many elements of his work—from the naming (Light Poems, Drawing-Asymmetries, Skew Lines) to the methods (his “chance operations” were created with computer-generated texts), to the range of media (spanning from drawings to recordings such as his 1961 Tree* Movie). Science became Mac Low’s greatest ally in his experiments, used both to study language as well as to “invent techniques of artistic production.”2
After encountering D. T. Suzuki’s teachings on Buddhism in the 1950s, he was occupied with a desire to write “egoless poetry” to discover, in the very nature of language, a way to express an idea wholly of itself. Performances of his drawings and poems, including the Gathas (starting 1961), Drawing-Asymmetries (starting 1960), Skew Lines (1979 – 80), and Vocabularies and Name Poems (starting mid-’70s) were a vital component of Mac Low’s experiments in egoless expression, exploring new possibilities of visual, sonic and gestural representations of language. His Light Poems (starting June 10, 1962) were Mac Low’s personal blend of science and Zen Buddhism, relating various types of light with the people to whom its particular qualities could be attributed. As part of research for the project, he made the Light Poems Chart (1962) by filing different types of light under a partially determined set of guidelines involving the use of letters from his and his first wife’s name and all the denominations from playing cards.
Mac Low’s illustrative practice came full circle with the thirteen Vermont Drawings (1995), the last of the works in the exhibition that are focused solely on exploring drawing. Free from the textual and performative aspects of his earlier works, the wispy line-drawings “echo the unsettled system of marks in Jackson’s early works,” wrote Brett Littman in his essay “Jackson Mac Low: Lines-Letters-Words,” in the exhibition catalogue by the same name, which also includes essays by Anne Tardos and Sylvia Mae Gorelick. It all comes down to these three words, really. Lines, letters, and words—signifiers of language—shaped Mac Low’s entire legacy. “Whatever the degree of guidance given by the authors, all or the larger part of the work of giving or finding meaning devolves upon the perceivers,”3 Mac Low once wrote. Driven by this democratic notion of shared ownership, he used lines, letters, and words to draw connections between diverse fields of study, consequentially blowing each wide open for new possibilities of communication. Knowingly or not, Mac Low was working on a new visual language. And given today’s increasingly interdisciplinary landscape, I’d say his experiments were downright groundbreaking.
- Jackson Mac Low, “READING A SELECTION FROM TENDER BUTTONS”, L=A=N=G=U=A=G=E, Vol.1, No.6, December 1978.
- Mordecai-Mark Mac Low, “Science, Technology, and Poetry: Some Thoughts on Jackson Mac Low”, CRAYON #1, September 1997.
- Jackson Mac Low, “Language-Centered”, OPEN LETTER 5.1, L=A=N=G=U=A=G=E, Vol.4, No.1, Winter 1982.
Friday, May 27, 2011
The Otter. Madra Uisce.
The otter lives along riverbanks and beside lakes all around Ireland. It is very good at swimming and diving. The otter has a small, flattened head, a long thick neck and a thick tail that narrows to a point. It can be 3 feet (about a metre) in length, and when fully grown it can weigh about 20 lbs. It has a long body covered with fur. It belongs to the same family as the stoat, pine marten and badger. The otter looks like a seal. The otter is a carnivore, which means that it usually eats meat, and it often eats shellfish. To get at the shellfish, the otter bangs it against a stone; this way it can get at the food inside. The otter is found in many parts of Ireland. Otters are very playful animals. The otter has a thick grey-to-brown coat of fur, which helps it swim well in water. Otters are often seen jumping in and out of the water just for fun.
The Dobhar-chu (see my post 26th-Oct-2010).
In Irish folklore, the Dobhar-chu (say durra-ghoo) is the king of all otters, the seventh cub of an ordinary otter. It is said to be much larger than a normal otter, and it never sleeps. The king of all otters is so magical that an inch of its fur will protect a man from being killed by gunshot, stop a boat from sinking or stop a horse from being injured.
The Dobhar-chu is also often said to be accompanied by a court of ordinary otters. When captured, these beasts would grant any wish in exchange for their freedom. Their skins were also prized for their ability to render a warrior invincible, and were thought to provide protection against drowning. Luckily, the Otter Kings were hard to kill, their only vulnerable point being a small spot below their chin (first you had to get past those sharp teeth). There are also traditions of the "King Otter", who is dangerous and will devour any animal or beast that comes in its way. This otter is sometimes described as white with black-rimmed ears and a black cross on his back, and sometimes as pure black with a spot of white on his belly. He could only be killed by a silver bullet, and the person who killed him would die within 24 hours.
It was believed that if you were bitten by an otter then the only cure was to kill and eat another otter.
The otter is protected under Irish law and it is a criminal offense to kill one.
The otter is a loyal mate and a good parent that will look after its cubs for longer than most other animals, and for this reason it is a symbol of a strong family.
The otter is sacred to the Irish sea god Manannan Mac Lir and the goddess Ceridwen.
Irish harps used to be carried in bags made from otter skin as it protected them from getting wet.
A warrior’s shield would be covered in otter skin (lining the inside) and in this way they protected the warrior in battle.
It was believed that the magical power of the otter’s skin could be used for healing. It was used to cure fever, smallpox and as an aid in childbirth.
If a person licked the still warm liver of a dead otter they would receive the power to heal burns or scalds by licking them.
The Wounded Otter.
by Michael Hartnett - translated from the Irish by the Author
From 20th Century Irish poems selected by Michael Longley. Published by Faber and Faber.
A wounded otter on a bare rock a bolt in her side,
stroking her whiskers stroking her feet.
Her ancestors told her once that there was river,
a crystal river, a waterless bed.
They also said there were trout there
fat as tree-trunks and kingfishers
bright as blue spears -
men without cinders in their boots,
men without dogs on leashes.
She did not notice the world die nor the sun expire,
She was already swimming at ease in the magic crystal river.
One of his country's best-loved poets, Irish born Michael Hartnett, died in October '99 in Ireland. He was 58 years old.
Tuesday, May 24, 2011
May the frost never afflict your spuds,
may the outside leaves of your cabbage always be free of worms,
may the crows never pick your haystack,
and may your donkey always be in foal
Old Irish Proverb
Christian tradition holds that donkeys originally had unmarked hides, and that it was only after Christ's entry into Jerusalem on the back of a donkey that they received the dark cross on their backs.
The hairs from the cross were widely believed to cure a number of ailments, and were often worn in a charm around the neck to guard against whooping-cough, toothache, fits, and to ease teething pains in babies.
Riding a donkey was also believed efficacious, especially if the rider faced the donkey's tail end, and was sometimes used as a preventative for toothache, measles and other children's complaints.
One cure for whooping-cough stated that the patient should be passed under a donkey and over its back either three or nine times; the trick of feeding an animal some of the patient's hair to transfer the illness was also used with donkeys. The donkey was also used to help cure the complaints of other animals; letting a black donkey run with mares in a field was thought to stop the mares miscarrying.
An old saying claims that no-one ever sees a dead donkey; this stems from the belief that a donkey knows when it is about to die and hides itself away. However, there is also a tradition that to see a dead donkey means great good fortune, and even as recently as this century it was considered a good-luck charm to leap over the carcass of a dead donkey three times.
"If a donkey brays in the morning,
Let the haymakers take a warning;
If the donkey brays late at night,
Let the haymakers take delight."
A pregnant woman seeing a donkey - the child will grow wise and well behaved.
When a donkey brays and twitches its ears, it is said to be an omen that there will be wet weather.
When a pregnant woman sees a donkey, her child will grow up well behaved and wise.
In Ireland Mothers would wear a strip of donkey skin and a piece of hoof around their neck as a talisman against harm.
In County Mayo they believed that the spot on a donkey’s leg was put there by Our Lady’s thumb.
If two people with the same surname get married they can cure jaundice by placing a donkey’s halter on the afflicted person and leading them to a well. The person is then made to drink the water three times from the well.
A donkey that won’t stop braying and twitching its ears is an omen that rain is on the way.
A child sitting on the back of a donkey that circles nine times will be cured of whooping cough.
Hairs from a donkey’s back cure fits, convulsions, toothache, and teething trouble in babies.
The right hoof of a donkey protects against epilepsy.
Feeding a donkey the hairs from a patient cures the patient of scarlet fever.
One of the main uses for the donkey in Ireland was for carrying turf, seaweed or milk churns, and it is for this reason that it became a symbol of the Irish countryside. In 1743 an act of parliament was passed under which killing a donkey carried a sentence of death.
It was also valued for its milk, which was considered a cure for tuberculosis, whooping cough and gout, and a way of improving the skin (remember Cleopatra).
The skin of a donkey was used to make shoes, sieves and bodhráns (drums).
Today you will mainly see donkeys in the countryside but now they are usually kept as pets although sometimes you may see them pulling the little donkey cart. For me there is a sense of nostalgia and beauty at this sight.
Sunday, May 22, 2011
Stoat. Easóg. Often referred to as ‘The Weasel’ in Ireland.
The Irish name Easóg refers to its eel like shape (eas is the Irish for eel) and when it runs its body undulates.
Rarely encountered in the flesh, but common in country tales, stoat packs have long hunted the borderland between folklore and natural history. It was once believed that the stoat was a form of cat brought to our shores as a pet by the Anglo-Normans.
The stoat has been present in Ireland since before the Ice Age, and possibly survived here through the Ice Age too. In fact, we have our very own sub-species, with a whiter belly, that is only found in Ireland and the Isle of Man.
On a mild, sunny day in March, a man was walking down a Yorkshire lane. Partridges were calling in the stubble, there was a blue haze in the air, and all was quiet in that part of the world.
Suddenly, as he walked, a pack of small animals charged down the bank into the lane and all about him. They leaped at him red-eyed, snapping little white fangs, leaping, dancing, darting, and as agile as snakes on four legs. Indeed, they looked like furry snakes, with their short legs, their long, undulating bodies, their little pointed heads, their flattened ears, rat-like tails and little murderous eyes.
The man laid about him with his stick. He knocked six or eight flying into the ditches on either side. He kicked off two or three that had fastened their fangs into his trouser leg. And those that he had knocked flying with blows that would have stunned a dog came out of the ditches and at him again. So, after a minute or two of this cut-and-thrust business, he took a good sharp run down the lane.
The man in question was Sir Alfred Pease, "a brave man who knows more about animals than most", and it was thus that J Wentworth Day described, in the 1930s, Sir Alfred's encounter with a stoat pack.
The stoat (Mustela erminea) is a member of the family mustelidae that includes weasels, ferrets, martens and otters. We are familiar with the paralysis it can inflict on rabbits, even at some distance, without knowing quite how it does it.
Well documented also is the stoat's whirling Dervish-like dance that mesmerises other animals until it darts forward and seizes one. Slightly less explicable is the dance that witnesses have reported the stoat performing as if in triumph over its already dispatched prey: "It ran round and round the dead bird," wrote one, "sometimes almost turning head over heels; then it would break away and race off into the bushes, then back out again." Stranger still is the fact that stoats carry their dead - appearing soon after one of their kind has been killed to drag the corpse into a hiding place.
It is perhaps such behaviour, along with their almost preternatural speed and flexibility, which have given stoats a slightly uncanny character. They are elusive, usually solitary animals; collectively, however, they can induce a feeling of menace.
No one is really sure why stoats occasionally form packs. The ability to hunt bigger prey is one obvious motive, yet as many stoat packs have been recorded in times of plenty - high summer for instance - as during hard winters. A female stoat hunting with her large brood of kits (usually between six and 12), or an accidental meeting of two family groups, giving a false impression of an organised pack, has also been suggested.
In Irish mythology, stoats were viewed as if they had human like abilities, as animals with families, which held rituals for their dead. They were also viewed as noxious animals prone to thieving and their saliva was said to be able to poison a grown man. They were even believed to understand human speech. So greet them politely or suffer the consequences.
To encounter a stoat when setting out for a journey was considered bad luck, but one could avert this by greeting the stoat as a neighbour.
Stoats were also supposed to hold the souls of infants who died before baptism
It was believed that if you killed a stoat its family would return and spit in the milk churn to poison it.
A purse or wallet made from the skin of a stoat was believed to bring great fortune for it would never be empty.
The skin of a stoat was said to cure rat bites.
If a woman cut off the testicles of a male stoat, stitched them into a wee bag and wore it round her neck it would act as a form of contraception. Well it would put me off.
We have a stoat that lives in one of our banks next to some stone steps. It has never harmed our chickens and ducks or, to my knowledge, any other of the wild birds. We have a blackbird with a white spot on its shoulder that lives in a group of bushes near the stoat; he has been there about four years. The stoat is unusual: we have witnessed it a couple of times playing tag with a rat, and as the two species are said to be sworn enemies, it is this that I find unusual. Recently we saw a smaller stoat; is this a female, or are they breeding? I'll keep you posted.
Exams over, summer is upon us, time to relax.
Stoats are totally protected in Ireland. If stoats are proving a problem, by killing chicks or other domestic animals, you must solve the problem by using good fencing; it is illegal to kill a stoat.
Saturday, May 21, 2011
The Bear. Mathúin.
Art: in Ireland a separate name from Arthur, it comes from an ancient word for "a bear," used in the sense of "outstanding warrior" or "champion." A pagan High King of Ireland, Art's rule was so honest that two angels hovered over him in battle.
Bear folklore is widespread, especially in the far northern hemisphere. It is not surprising that this awesome beast was one of the first animals to be revered by our ancestors. From as far back as the Palaeolithic (around 50,000 years ago) there is evidence of a bear cult in which the bear was seen as lord of the animals, a god, and even the ancestor of humans. Various species of bear played a central role in many shamanic practices of the north, and brown bears were part of our native forests as recently as the 10th century, when hunting and habitat loss drove them to extinction.
The Celts venerated the bear goddess, Artio - like a mother bear she was a fiercely protective influence. The bear god Artaois is closely linked to the warrior-king, Arthur; with his legendary strength and fighting prowess, Arthur's name and emblem both represent this animal. Celtic families would often have their own animal totem, a tradition that is still evident in the family name McMahon, which means 'son of the bear'.
Viking warriors were famous for working themselves into an insane battle frenzy (it has been suggested that the psychotropic fly agaric mushroom was sometimes used, see one of my earlier posts). They invoked the bear spirit, at times even donning a bear skin, to imbue them with superhuman strength and fury. These were the Berserkers, their name being derived from a Norse word meaning 'bear shirt'.
Perhaps the most wonderful characteristic of bears is their ability to hibernate and then emerge at the end of winter, which suggests death and resurrection. In part because bears give birth during hibernation, they have been associated with mother goddesses. The descent into caverns suggests an intimacy with the earth and with vegetation, and bears are reputed to have special knowledge of herbs
In Celtic mythology, Andarta was a warrior goddess worshipped in southern Gaul. Inscriptions to her have been found in Bern, Switzerland as well as in southern France. Like the similar goddess Artio, she was associated with the bear.
In Irish and Scottish mythology, Cailleach (also called Cailleach Beara or Cailleach Behr) was the "Mother of All". The word Cailleach means "old woman". She was a sorceress. In addition to the Celts, the Picts also worshipped her. In art, she was depicted as a wizened crone with bear teeth and a boar's tusks. Each year, the first farmer to finish his harvest made a corn dolly representing Cailleach from part of his crop. He would give it to the next farmer to finish his harvest, and so on. The last farmer had the responsibility to take care of the corn dolly, representing Cailleach, until the next year's harvest.
In Scotland, she is Cailleach Behr, The Blue Hag of Winter, an Underworld goddess and a faery spirit. She appears as an old woman in black rags carrying a staff, who travels about at night with a crow on her left shoulder. She has a bad temper and is dangerous to people. She has fangs and sometimes three faces. She could turn herself into a cat. One legend describes her as turning to stone on Bealtaine and reverting back on Samhain to rule as Queen of Winter. In another, she spent the autumn washing her plaid in her washtub, the whirlpool of Corryvreckan. By winter this was white, and became the white blanket of snow that falls over Scotland in January.
Bears are no longer found in Ireland (they have been gone since the end of the eleventh century) or Scotland, where they became extinct in the late Middle Ages. Bear amulets made of jet have been found in North Britain. Many times these were placed in the cribs of new-born babies so they would be under the protection of the Great Mother Bear. The Bear's strength and power made them a powerful totem symbol for the ancient Celts, and Bear's teeth were considered powerful amulets. Some Celtic sites had votive statues and ritual jewellery dedicated to the Bear.
The Celts had two goddesses that took the form of the Bear: Andarta ("powerful bear") and Artio. The Celtic god, Cernunnos is often depicted as being accompanied by a bear and other animals. The Druids called upon the blessings of the Great Bear, which is associated with the North. The reverence for Bears began to wain with the coming of Christianity, and was perverted into bear-baiting.
Phrases such as "licking a child into shape" come from the belief that newborn bear cubs were small and fragile and that their mother licked them into health and shape. The bear's paw was also thought to secrete a substance that sustained the bear through its long winter hibernation.
Medieval "mummers" play the Bear as a villain, having him terrorize flocks of sheep. Bears have always been admired for their great strength, and their knowledge. Bears will stay away from trouble with humans if possible, but when cornered, they will fight bravely.
In medieval times, it was believed that a Bear's eye in a beehive would make the bees prosper and make more honey. Bears love honey, and often will brave the anger of the hive for a taste of their favourite nectar.
A child riding on the back of a Bear was thought to cure whooping cough.
Bears roamed Ireland thousands of years ago, a time when the entire island was almost totally forested. The Irish bear – the brown bear – was of the same species as the North American grizzly, and as such could reach heights of over eight feet when standing on hind legs. Bones were found in Glenade, in County Leitrim, in 1997 and, at 3,000 years old, are thought to be from the last bears to have lived in Ireland. The finding shows that bears lived on the island at the same time as humans; perhaps hunting and loss of habitat led to their extinction.
Wednesday, May 18, 2011
Just for a bit of a change I am posting a series of animal posts.
The Raven. An Fiach Dubh.
In Irish folklore the Raven and the Crow were associated with the Triple Goddess the Morrigan, and it was believed that the raven or crow that flew over the battlefield was the Morrigan. Some would consider her the protector; others looked upon her as the bringer of death. She was, however, the protector of warriors. Her message really should be that in war there can only be one winner, and that is death. As a symbol of death the raven would be buried with its wings outstretched in order to symbolize the connection between this world and the otherworld, with the raven as a messenger between the two.
Banshees could take the shape of ravens or crows as they cried above a roof, an omen of death in the household below.
"To have a raven's knowledge" is an Irish proverb meaning to have a seer's supernatural powers to see all, to know all and to hear all. Raven is considered one of the oldest and wisest of animals.
The raven was the favourite bird of the solar deity, Lugh. Lugh was said to have had two ravens that attended to all his needs.
Giving a child their first drink from the skull of a raven will give the child powers of prophecy and wisdom.
The raven, with its glistening purple-black plumage, large size and apparent intelligence has inspired man from ancient times. It is regarded as an omen of both good fortune and bad, carrying the medicine of magic. It is often associated with war, death and departed spirits. However, the raven has not always been associated with death, spirits and darkness. Quite the contrary, the raven was believed by some to be the bringer of light, truth and goodness.
A raven sits on the shoulder of Ulster hero, Cú Chulainn, to symbolise the passing of his spirit.
The Bible (Genesis, chapter 8: 6-13 of the Old Testament) tells how birds are sent by Noah to detect whether there is any dry land outside the ark that he had built to withstand the Flood:
At the end of forty days Noah opened the window of the ark which he had made, and sent forth a raven; and it went to and fro until the waters were dried up from the earth. This was the first recorded use of the Sat Rav (sorry, my sense of humour).
The druids would predict the future by studying the flight and the cries of the birds. The raven is believed to be an oracular bird, and a bearer of messages from the Otherworld. It is a symbol of the connection between this world and the next and it was said to represent the balance between life and death and the creation of the new.
Ravens are associated with knowledge, warning, procreation, healing, prophecy and are also a form favoured by shape shifters.
Finding a dead crow on the road is good luck.
Crows in a church yard are bad luck.
A single crow over a house meant bad news, and often foretold a death within. "A crow on the thatch, soon death lifts the latch."
When crows were quiet and subdued during their midsummer's molt, some European peasants believed that it was because they were preparing to go to the Devil to pay tribute with their black feathers.
Two crows would be released together during a wedding celebration. If the two flew away together, the couple could look forward to a long life together. If the pair separated, the couple might expect to be soon parted, too. (This practice was also performed using pairs of doves).
It has been said that a baby will die if a raven's eggs are stolen.
Ravens are considered royal birds. Legend has it King Arthur turned into one.
Crows feeding in village streets or close to nests in the morning means inclement weather is to come - usually storms or rain. Crows flying far from their nest means fair weather.
The Romans used the expression "To pierce a Crow's eye" in relation to something that was almost impossible to do.
An Irish expression, "You'll follow the Crows for it" meant that a person would miss something after it was gone.
The expression, "I have a bone to pick with you" used to be “I have a crow to pick with you".
A ritual for invisibility: cut a raven’s heart into three, place beans inside each portion, and then bury them right away. When the beans sprout, keep one and place it in your mouth. Invisibility occurs while the bean is inside the mouth.
Ravens facing the direction of a clouded sun foretell hot weather.
If you see a raven preening, rain is on the way.
Ravens flying towards each other signify an omen of war.
Seeing a raven tapping on a window foretold death.
If a raven is heard croaking near a house, there will be a death in it.
If a raven flies around the chimney of a sick person's house, they will die.
Many parts of Celtic Britain and Ireland view the raven as a good omen:
Shetland and Orkney - if a maiden sees a raven at Imbolc she can foretell the direction of her future husband's home by following the raven's path of flight.
Wales - if a raven perches on a roof, it means prosperity for the family.
Scotland - deerstalkers believed it bode well to hear a raven before setting out on a hunt.
Ireland - ravens with white feathers were believed a good omen, especially if they had white on the wings. Ravens flying on your right hand or croaking simultaneously were also considered good omens.
Raven is said to be the protector and teacher of seers and clairvoyants. In the past, witches were thought to turn themselves into ravens to escape pursuit.
Once upon a midnight dreary, while I pondered, weak and weary,
Over many a quaint and curious volume of forgotten lore,
While I nodded, nearly napping, suddenly there came a tapping,
As of someone gently rapping, rapping at my chamber door.
"'Tis some visitor," I muttered, "tapping at my chamber door -
Only this, and nothing more."
Extract from The Raven by Edgar Allan Poe
Sunday, May 15, 2011
Rabbit. Coinín. Irish Hare. Giorria Éireannach.
A hare was a dreaded animal to see on a May morning. An old Irish legend tells of a hare being spotted sucking milk from a cow. The hare was chased by hounds and received a bad wound and it made its way into an old house to hide. When the house was searched all that was found was an old woman hiding a wound. The woman of the house had a central role in dairy production. From this fact springs the idea that women were those essentially involved in the theft of the farmer's "profit". Old, widowed, unmarried or independent women were usually pinpointed as the main culprits.
Hares feature in Irish folklore, and the hare is older than our island’s culture itself. The Irish hare has been immortalised as the animal gracing the Irish pre-decimal three pence piece. Hare mythology exists throughout almost every ancient culture and when the first settlers colonised Ireland, the Irish hare was already an iconic figure. There are many examples in Celtic mythology, and storytellers still relate tales of women who can shape-change into hares.
The cry of the Banshee foretelling death might be legend but it may have parallels with the Irish hare of today as it struggles to avoid extinction in modern times.
Fertility rituals: place a rabbit skin under your bed to bring fertility and abundance to your sexual activities. If you're opposed to the use of real fur, use some other symbol of the rabbit that you're more comfortable with.
The obvious one -- a rabbit's foot is said to bring good luck to those who carry it, although one might argue that it's not so lucky for the rabbit.
To bring yourself boundless energy, carry a talisman engraved or painted with a rabbit's image.
If you have wild rabbits or hares that live in your yard, leave them an offering of lettuce, shredded carrots, cabbage, or other fresh greens. In some magical traditions, the wild rabbit is associated with the deities of spring.
Rabbits and hares are able to go to ground quickly if in danger. Add a few rabbit hairs to a witch bottle for protection magic.
In some legends, rabbits and hares are the messengers of the underworld -- after all, they come and go out of the earth as they please. If you're doing a meditation that involves an underworld journey, call upon the rabbit to be your guide.
Eostre, the Celtic version of Ostara, was a goddess also associated with the moon, and with mythic stories of death, redemption, and resurrection during the turning of winter to spring. Eostre, too, was a shape–shifter, taking the shape of a hare at each full moon; all hares were sacred to her, and acted as her messengers. Caesar recorded that rabbits and hares were taboo foods to the Celtic tribes.
In Ireland, it was said that eating a hare was like eating one’s own grandmother — perhaps due to the sacred connection between hares and various goddesses, warrior queens, and female faeries, or else due to the belief that old "wise women" could shape–shift into hares by moonlight.
The Celts used rabbits and hares for divination and other shamanic practices by studying the patterns of their tracks, the rituals of their mating dances, and mystic signs within their entrails. It was believed that rabbits burrowed underground in order to better commune with the spirit world, and that they could carry messages from the living to the dead and from humankind to the faeries.
As Christianity took hold in western Europe, hares and rabbits, so firmly associated with the Goddess, came to be seen in a less favourable light — viewed suspiciously as the familiars of witches, or as witches themselves in animal form. Numerous folk tales tell of men led astray by hares who are really witches in disguise, or of old women revealed as witches when they are wounded in their animal shape.
Although rabbits, in the Christian era, were still sometimes known as good luck symbols (hence the tradition of carrying a "lucky rabbit’s foot"), they also came to be seen as witch–associated portents of disaster.
Despite this suspicious view of rabbits and their association with fertility and sexuality, Renaissance painters used the symbol of a white rabbit to convey a different meaning altogether: one of chastity and purity. It was generally believed that female rabbits could conceive and give birth without contact with the male of the species, and thus virginal white rabbits appear in biblical pictures of the Madonna and Child. The gentle timidity of rabbits also represented unquestioning faith in Christ’s Holy Church in paintings such as Titian’s Madonna with Rabbit (1530).
From the 1893 edition of Folklore: "Country people in Kerry don't eat hares; the souls of their grandmothers are supposed to have entered into them."
Hares were strongly associated with witches. The hare is quiet and goes about its business in secret. They are usually solitary, but occasionally they gather in large groups and act very strangely, much like a group of people having a conference. A hare can stand on its hind legs like a person; in distress, it utters a strange, almost human cry which is very disconcerting to the listener. Watching such behaviour, people claimed that a witch could change her form at night and become a Hare. In this shape she stole milk or food, or destroyed crops. Others insisted that hares were only witches' familiars. These associations caused many people to believe hares were bad luck, and best avoided.
A hare crossing one's path, particularly when the person was riding a horse, caused much distress. Still, the exact opposite superstition claimed that carrying a rabbit's or hare's foot brought good luck. There is no logic to be found in superstitions.
Hares are considered unlucky, as the witches constantly assume their form in order to gain entrance to a field where they can bewitch the cattle. A man once fired at a hare he met in the early morning, and having wounded it, followed the track of the blood till it disappeared within a cabin. On entering he found Nancy Molony, the greatest witch in all the county, sitting by the fire, groaning and holding her side. And then the man knew that she had been out in the form of a hare, and he rejoiced over her discomfiture.
A tailor one time returning home very late at night from a wake, or better, very early in the morning, saw a hare sitting on the path before him, and not inclined to run away. He approached with his stick raised to strike her, and as he did so he distinctly heard a voice saying, "Don't kill it." However, he struck the hare three times, and each time heard the voice say, "Don't kill it." The last blow knocked the poor hare quite dead and immediately a great big weasel sat up, and began to spit at him. This greatly frightened the tailor, who grabbed the hare and ran off as fast as he could. Seeing him look so pale and frightened, his wife asked the cause, on which he told her the whole story; and they both knew he had done wrong, and offended some powerful witch, who would be avenged. However, they dug a grave for the hare and buried it; for they were afraid to eat it, and thought that now perhaps the danger was over. However, the next day the man became suddenly speechless, and died before the seventh day was over, without a word evermore passing his lips; and then all the neighbours knew that the witch-woman had taken her revenge.
Top image: Madonna and Child with Saint Catherine (The Virgin and the Rabbit). 1525-1530. Oil on canvas. Louvre, Paris.
Lower image: Hare in The Moon.
Friday, May 13, 2011
Folklore of the Hedgerow. Part Twenty One.
The Bat. Ialtóg.
Bats are feared as creatures of the night, associated with death, sickness and witchcraft, and made famous by the cinema as the familiars of vampires.
They sleep hanging upside down by their feet. They live in shelters such as caves or hollow trees, but they also take advantage of human structures. Like most small animals that are drawn to human habitations, bats have often been identified in folk belief with the souls of the dead. As a result, in cultures that venerate ancestral spirits, bats are often considered sacred or beloved. When spirits are expected to pass on rather than return, bats appear as demons or, at best, souls unable to find peace.
According to one well-known fable, popularly attributed to Aesop, the birds and beasts were once preparing for war. The birds said to the bat, “Come with us,” but he replied, “I am a beast.” The beasts said to the bat, “Come with us,” but he replied, “I am a bird.” At the last moment a peace was made, but ever since, all creatures have shunned the bat.
In relation to bats the learned folklorist Joseph Jacobs said, "He that is neither one thing nor the other has no friends." Revulsion against them, however, is far from universal, and their quizzical faces have often inspired affection. There were no glass windows in the ancient world, and so people had little choice but to share their homes with bats.
In Ireland if a bat was seen near the house it was taken as a sign of an impending death for a member of the household. However, we have bats in our roof space (they came in last winter). We are quite happy with them and they cause us no problems whatever.
A common bat seen in and around hedgerows at dusk is the Pipistrelle Bat. Their Irish name is Ialtóg Fheascrach, which means 'bat of the evening'.
Wood mouse. Luch fhéir / Luchóg.
The earliest remains of wood mice in Ireland date to the Stone Age, 7600 years ago. It is believed that more wood mice came to Ireland with humans at various times, giving a certain genetic variability. The wood mouse is a very important part of the Irish food web. Many Irish predators eat wood mice, including owls, kestrels, stoats, foxes, badgers, pine martens, and domestic cats. Wood mice are susceptible to pesticides, insecticides, and herbicides, and to the burning of straw. A decline in wood mice numbers can affect predator numbers, especially owls.
To hear a mouse squeaking anywhere near someone who is ill is a sign that the person will die, and much of the abhorrence towards mice (who are actually far cleaner creatures than generally imagined) probably stems from the old superstition that they are the souls of people who have been murdered.
If they nibble anyone's clothing during the night, that person will suffer some misfortune, while no journey undertaken after seeing one is likely to be successful.
In Ireland boiled mice were given to infants to cure their incontinence and were also a cure for whooping cough.
Mice were used as a cure for baldness. Fill a pot with mice and leave it under the hearth for a year. You then spread the contents of the pot over your scalp. If for some reason you couldn't wait, you moved the pot to the back of the hearth, lit a fire in front of it, and after six days spread the contents onto the scalp.
Lower image: Archibald Thorburn - Pipistrelle And Noctule Bat 1920
Folklore of the Hedgerow. Part Twenty.
"A butterfly or moth will hover for a time in one place or fly in a fleeting, hesitant manner, suggesting a soul that is reluctant to move on to the next world".
The transformation of a caterpillar into a butterfly seems to provide the ultimate model for our ideas of death, burial, and resurrection. This imagery is still implicit in Christianity when people speak of being “born again.” The chrysalis of a butterfly may have even inspired the splendour of many coffins from antiquity. Many cocoons are very finely woven, with some threads that are golden or silver in colour. The Greek word “psyche” means soul, but it can also designate a butterfly or moth. The Latin word “anima” has the same dual meaning.
The custom of scattering flowers at funerals is very ancient, and the flowers attract butterflies, which appear to have emerged from a corpse.
Up to the 1600s it was against common law in Ireland to kill a white butterfly because they were believed to hold the souls of dead children.
In Irish folklore, they were the souls of dead people who return to visit their favourite place and their loved ones and it was unlucky to harm one. The red admiral butterfly, however, was thought to be the devil and was persecuted.
Old Irish saying "Butterflies are souls of the dead waiting to pass through Purgatory"
The significance of the butterfly in Irish folklore attributes it as the soul and thus it has the ability to cross into the Otherworld. It is also a symbol of transformation and creation.
"For Christians, the butterfly's three steps of metamorphosis -- as caterpillar, pupa and then winged insect -- are reminiscent of spiritual transformation"
An Irish blessing: May the wings of the butterfly kiss the sun and find your shoulder to light upon. To bring you luck, happiness and riches today and beyond.
Butterfly - If the first butterfly you see in the year is white, you will have good luck all year.
Three butterflies together mean a child will soon be born.
Moth - A white moth inside the house or trying to enter the house means death.
A brown moth means an important letter is coming.
A big black moth in the house means a deceased loved one is visiting, reincarnated in that moth.
According to superstition, the death's head hawk moth, with its skull and crossbones markings and loud squeak, was a harbinger of death, war and disease. The moth uses its tough proboscis to crack through beehives and suck out honey and in some parts of Ireland is known as a bee robber.
Few people know how the butterfly got its name. The witch was supposed to change her shape into this insect. She then flew to the dairy, and stole milk, cheese and, of course, butter!
Top image: The Butterfly Bird of Summer.
Lower image: The Butterfly Tree.
Monday, May 9, 2011
Folklore of the Hedgerow. Part Nineteen.
The Fox. Sionnach.
A popular belief concerning the origin of the fox was held in Ireland. It was believed that they were the dogs of the Norsemen who were supposed to have brought them to Ireland.
Foxes are very good at concealing themselves. Their ability to hide and move swiftly through the hedgerow corridors is legendary. It is this ability together with their skill and cunning when it comes to taking poultry and small animals that has resulted in a reputation that we know today.
The Celtic druids admired the fox for this skill and cunning. In 1984 the two thousand year old body of a man who had been garrotted was found in a bog near Manchester, England (Lindow Man). He was wearing a fox fur amulet and had traces of mistletoe pollen in his gut, and his death by three causes led Dr. Anne Ross to suggest that he may have been a druid prince slaughtered in a ritual.
In common with the otter, the fox is said to carry a magical pearl, which brings good luck to whoever finds it.
The fox is associated with adaptability, and was thought to be a shape-shifter.
There are many stories showing the cunning of the Fox, not always to its credit, but it should be remembered that ‘cunning’ comes from kenning, meaning ‘to know’, without necessarily carrying slyness. This is the fox’s great secret. In folklore all over the world it’s described as "sly", "clever", and “cunning" – and it is. It’s clever at adapting so that it assimilates into its environment even when this environment is changing rapidly.
That cunning may, however, be associated with the false trails a fox can leave in order to deceive its hunters - and foxes were hunted for their pelts, perhaps in a ritual manner. Like the Deer, the Fox was often part of burial rituals, found now in excavations.
The fox was said to be able to foresee events including the weather and its barking was said to be a sure sign of rain.
It is thought to be unlucky to meet a woman with red hair or a fox when setting out in the morning, especially if you were a fisherman.
One cure for infertility was for a woman to sprinkle sugar on the testicles of a fox and roast them in an oven. She should then eat them before her main meal for three days in succession. It does not mention whether the fox was dead or not but I certainly hope so.
An Irish cure for gallstones and kidney stones was to rub the affected area with fox's blood.
The tongue of a fox was also thought to be able to remove a stubborn thorn from the foot, when all else has failed.
The Frog. Losgann.
Frogs are quite recent additions to the fauna of the Irish hedgerow and the exact method of their introduction is unknown. Some suggest they were introduced by the Anglo-Normans, yet others believe they were introduced sometime during the late 1500s or early 1600s by students of Trinity College Dublin who had brought them here from England. They released the frogs into ponds and ditches that were around Trinity at that time, and from there they spread to all parts of Ireland and the rest is history. However, the frog is harmless and well thought of and appears to have found its niche in the rich habitat of the hedgerow.
Water is considered sacred to druids and all water has its guardian spirits or deity. Frogs and their close relatives, toads, may be found in ditches at the edge of hedgerows or where riverine hedges grow. They are spawned in water and will return to the place of their birth in order to carry out the cycle of life and for this reason they were thought to be representatives of the water spirits. Some even believed that a frog was the earthly manifestation of water spirits that lived in sacred wells.
Frogs were seen as creatures of the underworld and for this reason they became associated with witches and the supernatural to be used in the preparation of potions and spells. They were also believed to be one of the witch’s familiars who would give warning to its mistress by loud croaking. As a familiar of the witch or indeed some druids the frog was looked upon as a messenger of the water god/goddess who brought blessings of rain and purification.
The ashes of a cremated frog were thought to stop bleeding; its spawn was considered a cure for rheumatism and inflammatory diseases.
Sore eyes could be cured by getting someone to lick the eye of a frog then licking the eye of the affected sufferer.
The frog, through its connection to Mother Earth, was considered lucky to have living in the dairy, for it protected the milk.
If you look at the colour of the frog you can predict the weather, dark coloured frogs are a sign of rain, light brown or yellow means that dry weather is on the way. There may be some truth in it as rain does make frogs darker and good dry sunny weather makes their skin a lighter colour so who knows?
It is considered bad luck if a frog comes into your house although we have had many a frog come into our cottage and it never did us any harm. Having said that I have never won millions on the lotto so again who knows?
If you put a live frog in your mouth it will cure toothache. You had to rub the frog on the tooth or chew its leg.
It will cure a cold if you hold a frog by its legs and place it in the sufferer’s mouth for a moment (you’ll be too busy vomiting to cough).
If a child had whooping cough it could be cured by bringing it to running water, putting a frog into the child's mouth three times and then letting the frog swim away uninjured. It would take the whooping cough with it. Is this where the saying "I've got a frog in my throat" came from?
A love charm—Bury a live frog in a box and after a few days dig it up. Take the skeleton apart and select a particular bone, place the bone in the clothing of the intended and they will fall madly in love with you.
Why do the English call the French ‘Frogs’?
The main reason is that three frogs have been depicted on the heraldic device of Paris since ancient times, probably dating back to when Paris was a swamp. In pre-revolutionary France the common people were called grenouilles, or frogs, and the name was later extended by the English to include all the French people. Although some people will still believe it's because they eat frog's legs.
Top image: The Fox and The Wren.
Lower image: The Fairy and The Frog
Saturday, May 7, 2011
Folklore of the Hedgerow. Part Eighteen.
The Badger. Broc.
Some people thought that badgers could bring bad luck. This rhyme dates from about 200 years ago:
Should one hear a badger call,
And then an ullot cry,
Make thy peace with God, good soul,
for thou shall shortly die.
So, according to this bit of folklore, if you hear badgers call, then hear an "Ullot" (an owl) hoot, you are not long for this world.
Some people used to say that badgers had legs that were shorter on one side than the other. This was supposed to be because badgers often walked on sloping ground on the sides of hills.
Another 200-year-old story says that badgers - like black cats - can bring bad luck or good luck. If the badger walks across the path that you have just walked on, you are in for very good luck. However, if the badger walks across the path in front of you, and if it happens to scrape up a bit of earth as it goes, then it is time for you to choose your coffin! The old rhyme goes like this:
Should a badger cross the path
which thou hast taken, then
Good luck is thine, so it is said
beyond the luck of men.
But if it cross in front of thee,
beyond where thou shalt tread,
and if by chance doth turn the mould,
Thou art numbered with the dead.
The hair is used in the making of shaving brushes and also for artist’s brushes.
This animal is unyielding in the face of danger and is noted for its tenacity and courage.
The badger was an animal that was always favoured by the gambling fraternity.
If you wear a badger's tooth around your neck you will be lucky in whatever you place wagers on, especially cards.
Highlanders, on the other hand, had rather more regard for the badger, admiring its strength and tough hide. Badger faces were used to cover sporrans, badger teeth employed as buttons, and even badger penises given as fertility charms to bridegrooms from brides' fathers.
Badger fat was used for cooking and also rubbing on the chest as a cure for rheumatism.
Henry Smith, author of 'The Master Book of Poultry and Game', which was published shortly after the end of World War Two, declares "the flesh can be treated as young pig in every respect, it being just as rich and having the flavour of a young pig".
In the middle of the 20th century they were thought to be carriers of tuberculosis, which was subsequently transmitted to cattle. Their persecution was relentless and their numbers in Ireland dipped as a result. Protection was afforded to badgers in the 1970s and since then their numbers have started to recover.
Their home, referred to as a ‘set’, is a complicated tunnel construction where the female or ‘sow’ raises up to three cubs each year during February or March. A Badger set can be as much as twenty metres long and be several metres below the surface.
The Badger (Broc) connects to perseverance, along with the patience and persistence this requires. He is considered self-reliant, determined, assertive and willing to work, with an earthy wisdom. Brocan was a name for Pictish wise men.
That said, the Badger was not always treated with respect - the game 'Badger in the Bag' started, according to legend, with the celtic hero Pwyll tricking a rival into a bag and each of his men having a turn at kicking the supposed 'badger' he had trapped. Bagging badgers before dealing with them (or indeed baiting them) also has to do with their aggression and fighting skills.
Middle image. Badger Rough and Tumble by Martin Ridley.
Folklore of the Hedgerow. Part Seventeen.
If you harm a robin's nest, you will be struck by lightning. There is also an old saying "Kill a robin or a wren, never prosper, boy or man." A robin entering the house foretells of a death to come. If a robin stays close to the house in autumn, a harsh winter can be expected. Robins are thought to be helpful to humans, occasionally granting favours. Robins are a sure sign of spring and if you make a wish on the first robin of spring before it flies off, you'll have luck throughout the following year.
Robins with their cheery red breasts adorn many of our Christmas cards and decorations, and there are several stories as to how the robin acquired its red breast feathers. In the Christian tradition, it is thought that a robin tried to remove the thorns from Jesus’ head during the Crucifixion, and that drops of his blood fell onto the bird and stained his breast feathers red forever. In another myth, the robin gained his red breast from flying into the fiery wastes of hell to carry water to the stricken sinners who were suffering there for all eternity. It’s enough to give you nightmares.
The robin is another bird where it is believed that if they are seen tapping on the window or flying into a room that a member of the household will soon be dead. However, we often have Robins flying into our cottage and we look on them as our friends not as harbingers of death.
If you break a robin’s eggs expect something important of yours to be broken very soon.
Note that if you see a robin singing in the open, good weather is on its way, but if the robin is seen sheltering among the branches of a tree, it will soon rain. Also, if the first bird that you see on St Valentine's Day is a robin, it means that you are destined to marry a sailor!
It is said to be extremely unlucky to kill this bird. The hand that does so will continue to shake thereafter. Traditionally the Irish believe that a large lump will appear on the right hand if you kill one. It is said that whatever you do to a robin you will suffer the same tragedy. Some believe that the robin will not be chased by a cat.
It was widely believed that if a robin came across a dead body it would carefully cover the body with leaves and vegetation until it was completely hidden.
Robins were believed to provide a cure for depression. The remedy suggests a robin must be killed and its heart removed. The heart should then be stitched into a sachet and worn around the neck on a cord. I think that would give me depression.
In the south east of Ireland they believed that if a robin entered a house it was a sign of snow or frost.
A robin singing indicated a coming storm.
How Robin got his Red Breast.
One winter, a long time ago, Jack Frost was very cruel. He made the snow fall thickly upon the ground, and he put ice on the ponds and frost on the window panes.
The birds found it very hard to get food and soon they began to get hungry.
Then, one day, the birds were sitting in a ring under a hedge, trying to think what was to be done. After a while a little brown bird, called Robin, got up to speak.
"I have an idea," he said. "I will go into the gardens and try to get people to give us a lot more crumbs!"
Now Robin had a way all of his own of making friends. He went along to the houses where people lived and in one of the gardens he saw a man clearing away the snow from a path, so he hopped up very close to the man. Most birds are very much afraid of men, but Robin was brave. He had to be, if he was to help the other birds. When the man saw how friendly Robin was, and how hungry he seemed to be, he went into his house and fetched a tray full of crumbs.
Robin was glad, and he flew off to fetch the other birds, and soon there were crowds of them in the kind man's garden.
The best way they could say "Thank you" to the kind man was to eat the crumbs out of his hand. Robin then flew away into other gardens, and wherever he went he made friends. So, while the snow stayed on the ground the birds were able to feed after all. At last Jack Frost sent the snow away, and then the happy birds wanted to thank Robin so they made him a little red waistcoat, which he still wears.
That is why he is now called Robin Redbreast.
Many years ago, late in the year, a cruel wind brought biting cold weather; making the night more difficult for a father and son who had travelled so far and yet still had a long way to go. They looked for a cottage, a barn, or even a tree - anywhere they could find shelter. However, there was nothing to be seen or found, except for an old bush, so at last the father built a fire and told his son to try and sleep a little.
When the father's eyes began to droop he woke his son and told him to watch the fire.
Well the boy tried to stay awake! He hadn't really slept while lying on the frozen ground and he was still exhausted from the walk. His eyes got lower. His head got lower. The fire got lower.
So low in fact that a starving wolf began to inch nearer the sleeping pair.
However, there was one who was awake. There was one who saw everything from the middle of the old bush; a little bird who was as grey as the brambly wood.
The bird hopped down and began fanning the flickering embers until the flames began to lick out hungrily; he flapped his wings for so long that he began to feel a pain in his breast, yet despite this he kept fanning the embers until the flames were dancing with strength.
The heat from the flames caused his breast feathers to change colour and from that day on the Robin has proudly worn a red breast.
Robins feature in ‘Babes in the Woods’ when the little bird buried the children, who had died of cold, with leaves. The ballad ‘Who Killed Cock Robin’ was first published in 1744 and Drayton in 1604 referred to the robin in his work entitled ‘The Owlet’. In fact there are many writers who have been inspired by the dear old robin.
The wren is associated with the druids of Ireland, who considered it a sacred bird and used its musical notes for divination. It was called magus avium (the magic or druid bird).
This poor unfortunate bird was for many years hunted and killed although today it is respected. The main day for hunting was December 26 when the cruel practice was carried out by young boys (Wren boys). The Wren boys would receive money as they paraded the dead birds from house to house.
The wren was seen as a sacred bird by the early Druids and therefore was targeted by Christian believers, as Pagan purges were frequent and all-embracing. This unfortunate set of circumstances may also have come about because the feathers were thought to prevent a person from drowning, and because of this the feathers were traditionally very popular with sailors.
A traditional French belief tells that children should not touch the nest of a wren or the child will suffer from pimples. In the same way as a robin is revered, if anyone harms the bird then the person will suffer the same fate.
The Breton druids have given the wren an honoured role in their folklore, they believe that it was the wren that brought fire from the gods but as she flew back down to earth her wings began to burn so she passed her gift to the robin, whose chest plumage began to burst into flames. The lark came to the rescue, finally bringing the gift of fire to the world.
The wren’s eggs are said to be protected by lightning. Whoever tries to steal wren’s eggs or even baby wrens would find their house struck by lightning and their hands would shrivel up
During the winter wrens lose their body heat rapidly and therefore will often roost together to keep warm. Remember, an odd nest box left up during the winter months will often be used for roosting. It is not unusual for several wrens to cuddle up together in one box during cold times. The male bird builds two or three ball-shaped nests for the female to inspect. She decides which one she likes best and will then proceed to line the chosen nest ready for egg laying.
The wren is a mouse-like little bird for it scurries here and there hiding in ivy leaves and picking up insects in all sorts of hideaway places.
Wordsworth writes about the wren's song in Book II of The Prelude. Whilst most people find the wren's song a little harsh, he favoured it and celebrates it in his writing. Good old Wordsworth!
An earlier post called 'WHY THE WREN FLIES CLOSE TO THE EARTH' tells the story of why the wren was known as the King of the Birds; why not have a look?
Top image-A misty hedgerow.
Middle image-The Wren.
Bottom image-The Robin.
Thursday, May 5, 2011
I thought that this little story might entertain you. Enjoy. I will return to Folklore of the Hedgerow on the next post.
When Felix Agnus put up the life-sized shrouded bronze statue of a grieving angel, seated on a pedestal, in the Agnus family plot in the Druid Ridge Cemetery, he had no idea what he had started. The statue was a rather eerie figure by day, frozen in a moment of grief and terrible pain. At night, the figure was almost unbelievably creepy; the shroud over its head obscuring the face until you were up close to it. There was a living air about the grieving angel, as if its arms could really reach out and grab you if you weren’t careful.
It didn’t take long for rumours to sweep through the town and surrounding countryside. They said that the statue – nicknamed Black Aggie – was haunted by the spirit of a mistreated wife who lay beneath her feet. The statue’s eyes would glow red at the stroke of midnight, and any living person who returned the statues gaze would instantly be struck blind. Any pregnant woman who passed through her shadow would miscarry. If you sat on her lap at night, the statue would come to life and crush you to death in her dark embrace. If you spoke Black Aggie’s name three times at midnight in front of a dark mirror, the evil angel would appear and pull you down to hell. They also said that spirits of the dead would rise from their graves on dark nights to gather around the statue at night.
People began visiting the cemetery just to see the statue, and it was then that a secret society decided to make the statue of Grief part of their initiation rites. "Black Aggie sitting", in which candidates for membership had to spend the night crouched beneath the statue with their backs to the grave of General Agnus, became very popular.
One dark night, two society members accompanied a new hopeful to the cemetery and watched while he took his place underneath the creepy statue. The clouds had obscured the moon that night, and the whole area surrounding the dark statue was filled with a sense of anger and malice. It felt as if a storm were brewing in that part of the cemetery, and they noticed that gray shadows seemed to be clustering around the body of the frightened society candidate crouching in front of the statue.
What had been a funny initiation rite suddenly took on an air of danger.
One of the society brothers stepped forward in alarm to call out to the initiate. As he did, the statue above the boy stirred ominously. The two society brothers froze in shock as the shrouded head turned toward the new candidate. They saw the gleam of glowing red eyes beneath the concealing hood as the statue’s arms reached out toward the cowering boy.
With shouts of alarm, the society brothers leapt forward to rescue the new initiate. But it was too late. The initiate gave one horrified yell, and then his body disappeared into the embrace of the dark angel. The society brothers skidded to a halt as the statue thoughtfully rested its glowing eyes upon them. With gasps of terror, the boys fled from the cemetery before the statue could grab them too.
Hearing the screams, a night watchman hurried to the Agnus plot. He was extremely distressed to discover the body of a young man lying at the foot of the statue. The young man had apparently died of fright.
The disruption caused by the statue grew so acute that the Agnus family finally donated it to the Smithsonian museum in Washington D.C. The grieving angel sat for many years in storage there, never again to plague the citizens visiting the Druid Ridge Cemetery.
Is it true? I will leave it up to you to decide.
Folklore of the Hedgerow. Part Sixteen.
Blackbird. Lon Dubh.
Place blackbird feathers under someone's pillow and they will tell you their innermost secrets. Blackbirds symbolize reincarnation. Blackbirds are linked to the element of Water.
Two blackbirds seen together mean good luck. The sight of two together is unusual as they are quite territorial. If they nest near your house you will be lucky throughout the year and will experience good fortune. They are also regarded as messengers of the dead.
Blackbirds make their nests in trees from moss, grass and hair. A European tradition says that if human hair is used, the unfortunate unknowing donor will continue to suffer from headaches and possibly even boils and skin complaints until the nest is destroyed, so old hair should be disposed of carefully.
The beautiful song of the blackbird makes it a symbol of temptations, especially sexual ones. The devil once took on the shape of a blackbird and flew into St Benedict's face, thereby causing him to be troubled by an intense desire for a beautiful girl he had once seen. In order to save himself, the saint tore off his clothes and jumped into a thorn bush. This painful act is said to have freed him from sexual temptations for the rest of his life. Now if you believe that you’ll believe anything.
Like the crow and the raven, the blackbird is often considered a bad omen. Dreaming of a blackbird may be a sign of misfortune for you in the coming weeks. It also means you lack motivation and that you are not utilising your full potential.
Dreaming of a flying blackbird is said to bring good fortune.
One story concerning the blackbird is about St. Kevin, an Irish 7th century Saint who loved wildlife. It is said that in the temple of the rock at Glendalough, St. Kevin was praying with his hand outstretched upwards when a blackbird flew down and laid her eggs in his palm. The story goes on to say that the saint remained still for as long as it took for the eggs to hatch and the brood to fly the nest.
Among the Celts the blackbird is thought to be one of the three oldest animals in the world. The other two being the trout and the stag. They are said to represent the water, air and earth.
Legend says that the birds of Rhiannon are three blackbirds, which sit and sing in the World Tree of the Otherworlds. Their singing puts the listener into a sleep or trance which enables her/him to go to the Otherworlds. It was said to impart mystic secrets.
And in the nineteenth century, blackbirds were supposed to hold the souls of those in purgatory until judgement day. It was said that whenever the birds' voices were particularly shrill, it was those souls, parched and burning, calling for rain. The rain always followed.
The whistle of the blackbird at dawn warned of rain and mist for the coming day.
Bottom in A Midsummer Night's Dream sings:
“The ouzel cock so black of hue
With orange tawny bill…” (Ouzel being an old name for blackbird.)
The Dunnock. Bráthair an Dreoilín.
Known more popularly as the “Irish Nightingale,” the dunnock is the object of a most tender superstition. By day it is a happy little bird that tries to outdo every other bird with its song. However, at night particularly at midnight their sad and tender songs are said to reflect the cries of unbaptised babies that have returned from the spirit world in search of their parents.
The dunnock’s blue-green eggs were regarded as charms against witches spells when strung out along the hob. They were especially good for keeping witches and spirits from coming down the chimney.
It was in fact Linnaeus who gave the Dunnock the name Accentor which means ‘one who sings with another’. Chaucer made notes on how the cuckoo uses the dunnock to rear its young. Cuckoos which use dunnocks in this way can imitate the colour of the dunnock eggs whereas other cuckoos which may use another species of bird, say a meadow pipit, will imitate the colour of the meadow pipit eggs. Chaucer refers to the Dunnock as Hegesugge which means ‘flutterer in the hedges’. Hegesugge is the Old English name for Dunnock/Hedge Sparrow.
The Thrush. Smólach.
There are many superstitions associated with Song thrushes, including the notion that they dispose of their old legs and acquire new ones when they are about 10 years old. Another superstition is that they are believed to be deaf. All sorts of things have also been said and written about Mistle thrushes. In the fourth century BC Aristotle was already writing about their fondness for mistletoe and there is an old belief that Mistle thrushes could speak seven languages!
In Ireland it was believed that the faeries made sure that the thrush built its nest low down near the fairies home in the grass so that they could enjoy the birds song. If the thrush built its nest high up in a thorn-bush it was a sure sign that the faeries were unhappy and misfortune would come to the neighbourhood.
It was believed that the flesh of the song thrush would cure sickness and convulsions.
That’s the wise thrush;
he sings each song
Lest you think he
never could recapture
The first fine careless rapture!
Extract from Home-Thoughts, From Abroad by Robert Browning
Top image: The Song Thrush.
Middle image: The Dunnock.
Bottom image: The Blackbird.
Wednesday, May 4, 2011
Folklore of the Hedgerow. Part Fifteen.
Buff-Tailed Bumblebee nests can be found in the hedgerows. The bees may be seen coming and going through a hole in the ground. The nest will be hard to see as bees are very private individuals but if you listen carefully you may hear them buzzing away quite happily. Sometimes the Queen may decide to occupy an old abandoned mouse nest as these are usually warm and well insulated. She may also nest underneath sheds, decking, in compost bags, in hedge clippings or even in attics or under floor boards. You could move a nest if it was causing you problems but it may not fully recover therefore leave it alone if it is doing you no harm. Like all bumblebees, they need to be greatly provoked before they sting.
As bees are becoming victims of an ever-changing world that threatens their habitat, you can do your bit to help them survive. Plant suitable flowers in your garden, window boxes, containers or even along the hedgerow. Provide a nest box; these are now becoming increasingly available in any good garden centre, or make your own - they are very easy and you can Google plans. Remember they are a gardener's friend and we need bees to pollinate our plants.
There is a superstition that if a bumblebee buzzes at the window it is a sign of a coming visitor.
A servant girl was standing at the kitchen window, in flew a bumblebee ‘Oh!’ she said, ‘a visitor is coming! Has the bee got a red tail or white? Red for a man and white for a lady’.
Irish folklore tells us how easily the bees take offence and this will cause them to cease producing honey, desert their hives and die. You must treat them as you would a member of your own family. They must be told all the news, in particular births, deaths and marriages. In the event of a death their hive must be adorned with a black cloth or ribbon and they must be given their share of the funeral food. You may then hear them gently hum in contentment and they will stay with you.
Other beliefs were that if the bees heard you quarrelling or swearing they would leave so you must talk to them in a gentle manner. They cannot tolerate the presence of a woman of loose morals or one that was menstruating but would sting her and drive her away (sounds like Christian influence here). You must never buy bees with normal money, only with gold coin although you may obtain them through gift, loan or barter. It was also believed that if a single bee entered your house it was a sign of good luck on the way, usually in the form of wealth.
When bees swarmed, it was the women and children of the household that had to follow them, making a noise with pots and pans. This was supposed to make them settle or maybe it was really just to warn people to get out of the way? It was accepted that in these circumstances you could follow them onto someone else’s land without being accused of trespassing.
The law on bees (Brehon Law) was that bees taking nectar from plants growing on your neighbour's land were guilty of 'grazing trespass' in the same way a cow or sheep would be if they were on your neighbour's land. They could even be accused of 'leaping trespass' in the same way as poultry. The way this law was observed was that a beekeeper was allowed three years of freedom during which time the bees were allowed free rein; on the fourth year the first swarm to issue from the hive had to be given to your neighbour as payment. In the following years other swarms were given in turn to other neighbours, and in this way everyone was happy. From all accounts it seemed to work. Another issue the Bechbretha (the law governing bees) addressed was stings. As long as you swore you had not retaliated by killing the bee you would be entitled to a meal of honey from the bee keeper. However, if the unfortunate person died from a sting then two hives had to be paid in compensation to their family.
It was a bad omen if a swarm settled on a dead branch for it meant death for someone in the bee keeper's family or for the person who witnessed the swarm settling. Popular folklore also suggested that bee stings aid in the relief of arthritis and rheumatism in much the same way as nettle stings, and recently bee venom has been revived as a possible treatment for multiple sclerosis.
In Celtic myth, bees were regarded as beings of great wisdom and as spirit messengers between worlds. Honey was treated as a magical substance and used in many rituals. It was made into mead and was considered to have prophetic powers and it may have been this that was called ‘nectar of the gods’. The rivers that lead to the summer lands are said to be rivers of mead.
“Telling the Bees” was extremely important, whether good news or bad or just everyday gossip. As stated earlier you had to tell the bees about a death in the family or the bees would die too. Bad news was given before sunrise of the following day for all to be well. You may even formally invite the bees to attend the funeral or you could turn the beehives round as the coffin was carried out of the house and past the hives. In ancient European folklore, bees were regarded as messengers of the gods and so the custom of “Telling the Bees” may be a throwback to the idea of keeping the gods informed of human affairs.
Trembling, I listened: The summer sun
Had the chill of snow;
For I knew she was telling the bees of one
Gone on the journey we must all go!
And the song she was singing ever since
In my ear sounds on:
‘Stay at home, pretty bees, fly not hence!
Mistress Mary is dead and gone!
Extract from “Telling the Bees” by John Greenleaf Whittier.
Monday, May 2, 2011
Folklore of the Hedgerow. Part Fourteen.
The Cow Parsley. Peirsil Bhó
(Wild Chervil, Hedge Parsley, Keck, Wild Beaked Parsley, Devil’s Parsley, Queen Anne’s lace, Mothers dies)
'Neath billowing skies that scatter and amass.
All round our nest, far as the eye can pass,
Are golden kingcup-fields with silver edge
where the cow-parsley skirts the hawthorn-hedge.
'Tis visible silence, still as the hour-glass.
Dante Gabriel Rossetti, (1828-1882).
Also known as Devil's Parsley, possibly because of its resemblance to the highly poisonous Hemlock, this plant occurs in accounts of witchcraft practices. It is a native plant belonging to the Apiaceae family.
The name Queen Anne's Lace dates from the time when Queen Anne, who suffered from asthma, travelled the countryside around Kensington in England each May to take the fresh air. The roadsides were said to have been decorated for her by this plant. As she and her ladies-in-waiting walked, they carried lace pillows; the Cow Parsley resembled the lace.
The origin of the name Mothers Dies seems to be a folk tale that children were told: if they picked cow parsley, their mother would die. This threat would deter children, who couldn't tell the difference, from picking hemlock, which is poisonous.
The Celts used to include Cow Parsley in their diet according to archaeologists who analysed the stomach contents of a Celtic man discovered in a peat bog in Cheshire. They also found Emmer and Spelt wheat, Barley, fat hen and dock.
While some claim that the root of the wild plant is also edible, it is not advisable to eat any part of this plant unless it has been expertly identified. There are several plants that look the same as Cow Parsley and are extremely poisonous and potentially fatal if ingested. DO NOT EAT THIS PLANT. Remember, Cow Parsley can be easily confused with Hemlock.
Cow parsley is said to get rid of stones and gravel in the gall bladder and kidneys but very little research has been done on the common plant. It has been used by amateur dyers as a beautiful green dye; however, it is not permanent. The most common use for the stalks is for pea-shooters as the stems are hollow, so children love them. The foliage used to be sold by florists in Victorian times and used in flower arrangements.
Like sweet woodruff, cow parsley has the reputation of “breaking your mother’s heart”. This is said to have come about because the tiny white blossoms drop quickly. In the days before vacuum cleaners, the temptation for mothers to ban these work-generating posies from the house was understandable. This may be where the superstition came from describing Cow Parsley as ‘unlucky indoors’ and a ‘harbinger of death’.
The cultivated relative of Cow Parsley, Chervil, is a well known herb which when made into an infusion can be used in the treatment of water retention, stomach upsets and skin problems. It can be used to promote wound healing. Chervil water is used as a constituent of gripe water. Cow Parsley may be used as a natural mosquito repellent when applied to the skin.
The Nettle. Neanntóg
In our folklore there are many uses for Nettle.
'To cure a sting of a nettle, place a dock leaf over sore part for a few minutes and it will be well'
'The water of boiled nettles if drank will cure anyone suffering from worms'
'Cure for dropsy.
'It is said if a person went to a graveyard and plucked a bunch of nettles that would be growing there and boiled them and gave the water to drink to a person that had dropsy it would cure him'
'For rheumatics a bed strewn with nettles'
'3 doses of nettles in the month of April will prevent any disease for the rest of the year'
All the above are from the National Folklore Collection, University College Dublin.
17th century herbalist and apothecary, Nicholas Culpeper is reputed to have said:
'Nettles may be found by feeling for them in the darkest night'.
They are recognised as being a rich source of vitamin C and contain more iron than spinach. Indeed they make a very tasty soup but it is essential to pick them where no chemicals or pollution may have affected them and to use only the upper leaves as the lower leaves may contain irritants. Nettles also contain anti-histamines which are helpful to those with allergies and serotonin which is reputed to aid one's feeling of 'well-being'.
Arthritic joints were sometimes treated by whipping the joint with a branch of stinging nettles. The theory was that it stimulated the adrenals and thus reduced swelling and pain in the joint.
Nettles are reputed to enhance fertility in men, and fever could be dispelled by plucking a nettle up by its roots while reciting the names of the sick man and his family.
Turkey and other poultry (as well as cows and pigs) are said to thrive on nettles, and ground dried nettle in chicken feed will increase egg production.
Nettles left to rot down in water make a fantastic liquid fertiliser.
Nettle can alter the menstrual cycle and may contribute to miscarriage; pregnant women should not use nettle.
Stinging nettle may affect the blood's ability to clot, and could interfere with blood-thinning drugs.
Stinging nettle may lower blood pressure.
Stinging nettle can act as a diuretic, so it can increase the effects of certain drugs, raising the risk of dehydration.
Stinging nettle may lower blood sugar, so it could make the effects of certain drugs stronger, raising the risk of hypoglycaemia (low blood sugar). Diabetics beware.
The Nettle is significant among plants used for medicine by the Celts in that it was probably one of the most widely used due to its ability to prevent haemorrhaging and stop bleeding from wounds. They would have used it to treat the wounds their warriors received in battle.
Recently it has been found that lectin found in Nettles is useful in treating Prostate enlargement and is widely prescribed for this in our times.
Nettles also have a place in ancient Celtic folklore and were also known as "Devil's Claw".
Nettles were believed to indicate the living place of fairies, and their stings protected one from witchcraft or sorcery.
The Primrose. Sabhaircín.
'Guard the house with a string of primroses on the first three days of May. The fairies are said not to be able to pass over or under this string.'
From the National Folklore Collection, University College Dublin. NFC S.455:237. From Co Kerry.
The symbol of safety and protection, in ancient times it was placed on the doorstep to encourage the fairy folk to bless the house and anyone living in it, and it was also said that if you ate the blooms of the primrose you would see a fairy. Both the cowslip and the primrose were thought to hold the keys to heaven and so were considered to be very sacred by the Celtic people.
It was the flower of Love and bringer of good luck, and was the symbol of the first day of spring and so was laid across thresholds to welcome ''May Day''. Also considered to be a bringer of great inspiration for poets, the flower of youth, birth, sweetness and tenderness.
Insects, in particular ants, play an important role in pollinating these flowers. Nectar is located at the bottom of the flower tube and the long thin body of the ant is perfectly designed to carry and deliver pollen from other primrose plants. The primrose family is also remarkable for the number of hybrids it produces.
The primrose has many medicinal uses and was important in the past as a remedy for muscular rheumatism, paralysis and gout. The leaves and flowers can be used either fresh or dried; the roots should be dried before use. Culpeper was aware of the healing properties of the Primrose and said, "Of the leaves of Primrose is made as fine a salve to heal wounds as any I know."
The Primrose was highly-prized by the Celtic Druids and its abundance in woods, hedgerows and pastures made it an easily-collectible plant. Primroses were often carried by the Druids during certain celtic rituals as a protection from evil. The fragrant oil of the flower was also used by the Druids to anoint their bodies prior to specific rites in order that they might be cleansed and purified.
In the middle ages they were used to treat gout and rheumatism and an infusion of the roots was used to treat headaches.
Primroses are loved by the faeries so if you grow them don’t let them die for if you do you will greatly offend the faeries and who knows what will happen.
Primroses were very important in the rural area especially during the butter making season that began in May. In order to encourage cows to produce a lot of milk, primroses were rubbed on their udders at Bealtaine. Primroses would also be scattered on the doorstep to protect the butter from the faeries.
Primroses were also associated with chickens and egg laying and it was considered unlucky to bring primroses indoors if the hens were hatching in the coop dresser.
It was said that primroses bloomed in Tír na nóg and that people returning from there always brought a bunch with them as proof that they had been there.
In Irish folklore it was believed that rubbing a toothache with a primrose leaf for two minutes would relieve the pain. It was also used as a cure for jaundice (yellow flower).
The flower was often used in medicine throughout the ages, as it has properties similar to aspirin. It has always been known as a "healing" plant and so was also used extensively in foods. Even today it is known for its healing properties and is used as a healing tea, while in the world of flower essences it is said to help heal those who experienced the loss of a mother figure as a child.
In Ireland an ointment would be made from certain herbs, including primrose, and pig's lard, and this would be used on burns.
DO NOT PICK AND USE HERBS UNLESS YOU KNOW EXACTLY WHAT YOU ARE DOING.
Integrated home automation systems have been on the market since the early 1980s, but during the last few years their lower cost and ease of operation and installation have made them very attractive for installation in middle-income homes. The sudden increase in Ethernet-based home networks and Ethernet-enabled devices in only the last two years promises to make home automation systems even more affordable, useful, and easy to use.
Terms and Definitions
Whole-house automation system – a system that integrates the
operation of various subsystems of the house through a common user
interface. In this section the term “home automation system”
and “whole-house automation system” are used to mean the
same thing. The term “system” implies an integrated controller,
either dedicated to home automation tasks or part of a PC.
Event – An event (as in “event driven”) is
something that can be defined in a HA system to trigger an action.
Typical events are 8:15 AM (time of day), security system armed,
front door open, motion sensor activity, an X10 command, and so
on. Usually, anything that can be monitored by the HA system can
be used as an event.
Program – All HA systems can be programmed to perform various
subsystem tasks based on events or user requests. The term does
not refer to program code written in C++ or another traditional programming
language used to run the system. HA programs (referred to as automation
programs) are proprietary sets of instructions entered into the
system by the installer or homeowner and are unique to each manufacturer.
Some systems use a purely graphical technique of “programming”,
while some manufacturers use a sort of pseudo-code such as:
outside_light = ON
Scene – A scene is a preset list of operating modes defined in
a home automation system of all the subsystems in the house for
a specific event or activity of the house. Typical scenes include
“good morning”, “at work”, “arrive
home”, “evening”, “sleeping”, “party”,
and so on. The parameters that define a scene are programmed into
the HA system by the installer or homeowner and are triggered by
an event such as a time of day. A “good morning” scene
may be triggered by a time of day such as 7:00 AM and cause the
HA system to turn off outside lights, disarm the security system,
turn on the front lawn sprinklers for 20 minutes, turn on the coffee
maker, and so on.
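To make the relationship between these three terms concrete, the following short Python sketch models a scene as a preset list of subsystem settings and an automation program as an event/action pair. The device names, scene contents, and trigger times are invented for illustration; real systems use their own proprietary programming methods rather than Python.

from datetime import time

# A scene is a preset list of subsystem settings (all values here are hypothetical).
scenes = {
    "good morning": {
        "outside_lights": "OFF",
        "security_system": "DISARMED",
        "front_lawn_sprinklers": "ON for 20 minutes",
        "coffee_maker": "ON",
    },
}

# An automation program associates an event with an action.
programs = [
    {"event": ("time_of_day", time(7, 0)), "action": ("activate_scene", "good morning")},
    {"event": ("front_door", "OPEN"), "action": ("set_device", "hall_light", "ON")},
]

def activate_scene(name):
    # Send each preset setting to its subsystem; printing stands in for real commands.
    for device, setting in scenes[name].items():
        print(f"{device} -> {setting}")

activate_scene("good morning")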
Home Automation Systems
There are two general categories of home automation systems: hardware-based
systems that use a dedicated micro-controller hardware platform or
“controller”; and software-based systems that rely on
a PC as the controller.
Hardware-based systems will use a dedicated hardware controller and I/O electronics,
typically housed in an enclosure with power supply and backup battery.
They will always be supplied with several dedicated user interface
devices and can optionally use PCs, telephones, PDAs, etc.
as a user interface. The main advantage of hardware-based automation
systems is their inherent stability. The hardware is dedicated to
the task of automation and does not rely on a Windows operating
system or other software components on the same machine. Since these
products also handle traditional security system functions, they
are equipped with battery backup and can be monitored by a security
monitoring service. The disadvantage is the reliance on a single
manufacturer for hardware and software support.
Software-based systems operate on a PC running a version of Microsoft
Windows. These systems rely on the hardware capabilities of the
PC and are limited to the I/O capabilities of the PC. Most software-based
home automation products rely on the existence of a network in the
home, typically Ethernet with TCP/IP enabled devices. Software home
automation products are usually supplied with some interface hardware
for the PC such as an X10 PLC serial interface, RS-232 interface,
or Ethernet adapter. The PC is always the primary user interface.
The advantage of software-based automation products is the fact
that they run on a PC, a relatively inexpensive hardware platform
and can take advantage of the large storage and processing capability
of the PC. Hardware support for the system can be obtained from
multiple sources anywhere almost any time. They can usually interface
with several different manufacturers of subsystem products such
as thermostats and security systems using a home network. The primary
disadvantage is the inherent instability. Even if a PC can be dedicated
to running the home automation software, long term stability of
the system is questionable regardless of the Windows operating system
used. These systems must also rely on other hardware such as security
systems and lighting controllers to perform those subsystem functions
since the required hardware is not part of the PC.
Hardware-based Home Automation Systems
A typical hardware-based Home Automation System is shown in Figure
5.1. The system consists of a controller housed in a metal enclosure.
The enclosure is similar to those used for security systems and
can optionally be locked. This is because the controller also incorporates
the functions of a security system for the home and therefore should
be protected from tampering. The electronics will have screw terminals
for connecting traditional security sensors and alarm devices, and
optionally other connectors for RS-232, Ethernet, wiring for thermostats,
and an X10 interface to the power line.
Typical hardware-based home automation system from HAI. The photo shows the controller with two types of dedicated user interface devices (center, right).
The following is a typical set of features for a medium-size hardware-based home automation system:
• Can control hundreds of lights via X10 PLC or hardwired networked light switches
• Control of up to 64 thermostats
• Two-way X10 transmission to receive signals for use as program triggers; incorporates collision detection and message retry for reliability
• Lights can be set to scenes of varying brightness, with direct dim and scene support for advanced home theater lighting
• Lights, control outputs, temperature and security modes can be scheduled by time, sunrise, sunset and date or day of week and various system events
• 1,500 lines of non-volatile program storage
• Programmable via keypads or from a PC
• Text and voice descriptions for all zones, units, codes, temperatures, messages and areas
• 500+ word speech vocabulary plus user-recordable phrases
• Ethernet port built in for connection to a home network
• 16 security zones, expandable to 176
• All zones support 4-wire smoke detectors; zones 1-4 support 2-wire smoke detectors
• 8 hardwire outputs, expandable to 136
• Supports 16 LCD keypad consoles
• True partitions: security and automation can be partitioned into 8 areas
• 99 user access codes with selectable authority levels
• Will turn all lights on when alarm is tripped to frighten intruders
• Outdoor lights are flashed when alarm is tripped to alert neighbors
• System announces type and location of alarm with optional 2-way voice
• Trouble conditions indicated in English on display for: zone and system trouble, AC power off, battery low and phone line trouble
• Phone line monitor
• Optional wireless receiver is fully supervised for complete reliability
• Dials up to 8 user-programmable numbers and reports type and location of alarm
• Works with touch-tone phones inside or away from the premises with access codes
• Compatible with answering machines and answering services
• From any phone you can change modes, change temperatures, arm/disarm security, bypass and restore zones and much more
Software-based Home Automation Systems
Software-based home automation systems are usually furnished with a software CD and, optionally, network or serial interface hardware (Figure 5.4). Each manufacturer will have specific PC hardware and software requirements in order to run the system. It is best to dedicate a PC to running the software, since it will not only take up considerable memory and I/O resources on the PC, but running other software and peripheral devices can also compromise the stability of the system.
Premise Systems SYS home automation software
with optional Lantronix “single device server”.
The server allows SYS software to communicate with any RS232/485
device from an Ethernet network.
Software home automation products use
either a proprietary user interface (UI) on the PC or rely on an
HTML-formatted, web-oriented user interface. If the PC is on a TCP/IP
network, an HTML based UI can be accessed on any web enabled device
inside or from outside the home.
Software/PC based automation systems
can perform most of the automation features of a hardware-based
products since they can interface to the power line for X10 control
and use a serial or Ethernet interface to access other hardware
based subsystems. They do not, however, incorporate security system
functions but can usually interface with one or more specific models
of home security systems via X10, RS232, or Ethernet.
Software-based home automation software runs on a PC and relies on the serial and Ethernet interface ports of the PC to access other subsystems in the home.

Software/PC-based automation systems rely completely on the screen/keyboard user interface of the PC. Systems which use an HTML web browser interface (Figure 5.6) can also be accessed by any web-browser-enabled device on the same home network or from the Internet with a properly configured router.

Programmed UI screen from Premise Systems' SYS software. The image from the front door was acquired from an Ethernet-enabled camera over the home Ethernet network.

The design of the UI can be customized to the needs and desires of the homeowner by using any web page design software. Different interface designs can be selected by using a different "home" page to access the system. Anything that can be done on a web page (animation, music, video) can be incorporated into a UI for the home automation system.
User Interface Options
The user interface is the most important component of a home automation
system since its primary function is to provide a common, easy to
use, interface for all subsystems in the home. For that reason,
the user interface is also the major differentiator between manufacturers
and a key factor in selection of a system. With the increase in
wired and wireless home network technology and high-speed Internet
access, the user interface options have increased substantially. Home automation systems will have several types of user interfaces available that can be used at the same time. The major categories of user interface (UI) are the keypad, telephone, PC, and web devices.
The keypad device, also referred to as a console (Figure 5.7), is
the oldest and most common UI device and used exclusively by hardware-based
home automation systems. They are usually wired directly to the
controller with low-voltage cable and are powered from the controller.
Keypad/display devices come in an infinite variety from simple LED
lights and two or three buttons, to graphical LCD displays with
full alpha-numeric keypads. The most common are two or three line
LCD displays with numeric and special function keys (see Figure
5.7). When the home automation system is the security system, keypads
provide a traditional UI for security system functions and are typically
installed where a security keypad would be placed.
Keypad user interface for a home automation system. The display
and buttons are backlit for easy use at night.
The touch-tone telephone has been used as a UI for home automation systems since the 1980s. Most hardware-based home automation systems and several software-based systems (with an appropriate sound I/O interface) allow the use of a phone (either traditional wired or cell) to access the system for status and control functions. The phone keypad is used in a "voice-mail" fashion to select from a series of menus spoken by voice output from the HA system. This provides a very convenient UI
since there are usually several phones around the house as well
as portable phones. The phone interface can also be used from outside
the house from a cell phone by inputting a security access code.
A home PC can be used as a user interface for most hardware-based
home automation systems usually running application software provided
by the manufacturer. Connection is typically through the serial
port on the PC wired directly to the home automation system controller.
As the PC interface migrated to more traditional PC networks such
as Ethernet, the interface has also migrated to using a standard
web browser (client).
Wireless user interface access to HAI’s
home automation system is made possible by low cost PDAs and 802.11b
hardware. The same device can be used from any wireless network
in the world that has Internet access.
The latest trend in home automation system user interface design
is to install a web server in the system controller and use a web
browser client to access the system through a home network. The
advantage of this approach is that it allows the homeowner to use
any web enabled device such as a PC, PDA, or web tablet to access
the system (see Figure 5.8). Since the cost of web enabled devices
is constantly falling, this is an attractive alternative to expensive
proprietary graphical displays.
The software in the home automation system must be designed to present the system information and menu screens in an HTML format. If the home network is attached to the Internet via a router, then the homeowner can access the home automation system from anywhere in the world via the Internet. The router must be configured to allow access to the home automation system from outside the home LAN. This UI technique is used by most software-based home automation products that run on a PC; the home automation software has a web server as part of its design. Many software-based systems also allow the installer or homeowner to customize the look and feel of the web page design.
Typical UI screen for the HAI Omni Pro automation system's Web-Link software component. The screen is accessed by entering the local IP address of the networked HA system (for example, http://192.168.100.148) using any web browser.
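As a rough, hypothetical illustration of the embedded web server concept described above, the sketch below serves a single status page over HTTP using only the Python standard library. The status values and port number are invented; a real controller generates its pages from live system state and sits behind its own authentication.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented snapshot of system state; a real controller reads live values.
STATUS = {"security": "ARMED (AWAY)", "thermostat_zone_1": "72 F", "outside_lights": "OFF"}

class StatusPage(BaseHTTPRequestHandler):
    def do_GET(self):
        rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>" for k, v in STATUS.items())
        body = f"<html><body><h1>Home Status</h1><table>{rows}</table></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    # Any browser on the home network could then load http://<controller-ip>:8080/
    HTTPServer(("0.0.0.0", 8080), StatusPage).serve_forever()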
Automation System Operation
While a home automation system can be used to perform isolated tasks (such as turning on a light or setting the thermostat), its biggest benefit is coordinating the operation of subsystems based on how the homeowner wishes the house to operate at different times of day or during different events. The subsystems in the house are usually set a specific way during these events, and the entire set of settings is known as a "scene." A "scene" is like an operating mode of a house and is
a key concept in home automation. The term comes from the stage
setup in a play. A scene is a preset list of operating modes of
all the subsystems in the house for a specific activity of the house.
Typical scenes include “good morning”, “at work”,
“arrive home”, “evening”, “sleeping”,
“party”, and so on. For example, a scene such as “arrive
home” may be how the homeowner wants the house to operate
when the family arrives home from work or school. This might be
defined as: security system disarmed, certain lights on, other lights
off, temperature at 72, music system to a favorite CD, music in
the family room, kitchen, den, check for e-mail, and so on.
A home automation system usually has
several common scenes pre-programmed (such as “home”,
“away”, “asleep”) while others are defined
by the homeowner and typically programmed by the installer. Not
all operations of a HA system need to be part of a scene.
All HA systems perform monitoring and
control operations through a combination of three types of events:
Preprogrammed and timed schedules -
operations such as turning lights on or off, arming a security system,
turning on the sprinklers, or setting the house to a scene can be
set to occur at specific times on specific days. These schedules
can be entered and adjusted by the homeowner or by the installer.
Event driven - operations such as turning lights on or off or
setting back the thermostat can occur based on some event such as
a motion sensor input, a temperature change, or someone ringing
the front door bell. The programming of what event causes what action
is typically set by the installer after conferring with the homeowner.
However, some systems provide an easy to use interface, typically
on a PC, to allow the homeowner to program events.
User selected - actions and scenes can be initiated directly
by manual user input such as pressing a keypad button labeled “night
scene” or a similar button on a remote control.
Associating an event with an action is done through an automation
program entered in the system by the installer or homeowner. Programs
use a simple programming-like "language" to identify events and actions to take when the event (or group of events) is true. Typical examples might be:

IF SECURITY = ALARM
THEN OUTSIDE LIGHTS = FLASH

IF TIME = 11:00 PM
THEN THERMOSTAT (ZONE 2) = 71
The first program will cause the outside lights to flash on and off if the security system is in the alarm condition. The second program will set the zone 2 thermostat to 71 degrees at 11:00 PM. The HA system constantly interprets the programs to see if the IF condition is met and, if so, performs the THEN action.
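The interpret-and-act cycle described here can be sketched as a simple polling loop. The condition/action representation below is only an illustration of the general idea, not the pseudo-code of any particular manufacturer.

import datetime

# Current system state; in a real controller this is updated from sensors and keypads.
state = {"SECURITY": "ALARM", "TIME": "", "OUTSIDE LIGHTS": "OFF", "THERMOSTAT (ZONE 2)": 74}

# Each program pairs an IF condition with a THEN action, mirroring the examples above.
programs = [
    {"if": ("SECURITY", "ALARM"), "then": ("OUTSIDE LIGHTS", "FLASH")},
    {"if": ("TIME", "23:00"), "then": ("THERMOSTAT (ZONE 2)", 71)},
]

def interpret_once():
    # One pass of the loop: test every IF condition and perform the THEN action if it is true.
    state["TIME"] = datetime.datetime.now().strftime("%H:%M")
    for program in programs:
        key, expected = program["if"]
        if state.get(key) == expected:
            target, value = program["then"]
            state[target] = value  # stands in for commanding the real device
            print(f"{target} set to {value}")

interpret_once()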
When entering a program, key words such as IF, THEN, and THERMOSTAT are usually selected from menus to make entry easier and quicker. Improper syntax mistakes are detected and flagged before they are entered into the system. Most HA systems can typically store hundreds of automation programs. Writing automation programs requires care and practice with each system, since it is easy to enter conflicting programs that can have unpredictable results and confuse the homeowner.
The home automation system controller contains the electronics of
the system (micro controller, memory, sensor interfaces, alarm device
interface, monitoring service interface), power supply, and backup
battery. Most controllers are contained in a lockable steel box
mounted on the wall and contain no "user serviceable parts".
The panel is located in a conditioned space of the home. Since the
controller performs all the functions of a security system it should
be accessible but not easily locatable by an intruder. Typical locations
include a closet, basement, or utility room.
Typical home automation system controller.

Like security systems, HA controllers usually have a battery power backup installed in the enclosure. Most manufacturers provide additional expansion boards and electronics to increase the number of I/O devices that can be attached to the system (see below).
The controller is usually available separately, mounted on a bracket to allow it to be installed in a structured cabling system enclosure. This makes a very convenient installation, since the controller should be in the same location as the structured cabling system to access network wiring, telephone service wiring, and security device wiring.
A variety of expansion modules are available for hardware-based HA systems to allow adding security zones, monitored contact closure inputs, relay contact outputs, and various analog inputs and outputs. Modules are also available to add special features such as an Ethernet interface, X10 I/O, thermostat interfaces, and so on.
I/O expansion board. The board is supplied with cable and
mounting hardware to attach it to the main controller board.
Installing a hardware-based HA system is only slightly more difficult
than installing a conventional security system. The additional work
includes connection to other peripheral devices such as thermostats,
networked light switches, consoles, etc.
Since hardware-based systems incorporate
security system functions and devices you may need to obtain a state
security/low-voltage license. Check with your local city or country
government offices to determine if you are required to have a license
in your area.
All of the installation information
in Section 4 is applicable since installation usually requires installing
many of the subsystem devices to automate the home.
HA system controllers are usually installed
next to or as part of a structured cabling system since this allows
all the cabling to be run to the same location. The controllers
can often be mounted in the same enclosure as the structured cabling
system, simplifying the installation and greatly reducing the wall space required.

Whole-house automation systems will require some integration tasks
to interface the system with existing products and subsystems in
the house. There are two basic ways to accomplish integration. Remove
existing devices in a subsystem and replace them with home automation
“friendly” devices that will interface easily, or use
existing equipment by installing adapters, converters, or interfaces
between the equipment and the home automation system.
For example, to interface to an existing HVAC system using the first
technique, the non-automation thermostat is replaced with a thermostat,
such as the one shown in Figure 5.2, supplied with the home automation
system that is either hardwired to the automation controller or
communicates with the controller over a home network. The “home
automation thermostat” then acts as a subsystem interface
between the HA system and the HVAC subsystem.
Once a system is installed it must be configured for the home environment.
This includes entering occupant information, room and device names,
zone information, access codes as well as scene and automation program
information. While this can be done using keypads/consoles, most
hardware-based systems allow the use of a laptop PC running either
a browser program or a manufacturer supplied access program. The
PC can be attached locally in the home or, on some systems, can use a modem and dial into the system from a remote location. Configuration software is naturally built in to
software-based products since they already run on a PC. They simply
use setup or configuration screens.
Once configuration is complete, all information entered can be stored in a separate file on the configuration PC or on the PC running the software-based system. If a system needs repair or replacement, the file can be downloaded back into the system from the PC configuration software.
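A minimal sketch of saving and restoring such a configuration file is shown below. The JSON layout and field names are invented for illustration; actual products use their own proprietary file formats and transfer methods.

import json

# Hypothetical configuration entered during setup.
config = {
    "rooms": ["kitchen", "den", "master bedroom"],
    "zones": {"1": "front door", "2": "garage door"},
    "access_codes": {"owner": "1234"},
}

def save_config(path):
    # Keep a copy of the entered information on the configuration PC.
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

def restore_config(path):
    # Download the stored file back into a repaired or replaced system.
    with open(path) as f:
        return json.load(f)

save_config("ha_config.json")
print(restore_config("ha_config.json")["zones"])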
Sample screen from PC Access software used
to configure HAI’s home automation systems.
Configuration software also allows you to monitor the present status
of all system components, including door and window sensors, lights,
appliances, thermostats, I/O expansion boards and other internal
and external components.
Most systems also keep an “event
log” that stores a record of all status changes and each action
taken by the HA system. This can be downloaded and examined by the
configuration software (either locally, through a dial-up connection,
or over the Internet). This is a great troubleshooting tool to locate
malfunctioning sensors, bugs in automation programs, and improper
operation by the owner.
Today's smartphone market is relatively saturated, with few or no differences among smartphones. Of course, that is not to say that the hardware itself cannot satisfy people's needs; on the contrary, what we see today is the direct result of excessive hardware development. Under such circumstances, there is much anticipation around the soon-to-be-released "Smart Holographic Phone," the Eastar Takee.
What is the Smart Holographic Phone?
As most of our readers may not know what a smart holographic phone is, we will give you a quick introduction first.
Currently, most phones are configured with a touch screen, providing an enhanced user experience compared to the old-fashioned T9 keyboard. However, what we call "holographic display" is a type of display technology that can track the position of the human eyes. It then calculates the holographic image from a model of holographic image data and projects the 3D image to the retinas of the left and right eyes respectively, creating the feeling that the person is looking at a real object.
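The eye-tracking principle can be illustrated with a toy calculation: once the position of each eye is estimated, the renderer projects the virtual object onto the screen separately for the left and the right eye, so the two views differ by the appropriate parallax. The pinhole-style projection and the numbers below are purely illustrative assumptions, not Takee's actual algorithm.

def project_point(point, eye, screen_z=0.0):
    # Project a 3D point (x, y, z in cm) onto the screen plane as seen from one eye.
    px, py, pz = point
    ex, ey, ez = eye
    t = (screen_z - ez) / (pz - ez)  # where the eye-to-point ray crosses the screen plane
    return (ex + t * (px - ex), ey + t * (py - ey))

# Assumed geometry: eyes about 35 cm in front of the screen and 6.4 cm apart,
# with a virtual object floating 5 cm behind the screen plane.
left_eye, right_eye = (-3.2, 0.0, -35.0), (3.2, 0.0, -35.0)
virtual_point = (0.0, 2.0, 5.0)

print("left-eye view: ", project_point(virtual_point, left_eye))
print("right-eye view:", project_point(virtual_point, right_eye))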
This kind of holographic technology is based on the position of the human eyes. But since it does not display everything in the field of view simultaneously, it works better when only one person is viewing the image. As a result, we call it "Personal Holographic" or "Smart Holographic." To be more specific, holographic display can be categorized into physical holography (holographic film, laser holographic printing), digital holography (projected laser holographic imaging) and computing holography (eyeball-tracking holographic imaging, scenario-tracking holographic imaging).
Laser holography is a method of recording both the amplitude and the phase information in the light wave that reaches the film. There are a number of variations of the basic method, but all holography requires laser light (strictly speaking, light that is coherent over the object to be imaged) in order to construct the hologram. In normal photography, by contrast, the film can only record the intensity of the light; because all the phase information contained in the original light reaching the film is lost, there is no 3D effect.
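The standard textbook way to express this is that the film records the intensity of the object wave O interfering with a coherent reference wave R:

I = |O + R|^2 = |O|^2 + |R|^2 + O R* + O* R

The two cross terms depend on the phase difference between O and R, so the recorded fringe pattern encodes the object's phase; ordinary photography records only |O|^2, which discards it.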
Where can Smart Holographic Phone be used?
You might think the holographic display is just like those in sci-fi films, where you can view content from every angle by moving your finger in the air, or "act from a distance." What exactly will this technology help us achieve? Read on to find out:
1. Holographic Navigation: while 2D display cannot show the complex road system, holographic display will offer users a better idea of what the traffic is like by displaying a 3D image.
2. While shopping online, we can view products from different angles and see more details before we actually place an order.
3. Holographic display will make gaming experience more enjoyable by providing a life-like scene.
4. Different from 2D display, you will be able to really get “into” a film with holographic technology.
5. You will feel as if the person is just around you when making a video call.
6. The coolest thing may be that you can print a 3D model of yourself when you connect your holographic phone to a 3D printer.
Any apps to go with the Smart Holographic Phone?
It is undeniable that the success of the iPhone is partly due to the millions of apps in its App Store. Fascinating as the holographic technology may be, users are still wondering if there are apps to go with the smart holographic phone. According to sources, Eastar has set up an open platform for holographic apps to invite holographic app developers from around the globe. We do not yet know the number of partners or what has been achieved so far.
Difference between Holographic Display and Naked Eye 3D
Will the smart holographic phone be of real use, or is it just a stunt by phone makers? We have seen many makers advertise naked-eye 3D phones, including the LG Optimus 3D, LG P920, Sharp SH9298U, HTC X515m, Gionee GN868, ZOPO ZP200, etc. But holographic display relies on light interference and diffraction to reproduce the 3D image of an object, while naked-eye 3D uses the principle of a grating, requiring the video source to be processed. The video source for naked-eye 3D can be converted from 2D video, but it requires viewers to be at a specific angle and distance in order to experience the 3D effect.
Even though naked-eye 3D technology has improved greatly over the years, viewers still get dizzy when looking at the screen, and some even feel nauseous. Such side effects have negatively impacted the user experience. As a result, the fascinating naked-eye 3D technology has been more of a stunt than of any practical use.
As smartphone displays have reached 1080p resolution and hardware configurations have become ever more advanced, leaving little room for further display progress, smartphone makers are trying to find a way to differentiate themselves using holographic display. Because holographic display lets users view a 3D virtual object when images of an object from different angles are projected above the screen, we are confident that holographic display technology will achieve a better market response than naked-eye 3D.
Competitor to the Smart Holographic Phone: Fire Phone
Before the official release of the Smart Holographic Phone, the market has already seen a strong competitor. Around three weeks earlier, Amazon launched its Fire Phone. Apart from Amazon's services, its biggest selling point is Dynamic Perspective. It constructs images at 60 frames per second and, with the help of several infrared sensors, displays them with a 3D effect. Compared to naked-eye 3D technology, the Fire Phone's Dynamic Perspective can change a picture's view as users move and tilt the phone around. However, this is not related to holographic technology, and it still falls within the category of 2D display.
The Amazon Fire Phone is configured with a 4.7-inch HD LCD display, powered by a 2.2 GHz quad-core Snapdragon 800 CPU and an Adreno 330 GPU, along with 2 GB of RAM. Its buttons are made of aluminum. It has a 13-megapixel rear camera, complete with OIS and a powerful f/2.0 lens, as well as a 2.1 MP front-facing camera.
Judging from the current situation, smartphone display technology will surely undergo a revolution. Maybe we will experience a brand-new interactive display in the near future. Hopefully that day will come soon.
Visualization of the vasculature is becoming increasingly important for understanding many different disease states. While several techniques exist for imaging vasculature, few are able to visualize the vascular network as a whole while extending to a resolution that includes the smaller vessels1,2. Additionally, many vascular casting techniques destroy the surrounding tissue, preventing further analysis of the sample3-5. One method which circumvents these issues is micro-Computed Tomography (μCT). μCT imaging can scan at resolutions <10 microns, is capable of producing 3D reconstructions of the vascular network, and leaves the tissue intact for subsequent analysis (e.g., histology and morphometry)6-11. However, imaging vessels by ex vivo μCT methods requires that the vessels be filled with a radiopaque compound. As such, the accurate representation of vasculature produced by μCT imaging is contingent upon reliable and complete filling of the vessels. In this protocol, we describe a technique for filling mouse coronary vessels in preparation for μCT imaging.
Two predominant techniques exist for filling the coronary vasculature: in vivo via cannulation and retrograde perfusion of the aorta (or a branch off the aortic arch) 12-14, or ex vivo via a Langendorff perfusion system 15-17. Here we describe an in vivo aortic cannulation method which has been specifically designed to ensure filling of all vessels. We use a low viscosity radiopaque compound called Microfil which can perfuse through the smallest vessels to fill all the capillaries, as well as both the arterial and venous sides of the vascular network. Vessels are perfused with buffer using a pressurized perfusion system, and then filled with Microfil. To ensure that Microfil fills the small higher resistance vessels, we ligate the large branches emanating from the aorta, which diverts the Microfil into the coronaries. Once filling is complete, to prevent the elastic nature of cardiac tissue from squeezing Microfil out of some vessels, we ligate accessible major vascular exit points immediately after filling. Therefore, our technique is optimized for complete filling and maximum retention of the filling agent, enabling visualization of the complete coronary vascular network – arteries, capillaries, and veins alike.
22 Related JoVE Articles!
Mouse Models for Graft Arteriosclerosis
Institutions: Yale University School of Medicine , Yale University School of Medicine .
Graft arteriosclerosis (GA), also called allograft vasculopathy, is a pathologic lesion that develops over months to years in transplanted organs, characterized by diffuse, circumferential stenosis of the entire graft vascular tree. The most critical component of GA pathogenesis is the proliferation of smooth muscle-like cells within the intima. When a human coronary artery segment is interposed into the infra-renal aortae of immunodeficient mice, the intimas could be expanded in response to adoptively transferred human T cells allogeneic to the artery donor or exogenous human IFN-γ in the absence of human T cells. Interposition of a mouse aorta from one strain into another mouse strain recipient is limited as a model for chronic rejection in humans because the acute cell-mediated rejection response in this mouse model completely eliminates all donor-derived vascular cells from the graft within two to three weeks. We have recently developed two new mouse models to circumvent these problems. The first model involves interposition of a vessel segment from a male mouse into a female recipient of the same inbred strain (C57BL/6J). Graft rejection in this case is directed only against minor histocompatibility antigens encoded by the Y chromosome (present in the male but not the female), and the rejection response that ensues is sufficiently indolent to preserve donor-derived smooth muscle cells for several weeks. The second model involves interposing an artery segment from a wild-type C57BL/6J mouse donor into a host mouse of the same strain and gender that lacks the receptor for IFN-γ, followed by administration of mouse IFN-γ (delivered via infection of the mouse liver with an adenoviral vector). There is no rejection in this case, as both donor and recipient mice are of the same strain and gender, but donor smooth muscle cells proliferate in response to the cytokine while host-derived cells, lacking the receptor for this cytokine, are unresponsive. By backcrossing additional genetic changes into the vessel donor, both models can be used to assess the effect of specific genes on GA progression. Here, we describe detailed protocols for our mouse GA models.
Medicine, Issue 75, Anatomy, Physiology, Biomedical Engineering, Bioengineering, Cardiology, Pathology, Surgery, Tissue Engineering, Cardiovascular Diseases, vascular biology, graft arteriosclerosis, GA, mouse models, transplantation, graft, vessels, arteries, mouse, animal model, surgical techniques
Assessment of Vascular Function in Patients With Chronic Kidney Disease
Institutions: University of Colorado, Denver, University of Colorado, Boulder.
Patients with chronic kidney disease (CKD) have significantly increased risk of cardiovascular disease (CVD) compared to the general population, and this is only partially explained by traditional CVD risk factors. Vascular dysfunction is an important non-traditional risk factor, characterized by vascular endothelial dysfunction (most commonly assessed as impaired endothelium-dependent dilation [EDD]) and stiffening of the large elastic arteries. While various techniques exist to assess EDD and large elastic artery stiffness, the most commonly used are brachial artery flow-mediated dilation (FMDBA
) and aortic pulse-wave velocity (aPWV), respectively. Both of these noninvasive measures of vascular dysfunction are independent predictors of future cardiovascular events in patients with and without kidney disease. Patients with CKD demonstrate both impaired FMDBA
, and increased aPWV. While the exact mechanisms by which vascular dysfunction develops in CKD are incompletely understood, increased oxidative stress and a subsequent reduction in nitric oxide (NO) bioavailability are important contributors. Cellular changes in oxidative stress can be assessed by collecting vascular endothelial cells from the antecubital vein and measuring protein expression of markers of oxidative stress using immunofluorescence. We provide here a discussion of these methods to measure FMDBA
, aPWV, and vascular endothelial cell protein expression.
Medicine, Issue 88, chronic kidney disease, endothelial cells, flow-mediated dilation, immunofluorescence, oxidative stress, pulse-wave velocity
Ultrasound Assessment of Endothelial-Dependent Flow-Mediated Vasodilation of the Brachial Artery in Clinical Research
Institutions: University of California, San Francisco, Veterans Affairs Medical Center, San Francisco, Veterans Affairs Medical Center, San Francisco.
The vascular endothelium is a monolayer of cells that cover the interior of blood vessels and provide both structural and functional roles. The endothelium acts as a barrier, preventing leukocyte adhesion and aggregation, as well as controlling permeability to plasma components. Functionally, the endothelium affects vessel tone.
Endothelial dysfunction is an imbalance between the chemical species which regulate vessel tone, thombroresistance, cellular proliferation and mitosis. It is the first step in atherosclerosis and is associated with coronary artery disease, peripheral artery disease, heart failure, hypertension, and hyperlipidemia.
The first demonstration of endothelial dysfunction involved direct infusion of acetylcholine and quantitative coronary angiography. Acetylcholine binds to muscarinic receptors on the endothelial cell surface, leading to an increase of intracellular calcium and increased nitric oxide (NO) production. In subjects with an intact endothelium, vasodilation was observed while subjects with endothelial damage experienced paradoxical vasoconstriction.
There exists a non-invasive, in vivo
method for measuring endothelial function in peripheral arteries using high-resolution B-mode ultrasound. The endothelial function of peripheral arteries is closely related to coronary artery function. This technique measures the percent diameter change in the brachial artery during a period of reactive hyperemia following limb ischemia.
This technique, known as endothelium-dependent, flow-mediated vasodilation (FMD) has value in clinical research settings. However, a number of physiological and technical issues can affect the accuracy of the results and appropriate guidelines for the technique have been published. Despite the guidelines, FMD remains heavily operator dependent and presents a steep learning curve. This article presents a standardized method for measuring FMD in the brachial artery on the upper arm and offers suggestions to reduce intra-operator variability.
Medicine, Issue 92, endothelial function, endothelial dysfunction, brachial artery, peripheral artery disease, ultrasound, vascular, endothelium, cardiovascular disease.
Gene Transfer for Ischemic Heart Failure in a Preclinical Model
Institutions: Mount Sinai School of Medicine .
Various emerging technologies are being developed for patients with heart failure. Well-established preclinical evaluations are necessary to determine their efficacy and safety.
Gene therapy using viral vectors is one of the most promising approaches for treating cardiac diseases. Viral delivery of various different genes by changing the carrier gene has immeasurable therapeutic potential.
In this video, the full process of an animal model of heart failure creation followed by gene transfer is presented using a swine model. First, myocardial infarction is created by occluding the proximal left anterior descending coronary artery. Heart remodeling results in chronic heart failure. Unique to our model is a fairly large scar which truly reflects patients with severe heart failure who require aggressive therapy for positive outcomes. After myocardial infarct creation and development of scar tissue, an intracoronary injection of virus is demonstrated with simultaneous nitroglycerine infusion. Our injection method provides simple and efficient gene transfer with enhanced gene expression. This combination of a myocardial infarct swine model with intracoronary virus delivery has proven to be a consistent and reproducible methodology, which helps not only to test the effect of individual gene, but also compare the efficacy of many genes as therapeutic candidates.
Medicine, Issue 51, Myocardial infarction, Gene therapy, Intracoronary injection, Viral vector, Ischemic heart failure
Ultrasound-guided Transthoracic Intramyocardial Injection in Mice
Institutions: Boston Children's Hospital, Harvard University.
Murine models of cardiovascular disease are important for investigating pathophysiological mechanisms and exploring potential regenerative therapies. Experiments involving myocardial injection are currently performed by direct surgical access through a thoracotomy. While convenient when performed at the time of another experimental manipulation such as coronary artery ligation, the need for an invasive procedure for intramyocardial delivery limits potential experimental designs. With ever improving ultrasound resolution and advanced noninvasive imaging modalities, it is now feasible to routinely perform ultrasound-guided, percutaneous intramyocardial injection. This modality efficiently and reliably delivers agents to a targeted region of myocardium. Advantages of this technique include the avoidance of surgical morbidity, the facility to target regions of myocardium selectively under ultrasound guidance, and the opportunity to deliver injectate to the myocardium at multiple, predetermined time intervals. With practiced technique, complications from intramyocardial injection are rare, and mice quickly return to normal activity on recovery from anesthetic. Following the steps outlined in this protocol, the operator with basic echocardiography experience can quickly become competent in this versatile, minimally invasive technique.
Medicine, Issue 90, microinjection, mouse, echocardiography, transthoracic, myocardium, percutaneous administration
A Research Method For Detecting Transient Myocardial Ischemia In Patients With Suspected Acute Coronary Syndrome Using Continuous ST-segment Analysis
Institutions: University of Nevada, Reno, St. Joseph's Medical Center, University of Rochester Medical Center .
Each year, an estimated 785,000 Americans will have a new coronary attack, or acute coronary syndrome (ACS). The pathophysiology of ACS involves rupture of an atherosclerotic plaque; hence, treatment is aimed at plaque stabilization in order to prevent cellular death. However, there is considerable debate among clinicians, about which treatment pathway is best: early invasive using percutaneous coronary intervention (PCI/stent) when indicated or a conservative approach (i.e.
, medication only with PCI/stent if recurrent symptoms occur).
There are three types of ACS: ST elevation myocardial infarction (STEMI), non-ST elevation MI (NSTEMI), and unstable angina (UA). Among the three types, NSTEMI/UA is nearly four times as common as STEMI. Treatment decisions for NSTEMI/UA are based largely on symptoms and resting or exercise electrocardiograms (ECG). However, because of the dynamic and unpredictable nature of the atherosclerotic plaque, these methods often under detect myocardial ischemia because symptoms are unreliable, and/or continuous ECG monitoring was not utilized.
Continuous 12-lead ECG monitoring, which is both inexpensive and non-invasive, can identify transient episodes of myocardial ischemia, a precursor to MI, even when asymptomatic. However, continuous 12-lead ECG monitoring is not usual hospital practice; rather, only two leads are typically monitored. Information obtained with 12-lead ECG monitoring might provide useful information for deciding the best ACS treatment.
Therefore, using 12-lead ECG monitoring, the COMPARE Study was designed to assess the frequency and clinical consequences of transient myocardial ischemia in patients with NSTEMI/UA treated with either early invasive PCI/stent or managed conservatively (medications, or PCI/stent following recurrent symptoms). The purpose of this manuscript is to describe the methodology used in the COMPARE Study.
Permission to proceed with this study was obtained from the Institutional Review Board of the hospital and the university. Research nurses identify hospitalized patients from the emergency department and telemetry unit with suspected ACS. Once consented, a 12-lead ECG Holter monitor is applied, and remains in place during the patient's entire hospital stay. Patients are also maintained on the routine bedside ECG monitoring system per hospital protocol. Off-line ECG analysis is done using sophisticated software and careful human oversight.
Medicine, Issue 70, Anatomy, Physiology, Cardiology, Myocardial Ischemia, Cardiovascular Diseases, Health Occupations, Health Care, transient myocardial ischemia, Acute Coronary Syndrome, electrocardiogram, ST-segment monitoring, Holter monitoring, research methodology
Isolation and Functional Characterization of Human Ventricular Cardiomyocytes from Fresh Surgical Samples
Institutions: University of Florence, University of Florence.
Cardiomyocytes from diseased hearts are subjected to complex remodeling processes involving changes in cell structure, excitation contraction coupling and membrane ion currents. Those changes are likely to be responsible for the increased arrhythmogenic risk and the contractile alterations leading to systolic and diastolic dysfunction in cardiac patients. However, most information on the alterations of myocyte function in cardiac diseases has come from animal models.
Here we describe and validate a protocol to isolate viable myocytes from small surgical samples of ventricular myocardium from patients undergoing cardiac surgery operations. The protocol is described in detail. Electrophysiological and intracellular calcium measurements are reported to demonstrate the feasibility of a number of single cell measurements in human ventricular cardiomyocytes obtained with this method.
The protocol reported here can be useful for future investigations of the cellular and molecular basis of functional alterations of the human heart in the presence of different cardiac diseases. Further, this method can be used to identify novel therapeutic targets at cellular level and to test the effectiveness of new compounds on human cardiomyocytes, with direct translational value.
Medicine, Issue 86, cardiology, cardiac cells, electrophysiology, excitation-contraction coupling, action potential, calcium, myocardium, hypertrophic cardiomyopathy, cardiac patients, cardiac disease
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+
release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Permanent Ligation of the Left Anterior Descending Coronary Artery in Mice: A Model of Post-myocardial Infarction Remodelling and Heart Failure
Institutions: Catholic University of Leuven.
Heart failure is a syndrome in which the heart fails to pump blood at a rate commensurate with cellular oxygen requirements at rest or during stress. It is characterized by fluid retention, shortness of breath, and fatigue, in particular on exertion. Heart failure is a growing public health problem, the leading cause of hospitalization, and a major cause of mortality. Ischemic heart disease is the main cause of heart failure.
Ventricular remodelling refers to changes in structure, size, and shape of the left ventricle. This architectural remodelling of the left ventricle is induced by injury (e.g.,
myocardial infarction), by pressure overload (e.g.,
systemic arterial hypertension or aortic stenosis), or by volume overload. Since ventricular remodelling affects wall stress, it has a profound impact on cardiac function and on the development of heart failure. A model of permanent ligation of the left anterior descending coronary artery in mice is used to investigate ventricular remodelling and cardiac function post-myocardial infarction. This model is fundamentally different in terms of objectives and pathophysiological relevance compared to the model of transient ligation of the left anterior descending coronary artery. In this latter model of ischemia/reperfusion injury, the initial extent of the infarct may be modulated by factors that affect myocardial salvage following reperfusion. In contrast, the infarct area at 24 hr after permanent ligation of the left anterior descending coronary artery is fixed. Cardiac function in this model will be affected by 1) the process of infarct expansion, infarct healing, and scar formation; and 2) the concomitant development of left ventricular dilatation, cardiac hypertrophy, and ventricular remodelling.
Besides the model of permanent ligation of the left anterior descending coronary artery, the technique of invasive hemodynamic measurements in mice is presented in detail.
Medicine, Issue 94, Myocardial infarction, cardiac remodelling, infarct expansion, heart failure, cardiac function, invasive hemodynamic measurements
Evaluation of a Novel Laser-assisted Coronary Anastomotic Connector - the Trinity Clip - in a Porcine Off-pump Bypass Model
Institutions: University Medical Center Utrecht, Vascular Connect b.v., University Medical Center Utrecht, University Medical Center Utrecht.
To simplify and facilitate beating heart (i.e.,
off-pump), minimally invasive coronary artery bypass surgery, a new coronary anastomotic connector, the Trinity Clip, is developed based on the excimer laser-assisted nonocclusive anastomosis technique. The Trinity Clip connector enables simplified, sutureless, and nonocclusive connection of the graft to the coronary artery, and an excimer laser catheter laser-punches the opening of the anastomosis. Consequently, owing to the complete nonocclusive anastomosis construction, coronary conditioning (i.e.,
occluding or shunting) is not necessary, in contrast to the conventional anastomotic technique, hence simplifying the off-pump bypass procedure. Prior to clinical application in coronary artery bypass grafting, the safety and quality of this novel connector will be evaluated in a long-term experimental porcine off-pump coronary artery bypass (OPCAB) study. In this paper, we describe how to evaluate the coronary anastomosis in the porcine OPCAB model using various techniques to assess its quality. Representative results are summarized and visually demonstrated.
Medicine, Issue 93, Anastomosis, coronary, anastomotic connector, anastomotic coupler, excimer laser-assisted nonocclusive anastomosis (ELANA), coronary artery bypass graft (CABG), off-pump coronary artery bypass (OPCAB), beating heart surgery, excimer laser, porcine model, experimental, medical device
Intramyocardial Cell Delivery: Observations in Murine Hearts
Institutions: Imperial College London, Imperial College London, Monash University.
Previous studies showed that cell delivery promotes cardiac function amelioration by release of cytokines and factors that increase cardiac tissue revascularization and cell survival. In addition, further observations revealed that specific stem cells, such as cardiac stem cells, mesenchymal stem cells and cardiospheres have the ability to integrate within the surrounding myocardium by differentiating into cardiomyocytes, smooth muscle cells and endothelial cells.
Here, we present the materials and methods to reliably deliver noncontractile cells into the left ventricular wall of immunodepleted mice. The salient steps of this microsurgical procedure involve anesthesia and analgesia injection, intratracheal intubation, incision to open the chest and expose the heart and delivery of cells by a sterile 30-gauge needle and a precision microliter syringe.
Tissue processing consisting of heart harvesting, embedding, sectioning and histological staining showed that intramyocardial cell injection produced a small damage in the epicardial area, as well as in the ventricular wall. Noncontractile cells were retained into the myocardial wall of immunocompromised mice and were surrounded by a layer of fibrotic tissue, likely to protect from cardiac pressure and mechanical load.
Medicine, Issue 83, intramyocardial cell injection, heart, grafting, cell therapy, stem cells, fibrotic tissue
Detecting Abnormalities in Choroidal Vasculature in a Mouse Model of Age-related Macular Degeneration by Time-course Indocyanine Green Angiography
Institutions: University of Utah Health Sciences Center, University of Utah Health Sciences Center.
Indocyanine Green Angiography (or ICGA) is a technique performed by ophthalmologists to diagnose abnormalities of the choroidal and retinal vasculature of various eye diseases such as age-related macular degeneration (AMD). ICGA is especially useful to image the posterior choroidal vasculature of the eye due to its capability of penetrating through the pigmented layer with its infrared spectrum. ICGA time course can be divided into early, middle, and late phases. The three phases provide valuable information on the pathology of eye problems. Although time-course ICGA by intravenous (IV) injection is widely used in the clinic for the diagnosis and management of choroid problems, ICGA by intraperitoneal injection (IP) is commonly used in animal research. Here we demonstrated the technique to obtain high-resolution ICGA time-course images in mice by tail-vein injection and confocal scanning laser ophthalmoscopy. We used this technique to image the choroidal lesions in a mouse model of age-related macular degeneration. Although it is much easier to introduce ICG to the mouse vasculature by IP, our data indicate that it is difficult to obtain reproducible ICGA time course images by IP-ICGA. In contrast, ICGA via tail vein injection provides high quality ICGA time-course images comparable to human studies. In addition, we showed that ICGA performed on albino mice gives clearer pictures of choroidal vessels than that performed on pigmented mice. We suggest that time-course IV-ICGA should become a standard practice in AMD research based on animal models.
Medicine, Issue 84, Indocyanine Green Angiography, ICGA, choroid vasculature, age-related macular degeneration, AMD, Polypoidal Choroidal Vasculopathy, PCV, confocal scanning laser ophthalmoscope, IV-ICGA, time-course ICGA, tail-vein injection
Coronary Artery Ligation and Intramyocardial Injection in a Murine Model of Infarction
Institutions: East Carolina University.
Mouse models are a valuable tool for studying acute injury and chronic remodeling of the myocardium in vivo. With the advent of genetic modifications to the whole organism or the myocardium and an array of biological and/or synthetic materials, there is great potential for any combination of these to reduce the extent of acute ischemic injury and impede the onset of heart failure following myocardial remodeling.
Here we present the methods and materials used to reliably perform this microsurgery and the modifications involved for temporary (with reperfusion) or permanent coronary artery occlusion studies, as well as for intramyocardial injections. The effects on the heart that can be seen during the procedure and at the termination of the experiment, in addition to histological evaluation, will verify efficacy.
Briefly, surgical preparation involves anesthetizing the mice, removing the fur on the chest, and then disinfecting the surgical area. Intratracheal intubation is achieved by transesophageal illumination using a fiber optic light. The tubing is then connected to a ventilator. An incision made on the chest exposes the pectoral muscles, which are cut to view the ribs. For ischemia/reperfusion studies, the ligature is tied over a 1 cm piece of PE tubing placed on the heart so that occlusion/reperfusion can be customized. For intramyocardial injections, a Hamilton syringe with a sterile 30-gauge beveled needle is used. When the myocardial manipulations are complete, the rib cage, the pectoral muscles, and the skin are closed sequentially. Line block analgesia is effected with 0.25% marcaine in sterile saline, which is applied to the muscle layer prior to closure of the skin. The mice are given a subcutaneous injection of saline and placed in a warming chamber until they are sternally recumbent. They are then returned to the vivarium and housed under standard conditions until the time of tissue collection. At the time of sacrifice, the mice are anesthetized, the heart is arrested in diastole with KCl or BDM, rinsed with saline, and immersed in fixative. Subsequently, routine procedures for processing, embedding, sectioning, and histological staining are performed.
Nonsurgical intubation of a mouse and the microsurgical manipulations described make this a technically challenging model to learn and achieve reproducibility. These procedures, combined with the difficulty in performing consistent manipulations of the ligature for timed occlusion(s) and reperfusion or intramyocardial injections, can also affect the survival rate so optimization and consistency are critical.
Medicine, Issue 52, infarct, ischemia/reperfusion, mice, intramyocardial injection, coronary artery, heart, grafting
Remote Magnetic Navigation for Accurate, Real-time Catheter Positioning and Ablation in Cardiac Electrophysiology Procedures
Institutions: La Paz University Hospital, Magnetecs Corp., Geffen School of Medicine at UCLA Los Angeles.
New remote navigation systems have been developed to overcome current limitations of conventional manually guided catheter ablation in complex cardiac substrates such as left atrial flutter. This protocol describes all the clinical and invasive interventional steps performed during a human electrophysiological study and ablation to assess the accuracy, safety, and real-time navigation of the Catheter Guidance, Control and Imaging (CGCI) system. Patients who underwent ablation of a right or left atrial flutter substrate were included. Specifically, data from three left atrial flutter and two counterclockwise right atrial flutter procedures are shown in this report. One representative left atrial flutter procedure is shown in the movie. This system is based on eight coil-core electromagnets, which generate a dynamic magnetic field focused on the heart. Remote navigation by rapid (millisecond) changes in the magnetic field magnitude, together with a very flexible magnetized catheter, allows real-time closed-loop integration and accurate, stable positioning and ablation of the arrhythmogenic substrate.
Medicine, Issue 74, Anatomy, Physiology, Biomedical Engineering, Surgery, Cardiology, catheter ablation, remote navigation, magnetic, robotic, catheter, positioning, electrophysiology, clinical techniques
Retro-orbital Injection in Adult Zebrafish
Institutions: Children’s Hospital Boston, Harvard Medical School, Dana Farber Cancer Institute.
Drug treatment of whole animals is an essential tool in any model system for pharmacological and chemical genetic studies. Intravenous (IV) injection is often the most effective and noninvasive form of delivery of an agent of interest. In the zebrafish (Danio rerio), IV injection of drugs has long been a challenge because of the small vessel diameter. This has also proved a significant hurdle for the injection of cells during hematopoietic stem cell transplantation. Historically, injections into the bloodstream were done directly through the heart. However, this intra-cardiac procedure has a very high mortality rate, as the heart is often punctured during injection, leaving the fish prone to infection, massive blood loss, or fatal organ damage. Drawing on our experience with the mouse, we have developed a new injection procedure in the zebrafish in which the injection site is behind the eye, in the retro-orbital venous sinus. This retro-orbital (RO) injection technique has been successfully employed both for the injection of drugs in adult fish and for transplantation of whole kidney marrow cells. RO injection has a much lower mortality rate than traditional intra-cardiac injection. Fish that are injected retro-orbitally tend to bleed less following injection and are at a much lower risk of injury to a major organ such as the heart. Further, when performed properly, injected cells and/or drugs quickly enter the bloodstream, allowing compounds to exert their effect on the whole fish and kidney cells to home easily to their niche. Thus, this new injection technique minimizes mortality while allowing efficient delivery of material into the bloodstream of adult fish. Here we exemplify this technique by retro-orbital injection of Tg(globin:GFP) cells into adult casper fish as well as injection of a red fluorescent dye (dextran, Texas Red) into adult casper fish. We then visualize successful injections by whole-animal fluorescence microscopy.
Cellular Biology, Issue 34, fluorescent dye, kidney marrow cells, vasculature, red blood cells, Zebrafish, injection, retro-orbital injection, transplantation, HSC
Anatomical Reconstructions of the Human Cardiac Venous System using Contrast-computed Tomography of Perfusion-fixed Specimens
Institutions: University of Minnesota.
A detailed understanding of the complexity and relative variability within the human cardiac venous system is crucial for the development of cardiac devices that require access to these vessels. For example, cardiac venous anatomy is known to be one of the key limitations for the proper delivery of cardiac resynchronization therapy (CRT) [1]. Therefore, the development of a database of anatomical parameters for human cardiac venous systems can aid in the design of CRT delivery devices to overcome such a limitation. In this research project, the anatomical parameters were obtained from 3D reconstructions of the venous system using contrast-computed tomography (CT) imaging and modeling software (Materialise, Leuven, Belgium). The following parameters were assessed for each vein: arc length, tortuosity, branching angle, distance to the coronary sinus ostium, and vessel diameter.
CRT is a potential treatment for patients with electromechanical dyssynchrony. Approximately 10-20% of heart failure patients may benefit from CRT [2]. Electromechanical dyssynchrony implies that parts of the myocardium activate and contract earlier or later than the normal conduction pathway of the heart. In CRT, dyssynchronous areas of the myocardium are treated with electrical stimulation. CRT pacing typically involves pacing leads that stimulate the right atrium (RA), right ventricle (RV), and left ventricle (LV) to produce more resynchronized rhythms. The LV lead is typically implanted within a cardiac vein, with the aim of overlaying it within the site of latest myocardial activation.
We believe that the models obtained and the analyses thereof will promote the anatomical education for patients, students, clinicians, and medical device designers. The methodologies employed here can also be utilized to study other anatomical features of our human heart specimens, such as the coronary arteries. To further encourage the educational value of this research, we have shared the venous models on our free access website: www.vhlab.umn.edu/atlas.
Biomedical Engineering, Issue 74, Medicine, Bioengineering, Anatomy, Physiology, Surgery, Cardiology, Coronary Vessels, Heart, Heart Conduction System, Heart Ventricles, Myocardium, cardiac veins, coronary veins, perfusion-fixed human hearts, Computed Tomography, CT, CT scan, contrast injections, 3D modeling, Device Development, vessel parameters, imaging, clinical techniques
Interview: Protein Folding and Studies of Neurodegenerative Diseases
Institutions: MIT - Massachusetts Institute of Technology.
In this interview, Dr. Lindquist describes relationships between protein folding, prion diseases, and neurodegenerative disorders. The problem of protein folding is at the core of modern biology. In addition to their traditional biochemical functions, proteins can mediate the transfer of biological information and can therefore be considered a genetic material. This recently discovered function of proteins has important implications for studies of human disorders. Dr. Lindquist also describes current experimental approaches to investigating the mechanism of neurodegenerative diseases based on genetic studies in model organisms.
Neuroscience, Issue 17, protein folding, brain, neuron, prion, neurodegenerative disease, yeast, screen, Translational Research
A New Single Chamber Implantable Defibrillator with Atrial Sensing: A Practical Demonstration of Sensing and Ease of Implantation
Institutions: University Hospital of Rostock, Germany.
Implantable cardioverter-defibrillators (ICDs) terminate ventricular tachycardia (VT) and ventricular fibrillation (VF) with high efficacy and can protect patients from sudden cardiac death (SCD). However, inappropriate shocks may occur if tachycardias are misdiagnosed. Inappropriate shocks are harmful and impair patient quality of life. The risk of inappropriate therapy increases with lower detection rates programmed in the ICD. Single-chamber detection poses greater risks for misdiagnosis when compared with dual-chamber devices that have the benefit of additional atrial information. However, using a dual-chamber device merely for the sake of detection is generally not accepted, since the risks associated with the second electrode may outweigh the benefits of detection. Therefore, BIOTRONIK developed a ventricular lead called the LinoxSMART S DX, which allows for the detection of atrial signals from two electrodes positioned at the atrial part of the ventricular electrode. This device contains two ring electrodes: one that contacts the atrial wall at the junction of the superior vena cava (SVC) and one positioned at the free-floating part of the electrode in the atrium. The excellent signal quality can only be achieved by a special filter setting in the ICD (Lumax 540 and 740 VR-T DX, BIOTRONIK). Here, the ease of implantation of the system will be demonstrated.
Medicine, Issue 60, Implantable defibrillator, dual chamber, single chamber, tachycardia detection
Injection of dsRNA into Female A. aegypti Mosquitos
Institutions: University of California, Irvine (UCI).
Reverse genetic approaches have proven extremely useful for determining which genes underlie resistance to vector pathogens in mosquitoes. This video protocol illustrates a method used by the James lab to inject dsRNA into female A. aegypti mosquitoes, which harbor the dengue virus. The technique for calibrating injection needles, manipulating the injection setup, and injecting dsRNA into the thorax is illustrated.
Cellular Biology, Issue 5, mosquito, malaria, genetics, injection
In Utero Intraventricular Injection and Electroporation of E16 Rat Embryos
Institutions: University of California, San Francisco - UCSF.
In-utero, in-vivo injection and electroporation of the embryonic rat neocortex provides a powerful tool for the manipulation of individual progenitors lining the walls of the lateral ventricle. This technique is now widely used to study the processes involved in corticogenesis by over-expressing or knocking down genes and observing the effects on cellular proliferation, migration, and differentiation. In comparison to traditional knockout strategies, in-utero electroporation provides a rapid means to manipulate a population of cells during a specific temporal window. In this video protocol, we outline the experimental methodology for preparing rats for surgery, exposing the uterine horns through laparotomy, injecting DNA into the lateral ventricles of the developing embryo, electroporating DNA into the progenitors lining the lateral wall, and caring for animals post-surgery. Our laboratory uses this protocol for surgeries on E15-E21 rats; however, it is most commonly performed at E16, as shown in this video.
Neuroscience, Issue 6, Protocol, Stem Cells, Cerebral Cortex, Brain Development, Electroporation, Intra Uterine Injections, transfection
In Utero Intraventricular Injection and Electroporation of E15 Mouse Embryos
Institutions: University of California, San Francisco - UCSF.
In-utero, in-vivo injection and electroporation of the embryonic mouse neocortex provides a powerful tool for the manipulation of individual progenitors lining the walls of the lateral ventricle. This technique is now widely used to study the processes involved in corticogenesis by over-expressing or knocking down genes and observing the effects on cellular proliferation, migration, and differentiation. In comparison to traditional knockout strategies, in-utero electroporation provides a rapid means to manipulate a population of cells during a specific temporal window. In this video protocol, we outline the experimental methodology for preparing mice for surgery, exposing the uterine horns through laparotomy, injecting DNA into the lateral ventricles of the developing embryo, electroporating DNA into the progenitors lining the lateral wall, and caring for animals post-surgery. Our laboratory uses this protocol for surgeries on E13-E16 mice; however, it is most commonly performed at E15, as shown in this video.
Neuroscience, Issue 6, Protocol, electroporation, Injection, Stem Cells, brain, transfection
Modified Technique for Coronary Artery Ligation in Mice
Institutions: Sahlgrenska Academy, University of Gothenburg.
Myocardial infarction (MI) is one of the most important causes of mortality in humans [1-3]. In order to improve morbidity and mortality in patients with MI, we need better knowledge about the pathophysiology of myocardial ischemia. This knowledge may be valuable for defining new therapeutic targets for innovative cardiovascular therapies [4]. The experimental MI model in mice is an increasingly popular small-animal model in preclinical research, in which MI is induced by means of permanent or temporary ligation of the left coronary artery (LCA) [5]. In this video, we describe the step-by-step method of how to induce experimental MI in mice.
The animal is first anesthetized with 2% isoflurane. The unconscious mouse is then intubated and connected to a ventilator for artificial ventilation. The left chest is shaved, and a 1.5 cm incision is made in the skin along the mid-axillary line. The left pectoralis major muscle is bluntly dissected until the ribs are exposed. The muscle layers are pulled aside and fixed with an eyelid retractor. After these preparations, a left thoracotomy is performed between the third and fourth ribs in order to visualize the anterior surface of the heart and the left lung. The proximal segment of the LCA is then ligated with a 7-0 Ethilon suture, which typically induces an infarct size of ~40% of the left ventricle. At the end, the chest is closed and the animals receive postoperative analgesia (Temgesic, 0.3 mg/50 ml, ip). The animals are kept in a warm cage until spontaneous recovery.
Medicine, Issue 73, Anatomy, Physiology, Biomedical Engineering, Surgery, Cardiology, Hematology, myocardial infarction, coronary artery, ligation, ischemia, ECG, electrocardiology, mice, animal model
In computer networking, a server is simply a program that operates as a socket listener. The term server is also often generalized to describe a host that is deployed to execute one or more such programs.
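To make that definition concrete, here is a minimal sketch of such a socket listener in Python; the loopback address and port are arbitrary choices for the example, not part of any particular product.

```python
import socket

HOST, PORT = "127.0.0.1", 9090  # arbitrary example address and port

# Create a TCP socket, bind it, and listen for incoming connections.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((HOST, PORT))
    listener.listen()
    print(f"Listening on {HOST}:{PORT}")

    while True:
        conn, addr = listener.accept()      # block until a client connects
        with conn:
            data = conn.recv(1024)          # read whatever the client sent
            conn.sendall(b"hello from the server\n")  # serve a trivial response
```

Any host running a program like this is, for the moment, acting as a server for whichever clients connect to that port.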
A server computer is a computer, or series of computers, that links other computers or electronic devices together. Servers often provide essential services across a network, either to private users inside a large organization or to public users via the Internet. For example, when you enter a query in a search engine, the query is sent from your computer over the Internet to the servers that store all the relevant web pages. The results are sent back by the server to your computer.
The term server is used quite broadly in information technology. Despite the many server-branded products available (such as server editions of hardware, software, and operating systems), in theory any computerized process that shares a resource with one or more client processes is a server. To illustrate this, take the common example of file sharing. While the existence of files on a machine does not classify it as a server, the mechanism by which the operating system shares those files with clients is the server.
Similarly, consider a web server application (such as the multiplatform Apache HTTP Server). This web server software can be run on any capable computer. For example, while a laptop or personal computer is not typically known as a server, it can in this situation fulfill the role of one and hence be labeled as one. In this case, it is the machine's role as a web server that classifies it as a server.
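As a concrete, hedged example of this point, the few lines below use Python's standard-library http.server module to turn whatever machine runs them into a simple web server that serves files from the current directory; the port number is an arbitrary choice for the example.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8000  # arbitrary example port

# SimpleHTTPRequestHandler serves files from the current working directory,
# so any machine running this script is, for the moment, a file-serving web server.
server = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
print(f"Serving HTTP on port {PORT} (Ctrl+C to stop)")
server.serve_forever()
```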
In the hardware sense, the word server typically designates computer models intended for running software applications under the heavy demand of a network environment. In this client–server configuration, one or more machines, either computers or computer appliances, share information with each other, with one acting as a host for the other.
While nearly any personal computer is capable of acting as a network server, a dedicated server will contain features making it more suitable for production environments. These features may include a faster CPU, increased high-performance RAM, and typically more than one large hard drive. More obvious distinctions include marked redundancy in power supplies, network connections, and even the servers themselves.
Between the 1990s and 2000s, an increase in the use of dedicated hardware saw the advent of self-contained server appliances. One well-known product is the Google Search Appliance, a unit that combines hardware and software in out-of-the-box packaging. Simpler examples of such appliances include switches, routers, gateways, and print servers, all of which are available in a near plug-and-play configuration.
Modern operating systems such as Microsoft Windows or Linux distributions seem designed with a client–server architecture in mind. These operating systems attempt to abstract the hardware, allowing a wide variety of software to work with components of the computer. In a sense, the operating system can be seen as serving hardware to the software, which in all but low-level programming languages must interact with it through an API.
These operating systems can run programs in the background, called services or daemons. Such programs may wait in a sleep state until they are needed, as with the aforementioned Apache HTTP Server software. Since any software that provides services can be called a server, a modern personal computer can be seen as a forest of servers and clients operating in parallel.
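A small sketch of this pattern, using Python's socketserver module, shows one such service waiting quietly for clients and handling each connection in its own thread; the echo behavior, address, and port are illustrative only.

```python
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    """Handle one client connection: read some bytes and echo them back."""
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data)

# ThreadingTCPServer spawns a thread per connection, so many clients can be
# served in parallel while the main program simply waits for work to arrive.
with socketserver.ThreadingTCPServer(("127.0.0.1", 7070), EchoHandler) as srv:
    srv.serve_forever()
```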
The Internet itself is also a forest of servers and clients. Merely requesting a web page from a few kilometers away involves satisfying a stack of protocols and many examples of hardware and software servers, among them the routers, modems, domain name servers, and various other servers necessary to provide the World Wide Web.
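A rough client-side sketch of what that involves is shown below: the hostname is first resolved through DNS (answered by name servers), and only then is an HTTP request carried across routers to the web server. example.com is used purely as a placeholder host.

```python
import socket
from http.client import HTTPConnection

host = "example.com"  # placeholder hostname

# Step 1: ask the resolver (and ultimately DNS servers) for the host's address.
address = socket.gethostbyname(host)
print(f"{host} resolves to {address}")

# Step 2: open a TCP connection (carried by routers and modems along the way)
# and issue an HTTP request to the web server at that address.
conn = HTTPConnection(host, 80, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
```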
Hardware requirements for servers vary, depending on the server application. Absolute CPU speed is not usually as critical to a server as it is to a desktop machine. A server's duty of providing service to many users over a network leads to different requirements, such as fast network connections and high I/O throughput. Since servers are usually accessed over a network, they may run in headless mode without a monitor or input device. Processes that are not needed for the server's function are not run. Many servers do not have a graphical user interface (GUI), as it is unnecessary and consumes resources that could be allocated elsewhere. Similarly, audio and USB interfaces may be omitted.
Servers often run for long periods without interruption and availability must often be very high, making hardware reliability and durability extremely important. Although servers can be built from commodity computer parts, mission-critical servers use specialized hardware with low failure rates in order to maximize uptime. For example, servers may incorporate faster, higher-capacity hard drives, larger computer fans or water cooling to help remove heat, and uninterruptible power supplies that ensure the servers continue to function in the event of a power failure. These components offer higher performance and reliability at a correspondingly higher price. Hardware redundancy—installing more than one instance of modules such as power supplies and hard disks arranged so that if one fails another is automatically available—is widely used. ECC memory devices which detect and correct errors are used; non-ECC memory is more likely to cause data corruption.
Server hardware often takes a long time to start up and load the operating system. Servers often do extensive pre-boot memory testing and verification, along with startup of remote management services. The hard drive controllers then start up banks of drives sequentially, rather than all at once, so as not to overload the power supply with startup surges, and afterwards they initiate RAID system pre-checks for correct operation of redundancy. It is common for such a machine to take several minutes to start up, but it may not need restarting for months or years.
Server operating systems
Some popular operating systems for servers — such as FreeBSD, Solaris, and Linux — are derived from or are similar to UNIX. UNIX was originally a minicomputer operating system, and as servers gradually replaced traditional minicomputers, UNIX was a logical and efficient choice of operating system. Many of these UNIX-derived operating systems are free in both senses (free of charge and free to modify and redistribute) and remain popular.
Server-oriented operating systems tend to have certain features in common that make them more suitable for the server environment, such as:
- GUI not available or optional,
- ability to reconfigure and update both hardware and software to some extent without restart,
- advanced backup facilities to permit regular and frequent online backups of critical data,
- transparent data transfer between different volumes or devices,
- flexible and advanced networking capabilities,
- automation capabilities such as daemons in UNIX and services in Windows, and
- tight system security, with advanced user, resource, data, and memory protection.
Server-oriented operating systems can, in many cases, interact with hardware sensors to detect conditions such as overheating or processor and disk failure, and consequently alert an operator and/or take remedial measures themselves.
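As a hedged illustration of that kind of self-monitoring, the loop below polls a CPU temperature reading and raises an alert when a threshold is crossed. The sysfs path is Linux-specific and varies by machine, and the threshold is an arbitrary example, so treat this as a sketch rather than a portable tool.

```python
import time

SENSOR = "/sys/class/thermal/thermal_zone0/temp"  # Linux-specific; varies by machine
LIMIT_C = 85.0                                    # example threshold, not a vendor value

def read_temp_celsius(path):
    # The kernel exposes the value in millidegrees Celsius.
    with open(path) as f:
        return int(f.read().strip()) / 1000.0

while True:
    temp = read_temp_celsius(SENSOR)
    if temp > LIMIT_C:
        # A real server OS might page an operator or throttle the CPU here.
        print(f"ALERT: CPU temperature {temp:.1f} C exceeds {LIMIT_C} C")
    time.sleep(30)  # poll every 30 seconds
```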
Because servers must supply a restricted range of services to perhaps many users while a desktop computer must carry out a wide range of functions required by its user, the requirements of an operating system for a server are different from those of a desktop machine. While it is possible for an operating system to make a machine both provide services and respond quickly to the requirements of a user, it is usual to use different operating systems on servers and desktop machines. Some operating systems are supplied in both server and desktop versions with similar user interface.
The desktop versions of the Windows and Mac OS X operating systems are deployed on a minority of servers, as are some proprietary mainframe operating systems, such as z/OS. The dominant operating systems among servers are UNIX-based or open source kernel distributions, such as Linux (the kernel).
The rise of the microprocessor-based server was facilitated by the development of UNIX to run on the x86 microprocessor architecture. The Microsoft Windows family of operating systems also runs on x86 hardware and, since Windows NT, has been available in versions suitable for server use.
While the role of server and desktop operating systems remains distinct, improvements in the reliability of both hardware and operating systems have blurred the distinction between the two classes. Today, many desktop and server operating systems share similar code bases, differing mostly in configuration. The shift towards web applications and middleware platforms has also lessened the demand for specialist application servers.
Servers on the Internet
Almost the entire structure of the Internet is based upon a client–server model. High-level root nameservers, DNS servers, and routers direct the traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world and providing services such as:
- World Wide Web
- Domain Name System
- FTP file transfer
- chat and instant messaging
- voice communication
- streaming audio and video
- Online gaming
- Database servers
Virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers.
There are also technologies that operate on an inter-server level. Other services do not use dedicated servers; for example peer-to-peer file sharing, some implementations of telephony (e.g. Skype), and supplying television programs to several users (e.g. Kontiki, SlingBox).
Energy consumption of servers
In 2010, servers were responsible for 2.5% of energy consumption in the United States, and a further 2.5% of United States energy consumption was used by the cooling systems required to cool them. It was estimated in 2010 that, if trends continued, servers would use more of the world's energy than air travel by 2020.
- Google's first server, now held at the Computer History Museum
Pediatricians often encounter children with delays of motor development in their clinical practices. Earlier identification of motor delays allows for timely referral for developmental interventions as well as diagnostic evaluations and treatment planning. A multidisciplinary expert panel developed an algorithm for the surveillance and screening of children for motor delays within the medical home, offering guidance for the initial workup and referral of the child with possible delays in motor development. Highlights of this clinical report include suggestions for formal developmental screening at the 9-, 18-, 30-, and 48-month well-child visits; approaches to the neurologic examination, with emphasis on the assessment of muscle tone; and initial diagnostic approaches for medical home providers. The use of diagnostic tests to evaluate children with motor delays is described, including brain MRI for children with high muscle tone and measurement of serum creatine kinase concentration in those with decreased muscle tone. The importance of pursuing diagnostic tests while concurrently referring patients to early intervention programs is emphasized.
- AAP — American Academy of Pediatrics
- CK — creatine phosphokinase
- CPT — Current Procedural Terminology
- DCD — developmental coordination disorder
- DMD — Duchenne muscular dystrophy
The American Academy of Pediatrics (AAP) recommends developmental surveillance at all preventive care visits and standardized developmental screening of all children at ages 9, 18, and 30 months.1 Recently, developmental screening instruments and their clinical interpretations have emphasized the early detection of delays in language and social development, responsive to rising prevalence rates of autism spectrum disorders in US children.2 The most commonly used developmental screening instruments have not been validated on children with motor delays.3,4 Recognizing the equal importance of surveillance and screening for motor development in the medical home, this clinical report reviews the motor evaluation of children and offers guidelines to the pediatrician regarding an approach to children who demonstrate motor delays and variations in muscle tone. (This report is aimed at all pediatric primary care providers, including pediatricians, family physicians, nurse practitioners, and physician assistants. Generic terms, such as clinician and provider, are intended to encompass all pediatric primary care providers.)
Gross motor development follows a predictable sequence, reflecting the functional head-to-toe maturation of the central nervous system. Although parents are reliable in reporting their child’s gross motor development,5,6 it is up to the clinician to use the parent’s report and his or her own observations to detect a possible motor delay.7
Gross motor delays are common and vary in severity and outcome. Some children with gross motor delays attain typical milestones at a later age. Other children have a permanent motor disability, such as cerebral palsy, which has a prevalence of 3.3 per 1000.8 Other children have developmental coordination disorder (DCD), which affects up to 6% of the population and generally becomes more evident when children enter kindergarten.9 When motor delays are pronounced and/or progressive, a specific neuromuscular disorder is more likely to be diagnosed. Motor delays may be the first or most obvious sign of a global developmental disorder. For infants, motor activities are manifestations of early development. It is often the case that children whose developmental trajectories are at risk may experience challenges in meeting early motor milestones. Establishing a specific diagnosis can inform prognostication, service planning, and monitoring for associated developmental and medical disorders. When the underlying etiology of motor delays is genetic, early recognition may assist parents with family planning. A timely diagnosis may reduce family stress related to diagnostic and prognostic uncertainties.5 For children with the few neuromuscular diseases for which treatments are available, outcomes may be improved when therapy is implemented early.10
Focus groups were conducted with 49 pediatricians at the AAP National Conference and Exhibition in 2010, and members of the AAP Quality Improvement Innovation Network were surveyed to ascertain current provider practices and needs regarding neuromotor screening.11 Pediatricians described widely varying approaches to motor examinations and identification of delays and expressed uncertainty regarding their ability to detect, diagnose, and manage motor delays in children. Participants requested more education, training, and standardization of the evaluation process, including an algorithm to guide clinical care (Fig 1).
The Algorithm: Identifying Children With Motor Delays: An Algorithm for Surveillance and Screening
Step 1. Pediatric Patient at Preventive Care Visit
Each child’s motor development should be addressed with other developmental and health topics at every pediatric preventive care visit.
Step 2. Is This a 9-, 18-, 30-, or 48-Month Visit?
All children should receive periodic developmental screening by using a standardized test, as recommended in the 2006 AAP policy statement “Identifying Infants and Children With Developmental Disorders in the Medical Home: An Algorithm for Developmental Surveillance and Screening.”1 Most children will demonstrate typical development without identifiable risks for potential delays. In the absence of established risk factors or parent or provider concerns, completion of a general developmental screening test is recommended at the 9-, 18-, and 30-month visits. These ages were selected, in part, on the basis of critical observations of motor skills development.
At the recommended screening visits, the following motor skills should be observed in the young child. These skills are typically acquired at earlier ages, and their absence at these ages signifies delay:
9-month visit: The infant should roll to both sides, sit well without support, and demonstrate motor symmetry without established handedness. He or she should be grasping and transferring objects hand to hand.
18-month visit: The toddler should sit, stand, and walk independently. He or she should grasp and manipulate small objects. Mild motor delays undetected at the 9-month screening visit may be apparent at 18 months.
30-month visit: Most motor delays will have already been identified during previous visits. However, more subtle gross motor, fine motor, speech, and oral motor impairments may emerge at this visit. Progressive neuromuscular disorders may begin to emerge at this time and manifest as a loss of previously attained gross or fine motor skills.
An additional general screening test is recommended at the 48-month visit to identify problems in coordination, fine motor, and graphomotor skills before a child enters kindergarten.
48-month visit: The preschool-aged child should have early elementary school skills, with emerging fine motor, handwriting, gross motor, communication, and feeding abilities that promote participation with peers in group activities. Preschool or child care staff concerns about motor development should be addressed. Loss of skills should alert the examiner to the possibility of a progressive disorder.
Continuous developmental surveillance should also occur throughout childhood, with additional screenings performed whenever concerns are raised by parents, child health professionals, or others involved in the care of the child.
A summary of screening and surveillance for motor development based on the AAP “Recommendations for Preventive Pediatric Health Care” (also known as the periodicity schedule) is described in Table 1.12 Listed are the mean ages at which typically developing children will achieve motor milestones. Marked delay beyond these ages warrants attention but does not necessarily signify a neuromotor disease.
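As a hedged illustration only, the schedule just described (routine standardized screening at the 9-, 18-, 30-, and 48-month visits, plus screening at any visit where concerns arise) can be expressed as a short Python sketch; the function name is invented for the example, and this is not a clinical decision tool.

```python
# Ages (in months) at which a standardized developmental screen is routine.
SCREENING_VISITS_MONTHS = {9, 18, 30, 48}

def administer_standardized_screen(visit_age_months, concerns_raised):
    """Return True when a formal developmental screening tool should be given.

    Screening is routine at the 9-, 18-, 30-, and 48-month well-child visits,
    and is added at any other visit when parents, clinicians, or other
    caregivers raise concerns during surveillance.
    """
    return visit_age_months in SCREENING_VISITS_MONTHS or concerns_raised

# Example: the 18-month visit always includes a standardized screen.
assert administer_standardized_screen(18, concerns_raised=False)
# Example: a 12-month visit includes one only if a concern has been raised.
assert not administer_standardized_screen(12, concerns_raised=False)
```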
Step 3a. Perform Developmental Surveillance
As the 2006 policy states, “Developmental surveillance is a flexible, longitudinal, continuous and cumulative process whereby knowledgeable health care professionals identify children who may have developmental problems. Surveillance can be useful for determining appropriate referrals, providing patient education and family-centered care in support of healthy development, and monitoring the effects of developmental health promotion through early intervention and therapy.” The 5 components of developmental surveillance are as follows: eliciting and attending to the parents’ concerns about their child’s development, documenting and maintaining a developmental history, making accurate observations of the child, identifying risk and protective factors, and maintaining an accurate record of documenting the process and findings.
A great breadth and depth of information is considered in comprehensive developmental surveillance. Much of this information, including prenatal, perinatal, and interval history will accumulate in the child’s health record and should be reviewed at each screening visit.
Step 3b. Administer Screening Tool
Developmental screening involves the administration of a brief standardized tool that aids in the identification of children at risk for a developmental disorder. Many screening tools can be completed by parents and scored by nonphysician personnel; pediatric providers interpret the screening results. The aforementioned 2006 policy statement on developmental surveillance and screening provides a list of developmental screening tools and a discussion of how to choose an appropriate screening tool.
Step 4. Do Surveillance and/or Screening Demonstrate Neuromotor Concern?
Step 5a. Perform Remainder of Bright Futures Health Supervision Examination
Step 5b. Consider Administering Screening Tool if Not Already Done
The concerns of both parents and child health professionals should be included in determining whether surveillance suggests that the child may be at risk for developmental problems. If parents or health care providers express concern about the child’s development, administration of a developmental screening tool to address the concern may be added.
Step 6. Obtain/Review Expanded History and Perform Neurologic Examination
Pediatricians can elicit key clinical information about a child’s motor development from the child, parents, and family. Key elements are listed in Table 2. It is essential to ask parents broad, open-ended questions and listen carefully for any concerns. Some concerns will be stated explicitly; others may be suggested through statements of perceived differences between a child’s abilities and those of their age-matched peers. To broaden historical perspectives, clinicians can ask if extended family members, educators, or others who know the child well express any concerns about motor development. In instances of birth at earlier than 36 weeks’ gestation, most experts recommend correcting for prematurity for at least the first 24 months of life.13 Last, while taking the history, clinicians should carefully watch the child’s posture, play, and spontaneous motor function without the stressful demands of performance under deliberate observation. When children are tired or stressed, direct observation of motor skills may not be possible, and full reliance on historical information is needed.
Children with increased tone may attain motor milestones early, asymmetrically, or “out of order.” These aberrant milestones may include rolling supine to prone before prone to supine, asymmetric propping with sitting, asymmetric grasp, development of handedness before 18 months,14 and standing before sitting.15
The examination maneuvers described here are focused on medical home visits of children in the ambulatory setting. A discussion of newborn examination within the nursery setting is beyond the scope of this report; however, Guidelines for Perinatal Care, developed by the AAP Committee on Fetus and Newborn and American College of Obstetrics and Gynecology Committee on Obstetric Practice, provides further information.16
When there are concerns regarding the quality or progression of a child’s motor development, evaluation begins with a complete physical examination, with special attention to the neurologic examination and evaluation of vision and hearing. Children with motor delays related to systemic illness often show alterations in their level of interaction with their environment and general arousal. Careful assessments of head circumference, weight, and length/height with interpretation of percentiles according to Centers for Disease Control and Prevention or World Health Organization growth curves are essential and may facilitate early identification of children with microcephaly, macrocephaly, and growth impairments. Often, poor cooperation by the child may interfere with proper measurements, so any unexpected change in growth pattern should be rechecked by the clinician. Drooling or poor weight gain may suggest facial and oral motor weaknesses, and ptosis should prompt clinicians to consider congenital myopathies or lower motor neuron disorders. Respiratory problems, such as tachypnea, retractions, and ineffective airway clearance, can accompany many neuromotor conditions. Careful palpation of the abdomen may reveal organomegaly suggesting glycogen storage diseases, sphingolipidoses, or mucopolysaccharidoses. The astute clinician can use findings from the general pediatric examination to individualize a diagnostic approach for a child with motor delays.
Ideally, children should be well rested and comfortable for neuromotor examinations. However, when toddlers and preschoolers are uncooperative, clinicians can still gain important diagnostic information by observing the quality and quantity of movement.
The cranial nerve examination includes eye movements, response to visual confrontation, and pupillary reactivity. Although fundoscopic examination may be difficult, red reflexes should be detectable and symmetric. The quality of eye opening and closure and facial expression, including smile and cry, should be observed. Oromotor movement can be observed and, in the older child formally tested, by observing palate and tongue movement and, if possible, by drinking through a straw or blowing kisses. Observation for tongue fasciculations and quality of shoulder shrug should be assessed.
Strength is most easily assessed by functional observation. Attention to the quality and quantity of body posture and movement includes antigravity movement in the infant and the sequential transition from tripod sitting with symmetrical posture to walking and then running, climbing, hopping, and skipping in the older child. Clinicians should note any use of a Gower maneuver, characterized by an ambulatory child’s inability to rise from the floor without pulling or pushing up with his arms. Muscle bulk and texture, joint flexibility, and presence or absence of atrophy should be observed. Quality and intensity of grasp is most easily assessed by observation during play.
For the infant, postural tone is assessed by ventral suspension in the younger infant and truncal positioning when sitting and standing in the older infant.17 Extremity tone can be monitored during maturation by documenting the scarf sign in infants18,19 and popliteal angles after the first year (see Fig 2).20 Persistence of primitive reflexes and asymmetry or absence of protective reflexes suggest neuromotor dysfunction. Unsteady gait or tremor can be a sign of muscle weakness. Diminution or absence of deep tendon reflexes can occur with lower motor neuron disorders, whereas increased reflexes and an abnormal plantar reflex can be signs of upper motor neuron dysfunction. Neuromotor dysfunction can be accompanied by sensory deficits and should be assessed by testing touch and pain sensation.
In older children, difficulties with sequential motor planning, or praxis, should be differentiated from strength and extrapyramidal problems. Dyspraxia refers to the inability to formulate, plan, and execute complex movements. Assessment includes the presence and quality of age-appropriate gross motor skills (stair climb, 1-foot stand, hop, run, skip, and throw) and fine motor skills (button, zip, snap, tie, cut, use objects, and draw). Many of these children also have hypotonia.21
Step 7. Are the History or Examination Results Concerning?
After identifying concerns of motor development, primary care clinicians can perform key diagnostic tests. All testing should be performed in the context of the child’s past medical history, including prenatal complications and exposures, perinatal problems, feeding, and growth. Family history is also important to identify any other relatives with developmental or motor issues, recurrent pregnancy loss, stillbirth, or infant death, which may lead to identification of an underlying genetic etiology. Findings on physical examination, such as unusual facial features or other known visceral anomalies, may suggest a specific genetic condition. The state-mandated newborn screening laboratory results should be reviewed, because normal results exclude many disorders and avoid unnecessary testing. Although newborn screening is comprehensive, it does not test for all inborn biochemical disorders.
Step 8. High, Normal, or Low Tone?
Step 9a. Consider Neuroimaging
Increased tone in a child with neuromotor delay suggests an upper motor neuron problem, such as cerebral palsy. The American Academy of Neurology recommends imaging of the brain, preferably by MRI, for patients suspected of having cerebral palsy.22 This test can be ordered within the medical home at the same time the patient is referred for specialist consultation for diagnosis.
Step 9b. Measure Creatine Phosphokinase and Thyroid-Stimulating Hormone Concentrations
When low to normal tone is identified, especially with concomitant weakness, investigations should target diseases of the lower motor neurons or muscles. Among the most common is Duchenne muscular dystrophy (DMD), characterized by weakness, calf hypertrophy, and sometimes cognitive or social delays. DMD usually presents at 2 to 4 years of age, but signs of weakness may be evident earlier. Becker muscular dystrophy is allelic to DMD but typically presents in older children and with a milder phenotype. Initial testing for all children with motor delay and low tone can be performed within the medical home by measuring the serum creatine phosphokinase (CK) concentration. The CK concentration is significantly elevated in DMD, usually >1000 U/L. As an X-linked disorder, there may be a family history of other affected male family members on the maternal side. However, DMD often presents in the absence of a family history for this disorder, with approximately one-third of cases being new mutations.23 If the CK concentration is elevated, the diagnosis of DMD can usually be confirmed with molecular sequencing of the DMD gene. Other neuromuscular disorders include diseases of the peripheral motor nerves or muscles, such as myotonic dystrophy, spinal muscular atrophy, mitochondrial disorders, and congenital myasthenia gravis. Testing for these diseases should be performed by subspecialists, because these patients often require electrodiagnostic or specific genetic testing.
Although congenital hypothyroidism will be identified by newborn screening, acquired hypothyroidism and hyperthyroidism can present in later infancy or childhood with motor delay and low to normal tone. It is reasonable to perform thyroid function studies (thyroxine [T4] and thyroid-stimulating hormone) as part of the general laboratory evaluation for children with low tone or neuromuscular weakness, even without classic signs of thyroid disease.
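The tone-based branching of Steps 8 through 9b, together with the concurrent early-intervention referral this report emphasizes, can be sketched as follows. This is a simplified, hypothetical illustration with invented function and label names, not a substitute for the full algorithm or for clinical judgment.

```python
def initial_workup_for_motor_delay(tone):
    """Suggest first-line studies orderable from the medical home, keyed to muscle tone.

    tone: "high", "normal", or "low", as judged on the neurologic examination.
    """
    # Referral proceeds concurrently with diagnostic testing in every case.
    steps = ["Refer to early intervention / Child Find",
             "Consult or refer to appropriate pediatric subspecialists"]
    if tone == "high":
        # Increased tone suggests an upper motor neuron problem such as cerebral palsy.
        steps.insert(0, "Consider neuroimaging of the brain, preferably MRI")
    else:
        # Low to normal tone, especially with weakness, points toward lower motor
        # neuron or muscle disease; CK screens for dystrophinopathies such as DMD.
        steps.insert(0, "Measure serum creatine phosphokinase (CK)")
        steps.insert(1, "Measure thyroid-stimulating hormone and thyroxine (T4)")
    return steps

print(initial_workup_for_motor_delay("low"))
```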
Cerebral palsy classically presents with spasticity, dystonia, or athetosis, but may also result in hypotonia. Children with cerebral palsy may have a history of perinatal insult with concomitant abnormalities on brain imaging. Other causes of hypotonia should be considered before the diagnosis of hypotonic cerebral palsy is given to a child with an uneventful perinatal history and normal brain imaging.
DCD may be present when a child’s motor coordination performance is significantly below norms for age and intellect, unrelated to a definable medical condition that affects neuromotor function (such as cerebral palsy, ataxia, or myopathy). It can affect gait, handwriting, sports and academic participation, and self-help skills. More than half of individuals with DCD remain symptomatic through adolescence and young adulthood. Intervention, especially task-oriented approaches, can improve motor ability.8
Children with neuromotor abnormalities who also have failure to thrive, growth abnormalities, dysmorphic facial features, or other visceral anomalies may have a chromosome abnormality, either common or rare. The American College of Medical Genetics and Genomics recommends microarray testing as the first-line chromosome study.24 Because of the difficulty often encountered in interpretation of results, this test is typically ordered by a subspecialist familiar with this testing. Routine chromosome testing may be appropriate for children with weakness suspected of having recognizable disorders, such as Down syndrome (including mosaic Down syndrome), Turner syndrome, and Klinefelter syndrome. Fragile X syndrome is the most common inherited cause of cognitive impairment, and children with fragile X syndrome may have some element of motor delay. Genetic testing for fragile X syndrome should be considered in both boys and girls, whether or not they have dysmorphic facial features or a family history.
Common genetic conditions may present with early motor delays (Table 4). The 22q11.2 deletion syndrome (velocardiofacial syndrome) may present with hypotonia and feeding disorder in infancy and delayed motor milestones.25 Noonan syndrome is also a common disorder, and although it is classically associated with short stature, webbed neck, ptosis, and pulmonary stenosis, the phenotype is highly variable, and developmental delays, especially motor delays, are common. Noonan syndrome is genetically heterogeneous and may be caused by mutations in genes in the ras pathway.26 Neurofibromatosis type 1, associated with mutations in the NF1 gene, can lead to developmental delays and hypotonia in infancy and early childhood. This condition should be suspected in children with hypotonia and multiple (greater than 6) café au lait spots.27 Children with known or suspected genetic disorders may benefit from genetic consultation and genetic counseling for the family.
Step 10. Refer to Early Intervention/Child Find, and Consult/Refer to Appropriate Pediatric Subspecialists, and Perform Remainder of Bright Futures Health Supervision Examination
Mild abnormalities that are not accompanied by “red flag” findings (red flag conditions necessitate prompt referral) may be closely followed through “observation,” but a plan for new or worsening symptoms as well as a time-definite follow-up plan must be developed. Families should understand that clinical changes should prompt urgent reevaluation. This includes regression of motor skills, loss of strength, or any concerns with respiration or swallowing. This ensures that the progressive disorders are brought to medical attention immediately.
Depending on the nature of the suspected condition and the age of the child, it may be appropriate to have the child return to his or her medical home for a follow-up visit before the next Bright Futures health supervision visit. This will afford the opportunity for an interval review of noted symptoms, new concerns, and changes in physical examination or other developmental findings.
Education with the family should not be overlooked or delayed, as a suspected condition can cause significant anxiety.28 Although the discussion may not be as in-depth as a situation in which diagnostic studies or referral is involved, families deserve a cogent and appropriate discussion of the findings that are being evaluated and what developmental trajectory is expected. This may help assuage fears and increase compliance with follow-up plans.
All children with suspected neuromotor delay should be referred to early intervention or special education resources. Additionally, concurrent referrals should be made to physical and/or occupational therapists while diagnostic investigations are proceeding.29 Even when a specific neuromotor diagnosis has not been identified, children with motor delays benefit from educationally and medically based therapies.
Each medical home must develop its own local resources and network of subspecialists for assistance with the diagnosis and management of young children with suspected motor delay. Depending on the setting, such subspecialists may include neurologists, developmental pediatricians, geneticists, physiatrists, or orthopedists. In some areas, availability of these resources may be limited, and waiting times may be long.30 Direct physician-to-physician communication is recommended when red flags are identified (Table 3). Sharing digital photographs via a secure Internet connection may further expedite evaluations. However, the absence of red flags does not rule out the presence of significant neuromotor disease, and all children with motor delays should be thoroughly and serially evaluated.
Step 11. Is a Developmental Disorder Identified?
If a developmental disorder is identified, the child should be identified as a child with special health care needs, and chronic-condition management should be initiated (see Step 12b).
Step 12a. Ongoing Developmental Monitoring
If a developmental disorder is not identified through medical and developmental evaluation, the child should be scheduled for an early return visit for further surveillance, as mentioned previously. More frequent visits, with particular attention paid to areas of concern, will facilitate prompt referrals for further evaluation when indicated.
Step 12b. Identify as a Child With Special Health Care Needs and Initiate Chronic Condition Management
When a child has delays of motor development, that child is identified as a child with special health care needs even if that child does not have a specific disease etiology. Children with special health care needs are defined by the Department of Health and Human Services, Health Resources and Services Administration, Maternal and Child Health Bureau as “...those who have or are at increased risk for a chronic physical, developmental, behavioral, or emotional condition and who also require health and related services of a type or amount beyond that required by children generally.”31
Children with special health care needs benefit from chronic-condition management, coordination of care, and regular monitoring in the context of their medical homes. Primary care practices are encouraged to create and maintain a registry for the children in the practice who have special health care needs. The medical home provides a triad of key primary care services, including preventive care, acute illness management, and chronic-condition management. A program of chronic-condition management provides proactive care for children and youth with special health care needs, including condition-related office visits, written care plans, explicit comanagement with specialists, appropriate patient education, and effective information systems for monitoring and tracking. Management plans should be based on a comprehensive needs assessment conducted with the family. Management plans should include relevant, measurable, and valid outcomes. These plans should be reviewed and updated regularly. The clinician should actively participate in all care-coordination activities for children with identified motor disorders. Evidence-based decisions regarding appropriate therapies and their scope and intensity should be determined in consultation with the child’s family, therapists, pediatric medical subspecialists, and educators (including early intervention or school-based programs).
Children with established motor disorders often benefit from referral to community-based family-support services, such as respite care, parent-to-parent programs, and advocacy organizations. Some children may qualify for additional benefits, such as supplemental security income, public insurance, waiver programs, and state programs for children and youth with special health care needs (Title V). Parent organizations, such as Family Voices, and condition-specific associations can provide parents with information and support and can also provide an opportunity for advocacy.
Internet resources are available (www.childmuscleweakness.org) for clinicians to view both typical and atypical motor findings. The identification of motor delays (or any chronic condition) in a child can trigger significant psychosocial stress for families.32 The effects of repeated medical visits, testing, and modifications to home and school environments can place a significant burden on even well-functioning families.33 Appropriate psychological support should be implemented early. A consumer health librarian or medical librarian can help families find specific resources tailored to their individual needs (http://www.nlm.nih.gov/medlineplus/libraries.html).
For conditions with genetic basis or implications for family planning, medical genetics consultation and genetic counseling should be recommended. An international directory of genetics and prenatal diagnosis clinics can be found at http://www.ncbi.nlm.nih.gov/sites/GeneTests/. Additional Web sites, such as www.rarediseases.org, offer information for both physicians and families.
Information on financial assistance programs should also be provided to families of children with established developmental disorders. They may qualify for benefits, such as supplemental security income (http://www.ssa.gov/pgm/ssi.htm), public insurance (http://www.medicaid.gov), and Title V programs for children and youth with special health care needs (http://internet.dscc.uic.edu/dsccroot/titlev.asp). There also may be local community programs that can provide transportation and other assistance.
Developmental Screening Billing and Coding
Separate Current Procedural Terminology (CPT) codes exist for developmental screening (96110: developmental screening) and testing (96111: developmental testing) when completing neuromotor screening and assessment. The relative values for these codes are published in the Medicare Resource-Based Relative Value Scale and reflect physician work, practice expenses, and professional liability expenses. Table 5 outlines the appropriate codes to use when billing for the processes described in the algorithm. Billing processes related to developmental screening and surveillance should be carefully reviewed to ensure that appropriate CPT codes are used to document screening procedures and ensure proper payment. CPT code 96110 does not include any payment for medical provider services. The expectation is that a nonphysician will administer the screening tool(s) to the parent and score the responses. The physician reviews and interprets the screening results; the physician’s work is included in the evaluation and management code used for the child’s visit. The preventive care (or new, consultative, or return visit) code is used with the modifier 25 appended and 96110 listed for each screening tool administered. The CPT code 96111 includes medical provider work. This code would more appropriately be used when the medical provider observes the child performing a neuromotor task and demonstrating a specific developmental skill, using a standardized developmental tool.
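As a rough illustration only (not billing advice), the sketch below assembles the claim lines described above for a preventive visit at which two screening instruments were administered. The preventive E/M code shown (99392) and the data structure are hypothetical examples; the modifier-25 convention and per-instrument units of 96110 follow the text.

```python
# Illustrative sketch, assuming a preventive visit with two screening tools.
# The E/M code 99392 is a placeholder example; payer formats vary.

def build_claim_lines(preventive_code="99392", n_screens=2):
    lines = []
    # Preventive (or other E/M) service with modifier 25 appended, because a
    # separately reportable screening service (96110) is also billed.
    lines.append({"cpt": preventive_code, "modifiers": ["25"], "units": 1})
    # One unit of 96110 per screening instrument administered and scored by
    # non-physician staff; physician interpretation remains in the E/M code.
    lines.append({"cpt": "96110", "modifiers": [], "units": n_screens})
    return lines

if __name__ == "__main__":
    for line in build_claim_lines():
        print(line)
```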
The initial responsibility for identifying a child with motor delay rests with the medical home. By using the algorithm presented here, the medical home provider can begin the diagnostic process and make referrals as appropriate. Both during and after diagnosis, communication between the medical home and subspecialists is important,34 and the medical home should remain fully engaged with the child’s care as an integral part of chronic-condition management.
Neuromotor Screening Expert Panel
Nancy A. Murphy, MD, Chairperson – Council on Children With Disabilities
Joseph F. Hagan, Jr, MD – Bright Futures Initiatives
Paul H. Lipkin, MD – Council on Children With Disabilities
Michelle M. Macias, MD – Section on Developmental and Behavioral Pediatrics
Dipesh Navsaria, MD, MPH, MSLIS
Garey H. Noritz, MD – Council on Children With Disabilities
Georgina Peacock, MD, MPH – Centers for Disease Control and Prevention/National Center on Birth Defects
Peter L. Rosenbaum, MD
Howard M. Saal, MD – Committee on Genetics
John F. Sarwark, MD – Section on Orthopedics
Mark E. Swanson, MD, MPH – Centers for Disease Control and Prevention/National Center on Birth Defects
Max Wiznitzer, MD – Section on Neurology
Marshalyn Yeargin-Allsopp, MD – Centers for Disease Control and Prevention/National Center on Birth Defects
Rachel Daskalov, MHA
Michelle Zajac Esquivel, MPH
Holly Noteboom Griffin
Stephanie Mucha, MPH
Jane Bernzweig, PhD
The development of this clinical report was funded by the American Academy of Pediatrics through the Public Health Program to Enhance the Health and Development of Infants and Children through a cooperative agreement (5U58DD000587) with the Centers for Disease Control and Prevention’s National Center on Birth Defects and Developmental Disabilities.
This document is copyrighted and is property of the American Academy of Pediatrics and its Board of Directors. All authors have filed conflict of interest statements with the American Academy of Pediatrics. Any conflicts have been resolved through a process approved by the Board of Directors. The American Academy of Pediatrics has neither solicited nor accepted any commercial involvement in the development of the content of this publication.
The guidance in this report does not indicate an exclusive course of treatment or serve as a standard of medical care. Variations, taking into account individual circumstances, may be appropriate.
All clinical reports from the American Academy of Pediatrics automatically expire 5 years after publication unless reaffirmed, revised, or retired at or before that time.
- Council on Children With Disabilities,
- Section on Developmental Behavioral Pediatrics,
- Bright Futures Steering Committee,
- Medical Home Initiatives for Children With Special Needs Project Advisory Committee
- Centers for Disease Control and Prevention, Division of News and Electronic Media. CDC estimates 1 in 88 children in United States has been identified as having an autism spectrum disorder [press release]. Available at: www.cdc.gov/media/releases/2012/p0329_autism_disorder.html. Accessed November 14, 2012
- Squires J, Twombly E, Bricker D, Potter L
- Glascoe FP. PEDS. Collaborating With Parents. 2nd ed. Nolensville, TN: PEDSTest.com LLC; 2013
- Centers for Disease Control and Prevention. Data and statistics for cerebral palsy: prevalence and characteristics. Available at: www.cdc.gov/NCBDDD/cp/data.html. Accessed November 14, 2012
- Blank R, Smits-Engelsman B, Polatajko H, Wilson P, European Academy for Childhood Disability
- American Academy of Pediatrics, National Center for Medical Home Implementation/Centers for Disease Control and Prevention. Neuromotor screening. Available at: www.medicalhomeinfo.org/national/pehdic/neuromotor_screening.aspx. Accessed November 14, 2012
- Hagan JF, Shaw JS, Duncan PM
- Kraus EH
- Lemons JA, Lockwood J, Blackmon L, Riley L, eds. American Academy of Pediatrics, Committee on Fetus and Newborn; American College of Obstetrics and Gynecology, Committee on Obstetric Practice. Guidelines for Perinatal Care. 6th ed. Elk Grove Village, IL: American Academy of Pediatrics; 2007
- Amiel-Tison C, Grenier A. Normal development during the first year of life: identification of anomalies and use of the grid. In: Amiel-Tison C, Grenier A, eds. Neurological Assessment During the First Year of Life. New York, NY: Oxford University Press; 1986:46–95
- Amiel-Tison C, Gosselin J
- Ashwal S, Russman BS, Blasco PA, et al., Quality Standards Subcommittee of the American Academy of Neurology, Practice Committee of the Child Neurology Society
- Soucy EA, Gao F, Gutmann DH, Dunn CM
- Dodgson JE, Garwick A, Blozis SA, Patterson JM, Bennett FC, Blum RW
- Committee on Children With Disabilities
- McPherson M, Arango P, Fox H, et al
- Hamlett KW, Pellegrini DS, Katz KS
- Stille CJ, Primack WA, Savageau JA
- Copyright © 2013 by the American Academy of Pediatrics
Inhibitory neurons act in the central nervous system to regulate the dynamics and spatio-temporal co-ordination of neuronal networks. GABA (γ-aminobutyric acid) is the predominant inhibitory neurotransmitter in the brain. It is released from the presynaptic terminals of inhibitory neurons within highly specialized intercellular junctions known as synapses, where it binds to GABAA receptors (GABAARs) present at the plasma membrane of the synapse-receiving, postsynaptic neurons. Activation of these GABA-gated ion channels leads to influx of chloride resulting in postsynaptic potential changes that decrease the probability that these neurons will generate action potentials.
During development, diverse types of inhibitory neurons with distinct morphological, electrophysiological and neurochemical characteristics have the ability to recognize their target neurons and form synapses which incorporate specific GABAAR subtypes. This principle of selective innervation of neuronal targets raises the question as to how the appropriate synaptic partners identify each other.
To elucidate the underlying molecular mechanisms, a novel in vitro co-culture model system was established, in which medium spiny GABAergic neurons, a highly homogeneous population of neurons isolated from the embryonic striatum, were cultured with stably transfected HEK293 cell lines that express different GABAAR subtypes. Synapses form rapidly, efficiently and selectively in this system, and are easily accessible for quantification. Our results indicate that various GABAAR subtypes differ in their ability to promote synapse formation, suggesting that this reduced in vitro model system can be used to reproduce, at least in part, the in vivo conditions required for the recognition of the appropriate synaptic partners and formation of specific synapses. Here the protocols for culturing the medium spiny neurons and generating HEK293 cell lines expressing GABAARs are first described, followed by detailed instructions on how to combine these two cell types in co-culture and analyze the formation of synaptic contacts.
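As a minimal sketch of how synaptic contacts onto the receptor-expressing cells might be quantified from two registered fluorescence channels, the following Python example counts presynaptic puncta overlapping the HEK293 cell footprint. The file names, Otsu thresholding, and minimum punctum size are assumptions for illustration, not the pipeline used in the protocol.

```python
# Sketch, assuming a presynaptic-marker image and an image marking the
# transfected HEK293 cells have been acquired and registered.
import numpy as np
from skimage import io, filters, measure

def count_puncta_on_cells(puncta_img_path, cell_img_path, min_area_px=4):
    puncta = io.imread(puncta_img_path).astype(float)
    cells = io.imread(cell_img_path).astype(float)

    # Segment HEK293 footprints and presynaptic puncta with Otsu thresholds.
    cell_mask = cells > filters.threshold_otsu(cells)
    puncta_mask = puncta > filters.threshold_otsu(puncta)

    # Keep only puncta that overlap a HEK293 cell footprint.
    labels = measure.label(puncta_mask & cell_mask)
    puncta_props = [p for p in measure.regionprops(labels) if p.area >= min_area_px]

    cell_area_px = cell_mask.sum()
    density = len(puncta_props) / cell_area_px if cell_area_px else 0.0
    return len(puncta_props), density  # count and puncta per masked pixel

if __name__ == "__main__":
    n, d = count_puncta_on_cells("synaptic_marker.tif", "hek_cells.tif")
    print(f"{n} puncta, {d:.2e} puncta per masked pixel")
```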
24 Related JoVE Articles!
Imaging Centrosomes in Fly Testes
Institutions: University of Toledo.
Centrosomes are conserved microtubule-based organelles whose structure and function change dramatically throughout the cell cycle and cell differentiation. Centrosomes are essential to determine the cell division axis during mitosis and to nucleate cilia during interphase. The identity of the proteins that mediate these dynamic changes remains only partially known, and the function of many of the proteins that have been implicated in these processes is still rudimentary. Recent work has shown that Drosophila spermatogenesis provides a powerful system to identify new proteins critical for centrosome function and formation as well as to gain insight into the particular function of known players in centrosome-related processes. Drosophila is an established genetic model organism where mutants in centrosomal genes can be readily obtained and easily analyzed. Furthermore, recent advances in the sensitivity and resolution of light microscopy and the development of robust genetically tagged centrosomal markers have transformed the ability to use Drosophila testes as a simple and accessible model system to study centrosomes. This paper describes the use of genetically-tagged centrosomal markers to perform genetic screens for new centrosomal mutants and to gain insight into the specific function of newly identified genes.
Developmental Biology, Issue 79, biology (general), genetics (animal and plant), animal biology, animal models, Life Sciences (General), Centrosome, Spermatogenesis, Spermiogenesis, Drosophila, Centriole, Cilium, Mitosis, Meiosis
Identifying Protein-protein Interaction in Drosophila Adult Heads by Tandem Affinity Purification (TAP)
Institutions: Louisiana State University Health Sciences Center.
Genetic screens conducted using Drosophila melanogaster (fruit fly) have made numerous milestone discoveries in the advance of biological sciences. However, the use of biochemical screens aimed at extending the knowledge gained from genetic analysis was explored only recently. Here we describe a method to purify the protein complex that associates with any protein of interest from adult fly heads. This method takes advantage of the Drosophila GAL4/UAS system to express a bait protein fused with a Tandem Affinity Purification (TAP) tag in fly neurons in vivo, and then implements two rounds of purification using a TAP procedure similar to the one originally established in yeast1 to purify the interacting protein complex. At the end of this procedure, a mixture of multiple protein complexes is obtained whose molecular identities can be determined by mass spectrometry. Validation of the candidate proteins will benefit from the resource and ease of performing loss-of-function studies in flies. Similar approaches can be applied to other fly tissues. We believe that the combination of genetic manipulations and this proteomic approach in the fly model system holds tremendous potential for tackling fundamental problems in the field of neurobiology and beyond.
Biochemistry, Issue 82, Drosophila, GAL4/UAS system, transgenic, Tandem Affinity Purification, protein-protein interaction, proteomics
Cytological Analysis of Spermatogenesis: Live and Fixed Preparations of Drosophila Testes
Institutions: Vanderbilt University Medical Center.
Drosophila melanogaster is a powerful model system that has been widely used to elucidate a variety of biological processes. For example, studies of both the female and male germ lines of Drosophila have contributed greatly to the current understanding of meiosis as well as stem cell biology. Excellent protocols are available in the literature for the isolation and imaging of Drosophila ovaries and testes3-12. Herein, methods for the dissection and preparation of Drosophila testes for microscopic analysis are described with an accompanying video demonstration. A protocol for isolating testes from the abdomen of adult males and preparing slides of live tissue for analysis by phase-contrast microscopy as well as a protocol for fixing and immunostaining testes for analysis by fluorescence microscopy are presented. These techniques can be applied in the characterization of Drosophila mutants that exhibit defects in spermatogenesis as well as in the visualization of subcellular localizations of proteins.
Basic Protocol, Issue 83, Drosophila melanogaster, dissection, testes, spermatogenesis, meiosis, germ cells, phase-contrast microscopy, immunofluorescence
Detection of the Genome and Transcripts of a Persistent DNA Virus in Neuronal Tissues by Fluorescent In situ Hybridization Combined with Immunostaining
Institutions: CNRS UMR 5534, Université de Lyon 1, LabEX DEVweCAN, CNRS UPR 3296, CNRS UMR 5286.
Single cell codetection of a gene, its RNA product and cellular regulatory proteins is critical to study gene expression regulation. This is a challenge in the field of virology; in particular for nuclear-replicating persistent DNA viruses that involve animal models for their study. Herpes simplex virus type 1 (HSV-1) establishes a life-long latent infection in peripheral neurons. Latent virus serves as reservoir, from which it reactivates and induces a new herpetic episode. The cell biology of HSV-1 latency remains poorly understood, in part due to the lack of methods to detect HSV-1 genomes in situ in animal models. We describe a DNA-fluorescent in situ hybridization (FISH) approach efficiently detecting low-copy viral genomes within sections of neuronal tissues from infected animal models. The method relies on heat-based antigen unmasking, and directly labeled home-made DNA probes, or commercially available probes. We developed a triple staining approach, combining DNA-FISH with RNA-FISH and immunofluorescence, using peroxidase based signal amplification to accommodate each staining requirement. A major improvement is the ability to obtain, within 10 µm tissue sections, low-background signals that can be imaged at high resolution by confocal microscopy and wide-field conventional epifluorescence. Additionally, the triple staining worked with a wide range of antibodies directed against cellular and viral proteins. The complete protocol takes 2.5 days to accommodate antibody and probe penetration within the tissue.
Neuroscience, Issue 83, Life Sciences (General), Virology, Herpes Simplex Virus (HSV), Latency, In situ hybridization, Nuclear organization, Gene expression, Microscopy
Identification of Post-translational Modifications of Plant Protein Complexes
Institutions: University of Warwick, Norwich Research Park, The Australian National University.
Plants adapt quickly to changing environments due to elaborate perception and signaling systems. During pathogen attack, plants rapidly respond to infection via the recruitment and activation of immune complexes. Activation of immune complexes is associated with post-translational modifications (PTMs) of proteins, such as phosphorylation, glycosylation, or ubiquitination. Understanding how these PTMs are choreographed will lead to a better understanding of how resistance is achieved.
Here we describe a protein purification method for nucleotide-binding leucine-rich repeat (NB-LRR)-interacting proteins and the subsequent identification of their post-translational modifications (PTMs). With small modifications, the protocol can be applied for the purification of other plant protein complexes. The method is based on the expression of an epitope-tagged version of the protein of interest, which is subsequently partially purified by immunoprecipitation and subjected to mass spectrometry for identification of interacting proteins and PTMs.
This protocol demonstrates that: i). Dynamic changes in PTMs such as phosphorylation can be detected by mass spectrometry; ii). It is important to have sufficient quantities of the protein of interest, and this can compensate for the lack of purity of the immunoprecipitate; iii). In order to detect PTMs of a protein of interest, this protein has to be immunoprecipitated to get a sufficient quantity of protein.
Plant Biology, Issue 84, plant-microbe interactions, protein complex purification, mass spectrometry, protein phosphorylation, Prf, Pto, AvrPto, AvrPtoB
Visualizing Neuroblast Cytokinesis During C. elegans Embryogenesis
Institutions: Concordia University.
This protocol describes the use of fluorescence microscopy to image dividing cells within developing Caenorhabditis elegans embryos. In particular, this protocol focuses on how to image dividing neuroblasts, which are found underneath the epidermal cells and may be important for epidermal morphogenesis. Tissue formation is crucial for metazoan development and relies on external cues from neighboring tissues. C. elegans is an excellent model organism to study tissue morphogenesis in vivo due to its transparency and simple organization, making its tissues easy to study via microscopy. Ventral enclosure is the process where the ventral surface of the embryo is covered by a single layer of epithelial cells. This event is thought to be facilitated by the underlying neuroblasts, which provide chemical guidance cues to mediate migration of the overlying epithelial cells. However, the neuroblasts are highly proliferative and also may act as a mechanical substrate for the ventral epidermal cells. Studies using this experimental protocol could uncover the importance of intercellular communication during tissue formation, and could be used to reveal the roles of genes involved in cell division within developing tissues.
Neuroscience, Issue 85, C. elegans, morphogenesis, cytokinesis, neuroblasts, anillin, microscopy, cell division
Identification of Protein Interaction Partners in Mammalian Cells Using SILAC-immunoprecipitation Quantitative Proteomics
Institutions: University of Cambridge.
Quantitative proteomics combined with immuno-affinity purification, SILAC immunoprecipitation, represents a powerful means for the discovery of novel protein:protein interactions. By allowing the accurate relative quantification of protein abundance in both control and test samples, true interactions may be easily distinguished from experimental contaminants. Low affinity interactions can be preserved through the use of less-stringent buffer conditions and remain readily identifiable. This protocol discusses the labeling of tissue culture cells with stable isotope labeled amino acids, transfection and immunoprecipitation of an affinity tagged protein of interest, followed by the preparation for submission to a mass spectrometry facility. This protocol then discusses how to analyze and interpret the data returned from the mass spectrometer in order to identify cellular partners interacting with a protein of interest. As an example this technique is applied to identify proteins binding to the eukaryotic translation initiation factors: eIF4AI and eIF4AII.
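A minimal sketch of the quantitative filtering step implied above: proteins whose heavy/light SILAC ratio is strongly shifted in the bait pulldown relative to the control are treated as candidate interactors, while contaminants sit near a 1:1 ratio. The 2-fold cutoff, the input format, and the example proteins are assumptions for illustration.

```python
# Sketch, assuming per-protein heavy/light intensity ratios from the search engine.
import math

def candidate_interactors(ratios, fold_cutoff=2.0):
    """ratios: dict mapping protein name -> heavy/light intensity ratio."""
    hits = {}
    for protein, ratio in ratios.items():
        if ratio <= 0:
            continue
        log2_ratio = math.log2(ratio)
        # Keep proteins shifted by at least the fold cutoff in either direction.
        if abs(log2_ratio) >= math.log2(fold_cutoff):
            hits[protein] = log2_ratio
    return dict(sorted(hits.items(), key=lambda kv: -abs(kv[1])))

if __name__ == "__main__":
    example = {"candidate A": 5.8, "candidate B": 3.9, "keratin (contaminant)": 1.1}
    print(candidate_interactors(example))
```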
Biochemistry, Issue 89, mass spectrometry, tissue culture techniques, isotope labeling, SILAC, Stable Isotope Labeling of Amino Acids in Cell Culture, proteomics, Interactomics, immunoprecipitation, pulldown, eIF4A, GFP, nanotrap, orbitrap
Live Imaging of Drosophila Larval Neuroblasts
Institutions: National Institutes of Health.
Stem cells divide asymmetrically to generate two progeny cells with unequal fate potential: a self-renewing stem cell and a differentiating cell. Given their relevance to development and disease, understanding the mechanisms that govern asymmetric stem cell division has been a robust area of study. Because they are genetically tractable and undergo successive rounds of cell division about once every hour, the stem cells of the Drosophila central nervous system, or neuroblasts, are indispensable models for the study of stem cell division. About 100 neural stem cells are located near the surface of each of the two larval brain lobes, making this model system particularly useful for live imaging microscopy studies. In this work, we review several approaches widely used to visualize stem cell divisions, and we address the relative advantages and disadvantages of those techniques that employ dissociated versus intact brain tissues. We also detail our simplified protocol used to explant whole brains from third instar larvae for live cell imaging and fixed analysis applications.
Neuroscience, Issue 89, live imaging, Drosophila, neuroblast, stem cell, asymmetric division, centrosome, brain, cell cycle, mitosis
Ex vivo Culture of Drosophila Pupal Testis and Single Male Germ-line Cysts: Dissection, Imaging, and Pharmacological Treatment
Institutions: Philipps-Universität Marburg, Philipps-Universität Marburg.
During spermatogenesis in mammals and in Drosophila melanogaster, male germ cells develop in a series of essential developmental processes. This includes differentiation from a stem cell population, mitotic amplification, and meiosis. In addition, post-meiotic germ cells undergo a dramatic morphological reshaping process as well as a global epigenetic reconfiguration of the germ line chromatin: the histone-to-protamine switch.
Studying the role of a protein in post-meiotic spermatogenesis using mutagenesis or other genetic tools is often impeded by essential embryonic, pre-meiotic, or meiotic functions of the protein under investigation. The post-meiotic phenotype of a mutant of such a protein could be obscured through an earlier developmental block, or the interpretation of the phenotype could be complicated. The model organism Drosophila melanogaster offers a bypass to this problem: intact testes and even cysts of germ cells dissected from early pupae are able to develop ex vivo in culture medium. Making use of such cultures allows microscopic imaging of living germ cells in testes and of germ-line cysts. Importantly, the cultivated testes and germ cells also become accessible to pharmacological inhibitors, thereby permitting manipulation of enzymatic functions during spermatogenesis, including post-meiotic stages.
The protocol presented describes how to dissect and cultivate pupal testes and germ-line cysts. Information on the development of pupal testes and culture conditions are provided alongside microscope imaging data of live testes and germ-line cysts in culture. We also describe a pharmacological assay to study post-meiotic spermatogenesis, exemplified by an assay targeting the histone-to-protamine switch using the histone acetyltransferase inhibitor anacardic acid. In principle, this cultivation method could be adapted to address many other research questions in pre- and post-meiotic spermatogenesis.
Developmental Biology, Issue 91, Ex vivo culture, testis, male germ-line cells, Drosophila, imaging, pharmacological assay
Real-time Imaging of Axonal Transport of Quantum Dot-labeled BDNF in Primary Neurons
Institutions: University of California, San Diego, Shanghai Jiao Tong University, University of California, San Diego, VA San Diego Healthcare System.
BDNF plays an important role in several facets of neuronal survival, differentiation, and function. Structural and functional deficits in axons are increasingly viewed as an early feature of neurodegenerative diseases, including Alzheimer’s disease (AD) and Huntington’s disease (HD). As yet unclear is the mechanism(s) by which axonal injury is induced. We reported the development of a novel technique to produce biologically active, monobiotinylated BDNF (mBtBDNF) that can be used to trace axonal transport of BDNF. Quantum dot-labeled BDNF (QD-BDNF) was produced by conjugating quantum dot 655 to mBtBDNF. A microfluidic device was used to isolate axons from neuron cell bodies. Addition of QD-BDNF to the axonal compartment allowed live imaging of BDNF transport in axons. We demonstrated that QD-BDNF moved essentially exclusively retrogradely, with very few pauses, at a moving velocity of around 1.06 μm/sec. This system can be used to investigate mechanisms of disrupted axonal function in AD or HD, as well as other degenerative disorders.
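As a rough illustration of how the reported moving velocity (around 1.06 μm/sec) might be recovered from a tracked QD-BDNF trajectory, the sketch below converts frame-to-frame displacements into speeds and averages over moving (non-paused) steps. The pixel size, frame interval, pause threshold, and toy trajectory are assumptions, not values from the protocol.

```python
# Sketch, assuming particle x-positions along the axon (in pixels) per frame.
import numpy as np

def mean_moving_speed(x_px, um_per_px=0.16, dt_s=1.0, pause_thresh_um_s=0.1):
    x_um = np.asarray(x_px, dtype=float) * um_per_px
    step_speed = np.abs(np.diff(x_um)) / dt_s      # instantaneous speed per step
    moving = step_speed > pause_thresh_um_s        # drop paused steps
    return step_speed[moving].mean() if moving.any() else 0.0

if __name__ == "__main__":
    # Toy retrograde track: position decreases toward the soma over time.
    track = [250, 243, 237, 237, 230, 223, 216]
    print(f"mean moving speed: {mean_moving_speed(track):.2f} um/s")
```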
Neuroscience, Issue 91, live imaging, brain-derived neurotrophic factor (BDNF), quantum dot, trafficking, axonal retrograde transport, microfluidic chamber
FtsZ Polymerization Assays: Simple Protocols and Considerations
Institutions: University of Groningen.
During bacterial cell division, the essential protein FtsZ assembles in the middle of the cell to form the so-called Z-ring. FtsZ polymerizes into long filaments in the presence of GTP in vitro, and polymerization is regulated by several accessory proteins. FtsZ polymerization has been extensively studied in vitro using basic methods including light scattering, sedimentation, GTP hydrolysis assays and electron microscopy. Buffer conditions influence both the polymerization properties of FtsZ, and the ability of FtsZ to interact with regulatory proteins. Here, we describe protocols for FtsZ polymerization studies and validate conditions and controls using Escherichia coli and Bacillus subtilis FtsZ as model proteins. A low speed sedimentation assay is introduced that allows the study of the interaction of FtsZ with proteins that bundle or tubulate FtsZ polymers. An improved GTPase assay protocol is described that allows testing of GTP hydrolysis over time using various conditions in a 96-well plate setup, with standardized incubation times that abolish variation in color development in the phosphate detection reaction. The preparation of samples for light scattering studies and electron microscopy is described. Several buffers are used to establish suitable buffer pH and salt concentration for FtsZ polymerization studies. A high concentration of KCl is the best for most of the experiments. Our methods provide a starting point for the in vitro characterization of FtsZ, not only from E. coli and B. subtilis but from any other bacterium. As such, the methods can be used for studies of the interaction of FtsZ with regulatory proteins or the testing of antibacterial drugs which may affect FtsZ polymerization.
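A minimal sketch of the readout arithmetic behind such a plate-based GTPase assay: a phosphate standard curve converts absorbance to released phosphate, and a linear fit over time gives a hydrolysis rate per FtsZ. All numbers below (standards, readings, 5 μM FtsZ) are placeholders, not measured data from the protocol.

```python
# Sketch, assuming colorimetric phosphate-detection absorbance readings.
import numpy as np

def gtpase_rate(times_min, a_readings, std_pi_uM, std_a, ftsz_uM=5.0):
    # Standard curve: absorbance -> phosphate concentration (uM).
    slope, intercept = np.polyfit(std_a, std_pi_uM, 1)
    pi_uM = slope * np.asarray(a_readings, float) + intercept
    # Linear fit of released phosphate over time.
    rate_uM_per_min, _ = np.polyfit(times_min, pi_uM, 1)
    return rate_uM_per_min / ftsz_uM  # GTP hydrolyzed per FtsZ per minute

if __name__ == "__main__":
    standards_pi = [0, 10, 20, 40]           # uM phosphate standards
    standards_a = [0.05, 0.17, 0.29, 0.55]   # their absorbance readings
    rate = gtpase_rate([0, 2, 4, 6, 8],
                       [0.06, 0.14, 0.22, 0.31, 0.39],
                       standards_pi, standards_a)
    print(f"~{rate:.1f} GTP per FtsZ per minute")
```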
Basic Protocols, Issue 81, FtsZ, protein polymerization, cell division, GTPase, sedimentation assay, light scattering
Organelle Transport in Cultured Drosophila Cells: S2 Cell Line and Primary Neurons.
Institutions: Feinberg School of Medicine, Northwestern University, Basque Foundation for Science.
S2 cells plated on a coverslip in the presence of any actin-depolymerizing drug form long unbranched processes filled with uniformly polarized microtubules. Organelles move along these processes by microtubule motors. Easy maintenance, high sensitivity to RNAi-mediated protein knock-down and efficient procedure for creating stable cell lines make Drosophila S2 cells an ideal model system to study cargo transport by live imaging. The results obtained with S2 cells can be further applied to a more physiologically relevant system: axonal transport in primary neurons cultured from dissociated Drosophila embryos. Cultured neurons grow long neurites filled with bundled microtubules, very similar to S2 processes. Like in S2 cells, organelles in cultured neurons can be visualized by either organelle-specific fluorescent dyes or by using fluorescent organelle markers encoded by DNA injected into early embryos or expressed in transgenic flies. Therefore, organelle transport can be easily recorded in neurons cultured on glass coverslips using live imaging. Here we describe procedures for culturing and visualizing cargo transport in Drosophila S2 cells and primary neurons. We believe that these protocols make both systems accessible for labs studying cargo transport.
Cellular Biology, Issue 81, Drosophila melanogaster, cytoskeleton, S2 cells, primary neuron culture, microtubules, kinesin, dynein, fluorescence microscopy, live imaging
Viability Assays for Cells in Culture
Institutions: Duquesne University.
Manual cell counts on a microscope are a sensitive means of assessing cellular viability but are time-consuming and therefore expensive. Computerized viability assays are expensive in terms of equipment but can be faster and more objective than manual cell counts. The present report describes the use of three such viability assays. Two of these assays are infrared and one is luminescent. Both infrared assays rely on a 16 bit Odyssey Imager. One infrared assay uses the DRAQ5 stain for nuclei combined with the Sapphire stain for cytosol and is visualized in the 700 nm channel. The other infrared assay, an In-Cell Western, uses antibodies against cytoskeletal proteins (α-tubulin or microtubule associated protein 2) and labels them in the 800 nm channel. The third viability assay is a commonly used luminescent assay for ATP, but we use a quarter of the recommended volume to save on cost. These measurements are all linear and correlate with the number of cells plated, but vary in sensitivity. All three assays circumvent time-consuming microscopy and sample the entire well, thereby reducing sampling error. Finally, all of the assays can easily be completed within one day of the end of the experiment, allowing greater numbers of experiments to be performed within short timeframes. However, they all rely on the assumption that cell numbers remain in proportion to signal strength after treatments, an assumption that is sometimes not met, especially for cellular ATP. Furthermore, if cells increase or decrease in size after treatment, this might affect signal strength without affecting cell number. We conclude that all viability assays, including manual counts, suffer from a number of caveats, but that computerized viability assays are well worth the initial investment. Using all three assays together yields a comprehensive view of cellular structure and function.
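A minimal sketch of the linearity check described above: regress assay signal against the number of cells plated and report the slope and R². The plating densities and 16-bit signal values below are placeholders, not data from the article.

```python
# Sketch, assuming paired lists of cells plated and assay signal per well.
import numpy as np

def linearity(cells_plated, signal):
    x = np.asarray(cells_plated, float)
    y = np.asarray(signal, float)
    slope, intercept = np.polyfit(x, y, 1)
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1 - ss_res / ss_tot  # slope, intercept, R^2

if __name__ == "__main__":
    cells = [5000, 10000, 20000, 40000, 80000]
    signal = [1.2e4, 2.3e4, 4.8e4, 9.5e4, 1.9e5]
    slope, intercept, r2 = linearity(cells, signal)
    print(f"slope = {slope:.2f} counts/cell, R^2 = {r2:.3f}")
```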
Cellular Biology, Issue 83, In-cell Western, DRAQ5, Sapphire, Cell Titer Glo, ATP, primary cortical neurons, toxicity, protection, N-acetyl cysteine, hormesis
Study Glial Cell Heterogeneity Influence on Axon Growth Using a New Coculture Method
Institutions: Cedars Sinai Medical Center, UCLA, Fourth Military Medical University, David Geffen School of Medicine, UCLA, Fourth Military Medical Univeristy.
In the central nervous system of all mammals, severed axons are unable to regenerate to their original targets after injury, and functional recovery is very poor 1. The failure of axon regeneration is a combined result of several factors, including the hostile glial cell environment, inhibitory myelin related molecules and decreased intrinsic neuron regenerative capacity 2. Astrocytes are the most predominant glial cell type in the central nervous system and play important roles in axon function under physiological and pathological conditions 3. In contrast to the relatively homogeneous oligodendrocytes, astrocytes are a heterogeneous cell population composed of different astrocyte subpopulations with diverse morphologies and gene expression 4. The functional significance of this heterogeneity, such as their influences on axon growth, is largely unknown.
To study how glial cells, and in particular astrocyte heterogeneity, influence neuron behavior, we established a new method by co-culturing highly purified dorsal root ganglia neurons with glial cells obtained from the rat cortex. By this technique, we were able to directly compare neuron adhesion and axon growth on different astrocyte subpopulations under the same conditions.
In this report, we give the detailed protocol of this method for astrocyte isolation and culture, dorsal root ganglia neuron isolation and purification, and the co-culture of DRG neurons with astrocytes. This method could also be extended to other brain regions to study cellular or region-specific interactions between neurons and glial cells.
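One way the comparison of axon growth on different astrocyte subpopulations could be analyzed is an unpaired comparison of per-neuron axon lengths, as in the hedged sketch below. The statistical test, the Welch correction, and the length values are assumptions for illustration; they are not taken from the protocol.

```python
# Sketch, assuming per-neuron axon lengths (um) traced on two astrocyte subtypes.
from scipy import stats

def compare_axon_growth(lengths_a, lengths_b):
    # Welch's t-test: does not assume equal variances between groups.
    t, p = stats.ttest_ind(lengths_a, lengths_b, equal_var=False)
    return t, p

if __name__ == "__main__":
    on_subtype_1 = [310, 275, 402, 350, 298, 365]
    on_subtype_2 = [180, 220, 195, 240, 205, 210]
    t, p = compare_axon_growth(on_subtype_1, on_subtype_2)
    print(f"t = {t:.2f}, p = {p:.4f}")
```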
Neuroscience, Issue 43, Dorsal root ganglia, glial cell, heterogeneity, co-culture, regeneration, axon growth
Immunohistological Labeling of Microtubules in Sensory Neuron Dendrites, Tracheae, and Muscles in the Drosophila Larva Body Wall
Institutions: RIKEN Brain Science Institute, Saitama University.
To understand how differences in complex cell shapes are achieved, it is important to accurately follow microtubule organization. The Drosophila larval body wall contains several cell types that are models to study cell and tissue morphogenesis. For example tracheae are used to examine tube morphogenesis1, and the dendritic arborization (DA) sensory neurons of the Drosophila larva have become a primary system for the elucidation of general and neuron-class-specific mechanisms of dendritic differentiation2-5.
The shape of dendrite branches can vary significantly between neuron classes, and even among different branches of a single neuron7,8. Genetic studies in DA neurons suggest that differential cytoskeletal organization can underlie morphological differences in dendritic branch shape4,9-11. We provide a robust immunological labeling method to assay in vivo microtubule organization in DA sensory neuron dendrite arbor (Figures 1, 2, Movie 1). This protocol illustrates the dissection and immunostaining of first instar larva, a stage when active sensory neuron dendrite outgrowth and branching organization is occurring 12,13.
In addition to staining sensory neurons, this method achieves robust labeling of microtubule organization in muscles (Movies 2, 3), trachea (Figure 3, Movie 3), and other body wall tissues. It is valuable for investigators wishing to analyze microtubule organization in situ in the body wall when investigating mechanisms that control tissue and cell shape.
Neuroscience, Issue 57, developmental biology, Drosophila larvae, immunohistochemistry, microtubule, trachea, dendritic arborization neurons
DiI-Labeling of DRG Neurons to Study Axonal Branching in a Whole Mount Preparation of Mouse Embryonic Spinal Cord
Institutions: Max Delbrück Center for Molecular Medicine.
Here we present a technique to label the trajectories of small groups of DRG neurons into the embryonic spinal cord by diffusive staining using the lipophilic tracer 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate (DiI)1. The comparison of axonal pathways of wild-type with those of mouse lines in which genes are mutated allows testing for a functional role of candidate proteins in the control of axonal branching which is an essential mechanism in the wiring of the nervous system. Axonal branching enables an individual neuron to connect with multiple targets, thereby providing the physical basis for the parallel processing of information. Ramifications at intermediate target regions of axonal growth may be distinguished from terminal arborization. Furthermore, different modes of axonal branch formation may be classified depending on whether branching results from the activities of the growth cone (splitting or delayed branching) or from the budding of collaterals from the axon shaft in a process called interstitial branching2.
The central projections of neurons from the DRG offer a useful experimental system to study both types of axonal branching: when their afferent axons reach the dorsal root entry zone (DREZ) of the spinal cord between embryonic days 10 to 13 (E10 - E13) they display a stereotyped pattern of T- or Y-shaped bifurcation. The two resulting daughter axons then proceed in rostral or caudal directions, respectively, at the dorsolateral margin of the cord and only after a waiting period collaterals sprout from these stem axons to penetrate the gray matter (interstitial branching) and project to relay neurons in specific laminae of the spinal cord where they further arborize (terminal branching)3. DiI tracings have revealed growth cones at the dorsal root entry zone of the spinal cord that appeared to be in the process of splitting suggesting that bifurcation is caused by splitting of the growth cone itself4; however, other options have been discussed as well5.
This video demonstrates first how to dissect the spinal cord of E12.5 mice leaving the DRG attached. Following fixation of the specimen tiny amounts of DiI are applied to DRG using glass needles pulled from capillary tubes. After an incubation step, the labeled spinal cord is mounted as an inverted open-book preparation to analyze individual axons using fluorescence microscopy.
Neuroscience, Issue 58, neurons, axonal branching, DRG, Spinal cord, DiI labeling, cGMP signaling
Genetic Study of Axon Regeneration with Cultured Adult Dorsal Root Ganglion Neurons
Institutions: Johns Hopkins University School of Medicine, Johns Hopkins University School of Medicine.
It is well known that mature neurons in the central nervous system (CNS) cannot regenerate their axons after injuries due to diminished intrinsic ability to support axon growth and a hostile environment in the mature CNS1,2. In contrast, mature neurons in the peripheral nervous system (PNS) regenerate readily after injuries3. Adult dorsal root ganglion (DRG) neurons are well known to regenerate robustly after peripheral nerve injuries. Each DRG neuron grows one axon from the cell soma, which branches into two axonal branches: a peripheral branch innervating peripheral targets and a central branch extending into the spinal cord. Injury of the DRG peripheral axons results in substantial axon regeneration, whereas central axons in the spinal cord regenerate poorly after the injury. However, if the peripheral axonal injury occurs prior to the spinal cord injury (a process called the conditioning lesion), regeneration of central axons is greatly improved4. Moreover, the central axons of DRG neurons share the same hostile environment as descending corticospinal axons in the spinal cord. Together, it is hypothesized that the molecular mechanisms controlling axon regeneration of adult DRG neurons can be harnessed to enhance CNS axon regeneration. As a result, adult DRG neurons are now widely used as a model system to study regenerative axon growth5-7.
Here we describe a method of adult DRG neuron culture that can be used for genetic study of axon regeneration in vitro. In this model adult DRG neurons are genetically manipulated via electroporation-mediated gene transfection6,8. By transfecting neurons with DNA plasmid or si/shRNA, this approach enables both gain- and loss-of-function experiments to investigate the role of any gene-of-interest in axon growth from adult DRG neurons. When neurons are transfected with si/shRNA, the targeted endogenous protein is usually depleted after 3-4 days in culture, during which time robust axon growth has already occurred, making the loss-of-function studies less effective. To solve this problem, the method described here includes a re-suspension and re-plating step after transfection, which allows axons to re-grow from neurons in the absence of the targeted protein. Finally, we provide an example of using this in vitro model to study the role of an axon regeneration-associated gene, c-Jun, in mediating axon growth from adult DRG neurons9.
Neuroscience, Issue 66, Physiology, Developmental Biology, cell culture, axon regeneration, axon growth, dorsal root ganglion, spinal cord injury
Antibody Transfection into Neurons as a Tool to Study Disease Pathogenesis
Institutions: Veterans Administration Medical Center, Memphis, TN, University of Tennessee Health Science Center, Memphis, TN, University of Tennessee Health Science Center, Memphis, TN.
Antibodies provide the ability to gain novel insight into various events taking place in living systems. The ability to produce highly specific antibodies to target proteins has allowed for very precise biological questions to be addressed. Importantly, antibodies have been implicated in the pathogenesis of a number of human diseases including systemic lupus erythematosus (SLE), rheumatoid arthritis (RA), paraneoplastic syndromes, multiple sclerosis (MS) and human T-lymphotropic virus type 1 (HTLV-1) associated myelopathy/tropical spastic paraparesis (HAM/TSP) 1-9. How antibodies cause disease is an area of ongoing investigation, and data suggests that interactions between antibodies and various intracellular molecules results in inflammation, altered cellular messaging, and apoptosis 10. It has been shown that patients with MS and HAM/TSP produce autoantibodies to the intracellular RNA binding protein heterogeneous ribonuclear protein A1 (hnRNP A1) 3, 5-7, 9, 11. Recent data indicate that antibodies to both intra-neuronal and surface antigens are pathogenic 3, 5-9, 11. Thus, a procedure that allows for the study of intracellular antibody:protein interactions would lend great insight into disease pathogenesis.
Genes are commonly transfected into primary cells and cell lines in culture, however transfection of antibodies into cells has been hindered by alteration of antibody structure or poor transfection efficiency 12. Other methods of transfection include antibody transfection based on cationic liposomes (consisting of DOTAP/DOPE) and polyethylenimines (PEI); both of which resulted in a ten-fold decrease in antibody transfection compared to controls 12. The method performed in our study is similar to cationic lipid-mediated methods and uses a lipid-based mechanism to form non-covalent complexes with the antibodies through electrostatic and hydrophobic interactions 13. We utilized Ab-DeliverIN reagent, which is a lipid formulation capable of capturing antibodies through non-covalent electrostatic and hydrophobic interactions and delivering them inside cells. Thus chemical and genetic couplings are not necessary for delivery of functional antibodies into living cells. This method has enabled us to perform various antibody tracing and protein localization experiments, as well as the analyses of the molecular consequences of intracellular antibody:protein interactions 9.
In this protocol, we will show how to transfect antibodies into neurons rapidly, reproducibly and with a high degree of transfection efficiency. As an example, we will use anti-hnRNP A1 and anti-IgG antibodies. For easy quantification of transfection efficiency we used anti-hnRNP A1 antibodies labelled with Atto-550-NHS and FITC-labeled IgG. Atto550 NHS is a new label with high molecular absorption and quantum yield. Excitation source and fluorescent filters for Atto550 are similar to Cy3 (Ex. 556 Em. 578). In addition, Atto550 has high photostability. FITC-labeled IgG were used as a control to show that this method is versatile and not dye dependent. This approach and the data that is generated will assist in understanding of the role that antibodies to intracellular target antigens might play in the pathogenesis of human diseases.
Neuroscience, Issue 67, Medicine, Molecular Biology, Immunology, Transfection, antibodies, neuron, immunocytochemistry, fluorescent microscopy, autoimmunity
A Toolkit to Enable Hydrocarbon Conversion in Aqueous Environments
Institutions: Delft University of Technology, Delft University of Technology.
This work puts forward a toolkit that enables the conversion of alkanes by Escherichia coli and presents a proof of principle of its applicability. The toolkit consists of multiple standard interchangeable parts (BioBricks)9 addressing the conversion of alkanes, regulation of gene expression and survival in toxic hydrocarbon-rich environments.
A three-step pathway for alkane degradation was implemented in E. coli to enable the conversion of medium- and long-chain alkanes to their respective alkanols, alkanals and ultimately alkanoic-acids. The latter were metabolized via the native β-oxidation pathway. To facilitate the oxidation of medium-chain alkanes (C5-C13) and cycloalkanes (C5-C8), four genes (alkB2) of the alkane hydroxylase system from Gordonia were transformed into E. coli. For the conversion of long-chain alkanes (C15-C36), the ladA gene from Geobacillus thermodenitrificans was implemented. For the required further steps of the degradation process, ADH and ALDH (originating from G. thermodenitrificans) were introduced10,11. The activity was measured by resting cell assays. For each oxidative step, enzyme activity was observed.
To optimize the process efficiency, the expression was only induced under low glucose conditions: a substrate-regulated promoter, pCaiF, was used. pCaiF is present in E. coli K12 and regulates the expression of the genes involved in the degradation of non-glucose carbon sources.
The last part of the toolkit - targeting survival - was implemented using solvent tolerance genes, PhPFDα and β, both from Pyrococcus horikoshii OT3. Organic solvents can induce cell stress and decreased survivability by negatively affecting protein folding. As chaperones, PhPFDα and β improve the protein folding process e.g. under the presence of alkanes. The expression of these genes led to improved hydrocarbon tolerance, shown by an increased growth rate (up to 50%) in the presence of 10% n-hexane in the culture medium.
Summarizing, the results indicate that the toolkit enables E. coli to convert and tolerate hydrocarbons in aqueous environments. As such, it represents an initial step towards a sustainable solution for oil-remediation using a synthetic biology approach.
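A minimal sketch of the growth-rate comparison behind a figure like the ~50% increase quoted above: fit the natural log of OD600 against time during exponential growth for cultures with and without the tolerance genes. The OD values below are placeholders chosen only to illustrate the arithmetic.

```python
# Sketch, assuming OD600 time courses for control and chaperone-expressing cultures.
import numpy as np

def growth_rate_per_h(times_h, od600):
    # Slope of ln(OD600) vs time approximates the specific growth rate (1/h).
    slope, _ = np.polyfit(times_h, np.log(od600), 1)
    return slope

if __name__ == "__main__":
    t = [0, 1, 2, 3, 4]
    od_control = [0.05, 0.07, 0.10, 0.14, 0.20]
    od_tolerant = [0.05, 0.085, 0.14, 0.24, 0.41]
    mu_c = growth_rate_per_h(t, od_control)
    mu_t = growth_rate_per_h(t, od_tolerant)
    print(f"control {mu_c:.2f}/h, tolerant {mu_t:.2f}/h, "
          f"increase {100 * (mu_t / mu_c - 1):.0f}%")
```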
Bioengineering, Issue 68, Microbiology, Biochemistry, Chemistry, Chemical Engineering, Oil remediation, alkane metabolism, alkane hydroxylase system, resting cell assay, prefoldin, Escherichia coli, synthetic biology, homologous interaction mapping, mathematical model, BioBrick, iGEM
Imaging Analysis of Neuron to Glia Interaction in Microfluidic Culture Platform (MCP)-based Neuronal Axon and Glia Co-culture System
Institutions: Tufts University, Tufts Sackler School of Graduate Biomedical Sciences.
Proper neuron to glia interaction is critical to physiological function of the central nervous system (CNS). This bidirectional communication is sophisticatedly mediated by specific signaling pathways between neuron and glia1,2. Identification and characterization of these signaling pathways is essential to the understanding of how neuron to glia interaction shapes CNS physiology. Previously, neuron and glia mixed cultures have been widely utilized for testing and characterizing signaling pathways between neuron and glia. What we have learned from these preparations and other in vivo tools, however, has suggested that mutual signaling between neuron and glia often occurred in specific compartments within neurons (i.e., axon, dendrite, or soma)3. This makes it important to develop a new culture system that allows separation of neuronal compartments and specifically examines the interaction between glia and neuronal axons/dendrites. In addition, the conventional mixed culture system is not capable of differentiating the soluble factors and direct membrane contact signals between neuron and glia. Furthermore, the large quantity of neurons and glial cells in the conventional co-culture system lacks the resolution necessary to observe the interaction between a single axon and a glial cell.
In this study, we describe a novel axon and glia co-culture system with the use of a microfluidic culture platform (MCP). In this co-culture system, neurons and glial cells are cultured in two separate chambers that are connected through multiple central channels. In this microfluidic culture platform, only neuronal processes (especially axons) can enter the glial side through the central channels. In combination with powerful fluorescent protein labeling, this system allows direct examination of signaling pathways between axonal/dendritic and glial interactions, such as axon-mediated transcriptional regulation in glia, glia-mediated receptor trafficking in neuronal terminals, and glia-mediated axon growth. The narrow diameter of the chamber also significantly prohibits the flow of the neuron-enriched medium into the glial chamber, facilitating probing of the direct membrane-protein interaction between axons/dendrites and glial surfaces.
Neuroscience, Issue 68, Molecular Biology, Cellular Biology, Biophysics, Microfluidics, Microfluidic culture platform, Compartmented culture, Neuron to glia signaling, neurons, glia, cell culture
Direct Imaging of ER Calcium with Targeted-Esterase Induced Dye Loading (TED)
Institutions: University of Wuerzburg, Max Planck Institute of Neurobiology, Martinsried, Ludwig-Maximilians University of Munich.
Visualization of calcium dynamics is important to understand the role of calcium in cell physiology. To examine calcium dynamics, synthetic fluorescent Ca2+ indicators have become popular. Here we demonstrate TED (= targeted-esterase induced dye loading), a method to improve the release of Ca2+ indicator dyes in the ER lumen of different cell types. To date, TED was used in cell lines, glial cells, and neurons in vitro. TED is based on efficient, recombinant targeting of a high carboxylesterase activity to the ER lumen using vector-constructs that express Carboxylesterases (CES). The latest TED vectors contain a core element of CES2 fused to a red fluorescent protein, thus enabling simultaneous two-color imaging. The dynamics of free calcium in the ER are imaged in one color, while the corresponding ER structure appears in red. At the beginning of the procedure, cells are transduced with a lentivirus. Subsequently, the infected cells are seeded on coverslips to finally enable live cell imaging. Then, living cells are incubated with the acetoxymethyl ester (AM-ester) form of low-affinity Ca2+ indicators, for instance Fluo5N-AM, Mag-Fluo4-AM, or Mag-Fura2-AM. The esterase activity in the ER cleaves off hydrophobic side chains from the AM form of the Ca2+ indicator and a hydrophilic fluorescent dye/Ca2+ complex is formed and trapped in the ER lumen. After dye loading, the cells are analyzed at an inverted confocal laser scanning microscope. Cells are continuously perfused with Ringer-like solutions and the ER calcium dynamics are directly visualized by time-lapse imaging. Calcium release from the ER is identified by a decrease in fluorescence intensity in regions of interest, whereas the refilling of the ER calcium store produces an increase in fluorescence intensity. Finally, the change in fluorescent intensity over time is determined by calculation of ΔF/F0
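A minimal sketch of the ΔF/F0 calculation named above: for each region-of-interest intensity trace, F0 is taken as the mean fluorescence over a baseline window and the trace is expressed as (F - F0) / F0. The baseline length and the toy trace are placeholders, not parameters from the protocol.

```python
# Sketch, assuming an ROI mean-intensity trace sampled once per frame.
import numpy as np

def delta_f_over_f0(trace, baseline_frames=20):
    f = np.asarray(trace, dtype=float)
    f0 = f[:baseline_frames].mean()   # baseline fluorescence before stimulation
    return (f - f0) / f0

if __name__ == "__main__":
    # Toy trace: stable baseline, ER Ca2+ release (dip), then store refilling.
    trace = [1000] * 20 + [820, 700, 650, 640, 700, 800, 900, 980, 1000]
    dff = delta_f_over_f0(trace)
    print(f"peak store depletion: {dff.min():.2f} (i.e. {dff.min():.0%} of F0)")
```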
Cellular Biology, Issue 75, Neurobiology, Neuroscience, Molecular Biology, Biochemistry, Biomedical Engineering, Bioengineering, Virology, Medicine, Anatomy, Physiology, Surgery, Endoplasmic Reticulum, ER, Calcium Signaling, calcium store, calcium imaging, calcium indicator, metabotropic signaling, Ca2+, neurons, cells, mouse, animal model, cell culture, targeted esterase induced dye loading, imaging
In Vivo Modeling of the Morbid Human Genome using Danio rerio
Institutions: Duke University Medical Center, Duke University, Duke University Medical Center.
Here, we present methods for the development of assays to query potentially clinically significant nonsynonymous changes using in vivo complementation in zebrafish. Zebrafish (Danio rerio) are a useful animal system due to their experimental tractability; embryos are transparent to enable facile viewing, undergo rapid development ex vivo, and can be genetically manipulated.1 These aspects have allowed for significant advances in the analysis of embryogenesis, molecular processes, and morphogenetic signaling. Taken together, the advantages of this vertebrate model make zebrafish highly amenable to modeling the developmental defects in pediatric disease, and in some cases, adult-onset disorders. Because the zebrafish genome is highly conserved with that of humans (~70% orthologous), it is possible to recapitulate human disease states in zebrafish. This is accomplished either through the injection of mutant human mRNA to induce dominant negative or gain of function alleles, or utilization of morpholino (MO) antisense oligonucleotides to suppress genes to mimic loss of function variants. Through complementation of MO-induced phenotypes with capped human mRNA, our approach enables the interpretation of the deleterious effect of mutations on human protein sequence based on the ability of mutant mRNA to rescue a measurable, physiologically relevant phenotype. Modeling of the human disease alleles occurs through microinjection of zebrafish embryos with MO and/or human mRNA at the 1-4 cell stage, and phenotyping up to seven days post fertilization (dpf). This general strategy can be extended to a wide range of disease phenotypes, as demonstrated in the following protocol. We present our established models for morphogenetic signaling, craniofacial, cardiac, vascular integrity, renal function, and skeletal muscle disorder phenotypes, as well as others.
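One way the scored rescue data from such complementation assays are often compared is a contingency test on phenotype-class counts across injection groups, as in the hedged sketch below. The choice of a chi-square test and the embryo counts are assumptions for illustration; they are not taken from the protocol.

```python
# Sketch, assuming embryos scored as normal/affected per injection group.
from scipy.stats import chi2_contingency

def compare_groups(counts):
    """counts: dict mapping group name -> (normal, affected) embryo counts."""
    table = [list(v) for v in counts.values()]
    chi2, p, dof, _expected = chi2_contingency(table)
    return chi2, p, dof

if __name__ == "__main__":
    scored = {
        "MO alone": (22, 78),
        "MO + WT human mRNA": (70, 30),       # rescue
        "MO + mutant human mRNA": (30, 70),   # failure to rescue
    }
    chi2, p, dof = compare_groups(scored)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```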
Molecular Biology, Issue 78, Genetics, Biomedical Engineering, Medicine, Developmental Biology, Biochemistry, Anatomy, Physiology, Bioengineering, Genomics, Medical, zebrafish, in vivo, morpholino, human disease modeling, transcription, PCR, mRNA, DNA, Danio rerio, animal model
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo
. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for preserving this quantitative and directional information during spatial normalization in group-level analyses. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
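Since fractional anisotropy is the central metric in these analyses, a short reminder of how it is computed per voxel may help readers new to DTI. The sketch below uses the standard eigenvalue formula; the example tensor is synthetic and chosen only for illustration.

```python
# Standard per-voxel fractional anisotropy from a 3x3 diffusion tensor;
# the example tensor below is synthetic, chosen only for illustration.
import numpy as np

def fractional_anisotropy(D):
    """FA in [0, 1]; 0 = isotropic diffusion, 1 = diffusion along one axis."""
    evals = np.linalg.eigvalsh(D)          # eigenvalues of the symmetric tensor
    md = evals.mean()                      # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum())
    den = np.sqrt((evals ** 2).sum())
    return float(np.sqrt(1.5) * num / den) if den > 0 else 0.0

# A prolate tensor mimicking diffusion along a coherent white-matter tract
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])      # units: mm^2/s
print(f"FA = {fractional_anisotropy(D):.2f}")   # roughly 0.8, typical of dense WM
```

Voxelwise group comparison and TFAS then operate on maps of exactly this quantity after spatial normalization.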
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Microinjection Techniques for Studying Mitosis in the Drosophila melanogaster Syncytial Embryo
Institutions: University of California, Davis.
This protocol describes the use of the Drosophila melanogaster syncytial embryo for studying mitosis1. Drosophila has useful genetics with a sequenced genome, and it can be easily maintained and manipulated. Many mitotic mutants exist, and transgenic flies expressing functional fluorescently tagged (e.g. GFP) mitotic proteins have been and are being generated. Targeted gene expression is possible using the GAL4/UAS system2. The early embryo carries out multiple mitoses very rapidly (cell cycle duration ≈10 min). It is well suited for imaging mitosis, because during cycles 10-13, nuclei divide rapidly and synchronously without intervening cytokinesis at the surface of the embryo in a single monolayer just underneath the cortex. These rapidly dividing nuclei probably use the same mitotic machinery as other cells, but they are optimized for speed; the checkpoint is generally believed to not be stringent, allowing the study of mitotic proteins whose absence would cause cell cycle arrest in cells with a strong checkpoint. Embryos expressing GFP-labeled proteins or microinjected with fluorescently labeled proteins can be easily imaged to follow live dynamics (Fig. 1). In addition, embryos can be microinjected with function-blocking antibodies or inhibitors of specific proteins to study the effect of the loss or perturbation of their function3. These reagents can diffuse throughout the embryo, reaching many spindles to produce a gradient of concentration of inhibitor, which in turn results in a gradient of defects comparable to an allelic series of mutants. Ideally, if the target protein is fluorescently labeled, the gradient of inhibition can be directly visualized4. It is assumed that the strongest phenotype is comparable to the null phenotype, although it is hard to formally exclude the possibility that the antibodies may have dominant effects in rare instances, so rigorous controls and cautious interpretation must be applied. Further away from the injection site, protein function is only partially lost, allowing other functions of the target protein to become evident.
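The dose gradient produced by a point injection can be pictured with the textbook solution for diffusion from an instantaneous source. The sketch below is a toy model; the diffusion coefficient and timescale are illustrative assumptions, not measured values from this protocol.

```python
# Toy model of the inhibitor gradient set up by a point injection: the
# 1-D solution for an instantaneous source, normalised to the injection site.
# The diffusion coefficient and time below are illustrative assumptions.
import math

def relative_concentration(x_um, t_s, D_um2_s=20.0):
    """Concentration at distance x (micrometers) relative to x = 0."""
    return math.exp(-(x_um ** 2) / (4.0 * D_um2_s * t_s))

t = 600  # 10 minutes after injection, roughly one syncytial cell cycle
for x in (0, 50, 100, 200, 400):
    print(f"{x:>4} um from site: {relative_concentration(x, t):.2f} of peak")
```

Spindles a few hundred micrometers from the injection site therefore see markedly lower inhibitor concentrations, which is the basis of the "allelic series" of phenotypes described above.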
Developmental Biology, Issue 31, mitosis, Drosophila melanogaster syncytial embryo, microinjection, protein inhibition
|Delta Air Lines Boeing 767-300 taking off|
|Role||Wide-body jet airliner|
|National origin||United States|
|Manufacturer||Boeing Commercial Airplanes|
|First flight||September 26, 1981|
|Introduction||September 8, 1982 with United Airlines|
|Primary users||Delta Air Lines, All Nippon Airways|
|Number built||1,099 through May 2017|
The Boeing 767 is a mid- to large-size, long-range, wide-body twin-engine jet airliner built by Boeing Commercial Airplanes. It was Boeing's first wide-body twinjet and its first airliner with a two-crew glass cockpit. The aircraft has two turbofan engines, a conventional tail, and, for reduced aerodynamic drag, a supercritical wing design. Designed as a smaller wide-body airliner than earlier aircraft such as the 747, the 767 has seating capacity for 181 to 375 people, and a design range of 3,850 to 6,385 nautical miles (7,130 to 11,825 km), depending on variant. Development of the 767 occurred in tandem with a narrow-body twinjet, the 757, resulting in shared design features which allow pilots to obtain a common type rating to operate both aircraft.
The 767 is produced in three fuselage lengths. The original 767-200 entered service in 1982, followed by the 767-300 in 1986 and the 767-400ER, an extended-range (ER) variant, in 2000. The extended-range 767-200ER and 767-300ER models entered service in 1984 and 1988, respectively, while a production freighter version, the 767-300F, debuted in 1995. Conversion programs have modified passenger 767-200 and 767-300 series aircraft for cargo use, while military derivatives include the E-767 surveillance aircraft, the KC-767 and KC-46 aerial tankers, and VIP transports. Engines featured on the 767 include the General Electric CF6, Pratt & Whitney JT9D and PW4000, and Rolls-Royce RB211 turbofans.
United Airlines first placed the 767 in commercial service in 1982. The aircraft was initially flown on domestic and transcontinental routes, during which it demonstrated the reliability of its twinjet design. In 1985, the 767 became the first twin-engined airliner to receive regulatory approval for extended overseas flights. The aircraft was then used to expand non-stop service on medium- to long-haul intercontinental routes. In 1986, Boeing initiated studies for a higher-capacity 767, ultimately leading to the development of the 777, a larger wide-body twinjet. In the 1990s, the 767 became the most frequently used airliner for transatlantic flights between North America and Europe.
The 767 is the first twinjet wide-body type to reach 1,000 aircraft delivered. As of May 2017, Boeing has received 1,204 orders for the 767 from 74 customers; 1,099 have been delivered. A total of 742 of these aircraft were in service in July 2016; the most popular variant is the 767-300ER, with 583 delivered; Delta Air Lines is the largest operator, with 91 aircraft. Competitors have included the Airbus A300, A310, and A330-200, while a successor, the 787 Dreamliner, entered service in October 2011. Despite the 787's introduction, the 767 remains in production.
- 1 Development
- 2 Design
- 3 Variants
- 4 Operators
- 5 Accidents and notable incidents
- 6 Retirement and display
- 7 Specifications
- 8 See also
- 9 References
- 10 External links
In 1970, Boeing's 747 became the first wide-body jetliner to enter service. The 747 was the first passenger jet wide enough to feature a twin-aisle cabin. Two years later, the manufacturer began a development study, code-named 7X7, for a new wide-body aircraft intended to replace the 707 and other early generation narrow-body jets. The aircraft would also provide twin-aisle seating, but in a smaller fuselage than the existing 747, McDonnell Douglas DC-10, and Lockheed L-1011 TriStar wide-bodies. To defray the high cost of development, Boeing signed risk-sharing agreements with Italian corporation Aeritalia and the Civil Transport Development Corporation (CTDC), a consortium of Japanese aerospace companies. This marked the manufacturer's first major international joint venture, and both Aeritalia and the CTDC received supply contracts in return for their early participation. The initial 7X7 was conceived as a short take-off and landing airliner intended for short-distance flights, but customers were unenthusiastic about the concept, leading to its redefinition as a mid-size, transcontinental-range airliner. At this stage the proposed aircraft featured two or three engines, with possible configurations including over-wing engines and a T-tail.
By 1976, a twinjet layout, similar to the one which had debuted on the Airbus A300, became the baseline configuration. The decision to use two engines reflected increased industry confidence in the reliability and economics of new-generation jet powerplants. While airline requirements for new wide-body aircraft remained ambiguous, the 7X7 was generally focused on mid-size, high-density markets. As such, it was intended to transport large numbers of passengers between major cities. Advancements in civil aerospace technology, including high-bypass-ratio turbofan engines, new flight deck systems, aerodynamic improvements, and lighter construction materials were to be applied to the 7X7. Many of these features were also included in a parallel development effort for a new mid-size narrow-body airliner, code-named 7N7, which would become the 757. Work on both proposals proceeded through the airline industry upturn in the late 1970s.
In January 1978, Boeing announced a major extension of its Everett factory—which was then dedicated to manufacturing the 747—to accommodate its new wide-body family. In February 1978, the new jetliner received the 767 model designation, and three variants were planned: a 767-100 with 190 seats, a 767-200 with 210 seats, and a trijet 767MR/LR version with 200 seats intended for intercontinental routes. The 767MR/LR was subsequently renamed 777 for differentiation purposes. The 767 was officially launched on July 14, 1978, when United Airlines ordered 30 of the 767-200 variant, followed by 50 more 767-200 orders from American Airlines and Delta Air Lines later that year. The 767-100 was ultimately not offered for sale, as its capacity was too close to the 757's seating, while the 777 trijet was eventually dropped in favor of standardizing around the twinjet configuration.
In the late 1970s, operating cost replaced capacity as the primary factor in airliner purchases. As a result, the 767's design process emphasized fuel efficiency from the outset. Boeing targeted a 20 to 30 percent cost saving over earlier aircraft, mainly through new engine and wing technology. As development progressed, engineers used computer-aided design for over a third of the 767's design drawings, and performed 26,000 hours of wind tunnel tests. Design work occurred concurrently with the 757 twinjet, leading Boeing to treat both as almost one program to reduce risk and cost. Both aircraft would ultimately receive shared design features, including avionics, flight management systems, instruments, and handling characteristics. Combined development costs were estimated at $3.5 to $4 billion.
Early 767 customers were given the choice of Pratt & Whitney JT9D or General Electric CF6 turbofans, marking the first time that Boeing had offered more than one engine option at the launch of a new airliner. Both jet engine models had a maximum output of 48,000 pounds-force (210 kN) of thrust. The engines were mounted approximately one-third the length of the wing from the fuselage, similar to previous wide-body trijets. The larger wings were designed using an aft-loaded shape which reduced aerodynamic drag and distributed lift more evenly across their surface span than any of the manufacturer's previous aircraft. The wings provided higher-altitude cruise performance, added fuel capacity, and expansion room for future stretched variants. The initial 767-200 was designed for sufficient range to fly across North America or across the northern Atlantic, and would be capable of operating routes up to 3,850 nautical miles (7,130 km).
The 767's fuselage width was set midway between that of the 707 and the 747 at 16.5 feet (5.03 m). While it was narrower than previous wide-body designs, seven abreast seating with two aisles could be fitted, and the reduced width produced less aerodynamic drag. However, the fuselage was not wide enough to accommodate two standard LD3 wide-body unit load devices side-by-side. As a result, a smaller container, the LD2, was created specifically for the 767. Using a conventional tail design also allowed the rear fuselage to be tapered over a shorter section, providing for parallel aisles along the full length of the passenger cabin, and eliminating irregular seat rows toward the rear of the aircraft.
The 767 was the first Boeing wide-body to be designed with a two-crew digital glass cockpit. Cathode ray tube (CRT) color displays and new electronics replaced the role of the flight engineer by enabling the pilot and co-pilot to monitor aircraft systems directly. Despite the promise of reduced crew costs, United Airlines initially demanded a conventional three-person cockpit, citing concerns about the risks associated with introducing a new aircraft. The carrier maintained this position until July 1981, when a US presidential task force determined that a crew of two was safe for operating wide-body jets. A three-crew cockpit remained as an option and was fitted to the first production models. Ansett Australia ordered 767s with three-crew cockpits due to union demands; it was the only airline to operate 767s so configured. The 767's two-crew cockpit was also applied to the 757, allowing pilots to operate both aircraft after a short conversion course, and adding incentive for airlines to purchase both types. Although the two types are nominally similar in control design, the 767 feels different from the 757 to fly. The 757's controls are heavy, similar to the 727 and 747; the control yoke can be rotated to 90 degrees in each direction. The 767 has far lighter control feel in pitch and roll, and the control yoke has approximately 2/3 the rotation travel.
Production and testing
To produce the 767, Boeing formed a network of subcontractors which included domestic suppliers and international contributions from Italy's Aeritalia and Japan's CTDC. The wings and cabin floor were produced in-house, while Aeritalia provided control surfaces, Boeing Vertol made the leading edge for the wings, and Boeing Wichita produced the forward fuselage. The CTDC provided multiple assemblies through its constituent companies, namely Fuji Heavy Industries (wing fairings and gear doors), Kawasaki Heavy Industries (center fuselage), and Mitsubishi Heavy Industries (rear fuselage, doors, and tail). Components were integrated during final assembly at the Everett factory. For expedited production of wing spars, the main structural member of aircraft wings, the Everett factory received robotic machinery to automate the process of drilling holes and inserting fasteners. This method of wing construction expanded on techniques developed for the 747. Final assembly of the first aircraft began in July 1979.
The prototype aircraft, registered N767BA and equipped with JT9D turbofans, rolled out on August 4, 1981. By this time, the 767 program had accumulated 173 firm orders from 17 customers, including Air Canada, All Nippon Airways, Britannia Airways, Transbrasil, and Trans World Airlines (TWA). On September 26, 1981, the prototype took its maiden flight under the command of company test pilots Tommy Edmonds, Lew Wallick, and John Brit. The maiden flight was largely uneventful, save for the inability to retract the landing gear because of a hydraulic fluid leak. The prototype was used for subsequent flight tests.
The 10-month 767 flight test program utilized the first six aircraft built. The first four aircraft were equipped with JT9D engines, while the fifth and sixth were fitted with CF6 engines. The test fleet was largely used to evaluate avionics, flight systems, handling, and performance, while the sixth aircraft was used for route-proving flights. During testing, pilots described the 767 as generally easy to fly, with its maneuverability unencumbered by the bulkiness associated with larger wide-body jets. Following 1,600 hours of flight tests, the JT9D-powered 767-200 received certification from the US Federal Aviation Administration (FAA) and the UK Civil Aviation Authority (CAA) in July 1982. The first delivery occurred on August 19, 1982, to United Airlines. The CF6-powered 767-200 received certification in September 1982, followed by the first delivery to Delta Air Lines on October 25, 1982.
Service entry and operations
The 767 entered service with United Airlines on September 8, 1982. The aircraft's first commercial flight used a JT9D-powered 767-200 on the Chicago-to-Denver route. The CF6-powered 767-200 commenced service three months later with Delta Air Lines. Upon delivery, early 767s were mainly deployed on domestic routes, including US transcontinental services. American Airlines and TWA began flying the 767-200 in late 1982, while Air Canada, China Airlines, and El Al began operating the aircraft in 1983. The aircraft's introduction was relatively smooth, with few operational glitches and greater dispatch reliability than prior jetliners. In its first year, the 767 logged a 96.1 percent dispatch rate, which exceeded the industry average for new aircraft. Operators reported generally favorable ratings for the twinjet's sound levels, interior comfort, and economic performance. Resolved issues were minor and included the recalibration of a leading edge sensor to prevent false readings, the replacement of an evacuation slide latch, and the repair of a tailplane pivot to match production specifications.
Seeking to capitalize on its new wide-body's potential for growth, Boeing offered an extended-range model, the 767-200ER, in its first year of service. Ethiopian Airlines placed the first order for the type in December 1982. Featuring increased gross weight and greater fuel capacity, the extended-range model could carry heavier payloads at distances up to 6,385 nautical miles (11,825 km), and was targeted at overseas customers. The 767-200ER entered service with El Al Airline on March 27, 1984. The type was mainly ordered by international airlines operating medium-traffic, long-distance flights.
In the mid-1980s, the 767 spearheaded the growth of twinjet flights across the northern Atlantic under extended-range twin-engine operational performance standards (ETOPS) regulations, the FAA's safety rules governing transoceanic flights by aircraft with two engines. Before the 767, overwater flight paths of twinjets could be no more than 90 minutes away from diversion airports. In May 1985, the FAA granted its first approval for 120-minute ETOPS flights to 767 operators, on an individual airline basis starting with TWA, provided that the operator met flight safety criteria. This allowed the aircraft to fly overseas routes at up to two hours' distance from land. The larger safety margins were permitted because of the improved reliability demonstrated by the twinjet and its turbofan engines. The FAA lengthened the ETOPS time to 180 minutes for CF6-powered 767s in 1989, making the type the first to be certified under the longer duration, and all available engines received approval by 1993. Regulatory approval spurred the expansion of transoceanic 767 flights and boosted the aircraft's sales.
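The practical meaning of an ETOPS rating is easiest to see as a distance: the rule minutes multiplied by the speed the aircraft can hold on one engine. The sketch below assumes a representative one-engine-inoperative cruise speed of 400 knots, an illustrative figure rather than one taken from the certification documents.

```python
# Rough ETOPS radius calculation: rule minutes -> still-air diversion distance.
# The 400 kn one-engine-inoperative speed is an illustrative assumption.
OEI_CRUISE_KNOTS = 400          # nautical miles per hour, assumed
NMI_TO_KM = 1.852

for minutes in (90, 120, 180):
    nmi = OEI_CRUISE_KNOTS * minutes / 60
    print(f"ETOPS-{minutes}: up to ~{nmi:.0f} nmi (~{nmi * NMI_TO_KM:.0f} km) from a diversion airport")
```

Moving from a 90- to a 180-minute rule thus roughly doubles the permissible distance to a diversion airport, which is what opened up direct North Atlantic routings to the 767.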
Forecasting airline interest in larger-capacity models, Boeing announced the stretched 767-300 in 1983 and the extended-range 767-300ER in 1984. Both models offered a 20 percent passenger capacity increase, while the extended-range version was capable of operating flights up to 5,990 nautical miles (11,090 km). Japan Airlines placed the first order for the 767-300 in September 1983. Following its first flight on January 30, 1986, the type entered service with Japan Airlines on October 20, 1986. The 767-300ER completed its first flight on December 9, 1986, but it was not until March 1987 that the first firm order, from American Airlines, was placed. The type entered service with American Airlines on March 3, 1988. The 767-300 and 767-300ER gained popularity after entering service, and came to account for approximately two-thirds of all 767s sold.
After the debut of the first stretched 767s, Boeing sought to address airline requests for greater capacity by proposing larger models, including a partial double-deck version informally named the "Hunchback of Mukilteo" (from a town near Boeing's Everett factory) with a 757 body section mounted over the aft main fuselage. In 1986, Boeing proposed the 767-X, a revised model with extended wings and a wider cabin, but received little interest. By 1988, the 767-X had evolved into an all-new twinjet, which revived the 777 designation. Until the 777's 1995 debut, the 767-300 and 767-300ER remained Boeing's second-largest wide-bodies behind the 747.
Buoyed by a recovering global economy and ETOPS approval, 767 sales accelerated in the mid-to-late 1980s; 1989 was the most prolific year with 132 firm orders. By the early 1990s, the wide-body twinjet had become its manufacturer's annual best-selling aircraft, despite a slight decrease due to economic recession. During this period, the 767 became the most common airliner for transatlantic flights between North America and Europe. By the end of the decade, 767s crossed the Atlantic more frequently than all other aircraft types combined. The 767 also propelled the growth of point-to-point flights which bypassed major airline hubs in favor of direct routes. Taking advantage of the aircraft's lower operating costs and smaller capacity, operators added non-stop flights to secondary population centers, thereby eliminating the need for connecting flights. The increased number of cities receiving non-stop services caused a paradigm shift in the airline industry as point-to-point travel gained prominence at the expense of the traditional hub-and-spoke model.
In February 1990, the first 767 equipped with Rolls-Royce RB211 turbofans, a 767-300, was delivered to British Airways. Six months later, the carrier temporarily grounded its entire 767 fleet after discovering cracks in the engine pylons of several aircraft. The cracks were related to the extra weight of the RB211 engines, which are 2,205 pounds (1,000 kg) heavier than other 767 engines. During the grounding, interim repairs were conducted to alleviate stress on engine pylon components, and a parts redesign in 1991 prevented further cracks. Boeing also performed a structural reassessment, resulting in production changes and modifications to the engine pylons of all 767s in service.
In January 1993, following an order from UPS Airlines, Boeing launched a freighter variant, the 767-300F, which entered service with UPS on October 16, 1995. The 767-300F featured a main deck cargo hold, upgraded landing gear, and strengthened wing structure. In November 1993, the Japanese government launched the first 767 military derivative when it placed orders for the E-767, an Airborne Early Warning and Control (AWACS) variant based on the 767-200ER. The first two E-767s, featuring extensive modifications to accommodate surveillance radar and other monitoring equipment, were delivered in 1998 to the Japan Self-Defense Forces.
In November 1995, after abandoning development of a smaller version of the 777, Boeing announced that it was revisiting studies for a larger 767. The proposed 767-400X, a second stretch of the aircraft, offered a 12 percent capacity increase versus the 767-300, and featured an upgraded flight deck, enhanced interior, and greater wingspan. The variant was specifically aimed at Delta Air Lines' pending replacement of its aging Lockheed L-1011 TriStars, and faced competition from the A330-200, a shortened derivative of the Airbus A330. In March 1997, Delta Air Lines launched the 767-400ER when it ordered the type to replace its L-1011 fleet. In October 1997, Continental Airlines also ordered the 767-400ER to replace its McDonnell Douglas DC-10 fleet. The type completed its first flight on October 9, 1999, and entered service with Continental Airlines on September 14, 2000.
In the early 2000s, cumulative 767 deliveries approached 900, but new sales declined during an airline industry downturn. In 2001, Boeing dropped plans for a longer-range model, the 767-400ERX, in favor of the proposed Sonic Cruiser, a new jetliner which aimed to fly 15 percent faster while having comparable fuel costs as the 767. The following year, Boeing announced the KC-767 Tanker Transport, a second military derivative of the 767-200ER. Launched with an order in October 2002 from the Italian Air Force, the KC-767 was intended for the dual role of refueling other aircraft and carrying cargo. The Japanese government became the second customer for the type in March 2003. In May 2003, the United States Air Force (USAF) announced its intent to lease KC-767s to replace its aging KC-135 tankers. The plan was suspended in March 2004 amid a conflict of interest scandal, resulting in multiple US government investigations and the departure of several Boeing officials, including Philip Condit, the company's chief executive officer, and chief financial officer Michael Sears. The first KC-767s were delivered in 2008 to the Japan Self-Defense Forces.
In late 2002, after airlines expressed reservations about its emphasis on speed over cost reduction, Boeing halted development of the Sonic Cruiser. The following year, the manufacturer announced the 7E7, a mid-size 767 successor made from composite materials which promised to be 20 percent more fuel efficient. The new jetliner was the first stage of a replacement aircraft initiative called the Boeing Yellowstone Project. Customers embraced the 7E7, later renamed 787 Dreamliner, and within two years it had become the fastest-selling airliner in the company's history. In 2005, Boeing opted to continue 767 production despite record Dreamliner sales, citing a need to provide customers waiting for the 787 with a more readily available option. Subsequently, the 767-300ER was offered to customers affected by 787 delays, including All Nippon Airways and Japan Airlines. Some aging 767s, exceeding 20 years in age, were also kept in service past planned retirement dates due to the delays. To extend the operational lives of older aircraft, airlines increased heavy maintenance procedures, including D-check teardowns and inspections for corrosion, a recurring issue on aging 767s. The first 787s entered service with All Nippon Airways in October 2011, 42 months behind schedule.
In 2007, the 767 received a production boost when UPS and DHL Aviation placed a combined 33 orders for the 767-300F. Renewed freighter interest led Boeing to consider enhanced versions of the 767-200 and 767-300F with increased gross weights, 767-400ER wing extensions, and 777 avionics. However, net orders for the 767 declined from 24 in 2008 to just three in 2010. During the same period, operators upgraded aircraft already in service; in 2008, the first 767-300ER retrofitted with blended winglets from Aviation Partners Incorporated debuted with American Airlines. The manufacturer-sanctioned winglets, at 11 feet (3.35 m) in height, improved fuel efficiency by an estimated 6.5 percent. Other carriers including All Nippon Airways and Delta Air Lines also ordered winglet kits.
On February 2, 2011, the 1,000th 767 rolled out, destined for All Nippon Airways. The aircraft was the 91st 767-300ER ordered by the Japanese carrier, and with its completion the 767 became the second wide-body airliner to reach the thousand-unit milestone after the 747. The 1,000th aircraft also marked the last model produced on the original 767 assembly line. Beginning with the 1,001st aircraft, production moved to another area in the Everett factory which occupied about half of the previous floor space. The new assembly line made room for 787 production and aimed to boost manufacturing efficiency by over twenty percent.
At the inauguration of its new assembly line, the 767's order backlog numbered approximately 50, only enough for production to last until 2013. Despite the reduced backlog, Boeing officials expressed optimism that additional orders would be forthcoming. On February 24, 2011, the USAF announced its selection of the KC-767 Advanced Tanker, an upgraded variant of the KC-767, for its KC-X fleet renewal program. The selection followed two rounds of tanker competition between Boeing and Airbus parent EADS, and came eight years after the USAF's original 2003 announcement of its plan to lease KC-767s. The tanker order encompassed 179 aircraft and was expected to sustain 767 production past 2013.
In December 2011, FedEx Express announced a 767-300F order for 27 aircraft to replace its DC-10 freighters, citing the USAF tanker order and Boeing's decision to continue production as contributing factors. FedEx Express agreed to buy an additional 19 of the −300F variant in June 2012. In June 2015, FedEx said it was accelerating retirements of planes both to reflect demand and to modernize its fleet, recording charges of $276 million. On July 21, 2015 FedEx announced an order for 50 767-300F with options on another 50, the largest order for the type. FedEx confirmed that it has firm orders for 106 of the freighters for delivery between 2018 and 2023.
The 767 is a low-wing cantilever monoplane with a conventional tail unit featuring a single fin and rudder. The wings are swept at 31.5 degrees and optimized for a cruising speed of Mach 0.8 (533 mph or 858 km/h). Each wing features a supercritical cross-section and is equipped with six-panel leading edge slats, single- and double-slotted flaps, inboard and outboard ailerons, and six spoilers. The airframe further incorporates carbon-fiber-reinforced polymer composite wing surfaces, Kevlar fairings and access panels, and improved aluminum alloys, which together reduce overall weight by 1,900 pounds (860 kg) versus preceding aircraft.
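The cruise speed quoted above is a Mach number converted to ground-referenced units, and the conversion depends on the speed of sound at cruise altitude. A rough check under the International Standard Atmosphere, assuming a typical cruise level of 35,000 ft (an assumption, not a figure from this article), lands close to the quoted values.

```python
# Convert a cruise Mach number to true airspeed using the ISA temperature
# lapse; the 35,000 ft cruise level is an assumed, typical figure.
import math

GAMMA, R = 1.4, 287.05           # air: ratio of specific heats, gas constant J/(kg*K)

def isa_temperature_k(alt_ft):
    alt_m = alt_ft * 0.3048
    # Troposphere lapse rate of 6.5 K/km, capped at the 11 km tropopause
    return 288.15 - 0.0065 * min(alt_m, 11_000)

def true_airspeed_kmh(mach, alt_ft):
    a = math.sqrt(GAMMA * R * isa_temperature_k(alt_ft))   # local speed of sound, m/s
    return mach * a * 3.6

tas = true_airspeed_kmh(0.80, 35_000)
print(f"Mach 0.80 at 35,000 ft ~ {tas:.0f} km/h ({tas / 1.609:.0f} mph)")
```

The small difference from the quoted 858 km/h comes from the assumed altitude and rounding.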
To distribute the aircraft's weight on the ground, the 767 has a retractable tricycle landing gear with four wheels on each main gear and two for the nose gear. The original wing and gear design accommodated the stretched 767-300 without major changes. The 767-400ER features a larger, more widely spaced main gear with 777 wheels, tires, and brakes. To prevent damage if the tail section contacts the runway surface during takeoff, 767-300 and 767-400ER models are fitted with a retractable tailskid. The 767 has left-side exit doors near the front and rear of the aircraft.
In addition to shared avionics and computer technology, the 767 uses the same auxiliary power unit, electric power systems, and hydraulic parts as the 757. A raised cockpit floor and the same forward cockpit windows result in similar pilot viewing angles. Related design and functionality allows 767 pilots to obtain a common type rating to operate the 757 and share the same seniority roster with pilots of either aircraft.
The original 767 flight deck uses six Rockwell Collins CRT screens to display Electronic flight instrument system (EFIS) and engine indication and crew alerting system (EICAS) information, allowing pilots to handle monitoring tasks previously performed by the flight engineer. The CRTs replace conventional electromechanical instruments found on earlier aircraft. An enhanced flight management system, improved over versions used on early 747s, automates navigation and other functions, while an automatic landing system facilitates CAT IIIb instrument landings in low visibility situations. The 767 became the first aircraft to receive CAT IIIb certification from the FAA for landings with 980 feet (300 m) minimum visibility in 1984. On the 767-400ER, the cockpit layout is simplified further with six Rockwell Collins liquid crystal display (LCD) screens, and adapted for similarities with the 777 and the Next Generation 737. To retain operational commonality, the LCD screens can be programmed to display information in the same manner as earlier 767s. In 2012, Boeing and Rockwell Collins launched a further 787-based cockpit upgrade for the 767, featuring three landscape-format LCD screens that can display two windows each.
The 767 is equipped with three redundant hydraulic systems for operation of control surfaces, landing gear, and utility actuation systems. Each engine powers a separate hydraulic system, and the third system uses electric pumps. A ram air turbine provides power for basic controls in the event of an emergency. An early form of fly-by-wire is employed for spoiler operation, utilizing electric signaling instead of traditional control cables. The fly-by-wire system reduces weight and allows independent operation of individual spoilers.
The 767 features a twin-aisle cabin with a typical configuration of six abreast in business class and seven across in economy. The standard seven abreast, 2–3–2 economy class layout places approximately 87 percent of all seats at a window or aisle. As a result, the aircraft can be largely occupied before center seats need to be filled, and each passenger is no more than one seat from the aisle. It is possible to configure the aircraft with extra seats for up to an eight abreast configuration, but this is less common.
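The "approximately 87 percent" figure follows almost directly from the 2–3–2 cross-section: only the center seat of the middle block is neither a window nor an aisle seat. The sketch below makes the count explicit and compares it with wider wide-body layouts; the non-767 layouts are included purely for comparison.

```python
# Share of window-or-aisle seats implied by a cabin cross-section.
# Layouts other than the 767's 2-3-2 are included only for comparison.
def window_or_aisle_fraction(layout):
    total = sum(layout)
    # Within each block, the seats at either end sit next to a window or an
    # aisle; any seats deeper inside the block are "middle" seats.
    middle = sum(max(block - 2, 0) for block in layout)
    return (total - middle) / total

layouts = {
    "767 economy (2-3-2)": (2, 3, 2),
    "A300/A330 (2-4-2)":   (2, 4, 2),
    "747 (3-4-3)":         (3, 4, 3),
}
for name, layout in layouts.items():
    print(f"{name}: {window_or_aisle_fraction(layout):.0%} window or aisle")
```

A simple count gives six of seven seats, about 86 percent, in line with the approximately 87 percent cited above, versus 75 percent for a 2–4–2 cabin and 60 percent for a 3–4–3 cabin.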
The 767 interior introduced larger overhead bins and more lavatories per passenger than previous aircraft. The bins are wider to accommodate garment bags without folding, and strengthened for heavier carry-on items. A single, large galley is installed near the aft doors, allowing for more efficient meal service and simpler ground resupply. Passenger and service doors are an overhead plug type, which retract upwards, and commonly used doors can be equipped with an electric-assist system.
In 2000, a 777-style interior, known as the Boeing Signature Interior, debuted on the 767-400ER. Subsequently adopted for all new-build 767s, the Signature Interior features even larger overhead bins, indirect lighting, and sculpted, curved panels. The 767-400ER also received larger windows derived from the 777. Older 767s can be retrofitted with the Signature Interior. Some operators have adopted a simpler modification known as the Enhanced Interior, featuring curved ceiling panels and indirect lighting with minimal modification of cabin architecture, as well as aftermarket modifications such as the NuLook 767 package by Heath Tecna.
The 767 has been produced in three fuselage lengths. These debuted in progressively larger form as the 767-200, 767-300, and 767-400ER. Longer-range variants include the 767-200ER and 767-300ER, while cargo models include the 767-300F, a production freighter, and conversions of passenger 767-200 and 767-300 models.
When referring to different variants, Boeing and airlines often collapse the model number (767) and the variant designator (e.g. –200 or –300) into a truncated form (e.g. "762" or "763"). Subsequent to the capacity number, designations may append the range identifier. The International Civil Aviation Organization (ICAO) aircraft type designator system uses a similar numbering scheme, but adds a preceding manufacturer letter; all variants based on the 767-200 and 767-300 are classified under the codes "B762" and "B763"; the 767-400ER receives the designation of "B764."
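The truncation and ICAO rules described above are mechanical enough to capture in a few lines. The helper below is purely illustrative (it is not an official Boeing or ICAO lookup) and simply encodes the scheme as stated.

```python
# Illustrative encoding of the designator scheme described above;
# not an official Boeing or ICAO reference.
def designators(variant):
    """'767-300ER' -> ('763ER', 'B763')."""
    model, rest = variant.split("-")         # '767', '300ER'
    series, suffix = rest[:3], rest[3:]      # '300', 'ER'
    shorthand = model[:2] + series[0]        # '76' + '3' -> '763'
    return shorthand + suffix, "B" + shorthand   # ICAO code drops the range suffix

for v in ("767-200", "767-200ER", "767-300ER", "767-300F", "767-400ER"):
    short, icao = designators(v)
    print(f"{v:<10} -> shorthand {short:<6} ICAO {icao}")
```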
The 767-200 was the original model and entered service with United Airlines in 1982. The type has been used primarily by mainline U.S. carriers for domestic routes between major hub centers such as Los Angeles to Washington. The 767-200 was the first aircraft to be used on transatlantic ETOPS flights, beginning with TWA on February 1, 1985 under 90-minute diversion rules. Deliveries for the variant totaled 128 aircraft. There were 44 passenger and freighter conversions of the model in commercial service as of July 2016. The type's competitors included the Airbus A300 and A310.
The 767-200 ceased production in the late 1980s, superseded by the extended-range 767-200ER. Some early 767-200s were subsequently upgraded to extended-range specification. In 1998, Boeing began offering 767-200 conversions to 767-200SF (Special Freighter) specification for cargo use, and Israel Aerospace Industries has been licensed to perform cargo conversions since 2005. The conversion process entails the installation of a side cargo door, strengthened main deck floor, and added freight monitoring and safety equipment. The 767-200SF was positioned as a replacement for Douglas DC-8 freighters.
A commercial freighter version of the Boeing 767-200 with series 300 wings and an updated flight deck was first flown on December 29, 2014. A military tanker variant of the Boeing 767-2C is being developed for the USAF as the KC-46. Boeing is building two aircraft as commercial freighters, which will be used to obtain Federal Aviation Administration certification; a further two Boeing 767-2Cs will be modified as military tankers. As of 2014, Boeing had no customers for the freighter.
The 767-200ER was the first extended-range model and entered service with El Al in 1984. The type's increased range is due to an additional center fuel tank and a higher maximum takeoff weight (MTOW) of up to 395,000 lb (179,000 kg). The type was originally offered with the same engines as the 767-200, while more powerful Pratt & Whitney PW4000 and General Electric CF6 engines later became available. The 767-200ER was the first 767 to complete a non-stop transatlantic journey, and broke the flying distance record for a twinjet airliner on April 17, 1988 with an Air Mauritius flight from Halifax, Nova Scotia to Port Louis, Mauritius, covering 8,727 nmi (10,000 mi; 16,200 km). The 767-200ER has been acquired by international operators seeking smaller wide-body aircraft for long-haul routes such as New York to Beijing. Deliveries of the type totaled 121 with no unfilled orders. As of July 2016, 32 examples of passenger and freighter conversion versions were in airline service. The type's main competitors of the time included the Airbus A300-600R and the A310-300.
The 767-300, the first stretched version of the aircraft, entered service with Japan Airlines in 1986. The type features a 21.1-foot (6.43 m) fuselage extension over the 767-200, achieved by additional sections inserted before and after the wings, for an overall length of 180.25 ft (54.9 m). Reflecting the growth potential built into the original 767 design, the wings, engines, and most systems were largely unchanged on the 767-300. An optional mid-cabin exit door is positioned ahead of the wings on the left, while more powerful Pratt & Whitney PW4000 and Rolls-Royce RB211 engines later became available. The 767-300's increased capacity has been used on high-density routes within Asia and Europe. Deliveries for the type totaled 104 aircraft with no unfilled orders remaining. As of July 2016, 54 of the variant were in airline service. The type's main competitor was the Airbus A300.
The 767-300ER, the extended-range version of the 767-300, entered service with American Airlines in 1988. The type's increased range was made possible by greater fuel tankage and a higher MTOW of 407,000 lb (185,000 kg). Design improvements allowed the available MTOW to increase to 412,000 lb (187,000 kg) by 1993. Power is provided by Pratt & Whitney PW4000, General Electric CF6, or Rolls-Royce RB211 engines. Typical routes for the type include Los Angeles to Frankfurt. The combination of increased capacity and range offered by the 767-300ER has been particularly attractive to both new and existing 767 operators. It is the most successful version of the aircraft, with more orders placed than all other variants combined. As of May 2017, 767-300ER deliveries stand at 583 with no unfilled orders. There were 441 examples in service as of July 2016. The type's main competitor is the Airbus A330-200.
The 767-300F, the production freighter version of the 767-300ER, entered service with UPS Airlines in 1995. The 767-300F can hold up to 24 standard 88-by-125-inch (220 by 320 cm) pallets on its main deck and up to 30 LD2 unit load devices on the lower deck, with a total cargo volume of 15,469 cubic feet (438 m3). The freighter has a main deck cargo door and crew exit, while the lower deck features two port-side cargo doors and one starboard cargo door. A general market version with onboard freight-handling systems, refrigeration capability, and crew facilities was delivered to Asiana Airlines on August 23, 1996. As of May 2017, 767-300F deliveries stand at 125 with 67 unfilled orders. Airlines operated 134 examples of the freighter variant and freighter conversions in July 2016.
In June 2008, All Nippon Airways took delivery of the first 767-300BCF (Boeing Converted Freighter), a modified passenger-to-freighter model. The conversion work was performed in Singapore by ST Aerospace Services, the first supplier to offer a 767-300BCF program, and involved the addition of a main deck cargo door, strengthened main deck floor, and additional freight monitoring and safety equipment. Since then, Boeing, Israel Aerospace Industries, and Wagner Aeronautical have also offered passenger-to-freighter conversion programs for 767-300 series aircraft.
The 767-400ER, the first Boeing wide-body jet resulting from two fuselage stretches, entered service with Continental Airlines in 2000. The type features a 21.1-foot (6.43-metre) stretch over the 767-300, for a total length of 201.25 feet (61.3 m). The wingspan is also increased by 14.3 feet (4.36 m) through the addition of raked wingtips. Other differences include an updated cockpit, redesigned landing gear, and 777-style Signature Interior. Power is provided by uprated Pratt & Whitney PW4000 or General Electric CF6 engines.
The FAA granted approval for the 767-400ER to operate 180-minute ETOPS flights before it entered service. Because its fuel capacity was not increased over preceding models, the 767-400ER has a range of 5,625 nautical miles (10,418 km), less than previous extended-range 767s. This is roughly the distance from Shenzhen to Seattle. No 767-400 version was developed, while a longer-range version, the 767-400ERX, was offered for sale in 2000 before it was cancelled a year later, leaving the 767-400ER as the sole version of the largest 767. Boeing dropped the 767-400ER and the -200ER from its pricing list in 2014. A total of 37 aircraft were delivered to the variant's two airline customers, Continental Airlines (now merged with United Airlines) and Delta Air Lines, with no unfilled orders. All 37 examples of the -400ER were in service in July 2016. One additional example was produced as a military testbed, and later sold as a VIP transport. The type's closest competitor is the Airbus A330-200.
Military and government
Versions of the 767 serve in a number of military and government applications, with responsibilities ranging from airborne surveillance and refueling to cargo and VIP transport. Several military 767s have been derived from the 767-200ER, the longest-range version of the aircraft.
- Airborne Surveillance Testbed – the Airborne Optical Adjunct (AOA) was modified from the prototype 767-200 for a United States Army program, under a contract signed with the Strategic Air Command in July 1984. Intended to evaluate the feasibility of using airborne optical sensors to detect and track hostile intercontinental ballistic missiles, the modified aircraft first flew on August 21, 1987. Alterations included a large "cupola" or hump on the top of the aircraft from above the cockpit to just behind the trailing edge of the wings, and a pair of ventral fins below the rear fuselage. Inside the cupola was a suite of infrared seekers used for tracking theater ballistic missile launches. The aircraft was later renamed as the Airborne Surveillance Testbed (AST). Following the end of the AST program in 2002, the aircraft was retired for scrapping.
- E-767 – the Airborne Early Warning and Control (AWACS) platform for the Japan Self-Defense Forces; it is essentially the Boeing E-3 Sentry mission package on a 767-200ER platform. E-767 modifications, completed on 767-200ERs flown from the Everett factory to Boeing Integrated Defense Systems in Wichita, Kansas, include strengthening to accommodate a dorsal surveillance radar system, engine nacelle alterations, as well as electrical and interior changes. Japan operates four E-767s. The first E-767s were delivered in March 1998.
- KC-767 Advanced Tanker – the 767-200ER-based aerial tanker developed for the USAF KC-X tanker competition. It is an updated version of the KC-767, originally selected as the USAF's new tanker aircraft in 2003, designated KC-767A, and then dropped amid conflict of interest allegations. The KC-767 Advanced Tanker is derived from studies for a longer-range cargo version of the 767-200ER, and features a fly-by-wire refueling boom, a remote vision refueling system, and a 767-400ER-based flight deck with LCD screens and head-up displays. Boeing was awarded the KC-X contract to build a 767-based tanker, to be designated KC-46A, in February 2011.
- KC-767 Tanker Transport – the 767-200ER-based aerial refueling platform operated by the Italian Air Force (Aeronautica Militare), and the Japan Self-Defense Forces. Modifications conducted by Boeing Integrated Defense Systems include the addition of a fly-by-wire refueling boom, strengthened flaps, and optional auxiliary fuel tanks, as well as structural reinforcement and modified avionics. The four KC-767Js ordered by Japan have been delivered. The Aeronautica Militare received the first of its four KC-767As in January 2011.
- Tanker conversions – the 767 MMTT or Multi-Mission Tanker Transport is a 767-200ER-based aircraft operated by the Colombian Air Force (Fuerza Aérea Colombiana) and modified by Israel Aerospace Industries. In 2013, the Brazilian Air Force ordered two 767-300ER tanker conversions from IAI for its KC-X2 program.
Boeing offered the 767-400ERX, a longer-range version of the largest 767 model, in 2000. Introduced concurrently with the 747X, the type was to be powered by the 747X's engines, the Engine Alliance GP7000 and the Rolls-Royce Trent 600. An increased range of 6,492 nautical miles (12,023 km) was specified. Kenya Airways provisionally ordered three 767-400ERXs to supplement its 767 fleet, but after Boeing cancelled the type's development in 2001, switched the order to the 777-200ER.
The Northrop Grumman E-10 MC2A was to be a 767-400ER-based replacement for the USAF's 707-based E-3 Sentry AWACS, Northrop Grumman E-8 Joint STARS, and RC-135 SIGINT aircraft. The E-10 MC2A would have included an all-new AWACS system, with a powerful active electronically scanned array (AESA) that was also capable of jamming enemy aircraft or missiles. One 767-400ER aircraft was produced as a testbed for systems integration, but the program was terminated in January 2009 and the prototype was sold to Bahrain as a VIP transport.
In July 2016, 742 aircraft were in airline service: 76 -200s, 629 -300s, and 37 -400s, with 77 -300s on order; the largest operators were Delta Air Lines (91), UPS Airlines (59, the largest cargo operator), United Airlines (51), American Airlines (40), Japan Airlines (40), and All Nippon Airways (37).
The largest customers are Delta Air Lines with 117 orders, FedEx (108), All Nippon Airways (96), and United Airlines (82). Delta and United are the only customers of all -200, -300 and -400 passenger variants. In July 2015, FedEx placed a firm order for 50 Boeing 767 freighters with deliveries from 2018 to 2023.
Orders and deliveries
|Model Series||ICAO code||Orders||Deliveries||Unfilled orders|
- Data through end of May 2017.
Accidents and notable incidents
As of May 2017, the Boeing 767 has been involved in 45 aviation occurrences, including 16 hull-loss accidents. Six fatal crashes, including three hijackings, have resulted in a total of 851 occupant fatalities. The airliner's first fatal crash, Lauda Air Flight 004, occurred near Bangkok on May 26, 1991, following the in-flight deployment of the left engine thrust reverser on a 767-300ER; none of the 223 aboard survived. As a result of this accident, all 767 thrust reversers were deactivated until a redesign was implemented. Investigators determined that an electronically controlled valve, common to late-model Boeing aircraft, was to blame. A new locking device was installed on all affected jetliners, including 767s. On October 31, 1999, EgyptAir Flight 990, a 767-300ER, crashed off Nantucket Island, Massachusetts, in international waters, killing all 217 people on board. The US National Transportation Safety Board (NTSB) determined the probable cause to be a deliberate action by the first officer; Egypt disputed this conclusion. On April 15, 2002, Air China Flight 129, a 767-200ER, crashed into a hill amid inclement weather while trying to land at Gimhae International Airport in Busan, South Korea. The crash resulted in the death of 129 of the 166 people on board, and the cause was attributed to pilot error.
An early 767 incident was survived by all on board. On July 23, 1983, Air Canada Flight 143, a 767-200, ran out of fuel in-flight and had to glide with both engines out for almost 43 nautical miles (80 km) to an emergency landing at Gimli, Manitoba. The pilots used the aircraft's ram air turbine to power the hydraulic systems for aerodynamic control. There were no fatalities and only minor injuries. This aircraft was nicknamed "Gimli Glider" after its landing site. The aircraft, registered C-GAUN, continued flying for Air Canada until its retirement in January 2008.
The 767 has been involved in six hijackings, three resulting in loss of life, for a combined total of 282 occupant fatalities. On November 23, 1996, Ethiopian Airlines Flight 961, a 767-200ER, was hijacked and crash-landed in the Indian Ocean near the Comoros Islands after running out of fuel, killing 125 out of the 175 persons on board; survivors have been rare among instances of land-based aircraft ditching on water. Two 767s were involved in the September 11 attacks on the World Trade Center in 2001, resulting in the collapse of its two main towers. American Airlines Flight 11, a 767-200ER, crashed into the north tower, killing all 92 people on board, and United Airlines Flight 175, a 767-200, crashed into the south tower, with the death of all 65 on board. In addition, more than 2,600 people were killed in the towers or on the ground. A foiled 2001 shoe bomb plot involving an American Airlines 767-300ER resulted in passengers being required to remove their shoes for scanning at US security checkpoints.
On November 1, 2011, LOT Polish Airlines Flight 16, a 767-300ER, safely landed at Warsaw Frederic Chopin Airport in Warsaw, Poland after a mechanical failure of the landing gear forced an emergency landing with the landing gear up. There were no injuries, but the aircraft involved was damaged and subsequently written off. At the time of the incident, aviation analysts speculated that it may have been the first instance of a complete landing gear failure in the 767's service history. Subsequent investigation however determined that while a damaged hose had disabled the aircraft's primary landing gear extension system, an otherwise functional backup system was inoperative due to an accidentally deactivated circuit breaker.
In January 2014, the US Federal Aviation Administration issued a directive that ordered inspections of the elevators on more than 400 767s beginning in March 2014; the focus is on fasteners and other parts that can fail and cause the elevators to jam. The issue was first identified in 2000 and has been the subject of several Boeing service bulletins. The inspections and repairs are required to be completed within six years. The aircraft has also had multiple occurrences of "uncommanded escape slide inflation" during maintenance or operations, and during flight. In late 2015, the FAA issued a preliminary directive to address the issue.
On October 28, 2016, American Airlines Flight 383, a 767-300ER with 161 passengers and 9 crew members, aborted takeoff at Chicago O'Hare Airport following an uncontained failure of the right GE CF6-80C2 engine. The engine failure, which hurled fragments over a considerable distance, caused a fuel leak resulting in a fire under the right wing. Fire and smoke entered the cabin. All passengers and crew evacuated the aircraft, with 20 passengers and one flight attendant sustaining minor injuries while using the evacuation slides.
Retirement and display
As new 767s roll off the assembly line, older models have been retired and scrapped. One complete aircraft is known to have been retained for exhibition: N102DA, the first 767-200 to operate for Delta Air Lines and the twelfth example built. The exhibition aircraft, named "The Spirit of Delta" by the employees who helped purchase it in 1982, underwent restoration at the Delta Air Lines Air Transport Heritage Museum in Atlanta, Georgia. The restoration was completed in 2010.
|Variant||767-200||767-200ER||767-300||767-300ER||767-300F||767-400ER|
|Three-class(pp23–29)||174 (15F, 40J, 119Y)||210 (18F, 42J, 150Y)||243 (16F, 36J, 189Y)|
|Two-class(pp23–29)||216 (18J, 196 Y)||261 (24J, 237Y)||296 (24J, 272Y)|
|Cargo capacity(pp9–14)||3,070 ft³ / 86.9m³||4,030 ft³ / 114.1m³||15,469 ft³ / 438m³ (58-ton / 52.7 tonnes payload)||4,905 ft³ / 138.9m³|
|Unit load devices(pp32–36)||22 LD2s||30 LD2s||30 LD2s + 24 88×108in pallets||38 LD2s|
|Length(pp15–18)||159 ft 2in / 48.51m||180 ft 3in / 54.94m||201 ft 4in / 61.37m|
|Wingspan(pp15–18)||156 ft 1in / 47.57m||170 ft 4in / 51.92m|
|Wing area||3,050 ft² / 283.3m²||3,130 ft² / 290.7m²|
|Fuselage Height||17 ft 9in / 5.41m(pp15–18)|
|Fuselage Width||16 ft 6in / 5.03m(pp15–18)|
|Cabin width||186in/ 4.72m(pp30)|
|MTOW(pp9–14)||315,000 lb / 142,882 kg||395,000 lb / 179,169 kg||350,000 lb / 158,758 kg||412,000 lb / 186,880 kg||450,000 lb / 204,116 kg|
|MLW(pp9–14)||272,000 lb / 123,377 kg||300,000 lb / 136,078 kg||300,000 lb / 136,078 kg||320,000 lb / 145,150 kg||326,000 lb / 147,871 kg||350,000 lb / 158,757 kg|
|MZFW(pp9–14)||250,000 lb / 113,398 kg||260,000 lb / 117,934 kg||278,000 lb / 126,099 kg||295,000 lb / 133,810 kg||309,000 lb / 140,160 kg||330,000 lb / 149,685 kg|
|OEW(pp9–14)||176,650 lb / 80,127 kg||181,610 lb / 82,377 kg||189,750 lb / 86,069 kg||198,440 lb / 90,011 kg||190,000 lb / 86,183 kg||229,000 lb / 103,872 kg|
|Fuel capacity(pp9–14)||16,700USgal / 63,216L||24,140USgal / 91,380L||16,700USgal / 63,216L||24,140USgal / 91,380L|
|Max Fuel(pp9–14)||111,890 lb / 50,753 kg||161,738 lb / 73,363 kg||111,890 lb / 50,753 kg||161,740 lb / 73,364 kg|
|Range||3,900 nmi (7,200 km)[a](p47)||6,590nmi / 12,200 km[b]||3,900 nmi (7,200 km)[c](p49)||5,980nmi / 11,070 km[d]||3,225nmi / 6,025 km [e]||5,625nmi / 10,415 km[f]|
|Long range cruise||459 kn (850 km/h) at 39,000 ft (12,000 m)|
|Maximum cruise||486 kn (900 km/h) at 39,000 ft (12,000 m)|
|Takeoff[g]||6,300 ft (1,900 m)(p58)||2,480m / 8,150 ft||9,200 ft (2,800 m)(p64)||2,650m / 8,700 ft||3,290m / 10,800 ft|
|Service Ceiling||43,100 ft (13,100 m)(p10)|
|Engines (×2)(p10)||P&W JT9D-7R4/7R4E / P&W PW4052 / GE CF6-80A/A2/C2-B2||P&W JT9D-7R4/7R4E / P&W PW4052/56 / GE CF6-80A/A2/C2-B2/C2-B4 / RB211-524G/H||P&W JT9D-7R4/7R4E / P&W PW4052 / GE CF6-80A/A2/C2-B2 / RB211-524H||GE CF6-80C2-B4/0C2-B6/C2-B8F/C2-B7F1 / PW4056/60/62 / RB211-524G/H||GE CF6-80C2-B8F/C2-B7F1 / PW4062|
|Thrust (×2)(p10)||48,000-52,500 lbf / 21,772-23,814kgf||48,000-60,600 lbf / 21,772-27,488kgf||48,000-60,600 lbf / 21,772-27,488kgf||56,750-61,500 lbf / 25,741-27,896kgf||60,600 lbf / 27,488kgf|
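The thrust row above mixes pounds-force and kilograms-force; a quick conversion (1 lbf ≈ 0.4536 kgf ≈ 4.448 N) is enough to sanity-check the figures. The sketch below is only such a checking aid.

```python
# Conversion helper for the thrust figures above: lbf -> kgf and kN.
LBF_TO_KGF = 0.45359237
LBF_TO_N = 4.4482216

for lbf in (48_000, 52_500, 56_750, 60_600, 61_500):
    kgf = lbf * LBF_TO_KGF
    kn = lbf * LBF_TO_N / 1000
    print(f"{lbf:>7,} lbf = {kgf:>7,.0f} kgf = {kn:5.0f} kN")
```

These values line up with the kilogram-force ranges given in the table.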
- Related development
- Aircraft of comparable role, configuration and era
- Airbus A300
- Airbus A310
- Airbus A330-200
- Boeing Business Jet
- Boeing 757
- Boeing 777
- Boeing 787 Dreamliner
- Related lists
- 216 pax, 176,100lb / 79,878kg OEW, ISA
- 181 pax (15F/40J/126Y), CF6
- 269 pax, 187,900lb / 85235kg OEW, ISA
- 218 pax (18F/46J/154Y), PW4000
- 58-ton / 52.7 tonnes payload
- 245 pax (20F/50J/175Y), CF6
- MTOW, SL, 30°C/86°F
- "767 Model Summary (orders and deliveries)". Boeing. May 31, 2017. Retrieved June 12, 2017.
- "Boeing Commercial Airplanes Jet Prices". Boeing. Archived from the original on 2012-08-31. Retrieved August 8, 2012.
- Eden 2008, pp. 102–03
- Sutter 2006, p. 103
- Norris & Wagner 1998, pp. 156–57.
- Velupillai, David (August 8, 1981). "Boeing 767: The new fuel saver". Flight International. pp. 436–37, 439, 440–41, 445–48, 453. Retrieved July 30, 2011.
- Norris & Wagner 1998, p. 156
- Norris & Wagner 1999, pp. 20–21
- Eden 2008, p. 103
- Norris & Wagner 1999, pp. 18–19
- Davies 2000, p. 103
- Norris & Wagner 1998, p. 143
- Birtles 1999, p. 8
- Becher 1999, p. 24
- Donald 1997, p. 173
- Norris & Wagner 1998, pp. 159–60
- Norris & Wagner 1999, p. 23
- Norris & Wagner 1999, pp. 21–22
- Norris & Wagner 1998, p. 160.
- Sutter 2006, pp. 241–46
- Haenggi 2003, pp. 43–44
- Haenggi 2003, p. 29
- Birtles 1999, p. 14
- "767 Airplane Characteristics for Airport Planning" (PDF). Boeing. May 2011. pp. 4–6, 9–14, 23, 28, 32, 35, 37. Retrieved October 20, 2013.
- Norris & Wagner 1998, p. 158
- "History of the 767 Two-Crew Flight Deck". Boeing. Archived from the original on 2011-08-07. Retrieved July 29, 2011.
- Becher 1999, p. 32
- Becher 1999, p. 33
- Wilson 2002, p. 117
- Velupillai, David (January 2, 1983). "Boeing 757: introducing the big-fan narrowbody". Flight International. Retrieved February 2, 2011.
- Shaw 1999, p. 64
- Norris & Wagner 1998, pp. 161–62
- Birtles 1999, pp. 16–18, 27
- Birtles 1999, pp. 49–52
- Sweetman, Bill (March 20, 1982). "Boeing tests the twins". Flight International. Retrieved July 15, 2011.
- Haenggi 2003, pp. 31–5
- Birtles 1999, pp. 49–53
- Birtles 1999, pp. 55–58
- Lynn, Norman (April 2, 1983). "Boeing 767 moves smoothly into service". Flight International. Retrieved January 20, 2011.
- Norris & Wagner 1998, p. 163
- "Boeing 767 Program Background". Boeing. Archived from the original on 2011-08-21. Retrieved July 30, 2011.
- "767-200ER Technical Characteristics". Boeing. Retrieved October 20, 2013.
- Haenggi 2003, pp. 38–40.
- Becher 1999, pp. 150, 154–55
- Haenggi 2003, p. 42
- Eden 2008, pp. 103–04.
- "767-300ER Technical Characteristics". Boeing. Retrieved October 20, 2013.
- McKinzie, Gordon. "How United Airlines Helped Design The World's Most Remarkable Airliner". American Institute of Aeronautics and Astronautics. Archived from the original on June 1, 2009. Retrieved July 1, 2011.
- Norris & Wagner 2001, pp. 11–13, 15
- Smil 1998, p. 28
- Davies 2000, pp. 88–89
- Norris & Wagner 2009, p. 12
- Birtles 1999, pp. 27–28
- Birtles 1999, p. 64
- Norris, Guy (May 24, 1995). "Boeing acts to solve 757/767 pylon cracks". Flight International. Retrieved December 26, 2011.
- Eden 2008, p. 105
- Frawley 2001, p. 63
- Birtles 1999, pp. 44–5
- "Military Aircraft Directory: Boeing". Flight International. July 29, 1998. Retrieved December 9, 2011.
- Guy, Norris (November 16, 1994). "Boeing poised to fly first 767 AWACS". Flight International. Retrieved August 30, 2011.
- Norris & Wagner 1999, pp. 116–21
- Becher 1999, p. 125
- Birtles 1999, p. 40
- Lopez, Ramon (June 18, 1997). "Continental goes Boeing". Flight International. Retrieved December 28, 2011.
- Norris, Guy; Kingsley-Jones, Max (January 4, 2003). "Long players". Flight International. Archived from the original on 2013-09-25. Retrieved July 30, 2011.
- Norris, Guy; Kelly, Emma (April 3, 2001). "Boeing Sonic Cruiser Ousts 747X". Flight International. Retrieved August 15, 2011.
- "Sonic Cruiser seeks mission definition". Interavia Business & Technology. 56 (655): 25. July 2001. Retrieved June 30, 2015 – via HighBeam. (Subscription required (. ))
- Norris, Guy (November 7, 2006). "Pumped for action". Flight International. Archived from the original on 2007-07-04. Retrieved August 30, 2011.
- Shalal-Esa, Andrea (February 24, 2010). "Pentagon nears new contract in air tanker saga". Reuters. Archived from the original on 2012-11-13. Retrieved December 26, 2011.
- McCarthy, John; Price, Wayne. (March 9, 2010). "Northrop pulls out of tanker bidding war." Florida Today, p. A1
- Wallace, James (February 20, 2004). "Stalled 767 deal may cost jobs". Seattle Post-Intelligencer. Retrieved December 26, 2011.
- Hoyle, Craig (January 14, 2010). "Japan receives last Boeing KC-767 tanker". Flight International. Archived from the original on 2010-01-18. Retrieved August 18, 2011.
- Norris & Wagner 2009, pp. 32–35
- "Boeing 787 Dreamliner Aircraft Profile". Flight International. 2011. Archived from the original on 2011-09-06. Retrieved July 30, 2011.
- "767 earns reprieve as 787 ramp-up considered". Flight International. June 5, 2005. Archived from the original on 2012-11-05. Retrieved July 30, 2011.
- Ionides, Nicholes (September 17, 2008). "JAL to take 11 767s and 777s in 787-delay compensation deal". Flight International. Archived from the original on 2008-09-21. Retrieved September 15, 2011.
- "Ageing jets to fly on due to delay in Boeing Dreamliner". Herald Sun. September 9, 2011. Retrieved January 19, 2011.
- Goold, Ian (June 2010). "Checking Up on the 767". MRO Management. Archived from the original on 2012-04-26. Retrieved 2015-07-03.
- Walker, Karen (October 27, 2011). "Finally ... the 787 enters service". Air Transport World. Archived from the original on 2011-12-27. Retrieved December 9, 2011.
- "UPS order revives 767 line". Seattle Post-Intelligencer. February 5, 2007. Retrieved August 19, 2011.
- Cassidy, Padraic (March 8, 2007). "DHL orders 6 Boeing 767 freighters". MarketWatch. Retrieved August 19, 2011.
- Thomas, Geoffrey (March 2, 2007). "Boeing considering new 767 freighter to counter A330-200F". Aviation Week & Space Technology. Retrieved July 29, 2011.
- "Boeing Company Annual Orders Summary". Boeing. Retrieved January 23, 2011.
- Ranson, Lori (July 22, 2008). "Blended winglets debut on Boeing 767". Flight International. Archived from the original on 2008-07-25. Retrieved August 19, 2011.
- Yeo, Ghim-Lay (July 23, 2010). "Farnborough: Hainan and ANA to equip Boeing aircraft with winglets". Flight Daily News. Archived from the original on 2010-07-26. Retrieved August 19, 2011.
- "Delta takes tips from Aviation Partners". Flight Daily News. June 20, 2007. Archived from the original on 2013-10-03. Retrieved August 19, 2011.
- Ostrower, Jon (February 3, 2011). "Boeing unveils 1,000th 767". Air Transport Intelligence. Archived from the original on 2011-02-06. Retrieved February 6, 2011.
- "Thousandth 767 etches twinjet's place in history". Flight International. December 1, 2010. Archived from the original on 2014-02-28. Retrieved January 20, 2011.
- Dunlop, Michelle (March 6, 2011). "767 now built faster and in less space". The Weekly Herald. Retrieved July 30, 2011.
- "Boeing presents KC-767 proposal to USAF". United Press International. January 3, 2008. Retrieved December 26, 2011.
- Reed, Ted (December 20, 2011). "Boeing 767 Removed From Life Support". The Street. Retrieved December 26, 2011.
- "FedEx Express Plans to Acquire 19 Boeing 767-300F Aircraft and Convert Four 777 Freighter Orders". FedEx Express press release. June 29, 2012. Retrieved July 2, 2012.
- "FedEx to buy additional aircraft from Boeing". Businessweek. Associated Press. June 29, 2012. Archived from the original on 2012-07-03. Retrieved 2015-07-03.
- "FedEx to Buy as Many as 100 Boeing 767 Freighters". The Wall Street Journal. 21 July 2015. Retrieved 21 July 2015.
- FedEx Express (FDX) Will Acquire Additional 50 Boeing (BA) 767-300F Aircraft
- Birtles 1999, pp. 15–16.
- Norris & Wagner 1999, pp. 119, 121
- Sopranos, Katherine (December 2004). "Striking out tailstrikes". Frontiers. Retrieved August 21, 2012.
- Norris & Wagner 1996, p. 69
- Wells & Clarence 2004, p. 252
- Birtles 1999, pp. 20, 25
- Young, David (June 17, 1982). "767's maiden O'Hare landing is automatic". Chicago Tribune, p. 3
- "FAA Order 8900.1 Flight Standards Information Management System (FSIMS), Volume 4 Aircraft Equipment and Operational Authorizations". Federal Aviation Administration. September 13, 2007. Retrieved July 29, 2011.
- RVR 300, Runway Visual Range 300 meters
- Norris & Wagner 1999, p. 117
- Warwick, Graham (July 10, 2012). "Boeing's KC-46A Tanker Sparks 767 Cockpit Upgrade". Aviation Week & Space Technology. Archived from the original on February 21, 2014. Retrieved 2015-07-04.
- Waterman, A. The Boeing 767 Hydraulic System, SAE International
- Birtles 1999, p. 24
- Birtles 1999, p. 50
- Kane 2003, p. 553
- Haenggi 2003, p. 34
- Pace, Eric (May 24, 1981). "How Airline Cabins are being Reshaped". The New York Times. Retrieved February 1, 2011.
- Davis, Elizabeth (April 2003). "Boeing Signature Interior a hit with flying public". Frontiers. Retrieved August 19, 2011.
- Norris & Wagner 1999, p. 122
- Norris, Guy (July 24, 2000). "Stretching and Testing". Flight International. Retrieved August 19, 2011.
- "Thomsonfly.com Launches Enhanced Interior Package for Boeing 757-200 and 767-200". Boeing. March 9, 2005. Retrieved 2015-07-04.
- "B767 Interior Upgrade Systems". Heath Tecna. Archived from the original on 2011-10-01. Retrieved August 18, 2011.
- Eden 2008, pp. 104–05
- Kane 2003, p. 555
- "Federal Aviation Administration Type Certificate Data Sheet A1NM" (PDF). Federal Aviation Administration. March 4, 2011. pp. 6–8. Retrieved December 26, 2011.
- "Our Planes". American Airlines. 2011. Retrieved August 28, 2011.
- "Boeing 767-200ER". Continental Airlines. 2011. Archived from the original on 2011-08-05. Retrieved 2015-07-04.
- "ICAO Document 8643". International Civil Aviation Organization. Retrieved December 10, 2011.
- Dan Thisdell; Antoine Fafard (9 August 2016). "World Airliner Census 2016" (PDF). Flightglobal.
- Becher 1999, p. 175
- Norris, Guy (April 22, 1998). "Boeing enters UPS bidding with 767 'Special Freighter'". Flight International. Retrieved December 28, 2011.
- "Supplemental Type-Certificate Data Sheet" (PDF). European Aviation Safety Agency. February 23, 2011. p. 4. Retrieved August 18, 2011.
- Trimble, Stephen (28 December 2014), "Boeing completes first flight of new freighter and tanker", Flightglobal, Reed Business Information, archived from the original on December 30, 2014, retrieved 30 December 2014
- Birtles 1999, pp. 62, 90–95
- Haenggi 2003, p. 43
- "Airbus A330-200". Flug Revue. July 18, 2000. Archived from the original on February 18, 2001. Retrieved August 18, 2011.
- Becher 1999, p. 178
- "767-300F Technical Characteristics". Boeing. Retrieved October 20, 2013.
- Francis, Leithen (June 16, 2008). "ST Aero delivers world's first 767-300BCF to All Nippon Airways". Flight International. Archived from the original on 2008-06-19. Retrieved August 19, 2011.
- Sobie, Brendan (October 21, 2010). "Wagner plans to launch 767 cargo conversion programme". Air Transport Intelligence. Retrieved October 31, 2010.
- Norris & Wagner 1999, p. 114
- Norris & Wagner 1999, p. 120
- Norris & Wagner 1999, pp. 119–120, 123
- "Boeing 767-400ER gets FAA clearance". Flight International. July 25, 2000. Archived from the original on 2013-09-29. Retrieved February 1, 2011.
- "Introducing the 767-400 Extended Range Airplane". AERO Magazine. Boeing. July 1998.
- "Shenzhen to Seattle distance". Great Circle Mapper.
- "Boeing Drops the 767-200ER and 767-400ER from its Pricing List: the End of an Era - Airchive". 2013-09-19. Retrieved 2016-07-17.
- Sarsfield, Kate (January 27, 2009). "Bahrain acquires 767-400ER testbed for VIP use". Flight International. Archived from the original on 2009-01-30. Retrieved January 21, 2011.
- "Boeing 767-400ER". Flug Revue. March 4, 2002. Archived from the original on May 13, 2008. Retrieved August 18, 2011.
- Birtles 1999, pp. 39–46
- Borak, Donna (February 12, 2007). "Boeing unveils tanker for $40 billion deal". Seattle Times. Retrieved January 21, 2011.
- "Strategic Defense Initiative Program: Status of Airborne Optical Adjunct and Terminal Imaging Radar". United States Government Accountability Office. June 1986. pp. 1, 9, 10. Retrieved December 28, 2011.
- Taylor 1989, pp. 373–74
- Becher 1999, pp. 183–84
- Norris & Wagner 1996, p. 87
- Chism, Neal (March 20, 2006) "Correspondence: Save the First Boeing 767". Aviation Week & Space Technology, Volume 164, Issue 12, pp. 6–8
- "DoD 4120-15L, Model Designation of Military Aerospace Vehicles" (PDF). US Department of Defense. May 12, 2004. p. 30.
- "Boeing Offers KC-767 Advanced Tanker to US Air Force". Boeing. February 12, 2007. Archived from the original on 2007-02-14. Retrieved 2015-07-04.
- "Il portale dell'Aeronautica Militare – KC-767A". difesa.it. Archived from the original on March 25, 2015. Retrieved April 1, 2015.
- Kington, Tom (January 27, 2011). "Italian Air Force Receives 1st Tanker From Boeing". Defense News. Archived from the original on December 7, 2016. Retrieved January 28, 2011.
- Egozi, Arie (June 9, 2010). "IAI tests Colombia's new 767 tanker". Flight International. Archived from the original on 2010-09-09. Retrieved September 2, 2011.
- "Israel Aerospace Industries to work in Brazilian tank program". UPI.com. March 15, 2013. Retrieved June 2, 2013.
- Moxon, Julian; Norris, Guy (July 25, 2000). "R-R offers Trent 600 for 767-400ERX and 747X". Flight International. Retrieved August 19, 2011.
- Norris, Guy (March 20, 2000). "Lauda and Kenya eye heavy 767". Flight International. Retrieved August 19, 2010.
- Wallace, James (March 19, 2002). "Kenya Airways sticks to Boeing". Seattle Post-Intelligencer. Retrieved June 7, 2011.
- Tirpak, John (October 2007). "The big squeeze". Air Force Magazine. Archived from the original on July 25, 2009. Retrieved August 30, 2011.
- Fulghum, David (July 26, 2004). "E-10 Radar Secretly Designed To Jam Missiles". Aviation Week & Space Technology. Retrieved August 19, 2011.
- "FedEx to Buy as Many as 100 Boeing 767 Freighters". The Wall Street Journal. July 21, 2015. Retrieved July 21, 2015.
- "Boeing Recent Orders". Boeing. July 21, 2015. Retrieved December 31, 2015.
- "Boeing Company Current Deliveries". Boeing. June 2015. Retrieved December 31, 2015.
- "Orders and Deliveries search page". Boeing. December 2015. Retrieved December 31, 2015.
- "Boeing 767 occurrences". Aviation Safety Network. May 25, 2017. Retrieved May 25, 2017.
- "Boeing 767 hull-losses". Aviation Safety Network. May 25, 2017. Retrieved May 25, 2017.
- "Boeing 767 Statistics". Aviation Safety Network. September 27, 2015. Retrieved September 27, 2015.
- James, Barry (August 17, 1991). "U.S. Orders Thrust Reversers Deactivated on 767s". The New York Times. Retrieved August 19, 2011.
- Acohido, Byron (September 1, 1991). "Air Disasters: Critics Question FAA's Response". Seattle Times. Retrieved December 26, 2011.
- Lane, Polly (May 26, 1992). "New Locks Installed For Boeing Reversers". Seattle Times. Retrieved December 26, 2011.
- "Accident description". Aviation Safety Network. July 27, 2004. Retrieved January 19, 2011.
- Ellison, Michael (June 9, 2000). "US and Egypt split on fatal plane crash". The Guardian. Retrieved August 18, 2011.
- "Accident description". Aviation Safety Network. August 27, 2005. Retrieved January 19, 2011.
- "Storied 'Gimli Glider' on final approach". Globe and Mail. January 24, 2008. Retrieved August 18, 2011.
- "Hijacking description". Aviation Safety Network. March 7, 2009. Retrieved January 19, 2011.
- Lendon, Brad (January 16, 2009). "Previous jet ditchings yielded survival lessons". CNN. Retrieved February 5, 2012.
- Goldiner, Dave (January 15, 2009). "A very rare happy ending in Hudson River plane crash". New York Daily News. Retrieved August 19, 2011.
- "Threats and Responses; Excerpts from the Report of the Sept. 11 Commission: 'A Unity of Purpose'". The New York Times. July 23, 2004. Retrieved January 22, 2011.
- Belluck, Pam (January 31, 2003). "Threats and Responses: The Bomb Plot: Unrepentant Shoe Bomber Is Given a Life Sentence". The New York Times. Retrieved August 19, 2011.
- "TSA Travel Assistant". US Transportation Security Administration. September 26, 2006. Archived from the original on 2011-05-11. Retrieved January 20, 2011.
- "Warsaw Airport still closed after LOT gear-up landing". Flight International. November 2, 2011. Retrieved December 9, 2011.
- Kaminski-Morrow, David (December 5, 2011). "Circuit-breaker at heart of LOT 767 gear-up landing probe". Flight International. Retrieved December 26, 2011.
- Kaminski-Morrow, David (November 1, 2012). "LOT 767 gear-up crash probe advises checklist changes". Flight International. Retrieved August 20, 2014.
- "CNN Transcript - Erin Burnett OutFront". CNN. November 1, 2011. Retrieved December 26, 2011.
- Pasztor, Andy. FAA orders safety checks on Boeing 767s. Wall Street Journal, January 27, 2014, p. B3
- "FAA Proposes Fixes to Boeing 767 Emergency Escape Slides". WSJ
- St. Petersburg Times
- "Evacuation slide deploys midair on United flight". CNN, July 1, 2014
- FAA Targets 767 Escape Slides That Deploy When They Aren't Supposed To (Skift)
- "GE alerts airlines about engine part after American Airlines fire". Reuters. Retrieved 6 November 2016.
- "US blames American Airlines fire on engine failure".
- "Unusual Failure in American Airlines’ Jet Engine Prompts Investigation".
- "American Airlines Jet Catches Fire at Chicago, No Fatalities". The Daily Voice. October 28, 2016. Retrieved October 29, 2016.
- Birtles 1999, pp. 55, 116
- "December 15, 1982: A Special Day for Delta Air Lines". Delta Air Lines Air Transport Heritage Museum. Retrieved May 7, 2009.
- "Type Certificate Data Sheet No. A1NM" (PDF). June 20, 2016.
- "767 Backgrounder" (PDF). Boeing Commercial Airplanes. February 2014.
- "Aircraft Data File: Boeing Aircraft". civil jet aircraft design. Elsevier. 1999.
- "767 performance summary" (PDF). Boeing. 2006. Archived from the original on April 15, 2015.
- "TWA looks at stretched 757s to replace ageing 767 fleet". Flight International. January 11, 2000. Retrieved April 16, 2017.
- Becher, Thomas (1999). Boeing 757 and 767. Marlborough, Wiltshire: Crowood Press. ISBN 1-86126-197-7.
- Birtles, Philip (1999). Modern Civil Aircraft: 6, Boeing 757/767/777. 3rd ed.. London: Ian Allen Publishing. ISBN 0-7110-2665-3.
- Davies, R.E.G. (2000). TWA: an airline and its aircraft. McLean, Virginia: Paladwr Press. ISBN 1-888962-16-X.
- Donald, David, ed. (1997). The Complete Encyclopedia of World Aircraft. New York, NY: Barnes & Noble Books. ISBN 0-7607-0592-5.
- Eden, Paul, ed. (2008). Civil Aircraft Today: The World's Most Successful Commercial Aircraft. Silverdale, Washington: Amber Books Ltd. ISBN 1-84509-324-0.
- Frawley, Gerard (2001). The International Directory of Civil Aircraft. Weston Creek, Australian Capital Territory: Aerospace Publications. ISBN 1-875671-52-8.
- Haenggi, Michael (2003). 767 Transatlantic Titan. "Boeing Widebodies" series. Osceola, Wisconsin: Motorbooks International. ISBN 0-7603-0842-X.
- Kane, Robert M. (2003). Air Transportation 1903–2003. 14th ed.. Dubuque, Iowa: Kendall Hunt Publishing. ISBN 978-0-7872-8881-5.
- Norris, Guy; Wagner, Mark (1996). Boeing Jetliners. Osceola, Wisconsin: MBI Publishing. ISBN 0-7603-0034-8.
- Norris, Guy; Wagner, Mark (1998). Boeing. Osceola, Wisconsin: MBI Publishing. ISBN 0-7603-0497-1.
- Norris, Guy; Wagner, Mark (1999). "767: Stretching and Growing". Modern Boeing Jetliners. Osceola, Wisconsin: Zenith Imprint. ISBN 0-7603-0717-2.
- Norris, Guy; Wagner, Mark (2001). Boeing 777, The Technological Marvel. Osceola, Wisconsin: Zenith Press. ISBN 0-7603-0890-X.
- Norris, Guy; Wagner, Mark (2009). Boeing 787 Dreamliner. Osceola, Wisconsin: Zenith Press. ISBN 978-0-7603-2815-6.
- Shaw, Robbie (1999). Boeing 757 & 767, Medium Twins. Reading, Pennsylvania: Osprey Publishing. ISBN 1-85532-903-4.
- Smil, Vaclav (1998). Transforming the Twentieth Century: Technical Innovations and Their Consequences. Oxford, Oxfordshire: Oxford University Press. ISBN 0-19-516875-5.
- Sutter, Joe (2006). 747: Creating the World's First Jumbo Jet and Other Adventures from a Life in Aviation. Washington, D.C.: Smithsonian Books. ISBN 0-06-088241-7.
- Taylor, John W.R., ed. (1989). Jane's All the World's Aircraft 1989–90. London: Jane's Yearbooks. ISBN 0-7106-0896-9.
- Wells, Alexander T.; Rodrigues, Clarence C. (2004). Commercial Aviation Safety. New York, NY: McGraw-Hill Professional. ISBN 0-07-141742-7.
- Wilson, Stewart (2002). Ansett: The Story of the Rise and Fall of Ansett, 1936–2002. Weston Creek, Australian Capital Territory: Aerospace Publications. ISBN 978-1-875671-57-1.
- Official website
- "Introducing the 767-400ER". Aero Magazine. Boeing. July 1998.
- "Strategic stretch". Flight International. 25 August 1999.
- "767-300BCF converted freighter" (PDF). Boeing. 2007.
A graphics card is a hardware component in most PCs that enables you to play games with realistic-looking effects.
Two companies currently design and manufacture graphics cores: NVIDIA and ATI, and both companies' add-in board partners manufacture the cards themselves. NVIDIA and ATI's GPU (Graphics Processing Unit) architectures are broadly similar and both have complete cards ranging from £20 through to £350. Generally speaking, the more you spend on a card, the better the performance.
It's no surprise that both companies' products, at a similar price-point, also match up with respect to performance.
Gaming code is run through what are termed APIs (Application Programming Interface) and the two most common are DirectX and OpenGL. Writing code via these regimented protocols means that it will run on various operating systems, including, of course, Microsoft Windows.
However, recent GPU designs have sought to expand the usefulness of graphics cards beyond regular gaming. The reason for doing so lies with the massively parallel computational ability of today's mid-range GPUs, which, in turn, can lead to significantly faster processing.
Measured in GFLOPs, NVIDIA and ATI's highest-performing GPUs have the basic building blocks to execute program algorithms much quicker than a conventional processor (CPU) - up to 20x in certain cases, and the burgeoning industry that's behind efforts to use GPUs for compute-intensive tasks is known as GPGPU (General-Purpose Computations on a GPU).
But one cannot port existing applications that run on x86-class CPUs - Intel and AMD's, for example - to GPUs without using special coding techniques that harness the fantastic parallelism inherent in the fastest cards. OpenGL and DirectX are optimised for gaming and not general programming, remember.
NVIDIA's CUDA
First released in February 2007, CUDA is a set of tools that NVIDIA engineered to allow application developers to write programs that run on NVIDIA's hardware. The set of tools includes a compiler that uses a derivation of the 'C' programming language.
Using basic 'C' with some additions, developers can now code algorithms that gain practically unfettered access to the GPUs' immense memory bandwidth and computational power.
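As a purely illustrative sketch (not taken from NVIDIA's documentation), the fragment below shows the kind of 'C with additions' this involves: a kernel executed by thousands of lightweight GPU threads, plus the host-side calls that move data to and from the card. The function and variable names are our own, and production code would check every call for errors.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each GPU thread handles one element of the arrays.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                    // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate and fill host arrays.
    float *hx = (float*)malloc(bytes), *hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate device memory and copy the data across to the card.
    float *dx, *dy;
    cudaMalloc((void**)&dx, bytes);
    cudaMalloc((void**)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int blocks = (n + 255) / 256;
    saxpy<<<blocks, 256>>>(n, 3.0f, dx, dy);

    // Copy the result back and spot-check one value.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);             // expect 5.0

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```

The triple-angle-bracket launch syntax is the main extension over standard 'C': it tells the driver how many blocks of threads to spawn, and it is this enormous thread count that the speed-ups described below rely on.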
The tools are collectively referred to as CUDA (Compute Unified Device Architecture), and CUDA support has now been rolled into the general graphics card drivers for a wide range of GPUs. The toolkit can be downloaded from NVIDIA's developer website.
At the time of writing, NVIDIA supported CUDA on GeForce 8, GeForce 9, GeForce GTX 200, Mobile GeForce 8/9, Quadro, Quadro Mobile, and Tesla GPUs - which encompasses most of the shipping range.
CUDA works best when the application workset is able to be run, concurrently, on the many processing cores that make up a present-generation GPU - and this is where we can see a 10x improvement over CPU-based processing.
Should the application be more serial in nature, the benefit of CUDA is diminished, but it still may be quicker to execute than on a CPU.
Practical applications
Working on Windows XP, Vista, Mac OS and Linux, CUDA-written applications have begun to surface this year.
Understanding the processing parallelism of GPUs means that certain types of applications scale well with the cores, and those which are media- or graphics-related tend to be particularly partial to running on a many-processor engine such as a GPU.
One of the first programs to be written via CUDA and released to the public is the BadaBOOM media converter. Testing has shown that converting from the MPEG-2 DVD format to quality-optimised H.264 is around 8x faster on a GeForce GTX 280 than on an Intel quad-core CPU using a similar encoder.
Another compute-intensive task to take advantage of CUDA is Folding@home, a distributed application that analyses the way in which proteins fold. Knowing this will give the medical community far greater insight into how advanced biology works. Folding on a class-leading GeForce GTX 280 is up to 20x faster than on a quad-core CPU and up to 8x quicker than on the Cell processor in the PlayStation 3.
Recently, NVIDIA announced a collaboration with Adobe on its Creative Suite 4 range of programs. Photoshop CS4 ships with some CUDA/OpenGL-written speed-up enhancements that are activated if a compatible NVIDIA GPU is detected.
The same is true of After Effects CS4 and Premiere Pro CS4, with compute-intensive work farmed out to run on the GPU(s) in the system. NVIDIA also reckons that an Elemental RapiHD CUDA-written plug-in for Premiere Pro CS4 will be released in November 2008 - promising an up-to 7x speed-up when compared to a CPU's performance.
The future, and downsides
NVIDIA is putting significant resources behind ensuring that CUDA is successful. We're likely to see a greater number of multimedia-related applications released that take advantage of the speed-up offered by using a GPU over a CPU. Indeed, NVIDIA is keen to stress this very point with its Balanced PC campaign.
One obvious downside to CUDA's chances of continued success is that it is currently tied into select NVIDIA GPUs; it won't work with rival ATI's cards without extensive modification. NVIDIA will claim that it's a totally open platform where anyone with sufficient knowledge can code algorithms to run on hardware, but, really, it is limited by design. Compare this with the x86 instruction-set for CPUs and it's clear that the latter will always have a larger install base.
CUDA is just one method of running GPGPU programs. ATI disbanded its approach, titled Close To Metal, in favour of supporting simpler higher-level languages such as Brook/Brook+. Microsoft, too, is getting in on the GPGPU act with the release of its Compute Shader in the DirectX11 update. It can be thought of as the GPGPU equivalent of gaming's DX1x, but lifts some of the gaming-oriented constraints in favour of much broader support.
With the backing of an industry giant and owner of key operating systems, Compute Shader will be open to all DirectX11-class hardware, irrespective of GPU designer, and we can foresee developers dropping CUDA-specific algorithms and supporting Microsoft's GPU-wide initiative instead.
Concluding thoughts
Modern graphics cards are designed to do more than just play the latest games. Specific architecture changes in the last two years - necessitated by a tightening of DX specifications - have meant that GPUs are imbued with incredible parallel processing ability that runs into the TFLOPs range for a single card. Such abundant power can be harnessed by programs whose code can be run, concurrently, over many processing cores. This can translate into a significant speed-up of many applications.
NVIDIA's seen the benefit of such a parallel approach and tapped into it by releasing a compiler and toolkit, known as CUDA, that lets application developers code in a derivation of the popular 'C' language. The manifestation of such an approach has led to programs which execute many times faster than on even a quad-core CPU - taking advantage of the 240 cores on a GeForce GTX 280, for example.
CUDA, therefore, extends the usefulness of a compatible GPU, and we're likely to see more multimedia-related CUDA-written apps in the near future. Microsoft, too, is taking the GPGPU environment seriously with the release of its Compute Shader as part of DX11's roll out, and only time will tell which most developers opt for.
Thermal cameras have traditionally been very expensive, often costing thousands of dollars. Today there are a variety of options that are far more affordable, making them ideal for hobbyists and as a low-cost engineering tool. For electrical engineers thermal cameras have a variety of uses.
Thermal cameras can be a valuable tool for PCB debugging. With the help of a thermal camera, an engineer can easily see if an IC is heating up drastically, a voltage regulator is climbing close to its temperature limit, or any other part is running too close to its power rating. For example, here is the inside of an iPhone 4s. Notice how the areas around the processor are hotter. Most people could assume that fact, but how much hotter is it than the other components? This temperature differential would be difficult to see by other means.
Here is another example: a typical 1 kΩ, 0.125 W resistor with 9 V across it.
Thermal cameras can be a valuable tool for working with and troubleshooting power electronics. For example, here is a small gearmotor that has been run at double its rated voltage. The DC motor has heated up, as has the bushing that the output shaft passes through on the gearbox.
Electrical wiring can be another practical use for thermal cameras. If a circuit breaker, outlet, or wiring is heating up, these can all be signs that something may be wrong with the system. Here is an example of a power cable for a smartphone heating up during charging. The second image is of the charger that the phone was plugged into. Temperatures like this are pretty typical of chargers.
Today there are a handful of low-cost options available for thermal cameras. All of the images in this article were captured with a Seek Reveal camera. This is a rugged handheld camera with a built-in flashlight. Its small, stand-alone design makes it easy to toss in a pocket or bag. Besides Seek, other companies such as FLIR and Fluke make thermal cameras.
Seek Thermal Reveal
The Seek Reveal is a compact thermal imager that also contains a flashlight. The Reveal uses a 205 px by 156 px sensor that can measure temperatures from -40°F to 626°F. The Reveal's flashlight outputs 300 lumens and the brightness is controllable via the settings menu. The Seek Reveal has a 320 px by 240 px 2.4" display and a microSD card to store the images on. The camera is powered by an internal 3.7 V lithium battery that delivers up to 10 hours of run time. A major drawback of the Seek Reveal is the lack of adjustable emissivity, which can introduce errors into some measurements.
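To illustrate why adjustable emissivity matters, here is a rough C sketch of the common single-band radiometric correction. It assumes the camera reports an apparent blackbody temperature and ignores atmospheric and waveband effects, so treat it as a teaching aid rather than anything resembling a camera's actual firmware; the names and sample values are our own.

```c
#include <math.h>
#include <stdio.h>

/* Estimate the true surface temperature (Kelvin) from the apparent
 * temperature reported by a camera that assumes emissivity = 1.0.
 * Based on total radiance: W_app = e * T_true^4 + (1 - e) * T_refl^4,
 * a simplified blackbody approximation that ignores the atmosphere. */
static double corrected_temp_k(double t_apparent_k, double t_reflected_k,
                               double emissivity)
{
    double w_app  = pow(t_apparent_k, 4.0);
    double w_refl = (1.0 - emissivity) * pow(t_reflected_k, 4.0);
    return pow((w_app - w_refl) / emissivity, 0.25);
}

int main(void)
{
    /* Shiny metal (low emissivity ~0.2) that *appears* to be 40 C
     * in a room whose reflected temperature is about 22 C. */
    double t_true = corrected_temp_k(313.15, 295.15, 0.2);
    printf("Corrected temperature: %.1f C\n", t_true - 273.15);
    return 0;
}
```

For a low-emissivity surface such as bare or shiny metal, the apparent reading can understate the true temperature by tens of degrees, which is one reason the adjustable emissivity offered on some competing units is worth having.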
FLIR TG Series
The TG Series comprises FLIR's entry-level thermal imagers. It features an 80 px by 60 px FLIR Lepton thermal sensor that can provide measurements from -13°F to 716°F. The TG Series comes in two versions: the TG167 features a 50-degree field of view, making it ideal for measuring temperatures from a further distance, while the TG165 features a 25-degree field of view, making it ideal for measuring objects that are closer, such as circuit boards! The duo of FLIR cameras feature the same 2" LCD display and an internal 3.7 V lithium battery that delivers more than 5 hours of measurements. Unlike the Seek, the FLIR has adjustable emissivity.
Fluke VT04 Visual IR Thermometer
Fluke's low-cost option for thermal imagers is its VT04 Visual IR Thermometer. Fluke doesn't list the resolution of its thermal sensor, but this camera uses two separate image sensors and combines their output. The VT04 uses a visual sensor like you would find in a typical digital camera plus a thermal sensor. This produces a hybrid, or overlaid, image from the two cameras, resulting in an image that shows more detail. The dual cameras have a field of view of 28 degrees. FLIR also produces thermal cameras that produce hybrid images like this. The Fluke VT04 is powered by either 4 AA-sized batteries or a rechargeable lithium battery, depending on the version. Like the FLIR, the Fluke VT04 has adjustable emissivity.
With thermal cameras becoming cheaper every year, now might be a great time to get one for your toolbox. The cameras mentioned above are some of the entry-level all-in-one options. Seek Thermal and FLIR also make phone attachments, the FLIR One and Seek Compact, that provide similar functionality. The capabilities of these cameras far exceed those of normal IR thermometers, plus an engineer can easily save the image and data. In the next few months, CAT is releasing a cell phone with these capabilities built in.
Unidirectional influx and efflux of nutrients and toxicants, and their resultant net fluxes, are central to the nutrition and toxicology of plants. Radioisotope tracing is a major technique used to measure such fluxes, both within plants, and between plants and their environments. Flux data obtained with radiotracer protocols can help elucidate the capacity, mechanism, regulation, and energetics of transport systems for specific mineral nutrients or toxicants, and can provide insight into compartmentation and turnover rates of subcellular mineral and metabolite pools. Here, we describe two major radioisotope protocols used in plant biology: direct influx (DI) and compartmental analysis by tracer efflux (CATE). We focus on flux measurement of potassium (K+) as a nutrient, and ammonia/ammonium (NH3/NH4+) as a toxicant, in intact seedlings of the model species barley (Hordeum vulgare L.). These protocols can be readily adapted to other experimental systems (e.g., different species, excised plant material, and other nutrients/toxicants). Advantages and limitations of these protocols are discussed.
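As an illustration of how efflux data of this kind are commonly reduced (a generic sketch, not code from the published protocol), the following C routine fits the terminal, log-linear phase of tracer release to estimate an efflux rate constant and the corresponding half-life of exchange; the sample numbers and function names are hypothetical.

```c
#include <math.h>
#include <stdio.h>

/* Fit ln(released tracer activity) against elution time with ordinary
 * least squares; the negative slope approximates the efflux rate
 * constant k (min^-1) of the slowest-exchanging compartment, and
 * ln(2)/k gives its half-life of exchange. Illustrative only. */
static void fit_efflux(const double *t_min, const double *cpm, int n,
                       double *k, double *half_life)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; ++i) {
        double y = log(cpm[i]);
        sx  += t_min[i];
        sy  += y;
        sxx += t_min[i] * t_min[i];
        sxy += t_min[i] * y;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    *k = -slope;
    *half_life = log(2.0) / *k;
}

int main(void)
{
    /* Hypothetical counts released during the final elution phase. */
    double t[]   = { 20, 25, 30, 35, 40, 45 };        /* minutes */
    double cpm[] = { 900, 760, 640, 540, 455, 385 };  /* counts  */
    double k, t_half;
    fit_efflux(t, cpm, 6, &k, &t_half);
    printf("k = %.3f per min, half-life = %.1f min\n", k, t_half);
    return 0;
}
```

In practice, successive linear phases on such a semi-logarithmic plot are commonly attributed to progressively slower-exchanging compartments (for example, surface film and cell wall, cytosol, and vacuole), each characterized by its own rate constant.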
26 Related JoVE Articles!
Investigating Tissue- and Organ-specific Phytochrome Responses using FACS-assisted Cell-type Specific Expression Profiling in Arabidopsis thaliana
Institutions: Michigan State University (MSU), Michigan State University (MSU).
Light mediates an array of developmental and adaptive processes throughout the life cycle of a plant. Plants utilize light-absorbing molecules called photoreceptors to sense and adapt to light. The red/far-red light-absorbing phytochrome photoreceptors have been studied extensively. Phytochromes exist as a family of proteins with distinct and overlapping functions in all higher plant systems in which they have been studied [1]. Phytochrome-mediated light responses, which range from seed germination through flowering and senescence, are often localized to specific plant tissues or organs [2]. Despite the discovery and elucidation of individual and redundant phytochrome functions through mutational analyses, conclusive reports on distinct sites of photoperception and the molecular mechanisms of localized pools of phytochromes that mediate spatial-specific phytochrome responses are limited. We designed experiments based on the hypotheses that specific sites of phytochrome photoperception regulate tissue- and organ-specific aspects of photomorphogenesis, and that localized phytochrome pools engage distinct subsets of downstream target genes in cell-to-cell signaling. We developed a biochemical approach to selectively reduce functional phytochromes in an organ- or tissue-specific manner within transgenic plants. Our studies are based on a bipartite enhancer-trap approach that results in transactivation of the expression of a gene under control of the Upstream Activation Sequence (UAS) element by the transcriptional activator GAL4 [3]. The biliverdin reductase (BVR) gene under the control of the UAS is silently maintained in the absence of GAL4 transactivation in the UAS-BVR parent [4]. Genetic crosses between a UAS-BVR transgenic line and a GAL4-GFP enhancer trap line result in specific expression of the BVR gene in cells marked by GFP. BVR accumulation in Arabidopsis plants results in phytochrome chromophore deficiency in planta [5-7]. Thus, transgenic plants that we have produced exhibit GAL4-dependent activation of the BVR gene, resulting in the biochemical inactivation of phytochrome, as well as GAL4-dependent GFP expression. Photobiological and molecular genetic analyses of BVR transgenic lines are yielding insight into tissue- and organ-specific phytochrome-mediated responses that have been associated with corresponding sites of photoperception [4, 7, 8]. Fluorescence Activated Cell Sorting (FACS) of GFP-positive, enhancer-trap-induced BVR-expressing plant protoplasts coupled with cell-type-specific gene expression profiling through microarray analysis is being used to identify putative downstream target genes involved in mediating spatial-specific phytochrome responses. This research is expanding our understanding of sites of light perception, the mechanisms through which various tissues or organs cooperate in light-regulated plant growth and development, and advancing the molecular dissection of complex phytochrome-mediated cell-to-cell signaling cascades.
Plant Biology, Issue 39, Arabidopsis thaliana, confocal microscopy, expression profiling, microarray, fluorescence, FACS, photomorphogenesis, phytochrome, protoplasting
Patch Clamp and Perfusion Techniques for Studying Ion Channels Expressed in Xenopus oocytes
Institutions: Washington University in St. Louis, Washington University in St. Louis, Washington University in St. Louis.
The protocol presented here is designed to study the activation of the large conductance, voltage- and Ca2+-activated K+ (BK) channels. The protocol may also be used to study the structure-function relationship for other ion channels and neurotransmitter receptors [1]. BK channels are widely expressed in different tissues and have been implicated in many physiological functions, including regulation of smooth muscle contraction, frequency tuning of inner hair cells and regulation of neurotransmitter release [2-6]. BK channels are activated by membrane depolarization and by intracellular Ca2+. Therefore, the protocol is designed to control both the membrane voltage and the intracellular solution. In this protocol, messenger RNA of BK channels is injected into Xenopus laevis oocytes (stage V-VI) followed by 2-5 days of incubation at 18°C [10-13]. Membrane patches that contain single or multiple BK channels are excised with the inside-out configuration using patch clamp techniques [10-13]. The intracellular side of the patch is perfused with desired solutions during recording so that the channel activation under different conditions can be examined. To summarize, the mRNA of BK channels is injected into Xenopus laevis oocytes to express channel proteins on the oocyte membrane; patch clamp techniques are used to record currents flowing through the channels under controlled voltage and intracellular solutions.
Cellular Biology, Issue 47, patch clamp, ion channel, electrophysiology, biophysics, exogenous expression system, Xenopus oocyte, mRNA, transcription
Isolation and Kv Channel Recordings in Murine Atrial and Ventricular Cardiomyocytes
Institutions: Charité Medical Faculty and Max-Delbrück Center for Molecular Medicine (MDC), Charité - Universitätsmedizin Berlin, Charité - Universitätsmedizin Berlin.
KCNE genes encode for a small family of Kv channel ancillary subunits that form heteromeric complexes with Kv channel alpha subunits to modify their functional properties. Mutations in KCNE genes have been found in patients with cardiac arrhythmias such as the long QT syndrome and/or atrial fibrillation. However, the precise molecular pathophysiology that leads to these diseases remains elusive. In previous studies, the electrophysiological properties of the disease-causing mutations in these genes have mostly been studied in heterologous expression systems, and we cannot be sure if the reported effects can directly be translated into native cardiomyocytes. In our laboratory we therefore use a different approach. We directly study the effects of KCNE gene deletion in isolated cardiomyocytes from knockout mice by cellular electrophysiology - a unique technique that we describe in this issue of the Journal of Visualized Experiments. The hearts from genetically engineered KCNE mice are rapidly excised and mounted onto a Langendorff apparatus by aortic cannulation. Free Ca2+ in the myocardium is bound by EGTA, and dissociation of cardiac myocytes is then achieved by retrograde perfusion of the coronary arteries with a specialized low Ca2+ buffer containing collagenase. Atria, free right ventricular wall and the left ventricle can then be separated by microsurgical techniques. Calcium is then slowly added back to isolated cardiomyocytes in a multi-step washing procedure. Atrial and ventricular cardiomyocytes of healthy appearance with no spontaneous contractions are then immediately subjected to electrophysiological analyses by patch clamp technique or other biochemical analyses within the first 6 hours following isolation.
Physiology, Issue 73, Medicine, Cellular Biology, Molecular Biology, Genetics, Biomedical Engineering, Anatomy, Cardiology, Cardiac Output, Low, Cardiomyopathies, Heart Failure, Arrhythmias, Cardiac, Ventricular Dysfunction, Cardiomyocytes, Kv channel, cardiac arrythmia, electrophysiology, patch clamp, mouse, animal model
Ice-Cap: A Method for Growing Arabidopsis and Tomato Plants in 96-well Plates for High-Throughput Genotyping
Institutions: University of Wisconsin-Madison, Oregon State University .
It is becoming common for plant scientists to develop projects that require the genotyping of large numbers of plants. The first step in any genotyping project is to collect a tissue sample from each individual plant. The traditional approach to this task is to sample plants one-at-a-time. If one wishes to genotype hundreds or thousands of individuals, however, using this strategy results in a significant bottleneck in the genotyping pipeline. The Ice-Cap method that we describe here provides a high-throughput solution to this challenge by allowing one scientist to collect tissue from several thousand seedlings in a single day 1,2
. This level of throughput is made possible by the fact that tissue is harvested from plants 96-at-a-time, rather than one-at-a-time.
The Ice-Cap method provides an integrated platform for performing seedling growth, tissue harvest, and DNA extraction. The basis for Ice-Cap is the growth of seedlings in a stacked pair of 96-well plates. The wells of the upper plate contain plugs of agar growth media on which individual seedlings germinate. The roots grow down through the agar media, exit the upper plate through a hole, and pass into a lower plate containing water. To harvest tissue for DNA extraction, the water in the lower plate containing root tissue is rapidly frozen while the seedlings in the upper plate remain at room temperature. The upper plate is then peeled away from the lower plate, yielding one plate with 96 root tissue samples frozen in ice and one plate with 96 viable seedlings. The technique is named "Ice-Cap" because it uses ice to capture the root tissue. The 96-well plate containing the seedlings can then wrapped in foil and transferred to low temperature. This process suspends further growth of the seedlings, but does not affect their viability. Once genotype analysis has been completed, seedlings with the desired genotype can be transferred from the 96-well plate to soil for further propagation. We have demonstrated the utility of the Ice-Cap method using Arabidopsis thaliana
, tomato, and rice seedlings. We expect that the method should also be applicable to other species of plants with seeds small enough to fit into the wells of 96-well plates.
Plant Biology, Issue 57, Plant, Arabidopsis thaliana, tomato, 96-well plate, DNA extraction, high-throughput, genotyping
Non-radioactive in situ Hybridization Protocol Applicable for Norway Spruce and a Range of Plant Species
Institutions: Uppsala University, Swedish University of Agricultural Sciences.
The high-throughput expression analysis technologies available today give scientists an overflow of expression profiles but their resolution in terms of tissue specific expression is limited because of problems in dissecting individual tissues. Expression data needs to be confirmed and complemented with expression patterns using e.g. in situ
hybridization, a technique used to localize cell specific mRNA expression. The in situ
hybridization method is laborious, time-consuming and often requires extensive optimization depending on species and tissue. In situ
experiments are relatively more difficult to perform in woody species such as the conifer Norway spruce (Picea abies
). Here we present a modified DIG in situ
hybridization protocol, which is fast and applicable on a wide range of plant species including P. abies
. With just a few adjustments, including altered RNase treatment and proteinase K concentration, we could use the protocol to study tissue specific expression of homologous genes in male reproductive organs of one gymnosperm and two angiosperm species; P. abies, Arabidopsis thaliana
and Brassica napus
. The protocol worked equally well for the species and genes studied. AtAP3
were observed in second and third whorl floral organs in A. thaliana
and B. napus
and DAL13 in microsporophylls of male cones from P. abies. For P. abies, the proteinase K concentration, used to permeabilize the tissues, had to be increased to 3 μg/ml instead of 1 μg/ml, possibly due to more compact tissues and higher levels of phenolics and polysaccharides. For all species the RNase treatment was removed due to reduced signal strength without a corresponding increase in specificity. By comparing tissue specific expression patterns of homologous genes from both flowering plants and a coniferous tree we demonstrate that the DIG in situ protocol presented here, with only minute adjustments, can be applied to a wide range of plant species. Hence, the protocol avoids both extensive species specific optimization and the laborious use of radioactively labeled probes in favor of DIG labeled probes. We have chosen to illustrate the technically demanding steps of the protocol in our film.
Anna Karlgren and Jenny Carlsson contributed equally to this study.
Corresponding authors: Anna Karlgren at [email protected] and Jens F. Sundström at [email protected]
Plant Biology, Issue 26, RNA, expression analysis, Norway spruce, Arabidopsis, rapeseed, conifers
Membrane Potentials, Synaptic Responses, Neuronal Circuitry, Neuromodulation and Muscle Histology Using the Crayfish: Student Laboratory Exercises
Institutions: University of Kentucky, University of Toronto.
The purpose of this report is to help develop an understanding of the effects caused by ion gradients across a biological membrane. Two aspects that influence a cell's membrane potential and which we address in these experiments are: (1) Ion concentration of K+
on the outside of the membrane, and (2) the permeability of the membrane to specific ions. The crayfish abdominal extensor muscles are in groupings with some being tonic (slow) and others phasic (fast) in their biochemical and physiological phenotypes, as well as in their structure; the motor neurons that innervate these muscles are correspondingly different in functional characteristics. We use these muscles as well as the superficial, tonic abdominal flexor muscle to demonstrate properties in synaptic transmission. In addition, we introduce a sensory-CNS-motor neuron-muscle circuit to demonstrate the effect of cuticular sensory stimulation as well as the influence of neuromodulators on certain aspects of the circuit. With the techniques obtained in this exercise, one can begin to answer many questions remaining in other experimental preparations as well as in physiological applications related to medicine and health. We have demonstrated the usefulness of model invertebrate preparations to address fundamental questions pertinent to all animals.
Neuroscience, Issue 47, Invertebrate, Crayfish, neurophysiology, muscle, anatomy, electrophysiology
Measuring Spatial and Temporal Ca2+ Signals in Arabidopsis Plants
Institutions: Purdue University, Purdue University, Jiangsu Academy of Agricultural Sciences, Zhejiang University, Shanxi Academy of Agricultural Sciences, Chinese Academy of Sciences.
Developmental and environmental cues induce Ca2+ fluctuations in plant cells. Stimulus-specific spatial-temporal Ca2+ patterns are sensed by cellular Ca2+ binding proteins that initiate Ca2+ signaling cascades. However, we still know little about how stimulus specific Ca2+ signals are generated. The specificity of a Ca2+ signal may be attributed to the sophisticated regulation of the activities of Ca2+ channels and/or transporters in response to a given stimulus. To identify these cellular components and understand their functions, it is crucial to use systems that allow a sensitive and robust recording of Ca2+ signals at both the tissue and cellular levels. Genetically encoded Ca2+ indicators that are targeted to different cellular compartments have provided a platform for live cell confocal imaging of cellular Ca2+ signals. Here we describe instructions for the use of two Ca2+ detection systems: aequorin based FAS (film adhesive seedlings) luminescence Ca2+ imaging and case12 based live cell confocal fluorescence Ca2+ imaging. Luminescence imaging using the FAS system provides a simple, robust and sensitive detection of spatial and temporal Ca2+ signals at the tissue level, while live cell confocal imaging using Case12 provides simultaneous detection of cytosolic and nuclear Ca2+ signals at a high resolution.
Plant Biology, Issue 91, Aequorin, Case12, abiotic stress, heavy metal stress, copper ion, calcium imaging, Arabidopsis
Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro
model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells such as pericytes were further eliminated by plating the microvessel fragments in puromycin-containing medium. They were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1 with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of brain endothelial monolayers, indicating the tightness of TJs reached 300 ohm·cm2
on average. The endothelial permeability coefficient (Pe) for lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 x 10^-3 cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed a polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.
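For context, permeability coefficients of this kind are commonly derived (our summary of standard practice, not a statement of this protocol's exact procedure) by plotting the cleared volume of tracer (the amount accumulating in the lower compartment divided by the donor concentration) against time for filters with and without cells; the slopes give PS(total) and PS(filter), the endothelial component follows from 1/PS(endothelium) = 1/PS(total) - 1/PS(filter), and dividing PS(endothelium) by the filter surface area yields Pe in cm/min.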
Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER),
In Vitro Reconstitution of Light-harvesting Complexes of Plants and Green Algae
Institutions: VU University Amsterdam.
In plants and green algae, light is captured by the light-harvesting complexes (LHCs), a family of integral membrane proteins that coordinate chlorophylls and carotenoids. In vivo
, these proteins are folded with pigments to form complexes which are inserted in the thylakoid membrane of the chloroplast. The high similarity in the chemical and physical properties of the members of the family, together with the fact that they can easily lose pigments during isolation, makes their purification in a native state challenging. An alternative approach to obtain homogeneous preparations of LHCs was developed by Plumley and Schmidt in 19871
, who showed that it was possible to reconstitute these complexes in vitro
starting from purified pigments and unfolded apoproteins, resulting in complexes with properties very similar to that of native complexes. This opened the way to the use of bacterial expressed recombinant proteins for in vitro
reconstitution. The reconstitution method is powerful for various reasons: (1) pure preparations of individual complexes can be obtained, (2) pigment composition can be controlled to assess their contribution to structure and function, (3) recombinant proteins can be mutated to study the functional role of the individual residues (e.g.,
pigment binding sites) or protein domain (e.g.,
protein-protein interaction, folding). This method has been optimized in several laboratories and applied to most of the light-harvesting complexes. The protocol described here details the method of reconstituting light-harvesting complexes in vitro
currently used in our laboratory,
and examples describing applications of the method are provided.
Biochemistry, Issue 92, Reconstitution, Photosynthesis, Chlorophyll, Carotenoids, Light Harvesting Protein, Chlamydomonas reinhardtii, Arabidopsis thaliana
AFM-based Mapping of the Elastic Properties of Cell Walls: at Tissue, Cellular, and Subcellular Resolutions
Institutions: Université Paris Diderot, INRA Centre de Versailles-Grignon.
We describe a recently developed method to measure mechanical properties of the surfaces of plant tissues using atomic force microscopy (AFM) micro/nano-indentations, for a JPK AFM. Specifically, in this protocol we measure the apparent Young’s modulus of cell walls at subcellular resolutions across regions of up to 100 µm x 100 µm in floral meristems, hypocotyls, and roots. This requires careful preparation of the sample, the correct selection of micro-indenters and indentation depths. To account for cell wall properties only, measurements are performed in highly concentrated solutions of mannitol in order to plasmolyze the cells and thus remove the contribution of cell turgor pressure.
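As background (our illustrative note, not necessarily the exact model used by the authors), the apparent Young's modulus in such indentation experiments is usually obtained by fitting each force-indentation curve to a contact model; for a spherical tip of radius R the Hertz model gives approximately F = (4/3)(E/(1 - v^2)) R^(1/2) d^(3/2), where F is the applied force, d the indentation depth, v the Poisson ratio and E the apparent Young's modulus.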
In contrast to other extant techniques, by using different indenters and indentation depths, this method allows simultaneous multiscale measurements, i.e.
at subcellular resolutions and across hundreds of cells comprising a tissue. This means that it is now possible to spatially-temporally characterize the changes that take place in the mechanical properties of cell walls during development, enabling these changes to be correlated with growth and differentiation. This represents a key step to understand how coordinated microscopic cellular changes bring about macroscopic morphogenetic events.
However, several limitations remain: the method can only be used on fairly small samples (around 100 µm in diameter) and only on external tissues; the method is sensitive to tissue topography; it measures only certain aspects of the tissue’s complex mechanical properties. The technique is being developed rapidly and it is likely that most of these limitations will be resolved in the near future.
Plant Biology, Issue 89, Tissue growth, Cell wall, Plant mechanics, Elasticity, Young’s modulus, Root, Apical meristem, Hypocotyl, Organ formation, Biomechanics, Morphogenesis
Multi-analyte Biochip (MAB) Based on All-solid-state Ion-selective Electrodes (ASSISE) for Physiological Research
Institutions: Purdue University, NASA Ames Research Center, Pennsylvania State University Hazleton, Cooley LLP, NASA Headquarters.
Lab-on-a-chip (LOC) applications in environmental, biomedical, agricultural, biological, and spaceflight research require an ion-selective electrode (ISE) that can withstand prolonged storage in complex biological media 1-4
. An all-solid-state ion-selective-electrode (ASSISE) is especially attractive for the aforementioned applications. The electrode should have the following favorable characteristics: easy construction, low maintenance, and (potential for) miniaturization, allowing for batch processing. A microfabricated ASSISE intended for quantifying H+
, and CO32-
ions was constructed. It consists of a noble-metal electrode layer (i.e.
Pt), a transduction layer, and an ion-selective membrane (ISM) layer. The transduction layer functions to transduce the concentration-dependent chemical potential of the ion-selective membrane into a measurable electrical signal.
The lifetime of an ASSISE is found to depend on maintaining the potential at the conductive layer/membrane interface5-7. To extend the ASSISE working lifetime and thereby maintain stable potentials at the interfacial layers, we utilized the conductive polymer (CP) poly(3,4-ethylenedioxythiophene) (PEDOT)7-9 in place of silver/silver chloride (Ag/AgCl) as the transducer layer. We constructed the ASSISE in a lab-on-a-chip format, which we called the multi-analyte biochip (MAB) (Figure 1).
Calibrations in test solutions demonstrated that the MAB can monitor pH (operational range pH 4-9), CO32- (measured range 0.01 mM - 1 mM), and Ca2+ (log-linear range 0.01 mM to 1 mM). The MAB for pH provides a near-Nernstian slope response after almost one month of storage in algal medium. The carbonate biochips show a potentiometric profile similar to that of a conventional ion-selective electrode. Physiological measurements were employed to monitor biological activity of the model system, the microalga Chlorella vulgaris.
The MAB conveys an advantage in size, versatility, and multiplexed analyte sensing capability, making it applicable to many confined monitoring situations, on Earth or in space.
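As an aside on the calibration just described, the sketch below shows one generic way a near-Nernstian slope is checked: regress electrode potential on log10 of ion activity and compare the slope with the ideal value of roughly 59.2/z mV per decade at 25 °C. The calibration points are invented and are not the MAB data.

```python
# Illustrative sketch: extracting an ion-selective electrode calibration slope
# by linear regression of measured potential vs. log10(activity) and comparing
# it with the ideal Nernstian slope (about 59.2/z mV per decade at 25 degC).
# The calibration points below are invented for demonstration only.
import numpy as np

def calibration_slope(activities_m, potentials_mv):
    """Return (slope mV/decade, intercept mV) of E vs log10(activity)."""
    log_a = np.log10(activities_m)
    slope, intercept = np.polyfit(log_a, potentials_mv, 1)
    return slope, intercept

def nernst_slope_mv(z, temp_c=25.0):
    """Ideal Nernstian slope in mV per decade for an ion of charge z."""
    R, F = 8.314, 96485.0
    return 1000.0 * np.log(10) * R * (temp_c + 273.15) / (z * F)

activities = np.array([1e-5, 1e-4, 1e-3, 1e-2])      # mol/L, hypothetical
potentials = np.array([102.0, 131.5, 160.8, 190.1])  # mV, hypothetical

slope, _ = calibration_slope(activities, potentials)
print(f"measured slope  : {slope:.1f} mV/decade")
print(f"Nernstian (z=+1): {nernst_slope_mv(1):.1f} mV/decade")
```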
Biochip Design and Experimental Methods
The biochip is 10 x 11 mm in dimension and has 9 ASSISEs designated as working electrodes (WEs) and 5 Ag/AgCl reference electrodes (REs). Each working electrode (WE) is 240 μm in diameter and is equally spaced at 1.4 mm from the REs, which are 480 μm in diameter. These electrodes are connected to electrical contact pads with a dimension of 0.5 mm x 0.5 mm. The schematic is shown in Figure 2.
Cyclic voltammetry (CV) and galvanostatic deposition methods are used to electropolymerize the PEDOT films using a Bioanalytical Systems Inc. (BASI) C3 cell stand (Figure 3). The counter-ion for the PEDOT film is tailored to suit the analyte ion of interest. A PEDOT with a poly(styrenesulfonate) counter-ion (PEDOT/PSS) is utilized for H+, while one with sulphate (added to the solution as CaSO4) is utilized for Ca2+. The electrochemical properties of the PEDOT-coated WEs are analyzed using CVs in a redox-active solution (i.e., 2 mM potassium ferricyanide, K3[Fe(CN)6]). Based on the CV profile, Randles-Sevcik analysis was used to determine the effective surface area10. Spin-coating at 1,500 rpm is used to cast ~2 μm thick ion-selective membranes (ISMs) on the MAB working electrodes (WEs).
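The Randles-Sevcik step above lends itself to a short numeric sketch. The version below assumes a reversible one-electron couple at 25 °C, a literature-style diffusion coefficient for ferricyanide, and invented peak currents and scan rates; it illustrates the calculation only and does not reproduce the biochip data.

```python
# Minimal sketch of a Randles-Sevcik estimate of effective electrode area.
# For a reversible couple at 25 degC: ip = 2.69e5 * n^1.5 * A * sqrt(D) * C * sqrt(v)
# with ip in A, A in cm^2, D in cm^2/s, C in mol/cm^3, and v in V/s.
# Peak currents, scan rates, and the default D are assumed illustrative values.
import numpy as np

def effective_area_cm2(ip_amps, scan_v_per_s, n=1, D_cm2_s=7.2e-6, C_mol_cm3=2e-6):
    """Area from the slope of peak current vs sqrt(scan rate)."""
    x = np.sqrt(scan_v_per_s)
    slope = np.sum(x * ip_amps) / np.sum(x * x)          # ip = slope * sqrt(v)
    return slope / (2.69e5 * n**1.5 * np.sqrt(D_cm2_s) * C_mol_cm3)

scan_rates = np.array([0.01, 0.025, 0.05, 0.1])                       # V/s
peak_currents = np.array([0.040e-6, 0.063e-6, 0.089e-6, 0.126e-6])    # A (hypothetical)

print(f"effective area ~ {effective_area_cm2(peak_currents, scan_rates):.2e} cm^2")
```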
The MAB is contained in a microfluidic flow-cell chamber filled with a 150 μl volume of algal medium; the contact pads are electrically connected to the BASI system (Figure 4). The photosynthetic activity of Chlorella vulgaris is monitored in ambient light and dark conditions.
Bioengineering, Issue 74, Medicine, Biomedical Engineering, Chemical Engineering, Electrical Engineering, Mechanical Engineering, Chemistry, Biochemistry, Anatomy, Physiology, Miniaturization, Microtechnology, Electrochemical Techniques, electrochemical processes, astrobiology, Analytical, Diagnostic and Therapeutic Techniques and Equipment, Investigative Techniques, Technology, Industry, Agriculture, electrochemical sensor, all-solid-state ion-selective electrode (ASSISE), conductive polymer transducer, poly(3,4-ethylenedioxythiophene) (PEDOT), lab-on-a-chip, Chlorella vulgaris, photosynthesis, microfluidics
Using Flatbed Scanners to Collect High-resolution Time-lapsed Images of the Arabidopsis Root Gravitropic Response
Institutions: Doane College, Doane College.
Research efforts in biology increasingly require use of methodologies that enable high-volume collection of high-resolution data. A challenge laboratories can face is the development and attainment of these methods. Observation of phenotypes in a process of interest is a typical objective of research labs studying gene function and this is often achieved through image capture. A particular process that is amenable to observation using imaging approaches is the corrective growth of a seedling root that has been displaced from alignment with the gravity vector. Imaging platforms used to measure the root gravitropic response can be expensive, relatively low in throughput, and/or labor intensive. These issues have been addressed by developing a high-throughput image capture method using inexpensive, yet high-resolution, flatbed scanners. Using this method, images can be captured every few minutes at 4,800 dpi. The current setup enables collection of 216 individual responses per day. The image data collected is of ample quality for image analysis applications.
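By way of illustration only, a capture loop of the kind described could look like the sketch below, which drives a flatbed scanner through the SANE `scanimage` command-line tool. The device options, interval, and output naming are assumptions and would need to be adapted to the actual scanners and drivers used.

```python
# Minimal sketch of a time-lapse capture loop driving a flatbed scanner through
# the SANE command-line tool `scanimage`. The output directory, interval, and
# number of scans are assumptions for illustration; device-specific flags may
# also be required and may differ from the published setup.
import subprocess
import time
from datetime import datetime
from pathlib import Path

OUTPUT_DIR = Path("gravitropism_scans")   # hypothetical output folder
INTERVAL_S = 300                          # one scan every 5 minutes (assumed)
RESOLUTION = 4800                         # dpi, as reported in the abstract
NUM_SCANS = 12

OUTPUT_DIR.mkdir(exist_ok=True)
for i in range(NUM_SCANS):
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    outfile = OUTPUT_DIR / f"scan_{i:03d}_{stamp}.tiff"
    with open(outfile, "wb") as fh:
        # scanimage writes the image to stdout; --format and --resolution are
        # standard SANE options.
        subprocess.run(
            ["scanimage", "--format=tiff", f"--resolution={RESOLUTION}"],
            stdout=fh,
            check=True,
        )
    time.sleep(INTERVAL_S)
```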
Basic Protocol, Issue 83, root gravitropism, Arabidopsis, high-throughput phenotyping, flatbed scanners, image analysis, undergraduate research
One-channel Cell-attached Patch-clamp Recording
Institutions: University at Buffalo, SUNY, University at Buffalo, SUNY, The Scripps Research Institute, University at Buffalo, SUNY.
Ion channel proteins are universal devices for fast communication across biological membranes. The temporal signature of the ionic flux they generate depends on properties intrinsic to each channel protein as well as the mechanism by which it is generated and controlled and represents an important area of current research. Information about the operational dynamics of ion channel proteins can be obtained by observing long stretches of current produced by a single molecule. Described here is a protocol for obtaining one-channel cell-attached patch-clamp current recordings for a ligand gated ion channel, the NMDA receptor, expressed heterologously in HEK293 cells or natively in cortical neurons. Also provided are instructions on how to adapt the method to other ion channels of interest by presenting the example of the mechano-sensitive channel PIEZO1. This method can provide data regarding the channel’s conductance properties and the temporal sequence of open-closed conformations that make up the channel’s activation mechanism, thus helping to understand their functions in health and disease.
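To make the idea of extracting gating information from long one-channel records concrete, the sketch below idealizes a synthetic current trace with a simple half-amplitude threshold and tabulates open and closed dwell times. This is a cartoon of the analysis step; dedicated single-channel software applies more rigorous idealization and missed-event corrections.

```python
# Illustrative sketch: idealizing a one-channel current trace with a
# half-amplitude threshold and extracting open/closed dwell times.
# The synthetic trace and the 50% threshold are simplifications.
import numpy as np

def idealize(current_pa, open_level_pa, closed_level_pa=0.0):
    """Return a boolean array: True where the channel is judged open."""
    threshold = closed_level_pa + 0.5 * (open_level_pa - closed_level_pa)
    return current_pa > threshold if open_level_pa > closed_level_pa else current_pa < threshold

def dwell_times(is_open, dt_s):
    """Durations (s) of consecutive open and closed sojourns."""
    change = np.flatnonzero(np.diff(is_open.astype(int))) + 1
    edges = np.concatenate(([0], change, [is_open.size]))
    durations = np.diff(edges) * dt_s
    states = is_open[edges[:-1]]
    return durations[states], durations[~states]        # open, closed

# Synthetic 1 s trace at 10 kHz: 5 pA openings plus baseline noise.
rng = np.random.default_rng(1)
dt = 1e-4
gate = (rng.random(10_000) < 0.3).astype(float)          # crude open/closed pattern
trace = 5.0 * gate + 0.5 * rng.standard_normal(10_000)   # pA

open_dw, closed_dw = dwell_times(idealize(trace, open_level_pa=5.0), dt)
print(f"mean open time  : {open_dw.mean()*1e3:.2f} ms")
print(f"mean closed time: {closed_dw.mean()*1e3:.2f} ms")
```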
Neuroscience, Issue 88, biophysics, ion channels, single-channel recording, NMDA receptors, gating, electrophysiology, patch-clamp, kinetic analysis
Introduction to Solid Supported Membrane Based Electrophysiology
Institutions: Max Planck Institute of Biophysics, Goethe University Frankfurt.
The electrophysiological method we present is based on a solid supported membrane (SSM) composed of an octadecanethiol layer chemisorbed on a gold coated sensor chip and a phosphatidylcholine monolayer on top. This assembly is mounted into a cuvette system containing the reference electrode, a chlorinated silver wire.
After adsorption of membrane fragments or proteoliposomes containing the membrane protein of interest, a fast solution exchange is used to induce the transport activity of the membrane protein. In the single solution exchange protocol, two solutions are needed: one non-activating and one activating. The flow is controlled by pressurized air and a valve and tubing system within a Faraday cage.
The kinetics of the electrogenic transport activity are obtained via capacitive coupling between the SSM and the proteoliposomes or membrane fragments. The method, therefore, yields only transient currents. The peak current represents the stationary transport activity. The time-dependent transporter currents can be reconstructed by circuit analysis.
This method is especially suited for prokaryotic transporters or eukaryotic transporters from intracellular membranes, which cannot be investigated by patch clamp or voltage clamp methods.
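As a minimal illustration of the read-outs mentioned above, the sketch below takes a synthetic SSM-style transient and reports its peak current and the translocated charge from numerical integration. A full reconstruction of the transporter current requires an equivalent-circuit model of the SSM/proteoliposome system, which is beyond this toy example.

```python
# Minimal sketch for a solid-supported-membrane transient: peak current and
# translocated charge by trapezoidal integration. The biexponential "recording"
# is synthetic; real analysis reconstructs the transporter current with a full
# equivalent-circuit model.
import numpy as np

def analyze_transient(t_s, i_amps):
    """Return (peak current A, translocated charge C) of a transient current."""
    peak = i_amps[np.argmax(np.abs(i_amps))]
    charge = np.sum(0.5 * (i_amps[1:] + i_amps[:-1]) * np.diff(t_s))   # integral of I dt
    return peak, charge

# Synthetic transient: fast rise (5 ms) and slower capacitive decay (50 ms).
t = np.linspace(0, 0.5, 5000)                              # s
i = 2e-9 * (np.exp(-t / 0.05) - np.exp(-t / 0.005))        # A

peak, q = analyze_transient(t, i)
print(f"peak current       : {peak*1e9:.2f} nA")
print(f"translocated charge: {q*1e9:.2f} nC")
```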
Biochemistry, Issue 75, Biophysics, Molecular Biology, Cellular Biology, Physiology, Proteins, Membrane Lipids, Membrane Transport Proteins, Kinetics, Electrophysiology, solid supported membrane, SSM, membrane transporter, lactose permease, lacY, capacitive coupling, solution exchange, model membrane, membrane protein, transporter, kinetics, transport mechanism
The Xenopus Oocyte Cut-open Vaseline Gap Voltage-clamp Technique With Fluorometry
Institutions: Washington University in St. Louis.
The cut-open oocyte Vaseline gap (COVG) voltage clamp technique allows for analysis of electrophysiological and kinetic properties of heterologous ion channels in oocytes. Recordings from the cut-open setup are particularly useful for resolving low magnitude gating currents, rapid ionic current activation, and deactivation. The main benefits over the two-electrode voltage clamp (TEVC) technique include increased clamp speed, improved signal-to-noise ratio, and the ability to modulate the intracellular and extracellular milieu.
Here, we employ the human cardiac sodium channel (hNaV1.5), expressed in Xenopus oocytes, to demonstrate the cut-open setup and protocol as well as modifications that are required to add voltage clamp fluorometry capability.
The properties of fast-activating ion channels, such as hNaV1.5, cannot be fully resolved near room temperature using TEVC, in which the entirety of the oocyte membrane is clamped, making voltage control difficult. However, in the cut-open technique, isolation of only a small portion of the cell membrane allows for the rapid clamping required to accurately record fast kinetics while preventing channel run-down associated with patch clamp techniques.
In conjunction with the COVG technique, ion channel kinetics and electrophysiological properties can be further assayed by using voltage clamp fluorometry, where protein motion is tracked via cysteine conjugation of extracellularly applied fluorophores, insertion of genetically encoded fluorescent proteins, or the incorporation of unnatural amino acids into the region of interest1. This additional data yields kinetic information about voltage-dependent conformational rearrangements of the protein via changes in the microenvironment surrounding the fluorescent molecule.
Developmental Biology, Issue 85, Voltage clamp, Cut-open, Oocyte, Voltage Clamp Fluorometry, Sodium Channels, Ionic Currents, Xenopus laevis
Isolation, Culture, and Functional Characterization of Adult Mouse Cardiomyocytes
Institutions: Beth Israel Deaconess Medical Center, Harvard Medical School, Sapienza University.
The use of primary cardiomyocytes (CMs) in culture has provided a powerful complement to murine models of heart disease in advancing our understanding of its underlying mechanisms. In particular, the ability to study ion homeostasis, ion channel function, cellular excitability and excitation-contraction coupling, and their alterations in diseased conditions and by disease-causing mutations, has led to significant insights into cardiac diseases. Furthermore, the lack of an adequate immortalized cell line to mimic adult CMs, and the limitations of neonatal CMs in culture (which lack many of the structural and functional biomechanics characteristic of adult CMs), have hampered our understanding of the complex interplay between signaling pathways, ion channels, and contractile properties in the adult heart, strengthening the importance of studying isolated adult cardiomyocytes. Here, we present methods for the isolation, culture, and manipulation of gene expression by adenovirally expressed proteins, and the subsequent functional analysis of cardiomyocytes from the adult mouse. The use of these techniques will help to develop mechanistic insight into signaling pathways that regulate cellular excitability, Ca2+ dynamics, and contractility, and provide a much more physiologically relevant characterization of cardiovascular disease.
Cellular Biology, Issue 79, Medicine, Cardiology, Cellular Biology, Anatomy, Physiology, Mice, Ion Channels, Primary Cell Culture, Cardiac Electrophysiology, adult mouse cardiomyocytes, cell isolation, IonOptix, Cell Culture, adenoviral transfection, patch clamp, fluorescent nanosensor
Yeast Luminometric and Xenopus Oocyte Electrophysiological Examinations of the Molecular Mechanosensitivity of TRPV4
Institutions: University of Wisconsin – Madison, University of Wisconsin – Madison.
TRPV4 (Transient Receptor Potential, vanilloid family, type 4) is widely expressed in vertebrate tissues and is activated by several stimuli, including mechanical forces. Certain TRPV4 mutations cause complex hereditary bone or neuronal pathologies in humans. Wild-type or mutant TRPV4 transgenes are commonly expressed in cultured mammalian cells and examined by Fura-2 fluorometry and by electrodes. In terms of the mechanism of mechanosensitivity and the molecular bases of the diseases, the current literature is confusing and controversial. To complement existing methods, we describe two additional methods to examine the molecular properties of TRPV4. (1) Rat TRPV4 and an aequorin transgene are transformed into budding yeast. A hypo-osmotic shock of the transformant population yields a luminometric signal due to the combination of aequorin with Ca2+ released through the TRPV4 channel. Here TRPV4 is isolated from its usual mammalian partner proteins and reveals its own mechanosensitivity. (2) cRNA of TRPV4 is injected into Xenopus oocytes. After a suitable period of incubation, the macroscopic TRPV4 current is examined with a two-electrode voltage clamp. The current rise upon removal of inert osmoticum from the oocyte bath is indicative of mechanosensitivity. The microampere (10^-6 A) currents from oocytes are much larger than the subnano- to nanoampere (10^-10 A) currents from cultured cells, yielding clearer quantifications and more confident assessments. Microscopic currents reflecting the activities of individual channel proteins can also be directly registered under a patch clamp, in on-cell or excised mode. The same oocyte provides multiple patch samples, allowing better data replication. Suctions applied to the patches can activate TRPV4 to directly assess mechanosensitivity. These methods should also be useful in the study of other types of TRP channels.
Basic Protocol, Issue 82, Eukaryota, Archaea, Bacteria, Life Sciences (General), Mechanosensation, Ion channels, Lipids, patch clamp, Xenopus Oocytes, yeast, luminometry, force sensing, voltage clamp, TRPV4, electrophysiology
Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production as many technological obstacles to scale up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and monosaccharide had no effect on the protein production level. Infiltrating plants under 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with the Agrobacterium laboratory strain GV3101 showed the highest protein production compared to Agrobacterium laboratory strains LBA4404 and C58C1 and wild-type Agrobacterium strains at6, at10, at77 and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of target protein (influenza virus hemagglutinin).
Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria
Measuring Cation Transport by Na,K- and H,K-ATPase in Xenopus Oocytes by Atomic Absorption Spectrophotometry: An Alternative to Radioisotope Assays
Institutions: Technical University of Berlin, Oregon Health & Science University.
Whereas cation transport by the electrogenic membrane transporter Na+,K+-ATPase can be measured by electrophysiology, the electroneutrally operating gastric H+,K+-ATPase is more difficult to investigate. Many transport assays utilize radioisotopes to achieve a sufficient signal-to-noise ratio; however, the necessary safety measures impose severe restrictions regarding human exposure or assay design. Furthermore, ion transport across cell membranes is critically influenced by the membrane potential, which is not straightforwardly controlled in cell culture or in proteoliposome preparations. Here, we make use of the outstanding sensitivity of atomic absorption spectrophotometry (AAS) towards trace amounts of chemical elements to measure Rb+ transport by Na+,K+- or gastric H+,K+-ATPase in single cells. Using Xenopus oocytes as the expression system, we determine the amount of Rb+ transported into the cells by measuring samples of single-oocyte homogenates in an AAS device equipped with a transversely heated graphite atomizer (THGA) furnace, which is loaded from an autosampler. Since the background of unspecific Rb+ uptake into control oocytes or during application of ATPase-specific inhibitors is very small, it is possible to implement complex kinetic assay schemes involving a large number of experimental conditions simultaneously, or to compare the transport capacity and kinetics of site-specifically mutated transporters with high precision. Furthermore, since cation uptake is determined on single cells, the flux experiments can be carried out in combination with two-electrode voltage-clamping (TEVC) to achieve accurate control of the membrane potential and current. This allowed, for example, the quantitative determination of the 3Na+ transport stoichiometry of the Na+,K+-ATPase and enabled, for the first time, investigation of the voltage dependence of cation transport by the electroneutrally operating gastric H+,K+-ATPase. In principle, the assay is not limited to K+-transporting membrane proteins, but it may work equally well to address the activity of heavy or transition metal transporters, or uptake of chemical elements by endocytotic processes.
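For orientation, the sketch below shows the final data-reduction step implied here: converting AAS absorbance readings of single-oocyte homogenates into an Rb+ amount per oocyte through a linear calibration curve. All standards, readings, and the homogenate volume are invented, and commercial AAS software typically performs this conversion automatically.

```python
# Illustrative sketch: converting AAS absorbance readings of single-oocyte
# homogenates into Rb+ content via a linear calibration curve. Standard and
# sample values are invented for demonstration.
import numpy as np

# Hypothetical Rb+ standards (ug/L) and their measured absorbances.
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
std_abs = np.array([0.002, 0.051, 0.098, 0.197, 0.391])

slope, intercept = np.polyfit(std_conc, std_abs, 1)

def rb_per_oocyte_pmol(absorbance, homogenate_volume_l=1e-3, rb_molar_mass=85.47):
    """Rb+ amount (pmol) in one oocyte homogenate from its absorbance."""
    conc_ug_per_l = (absorbance - intercept) / slope
    grams = conc_ug_per_l * 1e-6 * homogenate_volume_l
    return grams / rb_molar_mass * 1e12

for a in (0.120, 0.255):                       # hypothetical sample readings
    print(f"absorbance {a:.3f} -> {rb_per_oocyte_pmol(a):.1f} pmol Rb+")
```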
Biochemistry, Issue 72, Chemistry, Biophysics, Bioengineering, Physiology, Molecular Biology, electrochemical processes, physical chemistry, spectrophotometry (application), spectroscopic chemical analysis (application), life sciences, temperature effects (biological, animal and plant), Life Sciences (General), Na+,K+-ATPase, H+,K+-ATPase, Cation Uptake, P-type ATPases, Atomic Absorption Spectrophotometry (AAS), Two-Electrode Voltage-Clamp, Xenopus Oocytes, Rb+ Flux, Transversely Heated Graphite Atomizer (THGA) Furnace, electrophysiology, animal model
Single Cell Measurement of Dopamine Release with Simultaneous Voltage-clamp and Amperometry
Institutions: University of Florida , University of Florida .
After its release into the synaptic cleft, dopamine exerts its biological properties via its pre- and post-synaptic targets1. The dopamine signal is terminated by diffusion2-3, extracellular enzymes4, and membrane transporters5. The dopamine transporter, located in the peri-synaptic cleft of dopamine neurons, clears the released amines through an inward dopamine flux (uptake). The dopamine transporter can also work in the reverse direction to release amines from inside to outside in a process called outward transport or efflux of dopamine5. More than 20 years ago, Sulzer et al. reported that the dopamine transporter can operate in two modes of activity: forward (uptake) and reverse (efflux)5. The neurotransmitter released via efflux through the transporter can move a large amount of dopamine to the extracellular space, and has been shown to play a major regulatory role in extracellular dopamine homeostasis6. Here we describe how simultaneous patch clamp and amperometry recording can be used to measure dopamine released via the efflux mechanism with millisecond time resolution while the membrane potential is controlled. For this, whole-cell current and oxidative (amperometric) signals are measured simultaneously using an Axopatch 200B amplifier (Molecular Devices, with a low-pass Bessel filter set at 1,000 Hz for whole-cell current recording). For amperometry recording, a carbon fiber electrode is connected to a second amplifier (Axopatch 200B), placed adjacent to the plasma membrane, and held at +700 mV. The whole-cell and oxidative (amperometric) currents can be recorded and the current-voltage relationship can be generated using a voltage step protocol. Unlike the usual amperometric calibration, which requires conversion to concentration, the current is reported directly without considering the effective volume7. Thus, the resulting data represent a lower limit to dopamine efflux because some transmitter is lost to the bulk solution.
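A generic sketch of how a current-voltage relation is assembled from a voltage-step protocol is given below: the steady-state segment of each sweep is averaged and paired with its command voltage. The step values and synthetic currents are illustrative and are not taken from the recordings described here.

```python
# Illustrative sketch: building a current-voltage (I-V) relation from a
# voltage-step protocol by averaging the steady-state portion of each sweep.
# Step voltages and the synthetic "recorded" currents are invented.
import numpy as np

def iv_from_sweeps(sweeps, dt_s, steady_window_s=(0.15, 0.20)):
    """Mean current per sweep over a late, steady-state time window."""
    i0 = int(steady_window_s[0] / dt_s)
    i1 = int(steady_window_s[1] / dt_s)
    return np.array([np.mean(sw[i0:i1]) for sw in sweeps])

dt = 1e-4                                      # 10 kHz sampling
t = np.arange(0, 0.2, dt)
steps_mv = np.arange(-100, 61, 20)             # hypothetical step protocol

rng = np.random.default_rng(2)
sweeps = [
    (v - (-60.0)) * 0.05 * (1 - np.exp(-t / 0.01)) + 2.0 * rng.standard_normal(t.size)
    for v in steps_mv                          # toy ohmic current, reversal at -60 mV
]

for v, i in zip(steps_mv, iv_from_sweeps(sweeps, dt)):
    print(f"{v:5d} mV -> {i:7.1f} pA")
```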
Neuroscience, Issue 69, Cellular Biology, Physiology, Medicine, Simultaneous Patch Clamp and Voltametry, In Vitro Voltametry, Dopamine, Oxidation, Whole-cell Patch Clamp, Dopamine Transporter, Reverse transport, Efflux
Demonstration of Proteolytic Activation of the Epithelial Sodium Channel (ENaC) by Combining Current Measurements with Detection of Cleavage Fragments
Institutions: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU).
The described methods can be used to investigate the effect of proteases on ion channels, receptors, and other plasma membrane proteins heterologously expressed in Xenopus laevis oocytes. In combination with site-directed mutagenesis, this approach provides a powerful tool to identify functionally relevant cleavage sites. Proteolytic activation is a characteristic feature of the amiloride-sensitive epithelial sodium channel (ENaC). The final activating step involves cleavage of the channel’s γ-subunit in a critical region potentially targeted by several proteases including chymotrypsin and plasmin. To determine the stimulatory effect of these serine proteases on ENaC, the amiloride-sensitive whole-cell current (ΔIami) was measured twice in the same oocyte before and after exposure to the protease using the two-electrode voltage-clamp technique. In parallel to the electrophysiological experiments, a biotinylation approach was used to monitor the appearance of γENaC cleavage fragments at the cell surface. Using the methods described, it was demonstrated that the time course of proteolytic activation of ENaC-mediated whole-cell currents correlates with the appearance of a γENaC cleavage product at the cell surface. These results suggest a causal link between channel cleavage and channel activation. Moreover, they confirm the concept that a cleavage event in γENaC is required as a final step in proteolytic channel activation. The methods described here may well be applicable to address similar questions for other types of ion channels or membrane proteins.
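The ΔIami calculation itself is simple enough to show as a worked sketch: subtract the current measured in the presence of amiloride from the total whole-cell current, before and after protease, and express activation as their ratio. The current values below are invented for illustration.

```python
# Tiny sketch of the amiloride-sensitive current calculation: the
# ENaC-mediated current is the difference between whole-cell current measured
# without and with amiloride, and proteolytic activation is reported as the
# ratio of delta-I_ami after vs before protease. All currents are invented.

def delta_i_ami(i_total_ua, i_with_amiloride_ua):
    """Amiloride-sensitive whole-cell current in microamperes."""
    return i_total_ua - i_with_amiloride_ua

before = delta_i_ami(i_total_ua=-2.1, i_with_amiloride_ua=-0.3)   # hypothetical
after = delta_i_ami(i_total_ua=-7.8, i_with_amiloride_ua=-0.4)    # after protease

print(f"delta I_ami before protease: {before:.1f} uA")
print(f"delta I_ami after protease : {after:.1f} uA")
print(f"stimulation factor         : {after / before:.1f}-fold")
```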
Biochemistry, Issue 89, two-electrode voltage-clamp, electrophysiology, biotinylation, Xenopus laevis oocytes, epithelial sodium channel, ENaC, proteases, proteolytic channel activation, ion channel, cleavage sites, cleavage fragments
Deriving the Time Course of Glutamate Clearance with a Deconvolution Analysis of Astrocytic Transporter Currents
Institutions: National Institutes of Health.
The highest density of glutamate transporters in the brain is found in astrocytes. Glutamate transporters couple the movement of glutamate across the membrane with the co-transport of 3 Na+ and 1 H+ and the counter-transport of 1 K+. The stoichiometric current generated by the transport process can be monitored with whole-cell patch-clamp recordings from astrocytes. The time course of the recorded current is shaped by the time course of the glutamate concentration profile to which astrocytes are exposed, the kinetics of glutamate transporters, and the passive electrotonic properties of astrocytic membranes. Here we describe the experimental and analytical methods that can be used to record glutamate transporter currents in astrocytes and isolate the time course of glutamate clearance from all other factors that shape the waveform of astrocytic transporter currents. The methods described here can be used to estimate the lifetime of flash-uncaged and synaptically-released glutamate at astrocytic membranes in any region of the central nervous system during health and disease.
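The deconvolution idea can be sketched on synthetic data as below: a fast "clearance" waveform is blurred by a slower exponential filter (standing in for transporter kinetics and electrotonic filtering) and then recovered by regularized frequency-domain deconvolution. The real analysis derives the filter from measured responses rather than assuming one, so this is only a cartoon of the principle.

```python
# Cartoon sketch of deconvolution: recovering a "clearance" waveform from a
# recorded current that is the convolution of that waveform with a slower
# filter. Synthetic data only; the waveforms and time constants are invented.
import numpy as np

def deconvolve(recorded, kernel, eps=1e-3):
    """Frequency-domain deconvolution with simple regularization."""
    n = len(recorded)
    R = np.fft.rfft(recorded, n)
    K = np.fft.rfft(kernel, n)
    return np.fft.irfft(R * np.conj(K) / (np.abs(K) ** 2 + eps), n)

dt = 1e-4                                    # 10 kHz
t = np.arange(0, 0.2, dt)
clearance = np.exp(-t / 0.005)               # "true" 5 ms glutamate transient
kernel = np.exp(-t / 0.02)                   # slower filter, tau = 20 ms
kernel /= kernel.sum()

recorded = np.convolve(clearance, kernel)[: t.size]
estimate = deconvolve(recorded, kernel)

# Compare waveforms via the time to fall to 1/e of the peak.
for name, y in (("true", clearance), ("recorded", recorded), ("deconvolved", estimate)):
    peak = int(np.argmax(y))
    below = int(np.argmax(y[peak:] < y[peak] / np.e))
    print(f"{name:12s}: falls to 1/e of its peak ~{(peak + below) * dt * 1e3:.1f} ms after onset")
```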
Neurobiology, Issue 78, Neuroscience, Biochemistry, Molecular Biology, Cellular Biology, Anatomy, Physiology, Biophysics, Astrocytes, Synapses, Glutamic Acid, Membrane Transport Proteins, Astrocytes, glutamate transporters, uptake, clearance, hippocampus, stratum radiatum, CA1, gene, brain, slice, animal model
Choice and No-Choice Assays for Testing the Resistance of A. thaliana to Chewing Insects
Institutions: Cornell University.
Larvae of the small white cabbage butterfly (Pieris rapae) are a pest in agricultural settings. This caterpillar species feeds on plants in the cabbage family, which includes many crops such as cabbage, broccoli, and Brussels sprouts. Rearing of the insects takes place on cabbage plants in the greenhouse. At least two cages are needed for the rearing of Pieris rapae: one for the larvae and the other for the adults, the butterflies. In order to investigate the role of plant hormones and toxic plant chemicals in resistance to this insect pest, we demonstrate two experiments. First, we determine the role of jasmonic acid (JA - a plant hormone often implicated in resistance to insects) in resistance to the chewing insect Pieris rapae. Caterpillar growth can be compared on wild-type and mutant plants impaired in production of JA. This experiment is considered "No Choice", because larvae are forced to subsist on a single plant which either synthesizes or is deficient in JA. Second, we demonstrate an experiment that investigates the role of glucosinolates, which are used as oviposition (egg-laying) signals. Here, we use WT and mutant Arabidopsis impaired in glucosinolate production in a "Choice" experiment in which female butterflies are allowed to choose to lay their eggs on plants of either genotype. This video demonstrates the experimental setup for both assays as well as representative results.
Plant Biology, Issue 15, Annual Review, Plant Resistance, Herbivory, Arabidopsis thaliana, Pieris rapae, Caterpillars, Butterflies, Jasmonic Acid, Glucosinolates
Generation of Composite Plants in Medicago truncatula used for Nodulation Assays
Institutions: St. Louis, Missouri.
Similar to Agrobacterium tumefaciens, Agrobacterium rhizogenes can transfer foreign DNAs into plant cells based on the autonomous root-inducing (Ri) plasmid. A. rhizogenes can cause hairy root formation on plant tissues and form composite plants after transformation. On these composite plants, some of the regenerated roots are transgenic, carrying the wild-type T-DNA and the engineered binary vector, while the shoots are still non-transgenic, serving to provide energy and growth support. These hairy root composite plants will not produce transgenic seeds, but there are a number of important features that make them very useful in plant research. First, with a broad host range, A. rhizogenes can transform many plant species, especially dicots, allowing genetic engineering in a variety of species. Second, A. rhizogenes infects tissues and explants directly; no tissue culture prior to transformation is necessary to obtain composite plants, making the approach ideal for transforming recalcitrant plant species. Moreover, transgenic root tissues can be generated in a matter of weeks. For Medicago truncatula, we can obtain transgenic roots in as little as three weeks, faster than normal floral dip Arabidopsis transformation. Overall, the hairy root composite plant technology is a versatile and useful tool to study gene functions and root-related phenotypes. Here we demonstrate how hairy root composite plants can be used to study plant-rhizobium interactions and nodulation in the difficult-to-transform species M. truncatula.
Plant Biology, Issue 49, hairy root, composite plants, Medicago truncatula, rhizobia, GFP
Use of Arabidopsis eceriferum Mutants to Explore Plant Cuticle Biosynthesis
Institutions: University of British Columbia - UBC, University of British Columbia - UBC.
The plant cuticle is a waxy outer covering on plants that has a primary role in water conservation, but is also an important barrier against the entry of pathogenic microorganisms. The cuticle is made up of a tough crosslinked polymer called "cutin" and a protective wax layer that seals the plant surface. The waxy layer of the cuticle is obvious on many plants, appearing as a shiny film on the ivy leaf or as a dusty outer covering on the surface of a grape or a cabbage leaf thanks to light scattering crystals present in the wax. Because the cuticle is an essential adaptation of plants to a terrestrial environment, understanding the genes involved in plant cuticle formation has applications in both agriculture and forestry. Today, we'll show the analysis of plant cuticle mutants identified by forward and reverse genetics approaches.
Plant Biology, Issue 16, Annual Review, Cuticle, Arabidopsis, Eceriferum Mutants, Cryo-SEM, Gas Chromatography
Patch Clamp Recording of Ion Channels Expressed in Xenopus Oocytes
Institutions: Stanford University , Stanford University School of Medicine.
Since its development by Sakmann and Neher1,2, the patch clamp has become established as an extremely useful technique for electrophysiological measurement of single or multiple ion channels in cells. This technique can be applied to ion channels in both their native environment and expressed in heterologous cells, such as oocytes harvested from the African clawed frog, Xenopus laevis. Here, we describe the well-established technique of patch clamp recording from Xenopus oocytes. This technique is used to measure the properties of expressed ion channels either in populations (macropatch) or individually (single-channel recording). We focus on techniques to maximize the quality of oocyte preparation and seal generation. With all factors optimized, this technique gives a probability of successful seal generation of over 90 percent. The process may be optimized differently by every researcher based on the factors he or she finds most important, and we present the approach that has led to the greatest success in our hands.
Cellular Biology, Issue 20, Electrophysiology, Patch Clamp, Voltage Clamp, Oocytes, Biophysics, Gigaseal, Ion Channels
29 Jan 2015
Long the object of ivory tower fascination, quantum dots are entering the commercial realm. Factories that manufacture the nanomaterials are opening, and popular consumer products that use them are hitting the market.
Behind the gee-whiz technology are three companies with three different approaches to producing and delivering quantum dots.
Developed at Bell Labs in the 1980s, quantum dots are semiconducting inorganic particles small enough to force the quantum confinement of electrons. Ranging in size from 2 to 6 nm, the dots emit light after electrons are excited and return to the ground state. Larger ones emit red light, medium-sized ones emit green, and smaller ones emit blue.
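The size-color relationship can be illustrated with the simplest effective-mass ("Brus") estimate for CdSe dots, shown below. The bulk band gap, carrier effective masses, and dielectric constant are generic literature-style values, and the model neglects finer corrections, so the wavelengths indicate the trend rather than the performance of any commercial material.

```python
# Rough, first-order illustration of why bigger dots emit redder light: the
# effective-mass ("Brus") approximation for a CdSe quantum dot. Parameters are
# textbook-style values and the result is only an indicative trend.
import math

H = 6.626e-34         # Planck constant, J s
E_CHARGE = 1.602e-19  # C
EPS0 = 8.854e-12      # F/m
M0 = 9.109e-31        # electron mass, kg
C_LIGHT = 2.998e8     # m/s

def cdse_emission_nm(diameter_nm, eg_bulk_ev=1.74, me=0.13, mh=0.45, eps_r=10.6):
    """Approximate emission wavelength (nm) of a CdSe dot of given diameter."""
    r = diameter_nm * 1e-9 / 2.0
    confinement_j = (H**2 / (8.0 * r**2 * M0)) * (1.0 / me + 1.0 / mh)
    coulomb_j = 1.786 * E_CHARGE**2 / (4.0 * math.pi * eps_r * EPS0 * r)
    e_gap_j = eg_bulk_ev * E_CHARGE + confinement_j - coulomb_j
    return H * C_LIGHT / e_gap_j * 1e9

for d in (4.0, 5.0, 6.0):
    print(f"{d:.0f} nm dot -> ~{cdse_emission_nm(d):.0f} nm emission")
```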
Quantum dots have been proposed for all sorts of applications, including lighting and medical diagnostics, but the market that is taking off now is enhancing liquid-crystal displays (LCDs).
According to Yoosung Chung, an analyst who follows the quantum dot business for the consulting firm NPD DisplaySearch, last year saw the introduction of the first commercial display products to incorporate quantum dots: Bravia brand televisions from Sony and the Kindle Fire HDX tablet from Amazon. This year, the Chinese company TCL introduced a quantum-dot-containing TV and Taiwan’s Asus shipped a quantum dot laptop.
What quantum dots bring to displays is more vibrant colors generated with less energy. The liquid crystals in conventional LCD screens create colors by selectively filtering white light emitted by a light-emitting diode (LED) backlight, which typically runs along one edge of the screen. But that white light is broad spectrum and not optimal for producing the highly saturated reds, greens, and blues needed for lifelike images.
Jeff Yurek, a marketing manager at Nanosys, says the color performance of LCDs is only 70% of what is provided by more expensive organic light-emitting diode (OLED) displays.
Quantum-dot-enabled displays incorporate a backlight that gives off blue light, some of which the dots convert into pure red and green. The three colors combine into an improved white light that the LCDs draw on to create pictures that are almost as vivid as those achieved with OLEDs.
Moreover, because no light is wasted, energy costs are lowered. That’s important, according to Yurek, because the display accounts for half of the power consumed in a mobile device. By incorporating Nanosys’s quantum dots in its new HDX tablet, Amazon was able to cut display power consumption by 20%, he claims.
“Going from the HD to the HDX, they made a thinner, lighter, higher resolution, more colorful display with longer battery life,” Yurek says.
On the strength of demand from companies such as Amazon, Nanosys has been investing in its quantum dot plant in Milpitas, Calif. According to Yurek, the company is now completing an expansion that will more than double its output. Soon, he says, the firm will have the capacity to supply dots for 250 million 10-inch tablet devices a year.
Also expanding is QD Vision, a Lexington, Mass.-based firm founded on chemistry developed at Massachusetts Institute of Technology. Its dots can be found in Sony’s Bravia line and are set to appear in TVs made by TCL, which is the third-largest TV maker after Samsung and LG.
Seth Coe-Sullivan, QD Vision’s chief technology officer and cofounder, explains that his firm and Nanosys use the same basic manufacturing technique: They decompose organocadmium and other compounds at high heat in the presence of surfactants and solvents. The resulting monomers nucleate and form nanocrystals. Size can be controlled stoichiometrically or by thermally quenching the growing crystals.
Where the two firms differ is the way in which they embed quantum dots in a consumer product. Nanosys works with companies such as 3M to create quantum-dot-containing films that are placed between the LED backlight and the LCDs in tablets and other displays. For example, the Asus quantum-dot-containing laptop, known as the NX500 Notebook PC, incorporates the 3M/Nanosys film.
QD Vision, in contrast, encapsulates its quantum dots in a polymer matrix inside a glass tube that is placed directly against the LED backlight. It’s a hot environment but one that the dots can withstand, Coe-Sullivan says, because of how they are synthesized and packaged.
QD Vision manufactures its dots in Lexington and ships them to a contractor in Asia to be packaged in the tubes. The contractor is in the process of quadrupling capacity to 4 million tubes per month, which is enough, Coe-Sullivan says, to supply a quarter of the world’s TV industry.
He argues that his firm’s tube approach is suited to TVs and other large displays, whereas a film works better with smaller tablets and laptops. So far, marketplace adoption bears this contention out. “I honestly don’t feel our products compete with each other,” Coe-Sullivan says.
Dow, however, is throwing down the gauntlet against both approaches. Using technology licensed from the British firm Nanoco, Dow is developing cadmium-free quantum dots. It is betting that the display industry is uneasy with the cadmium content of dots from Nanosys and QD Vision and that it will flock to a cadmium-free alternative.
In September, Dow announced that it will use the Nanoco technology to build the world’s first large-scale, cadmium-free quantum dot plant at its site in Cheonan, South Korea. When the plant opens in the first half of 2015, Dow says, it will enable the manufacture of millions of quantum dot TVs and other display devices.
Dow and Nanoco haven’t disclosed the active material in their quantum dots and declined an interview with C&EN. They acknowledge that the dots contain indium but insist that they aren’t indium phosphide, as their competitors claim.
The use of one heavy metal versus another might not seem to make a big difference environmentally. But in the European Union, cadmium is one of six substances regulated by the Restriction of Hazardous Substances, or RoHS, directive. Cadmium cannot be present in electronics at levels above 100 ppm without an exemption.
Larger amounts of cadmium are allowed in LED-containing displays under an exemption that expired on July 1. Late last year, in a consultation process moderated by Oeko-Institut (Institute for Applied Ecology), a German nonprofit, the major quantum dot players made their cases for why the expiring exemption should or shouldn’t be extended.
Nanosys, QD Vision, 3M, and others lobbied for extension to at least 2019, arguing that the benefits of cadmium-based quantum dots outweigh any potential harm. One big reason is that they lower energy consumption by devices, meaning less use of coal in power plants and fewer of the cadmium emissions that can come from burning coal.
In April, Oeko recommended to the EU that the exemption be extended—but only to July 1, 2017, in light of emerging technology that could reduce or eliminate the need for cadmium quantum dots. Industry executives expect the EU to adopt the recommendation by the end of the year.
In their submissions to the consultation process, Dow and Nanoco argued that no extension is necessary because cadmium-free dots are already here. In fact, the Korea Times recently reported that LG and Samsung plan to launch cadmium-free TVs in 2015 with quantum dots from Dow.
Coe-Sullivan says he’ll believe it when he sees it. “The idea that the product is just around the corner has been around for a long time,” he observes. Cadmium-free displays from LG and Samsung were expected to appear at the recent IFA electronics trade show in Berlin, he says, but ended up being a no-show.
The reason, according to cadmium dot proponents, is that indium-based dots have about half the energy efficiency and a narrower color range. “Cad-free today does not have the same performance as cadmium-containing quantum dots,” Coe-Sullivan says. QD Vision and Nanosys also contend that indium-containing quantum dots aren’t environmentally superior, pointing to indium phosphide’s presence on a list of substances being considered for inclusion in RoHS.
Meanwhile, Coe-Sullivan notes, QD Vision has moved away from the metal-alkyl precursors and phosphorus-containing solvents that can make quantum dot manufacturing hazardous. It now uses metal-carboxylate precursors and more benign alkane solvents. Last month, the shift won it one of the Environmental Protection Agency’s Presidential Green Chemistry Challenge Awards.
Chung, the DisplaySearch analyst, is watching the jousting between the cadmium and cadmium-free camps with interest, although he isn’t ready to predict a winner yet. Display makers are concerned about cadmium, he notes, yet they also have qualms about the lower efficiency of cadmium-free quantum dots.
Chung may not know which technology will prevail, but he is sure about one thing. “Now is the time for quantum dots to penetrate the market,” he says.
29 Jan 2015
Researchers report a high-resolution method for printing quantum dots to make light-emitting diodes (Nano Lett. 2015, DOI: 10.1021/nl503779e). With further development, the technique could be used to print pixels for richly colored, low-power displays in cell phones and other electronic devices.
Quantum dots are appealing materials for displays because engineers can finely tune the light the semiconducting nanocrystals emit by controlling their dimensions.
Electronics makers already use quantum dots in some backlit displays on the market, in which red and green quantum dots convert blue light from a light-emitting diode (LED) into white light. Quantum dots also emit light in response to voltage changes, so researchers are looking into using them in red, green, and blue pixels in displays that wouldn’t need a backlight.
Quantum dot LED displays should provide richer colors and use less power than the liquid-crystal displays (LCDs) used in many flat screens, which require filters and polarizers that reduce efficiency and limit color quality. But it’s not yet clear how quantum dot LED displays would be made commercially, says John A. Rogers, a materials scientist at the University of Illinois, Urbana-Champaign.
In 2011, researchers at Samsung made the first full-color quantum dot LED display by using a rubber stamp to pick up and transfer quantum dot inks (Nat. Photonics, DOI: 10.1038/nphoton.2011.12). As a manufacturing strategy, printing from ink nozzles would offer more flexibility to change designs on the fly, without the need for making new transfer stamps. Jet printing also would require less material, Rogers says.
Unfortunately, the resolution of conventional ink-jet printers, which use a heating element to create a vapor bubble that forces ink droplets out of a nozzle, is limited. “It’s hard to get droplets smaller than about 25 µm,” Rogers says, because the smaller the nozzle diameter, the more pressure required to get the droplet out.
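One way to see why smaller nozzles demand more pressure is the Laplace pressure across the ink meniscus, which grows as the nozzle radius shrinks. The sketch below assumes a water-like surface tension and a hemispherical meniscus; real inks and nozzle geometries differ.

```python
# Quick sketch of the Laplace pressure across a hemispherical ink meniscus,
# delta_P = 2 * gamma / r, one reason smaller nozzles need much more pressure.
# A water-like surface tension is assumed; real inks differ.
GAMMA_N_PER_M = 0.072   # assumed surface tension (water-like)

def laplace_pressure_kpa(nozzle_diameter_um):
    radius_m = nozzle_diameter_um * 1e-6 / 2.0
    return 2.0 * GAMMA_N_PER_M / radius_m / 1e3

for d_um in (25.0, 5.0, 1.0):
    print(f"{d_um:4.0f} um nozzle -> ~{laplace_pressure_kpa(d_um):6.0f} kPa to push out a droplet")
```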
So for the past seven years, Rogers has been developing another method called electrohydrodynamic jet printing. This kind of printer works by pulling ink droplets out of the nozzle rather than pushing them, allowing for smaller droplets. An electric field at the nozzle opening causes ions to form on the meniscus of the ink droplet. The electric field pulls the ions forward, deforming the droplet into a conical shape. Then a tiny droplet shears off and lands on the printing surface. A computer program controls the printer by directing the movement of the substrate and varying the voltage at the nozzle to print a given pattern.
The Illinois researchers used this new method, including specialized quantum dot inks, to print lines on average about 500 nm wide. This allowed them to fabricate red and green quantum dot LEDs. They also showed they could carefully control the thickness of the printed film, which is difficult to do with stamp transfer and ink-jet printing methods.
The ultimate resolution possible with these kinds of printers is very high, says David J. Norris, a materials engineer at the Swiss Federal Institute of Technology (ETH), Zurich. Last year, Norris used a similar printing method to print spots containing as few as 10 quantum dots (Nano Lett. 2014, DOI: 10.1021/nl5026997). He says it’s even possible to place single quantum dots using electrohydrodynamic nozzles, albeit with less control and repeatability. Single-particle printing isn’t needed for making pixels for displays, but it is useful for studying other kinds of optical effects in quantum dots, he says.
An ultra-thin nanomaterial is at the heart of a major breakthrough by Univ. of Waterloo scientists who are in a global race to invent a cheaper, lighter and more powerful rechargeable battery for electric vehicles.
Chemistry Prof. Linda Nazar and her research team in the Faculty of Science at the Univ. of Waterloo have announced a breakthrough in lithium-sulphur battery technology in Nature Communications.
Their discovery of a material that maintains a rechargeable sulphur cathode helps to overcome a primary hurdle to building a lithium-sulphur (Li-S) battery. Such a battery can theoretically power an electric car three times further than current lithium-ion batteries for the same weight—at much lower cost.
“This is a major step forward and brings the lithium-sulphur battery one step closer to reality,” said Nazar, who also holds the Canada Research Chair in Solid State Energy Materials and was named a Highly Cited Researcher by Thomson Reuters.
Nazar’s group is best known for their 2009 Nature Materials paper demonstrating the feasibility of a Li-S battery using nanomaterials. In theory, sulphur can provide a competitive cathode material to lithium cobalt oxide in current lithium-ion cells.
Sulphur as a battery material is extremely abundant, relatively light and very cheap. Unfortunately, the sulphur cathode exhausts itself after only a few cycles because the sulphur dissolves into the electrolyte solution as it’s reduced by incoming electrons to form polysulphides.
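Sulphur's appeal can be made concrete with a back-of-the-envelope capacity estimate, sketched below: full reduction of sulphur to Li2S transfers two electrons per sulphur atom, giving a theoretical specific capacity near 1,670 mAh/g, compared with roughly 270 mAh/g (theoretical) for LiCoO2. These are textbook-style numbers, not figures from Nazar's paper.

```python
# Back-of-the-envelope comparison (not from the paper): theoretical specific
# capacity Q = n * F / (3.6 * M) in mAh/g for a sulfur cathode (2 e- per S,
# fully reduced to Li2S) versus LiCoO2 (at most 1 e- per formula unit).
F_C_PER_MOL = 96485.0

def capacity_mah_per_g(n_electrons, molar_mass_g_mol):
    return n_electrons * F_C_PER_MOL / (3.6 * molar_mass_g_mol)

print(f"sulfur : ~{capacity_mah_per_g(2, 32.06):.0f} mAh/g")
print(f"LiCoO2 : ~{capacity_mah_per_g(1, 97.87):.0f} mAh/g (theoretical)")
```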
Nazar’s group originally thought that porous carbons or graphenes could stabilize the polysulphides by physically trapping them. But in an unexpected twist, they discovered metal oxides could be the key. Their initial work on a metallic titanium oxide was published earlier in August in Nature Communications.
While the researchers found since then that nanosheets of manganese dioxide (MnO2) work even better than titanium oxides, their main goal in this paper was to clarify the mechanism at work.
“You have to focus on the fundamental understanding of the phenomenon before you can develop new, advanced materials,” said Nazar.
They found that the oxygenated surface of the ultrathin MnO2 nanosheet chemically recycles the sulphides in a two-step process involving a surface-bound intermediate, polythiosulfate. The result is a high-performance cathode that can be recharged for more than 2,000 cycles.
The surface reaction is similar to the chemical process behind Wackenroder’s Solution discovered in 1845 during a golden age of German sulfur chemistry.
“Very few researchers study or even teach sulphur chemistry anymore,” said Nazar. “It’s ironic we had to look so far back in the literature to understand something that may so radically change our future.”
Source: Univ. of Waterloo
22 Jan 2015
A new tool capable of carrying out simultaneous nano-sized measurements could soon lead to more innovative nanotech-based products and help boost the EU economy. Indeed the tool, developed by scientists cooperating through the EU-funded UNIVSEM project, has the potential to revolutionise research and development in a number of sectors, ranging from electronics and energy to biomedicine and consumer products.
Nanotechnology, which involves the manipulation of matter at the atomic and molecular scale, has led to new materials – such as graphene – and microscopic devices that include new surgical tools and medicines. Up until now however, nanotech R&D has been hampered by the fact that it has not been possible to achieve simultaneous information on 3D structure, chemical composition and surface properties.
This is what makes the UNIVSEM project, due for completion in March 2015, so innovative. By integrating different sensors capable of measuring these different aspects of nano-sized materials, EU scientists have created a single instrument that enables researchers to work much more efficiently. By providing clearer visual and other sensory information, the tool will help scientists to manipulate nano-sized particles with greater ease and help cut R&D costs for industry.
The project team began in April 2012 by developing a vacuum chamber capable of accommodating the complex sensory tools required. In parallel, they significantly improved the capabilities of each individual analytical technique. This means that users now need just one instrument to achieve key capabilities such as vision and chemical analysis.
Preliminary tests demonstrated that the achieved optical resolution of 360 nanometres (nm) far exceeds the original 500 nm target set out at the start of the project. This should be of significant interest to numerous sectors where cost-efficient but incredibly precise measurements are required, such as in the manufacture of nano-sized surgical tools and nano-medicines.
Electronics is another key area. For example, the UNIVSEM project could help scientists learn more about the properties of quasiparticles such as plasmons. Since plasmons can support much higher frequencies than today’s silicon based chips, researchers believe they could be the future for optical connections on next-generation computer chips.
Plasmon research could also lead to the development of new lasers and molecular-imaging systems, and increase solar cell efficiencies due to their interaction with light. Another exciting area of nanotechnology concerns silver nanowires (AgNWs). These nanowires can form a transparent conductive network, and thus are a promising candidate for solar cell contacts or transparent layers in displays.
The next stage is the commercialisation of the instrument. The multi-modal tool is expected to spur nanotechnology development and enhanced quality control in numerous areas – such as the development of third generation solar cells – and create new opportunities in sectors that have until now not fully tapped into the potential of nanotechnology.
More information: For further information, please visit: www.univsem.eu/
22 Jan 2015
Super-hydrophobic materials are desirable for a number of applications such as rust prevention, anti-icing, or even in sanitation uses. However, as Rochester’s Chunlei Guo explains, most current hydrophobic materials rely on chemical coatings.
In a paper published today in the Journal of Applied Physics, Guo and his colleague at the University’s Institute of Optics, Anatoliy Vorobyev, describe a powerful and precise laser-patterning technique that creates an intricate pattern of micro- and nanoscale structures to give the metals their new properties. This work builds on earlier research by the team in which they used a similar laser-patterning technique that turned metals black. Guo states that using this technique they can create multifunctional surfaces that are not only super-hydrophobic but also highly-absorbent optically.
University of Rochester’s Institute of Optics Professor Chunlei Guo has developed a technique that uses lasers to render materials hydrophobic, illustrated in this image of a water droplet bouncing off a treated sample. Credit: J. Adam Fenster/University of Rochester
Guo adds that one of the big advantages of his team’s process is that “the structures created by our laser on the metals are intrinsically part of the material surface.” That means they won’t rub off. And it is these patterns that make the metals repel water.
“The material is so strongly water-repellent, the water actually gets bounced off. Then it lands on the surface again, gets bounced off again, and then it will just roll off from the surface,” said Guo, professor of optics at the University of Rochester. That whole process takes less than a second.
A femtosecond laser created detailed hierarchical structures in the metals, as shown in this SEM image of the platinum surface. Credit: The Guo Lab/University of Rochester
The materials Guo has created are much more slippery than Teflon—a common hydrophobic material that often coats nonstick frying pans. Unlike Guo’s laser-treated metals, the Teflon kitchen tools are not super-hydrophobic. The difference is that to make water roll off a Teflon-coated material, you need to tilt the surface to nearly a 70-degree angle before the water begins to slide off. You can make water roll off Guo’s metals by tilting them less than five degrees.
As the water bounces off the super-hydrophobic surfaces, it also collects dust particles and takes them along for the ride. To test this self-cleaning property, Guo and his team took ordinary dust from a vacuum cleaner and dumped it onto the treated surface. Roughly half of the dust particles were removed with just three drops of water. It took only a dozen drops to leave the surface spotless. Better yet, it remains completely dry.
Guo is excited by potential applications of super-hydrophobic materials in developing countries. It is this potential that has piqued the interest of the Bill and Melinda Gates Foundation, which has supported the work.
“In these regions, collecting rain water is vital and using super-hydrophobic materials could increase the efficiency without the need to use large funnels with high-pitched angles to prevent water from sticking to the surface,” says Guo. “A second application could be creating latrines that are cleaner and healthier to use.”
Latrines are a challenge to keep clean in places with little water. By incorporating super-hydrophobic materials, a latrine could remain clean without the need for water flushing.
But challenges still remain to be addressed before these applications can become a reality, Guo states. It currently takes an hour to pattern a 1 inch by 1 inch metal sample, and scaling up this process would be necessary before it can be deployed in developing countries. The researchers are also looking into ways of applying the technique to other, non-metal materials.
Guo and Vorobyev use extremely powerful, but ultra-short, laser pulses to change the surface of the metals. A femtosecond laser pulse lasts on the order of a quadrillionth of a second but reaches a peak power equivalent to that of the entire power grid of North America during its short burst.
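The peak-power claim follows from dividing pulse energy by pulse duration, as in the quick sketch below; the 65 mJ and 65 fs figures are illustrative assumptions, not the specifications of Guo's laser.

```python
# Simple illustration of why femtosecond pulses reach enormous peak powers:
# peak power ~ pulse energy / pulse duration. The pulse energy and duration
# below are illustrative values only.
def peak_power_w(pulse_energy_j, pulse_duration_s):
    return pulse_energy_j / pulse_duration_s

energy_j = 65e-3        # 65 mJ (assumed)
duration_s = 65e-15     # 65 fs (assumed)

print(f"peak power ~ {peak_power_w(energy_j, duration_s)/1e12:.1f} TW "
      "for the duration of the pulse")
```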
Guo is keen to stress that this same technique can give rise to multifunctional metals. Metals are naturally excellent reflectors of light. That’s why they appear to have a shiny luster. Turning them black can therefore make them very efficient at absorbing light. The combination of light-absorbing properties with making metals water repellent could lead to more efficient solar absorbers – solar absorbers that don’t rust and do not need much cleaning.
Guo’s team had previously blasted materials with the lasers and turned them hydrophilic, meaning they attract water. In fact, the materials were so hydrophilic that putting them in contact with a drop of water made water run “uphill”.
Guo’s team is now planning on focusing on increasing the speed of patterning the surfaces with the laser, as well as studying how to expand this technique to other materials such as semiconductors or dielectrics, opening up the possibility of water repellent electronics.
22 Jan 2015
First report of Comprehensive Initiative on Technology Evaluation offers new framework for assessment.
It’s a challenge development agencies, nongovernmental organizations, and consumers themselves face every day: With so many products on the market, how do you choose the right one?
Now MIT researchers have released a report that could help answer that question through a new framework for technology evaluation. Their report — titled “Experimentation in Product Evaluation: The Case of Solar Lanterns in Uganda, Africa” — details the first experimental evaluations designed and implemented by the Comprehensive Initiative on Technology Evaluation (CITE), a U.S. Agency for International Development (USAID)-supported program led by a multidisciplinary team of faculty, staff, and students.
Building an evaluation framework
CITE’s framework is based on the idea that evaluating a product from a technical perspective alone is not enough, according to CITE Director Bishwapriya Sanyal, the Ford International Professor in MIT’s Department of Urban Studies and Planning.
“There are many products designed to improve the lives of poor people, but there are few in-depth evaluations of which ones work, and why,” Sanyal says. “CITE not only looks at suitability — how well does a product work? — but also at scalability — how well does it scale? — and sustainability — does a product have sticking power, given social, economic, and environmental context?”
CITE seeks to integrate each of these criteria — suitability, scalability, and sustainability — to develop a deep understanding of what makes products successful in emerging economies. The program’s evaluations and framework are intended to better inform the development community’s purchasing decisions.
“CITE’s work is incredibly energizing for the development community,” said Ticora V. Jones, director of the USAID Higher Education Solutions Network. “These evaluations won’t live on a shelf. The results are actionable. It’s an approach that could fundamentally transform the way we choose, source, and even design technologies for development work.”
Evaluating solar lanterns in Uganda
In summer 2013, a team of MIT faculty and students set off for western Uganda to conduct CITE’s evaluation of solar lanterns. Researchers conducted hundreds of surveys with consumers, suppliers, manufacturers, and nonprofits to evaluate 11 locally available solar lantern models.
To assess each product’s suitability, researchers computed a ratings score from 0 to 100 based on how the product’s attributes and features fared. “Attributes” included characteristics inherent to solar lanterns, such as brightness, run time, and time to charge. “Features” included less-central characteristics, such as a lantern’s ability to charge a cellphone.
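The report's exact scoring formula is not reproduced here, but a 0-100 score built from weighted attribute and feature ratings can be sketched generically. In the hypothetical Python example below, the attribute names, raw ratings, and weights are all assumptions for illustration, not CITE's published methodology.

```python
# Generic weighted-average scoring sketch; not CITE's actual formula.
def overall_score(ratings, weights):
    """Combine 0-100 ratings into a single 0-100 score using weights."""
    total_weight = sum(weights[name] for name in ratings)
    return sum(ratings[name] * weights[name] for name in ratings) / total_weight

# Hypothetical lantern: every name, rating, and weight below is assumed.
lantern = {"brightness": 72, "run_time": 65, "charge_time": 80, "cellphone_charging": 90}
weights = {"brightness": 0.3, "run_time": 0.3, "charge_time": 0.2, "cellphone_charging": 0.2}

print(f"Overall score: {overall_score(lantern, weights):.1f} / 100")  # 75.1
```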
The importance of cellphone charging was a surprising and noteworthy finding, Sanyal says.
“One of the things that stuck with me was that [consumers] were most concerned with whether or not the solar lantern charged their cellphone. It was a feature we never expected would be so important,” Sanyal says. “For some, having connections may be more valuable than having light.”
Learning from partnerships
CITE worked with USAID to select solar lanterns as the product family for its first evaluation. Sanyal says evaluating solar lanterns allowed CITE to learn from USAID’s existing partnership with Solar Sister, a social enterprise that distributes solar lanterns in Uganda, a country where few people have access to light after dark.
CITE researchers also worked closely with Jeffrey Asher, a former technical director at Consumer Reports, to learn from an existing product-evaluation model.
Evaluating products in a laboratory at MIT or Consumer Reports is much different than evaluating them in rural Uganda, but both are important, says Asher, who is a co-author of the CITE report.
“Consumer Reports’ greatest challenge has been evaluating products that are currently in the U.S. marketplace,” Asher says. “CITE has found that, in developing countries, we have to be even more nimble to keep up with an ever-changing market.”
Putting CITE’s results to work
Over the next two years, CITE will hone its approach, using experimental evaluations of technologies like water filters, post-harvest storage solutions, and malaria rapid-diagnostic tests to design a replicable approach that development professionals can use in their day-to-day work, Sanyal says.
“We’re aiming to make our evaluation process leaner, less expensive, and more nimble, while maintaining rigor. That’s our challenge, looking forward,” Sanyal says.
David Nicholson, director of the environment, energy, and climate change technical support unit at the international development organization Mercy Corps, says evaluation tools like CITE’s can be invaluable in making procurement decisions, especially when organizations are working with finite resources.
“Development agencies like Mercy Corps are increasingly looking to the commercial sector for solutions to long-term development challenges,” says Nicholson, who did not participate in the CITE research. “Evaluations like this can help program managers make informed decisions on which commercial products are most suitable for the program goals and the target communities.”
CITE’s research is funded by the USAID U.S. Global Development Lab. CITE is led by MIT’s Department of Urban Studies and Planning and supported by MIT’s D-Lab, Public Service Center, Sociotechnical Systems Research Center, and Center for Transportation and Logistics.
In addition to Sanyal and Asher, co-authors on the CITE report include Daniel Frey, Derek Brine, Jennifer Green, Jonars Spielberg, Stephen Graves, and Olivier de Weck.
See Also: “MIT a Linchpin of Major New USAID Program”
Institute researchers aim to spur development and evaluation of useful technologies to help the world’s poor.
22 Jan 2015
Princeton University researchers have built a rice grain-sized laser powered by single electrons tunneling through artificial atoms known as quantum dots. The tiny microwave laser, or “maser,” is a demonstration of the fundamental interactions between light and moving electrons.
The researchers built the device — which uses about one-billionth the electric current needed to power a hair dryer — while exploring how to use quantum dots, which are bits of semiconductor material that act like single atoms, as components for quantum computers.
“It is basically as small as you can go with these single-electron devices,” said Jason Petta, an associate professor of physics at Princeton who led the study, which was published in the journal Science.
Princeton University researchers have built a rice grain-sized microwave laser, or “maser,” powered by single electrons that demonstrates the fundamental interactions between light and moving electrons, and is a major step toward building quantum-computing systems out of semiconductor materials. A battery forces electrons to tunnel one by one through two double quantum dots located at each end of a cavity (above), moving from a higher energy level to a lower energy level and in the process giving off microwaves that build into a coherent beam of light. (Photo courtesy of Jason Petta, Department of Physics)
The device demonstrates a major step forward for efforts to build quantum-computing systems out of semiconductor materials, according to co-author and collaborator Jacob Taylor, an adjunct assistant professor at the Joint Quantum Institute, University of Maryland-National Institute of Standards and Technology. “I consider this to be a really important result for our long-term goal, which is entanglement between quantum bits in semiconductor-based devices,” Taylor said.
The original aim of the project was not to build a maser, but to explore how to use double quantum dots — which are two quantum dots joined together — as quantum bits, or qubits, the basic units of information in quantum computers.
Yinyu Liu, first author of the study and a graduate student in Princeton’s Department of Physics, holds a prototype of the device. (Photo by Catherine Zandonella, Office of the Dean for Research)
“The goal was to get the double quantum dots to communicate with each other,” said Yinyu Liu, a physics graduate student in Petta’s lab. The team also included graduate student Jiri Stehlik and associate research scholar Christopher Eichler in Princeton’s Department of Physics, as well as postdoctoral researcher Michael Gullans of the Joint Quantum Institute.
Because quantum dots can communicate through the entanglement of light particles, or photons, the researchers designed dots that emit photons when single electrons leap from a higher energy level to a lower energy level to cross the double dot.
Each double quantum dot can only transfer one electron at a time, Petta explained. “It is like a line of people crossing a wide stream by leaping onto a rock so small that it can only hold one person,” he said. “They are forced to cross the stream one at a time. These double quantum dots are zero-dimensional as far as the electrons are concerned — they are trapped in all three spatial dimensions.”
When the power (P) is turned on, single electrons (small arrows) begin to flow through the two double quantum dots (Left DQD and Right DQD) from the drain (D) to the source (S). As the electrons move from the higher energy level to the lower energy level, they give off particles of light in the microwave region of the spectrum. These microwaves bounce off mirrors on either side of the cavity (k-in and k-out) to produce the maser’s beam. (Photo courtesy of Science/AAAS)
The researchers fabricated the double quantum dots from extremely thin nanowires (about 50 nanometers in diameter; a nanometer is one-billionth of a meter) made of a semiconductor material called indium arsenide. They patterned the indium arsenide wires over other even smaller metal wires that act as gate electrodes, which control the energy levels in the dots.
To construct the maser, they placed the two double dots about 6 millimeters apart in a cavity made of a superconducting material, niobium, which requires a temperature near absolute zero, around minus 459 degrees Fahrenheit. “This is the first time that the team at Princeton has demonstrated that there is a connection between two double quantum dots separated by nearly a centimeter, a substantial distance,” Taylor said.
When the device was switched on, electrons flowed single-file through each double quantum dot, causing them to emit photons in the microwave region of the spectrum. These photons then bounced off mirrors at each end of the cavity to build into a coherent beam of microwave light.
One advantage of the new maser is that the energy levels inside the dots can be fine-tuned to produce light at other frequencies, which cannot be done with other semiconductor lasers in which the frequency is fixed during manufacturing, Petta said. The larger the energy difference between the two levels, the higher the frequency of light emitted.
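The relationship Petta describes is the photon relation E = hf: the energy dropped by the tunneling electron sets the frequency of the emitted microwave. A minimal sketch follows; the 33-microelectronvolt level splitting is an assumed example, not a value taken from the paper.

```python
# Photon frequency from an energy-level difference, E = h * f.
PLANCK_H_EV_S = 4.135667696e-15  # Planck constant in eV*s

def emission_frequency_ghz(energy_diff_ev):
    return energy_diff_ev / PLANCK_H_EV_S / 1e9

delta_e_ev = 33e-6  # ~33 microelectronvolts (assumed value)
print(f"Emitted photon ~ {emission_frequency_ghz(delta_e_ev):.1f} GHz (microwave band)")
```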
A double quantum dot as imaged by a scanning electron microscope. Current flows one electron at a time through two quantum dots (red circles) that are formed in an indium arsenide nanowire. (Photo courtesy of Science/AAAS)
Claire Gmachl, who was not involved in the research and is Princeton’s Eugene Higgins Professor of Electrical Engineering and a pioneer in the field of semiconductor lasers, said that because lasers, masers and other forms of coherent light sources are used in communications, sensing, medicine and many other aspects of modern life, the study is an important one.
“In this paper the researchers dig down deep into the fundamental interaction between light and the moving electron,” Gmachl said. “The double quantum dot allows them full control over the motion of even a single electron, and in return they show how the coherent microwave field is created and amplified. Learning to control these fundamental light-matter interaction processes will help in the future development of light sources.”
The paper, “Semiconductor double quantum dot micromaser,” was published in the journal Science on Jan. 16, 2015. The research was supported by the David and Lucile Packard Foundation, the National Science Foundation (DMR-1409556 and DMR-1420541), the Defense Advanced Research Projects Agency QuEST (HR0011-09-1-0007), and the Army Research Office (W911NF-08-1-0189).
22 Jan 2015
Reducing the amount of sunlight that bounces off the surface of solar cells helps maximize the conversion of the sun’s rays to electricity, so manufacturers use coatings to cut down on reflections. Now scientists at the U.S. Department of Energy’s Brookhaven National Laboratory show that etching a nanoscale texture onto the silicon material itself creates an antireflective surface that works as well as state-of-the-art thin-film multilayer coatings.
Their method, described in the journal Nature Communications and submitted for patent protection, has potential for streamlining silicon solar cell production and reducing manufacturing costs. The approach may find additional applications in reducing glare from windows, providing radar camouflage for military equipment, and increasing the brightness of light-emitting diodes.
“For antireflection applications, the idea is to prevent light or radio waves from bouncing at interfaces between materials,” said physicist Charles Black, who led the research at Brookhaven Lab’s Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility.
Preventing reflections requires controlling an abrupt change in “refractive index,” a property that affects how waves such as light propagate through a material. This occurs at the interface where two materials with very different refractive indices meet, for example at the interface between air and silicon. Adding a coating with an intermediate refractive index at the interface eases the transition between materials and reduces the reflection, Black explained.
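At normal incidence the reflected fraction at a single interface is given by the Fresnel formula R = ((n1 - n2)/(n1 + n2))^2, which is the quantitative version of the "abrupt change in refractive index" argument. The sketch below uses an assumed textbook value for silicon's index; it also shows, crudely, why an intermediate-index layer helps (the full benefit comes from interference in a quarter-wave-thick coating, which this simple incoherent sum ignores).

```python
# Normal-incidence Fresnel reflectance; indices are assumed textbook values.
def reflectance(n1, n2):
    """Fraction of light reflected at a single n1 -> n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_si = 1.0, 3.9  # silicon's index varies with wavelength; 3.9 is illustrative
print(f"Bare air/silicon: {reflectance(n_air, n_si):.1%} reflected")

# Crude incoherent estimate with one intermediate-index layer in between.
# A real quarter-wave coating of this index cancels the two reflections by
# interference at its design wavelength; a graded, moth-eye-like texture
# suppresses reflection over a broad band of wavelengths and angles.
n_coat = (n_air * n_si) ** 0.5
two_step = reflectance(n_air, n_coat) + reflectance(n_coat, n_si)
print(f"With an n={n_coat:.2f} layer (ignoring interference): {two_step:.1%}")
```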
“The issue with using such coatings for solar cells,” he said, “is that we’d prefer to fully capture every color of the light spectrum within the device, and we’d like to capture the light irrespective of the direction it comes from. But each color of light couples best with a different antireflection coating, and each coating is optimized for light coming from a particular direction. So you deal with these issues by using multiple antireflection layers. We were interested in looking for a better way.”
For inspiration, the scientists turned to a well-known example of an antireflective surface in nature, the eyes of common moths. The surfaces of their compound eyes have textured patterns made of many tiny “posts,” each smaller than the wavelengths of light. This textured surface improves moths’ nighttime vision, and also prevents the “deer in the headlights” reflecting glow that might allow predators to detect them.
“We set out to recreate moth eye patterns in silicon at even smaller sizes using methods of nanotechnology,” said Atikur Rahman, a postdoctoral fellow working with Black at the CFN and first author of the study.
The scientists started by coating the top surface of a silicon solar cell with a polymer material called a “block copolymer,” which can be made to self-organize into an ordered surface pattern with dimensions measuring only tens of nanometers. The self-assembled pattern served as a template for forming posts in the solar cell like those in the moth eye, using a plasma of reactive gases, a technique commonly used in the manufacture of semiconductor electronic circuits.
The resulting surface nanotexture served to gradually change the refractive index to drastically cut down on reflection of many wavelengths of light simultaneously, regardless of the direction of light impinging on the solar cell.
“Adding these nanotextures turned the normally shiny silicon surface absolutely black,” Rahman said.
Solar cells textured in this way outperform those coated with a single antireflective film by about 20 percent, and bring light into the device as well as the best multi-layer coatings used in the industry.
“We are working to understand whether there are economic advantages to assembling silicon solar cells using our method, compared to other, established processes in the industry,” Black said.
Hidden layer explains better-than-expected performance
One intriguing aspect of the study was that the scientists achieved the antireflective performance by creating nanoposts only half as tall as the required height predicted by a mathematical model describing the effect. So they called upon the expertise of colleagues at the CFN and other Brookhaven scientists to help sort out the mystery.
“This is a powerful advantage of doing research at the CFN, both for us and for academic and industrial researchers coming to use our facilities,” Black said. “We have all these experts around who can help you solve your problems.”
Using a combination of computational modeling, electron microscopy, and surface science, the team deduced that a thin layer of silicon oxide similar to what typically forms when silicon is exposed to air seemed to be having an outsized effect.
“On a flat surface, this layer is so thin that its effect is minimal,” explained Matt Eisaman of Brookhaven’s Sustainable Energy Technologies Department and a professor at Stony Brook University. “But on the nanopatterned surface, with the thin oxide layer surrounding all sides of the nanotexture, the oxide can have a larger effect because it makes up a significant portion of the nanotextured material.”
Said Black, “This ‘hidden’ layer was the key to the extra boost in performance.”
The scientists are now interested in developing their self-assembly based method of nanotexture patterning for other materials, including glass and plastic, for antiglare windows and coatings for solar panels.
This research was supported by the DOE Office of Science.
Scientists at the US Department of Energy’s Oak Ridge National Laboratory are learning how the properties of water molecules on the surface of metal oxides can be used to better control these minerals and use them to make products such as more efficient semiconductors for organic light emitting diodes and solar cells, safer vehicle glass in fog and frost, and more environmentally friendly chemical sensors for industrial applications.
The behavior of water at the surface of a mineral is determined largely by the ordered array of atoms in that area, called the interfacial region. However, when the particles of the mineral or of any crystalline solid are nanometer-sized, interfacial water can alter the crystalline structure of the particles, control interactions between particles that cause them to aggregate, or strongly encapsulate the particles, which allows them to persist for long periods in the environment. As water is an abundant component of our atmosphere, it is usually present on nanoparticle surfaces exposed to air.
A great scientific challenge is to develop ways to look closely at the interfacial region and understand how it determines the properties of nanoparticles. The ORNL researchers are taking advantage of two of the lab’s signature strengths—neutron and computational sciences—to reveal the influence of just a few monolayers of water on the behavior of materials.
In a set of papers published in the Journal of the American Chemical Society and the Journal of Physical Chemistry C, the team of researchers studied cassiterite (SnO2, a tin oxide), representative of a large class of isostructural oxides, including rutile (TiO2). These minerals are common in nature, and water wets their surfaces. The behavior of water confined on the surface of metal oxides readily relates to applications in such diverse areas as heterogeneous catalysis, protein folding, environmental remediation, mineral growth and dissolution, and light-energy conversion in solar cells, to name just a few.
When metal oxide nanoparticles are produced, they spontaneously adsorb water from the atmosphere, bonding it to their surface, explained Hsiu-Wen Wang, a research scientist currently at the ORNL–University of Tennessee Joint Institute for Neutron Sciences who performed this research while conducting a postdoctoral fellowship in the Chemical Sciences Division (CSD) at ORNL.
This water can interfere with the function of SnO2-containing products in surprising ways that are hard to predict. Wang’s team used neutron scattering at ORNL’s Spallation Neutron Source (SNS) to help understand the role that bound water plays in the stability of SnO2 nanoparticles and to learn more about the bound water’s structure and dynamics. Wang said neutrons are perfect for studying light elements such as the hydrogen and oxygen that make up water, and molecular dynamics simulations are an ideal tool to reinforce the observations. In fact, hydrogen is essentially invisible to X-ray and electron beams but scatters neutrons strongly, making neutron diffraction and inelastic scattering the ideal tools for probing the properties of water and other hydrogen-bearing species.
“When we drive all the water off the surface of the nanoparticles, this destabilizes the structure of the nanoparticles, and they grow larger,” said David J. Wesolowski, a co-author and Wang’s supervisor when she worked in CSD.
“The lifetime of engineered nanoparticles in the environment is an important environmental safety and health issue,” Wesolowski said. “We show that water sorbed on the nanoparticles, which naturally happens when they are exposed to normal humid air, prolongs their lifetimes as nanomaterials, thus prolonging their potential environmental impacts. In addition, the high surface area of nanoparticles is desirable. If the particles grow, which happens as they are heated and dehumidified, their surface area drops rapidly.”
To remove sorbed water, the nanoparticles are heated under vacuum. Water dissipation begins at around 250°C (nearly 500°F, or about as hot as you can set your kitchen’s oven). Much energy is required to drive off the water completely from the nanoparticles, which stay stable to these relatively high temperatures precisely because of the presence of the bound water. Once the water begins to dissipate, destabilization begins. Before completing this study, researchers did not know to what degree the removal of water would cause destabilization.
“It may be that the surfaces without water have different and useful chemical properties, but because water is everywhere in the environment, it is very important to know that the surfaces of oxide nanoparticles are likely to be already covered with a few molecular layers of water,” Wesolowski said.
Researchers used SNS’s Nanoscale-Ordered Materials Diffractometer (NOMAD) instrument to determine the structure of water on cassiterite nanoparticle surfaces, as well as the structure of the particles themselves. NOMAD is dedicated to local structure studies of various materials from liquids to nanoparticles, using the neutron scattering pattern produced during experiments, said Mikhail Feygenson, NOMAD instrument scientist.
“The combination of the high neutron flux of SNS and the wide detector coverage of NOMAD enables rapid data collection on very small samples, like our nanoparticles,” Feygenson said. “NOMAD is much faster than similar instruments around the world. In fact, the measurements of our samples that took about 24 hours of NOMAD time could have required as much as a full week on a similar instrument at another lab.”
The second step of the study took place at SNS on the Fine-Resolution Fermi Chopper Spectrometer (SEQUOIA), which allows for forefront research on dynamical processes in materials. “This part of the study focuses on the role of surface hydrogen bonds and the surface water vibrational properties,” said Alexander Kolesnikov, SEQUOIA instrument scientist.
The NOMAD and SEQUOIA studies enabled the research team to validate computational models they created to fully capture the structural ordering of the surface-bound water on the SnO2 nanocrystals. Integrating neutron scattering experiments with classical and first principles molecular dynamics simulations provided evidence that strong hydrogen bonds—as strong as in water under ultrahigh pressure of >500,000 atm—drive water molecules to dissociate at the interfaces and result in a weak interaction of the hydrated SnO2 surface with additional water layers.
“The results are significant in demonstrating many new features of surface-confined water that can provide general guidance into tuning of surface hydrophilic interactions at the molecular level,” said Jorge Sofo, professor of physics at Pennsylvania State University.
More information: H.-W. Wang, M. DelloStritto, N. Kumar, A. I. Kolesnikov, P. R. C. Kent, J. D. Kubicki, D. J. Wesolowski, and J. O. Sofo, “Vibrational density of states of strongly H-bonded interfacial water: Insights from inelastic neutron scattering and theory.” The Journal of Physical Chemistry C, 118, 10805–10813 (2014); DOI: dx.doi.org/10.1021/jp500954v
Quantum dots glow a specific color when they are hit with any kind of light. Here, a vial of green quantum dots is activated by a blue LED backlight system.
If you look at the CES 2015 word cloud—a neon blob of buzz radiating from the Nevada desert, visible from space—much of it is a retweet of last year’s list. Wearables. 4K. The Internet of Things, still unbowed by its stupid name. Connected cars. HDR. Curved everything. It’s the same-old, same-old, huddled together for their annual #usie at the butt-end of a selfie stick.
But there at the margin, ready to photobomb the shot, is the new kid: quantum dot. It goes by other names, too, which is confusing, and we’ll get to that in a minute. Regardless of what you call it, QD was all over CES this year, rubbing shoulders with the 4K crowd. You may have heard people say it’s all hype. Those people can go pound sand. Quantum dot is gonna be the next big thing in TVs, bringing better image quality to cheaper sets.
A Quantum-Dot TV Is an LCD TV
The first thing to know is quantum-dot televisions are a new type of LED-backlit LCD TV. The image is created just like it is on an LCD screen, but quantum-dot technology enhances the color.
On an LCD TV, you have a backlight system, which is a bank of LEDs mounted at the edge of the screen or immediately behind it. That light is diffused, directed by a light-guide plate and beamed through a polarized filter. The photons then hit a layer of liquid crystals that either block the light or allow it to pass through a second polarized filter.
Before it gets to that second polarizer, light passes through a layer of red, blue, and green (and sometimes yellow) color filters. These are the subpixels. Electrical charges applied to the subpixels moderate the blend of colored light visible on the other side. This light cocktail creates the color value of each pixel on the screen.
With a quantum-dot set, there are no major changes to that process. The same pros and cons cited for LCD TVs also apply. You can have full-array backlit quantum-dot sets with local-dimming technology (Translation: good for image uniformity and deeper blacks). There can be edge-lit quantum-dot sets with no local dimming (Translation: thinner, but you may see light banding and grayer blacks). You can have 1080p quantum-dot sets, but you’re more likely to see only 4K quantum-dot sets because of the industry’s big push toward UltraHD/4K resolution.
But a Quantum-Dot TV Is Different
In a quantum-dot set, the changes start with the color of the backlight. The LEDs in most LCD TVs emit white light, but those in quantum-dot televisions emit blue light. Both types actually use blue LEDs, but they’re coated with yellow phosphor in normal LCD televisions and therefore emit white light.
Here’s where the quantum dots come in. The blue LED light drives the blue hues of the picture, but red and green light is created by the quantum dots. The quantum dots are either arranged in a tube—a “quantum rail”—adjacent to the LEDs or in a sheet of film atop the light-guide plate.
Quantum dots have one job, and that is to emit one color. They excel at this. When a quantum dot is struck by light, it glows with a very specific color that can be finely tuned. When those blue LEDs shine on the quantum dots, the dots glow with the intensity of angry fireflies.
“Blue is an important part of the spectrum, and it’s the highest-energy portion—greater than red or green,” explains John Volkmann, chief marketing officer at QD Vision, which makes quantum dots for several TVs and monitors. “You start with high energy light and refract it to a lower energy state to create red or green… Starting with red or green would be pushing a rock uphill.”
Quantum dots are tiny, and their size determines their color. There are two sizes of dots in these TVs. The “big” ones glow red, and they have a diameter of about 50 atoms. The smaller ones, which glow green, have a diameter of about 30 atoms. There are billions of them in a quantum-dot TV.
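The size-to-color relationship can be illustrated with the Brus effective-mass approximation, a textbook model for quantum confinement. Everything in the sketch below (the choice of CdSe parameters, the band gap, effective masses, and the radii) is an assumption for illustration; real commercial dots and their exact compositions will deviate, but the trend that bigger dots glow redder comes through.

```python
import math

# Brus effective-mass estimate of emission wavelength vs. dot radius.
# All material parameters are assumed textbook CdSe values; this is a toy
# model, not the design data for any commercial quantum dot.
HBAR = 1.054571817e-34    # J*s
M0   = 9.1093837015e-31   # free electron mass, kg
Q    = 1.602176634e-19    # elementary charge, C
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
H, C = 6.62607015e-34, 2.99792458e8

E_GAP_EV, M_E, M_H, EPS_R = 1.74, 0.13, 0.45, 10.6  # bulk CdSe (approximate)

def emission_wavelength_nm(radius_nm):
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1 / (M_E * M0) + 1 / (M_H * M0))
    coulomb = 1.8 * Q**2 / (4 * math.pi * EPS0 * EPS_R * r)
    energy_j = E_GAP_EV * Q + confinement - coulomb
    return H * C / energy_j * 1e9

for radius in (2.0, 2.5, 3.5):  # nm; smaller dots emit bluer light
    print(f"radius {radius} nm -> ~{emission_wavelength_nm(radius):.0f} nm emission")
```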
If you observed quantum-dot light with a spectrometer, you would see a very sharp and narrow emission peak. Translation: Pure red and pure green light, which travels with the blue light through the polarizers, liquid crystals, and color filters.
Because that colored light is the good stuff, quantum dots have an advantage over traditional LCD TVs when it comes to vivid hues and color gamut. In a normal LCD, white light produced by the LEDs has a wider spectrum. It’s kind of dirty, with a lot of light falling in a color range unusable by the set’s color filters.
“A filter is a very lossy thing,” says Nanosys President and CEO Jason Hartlove. Nanosys makes film-based quantum-dot systems for several products. “When you purify the color using a color filter, then you will get practically no transmission through the filter. The purer the color you start with, the more relaxed the filter function can be. That translates directly to efficiency.”
So with a quantum-dot set, there is very little wasted light. You can get brighter, more-saturated, and more-accurate colors. The sets I saw in person at CES 2015 certainly looked punchier than your average LCD.
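Hartlove's "lossy filter" point can be made quantitative with a toy spectral-overlap calculation: a narrow emission peak centered inside a filter's passband passes almost entirely, while a broad spectrum mostly lands outside it. The spectra and passband below are idealized assumptions, not measured TV data.

```python
import numpy as np

# Toy spectral-overlap comparison: narrow QD emission vs. a broad spectrum
# hitting an idealized red color filter. All widths and centers are assumed.
wavelengths = np.linspace(380, 780, 2001)  # visible range, nm

def normalized_gaussian(center_nm, fwhm_nm):
    sigma = fwhm_nm / 2.355
    s = np.exp(-0.5 * ((wavelengths - center_nm) / sigma) ** 2)
    return s / s.sum()

red_passband = (wavelengths > 600) & (wavelengths < 660)  # idealized filter

spectra = {
    "narrow quantum-dot red": normalized_gaussian(630, 30),
    "broad white-ish LED":    normalized_gaussian(580, 200),
}
for name, spectrum in spectra.items():
    print(f"{name}: ~{spectrum[red_passband].sum():.0%} of its light passes the red filter")
```

The numbers are only illustrative, but they show why purer primaries let the filters, and the backlight power budget, relax.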
That Sounds Expensive
There’s no doubt that quantum-dot TVs will cost more than normal LCDs—especially because they’re likely to be 4K sets. But quantum-dot is getting a lot of buzz because it’s cheaper than OLED.
In most peoples’ eyes, OLED TVs are the best tech available. But they’re expensive to build and expensive to buy—you’re looking at $3,500 to as much as $20,000—and the manufacturing process differs in several key ways. That’s a big reason LG is the only company putting big money into building them.
Conversely, quantum-dot sets don’t require overhauling the LCD fabrication process, and they produce a much wider color gamut than traditional LCDs. They’re closer to OLED in color performance, and they also can get brighter. That’s important for HDR video.
“The attraction to the OEM is that this is a pure drop-in solution,” says Nanoco CEO Michael Edelman, whose company makes quantum-dot film in a licensing deal with Dow Chemical. “They remove a diffuser sheet in front of the light-guide plate and replace it with quantum-dot film. Nothing in the supply chain gets changed, nothing in the factory gets changed. They get, in some cases, better than OLED-type color at a fraction of the cost.”
As you’d expect, companies making film-based and tube-based solutions are touting each approach as superior. QD Vision claims its tube-based approach is easier and cheaper to implement, and it can boost the color performance of cheaper edge-lit LCD sets. According to QD Vision, the oxygen-barrier film needed for film-based dots is costly, which explains why Nanoco and Nanosys are partnering with Dow and 3M for that film.
Film-based suppliers say their method has the upper hand due to “light coupling,” or the ability to feed all that quantum-dot light directly into a light-guide plate. The film layer also purportedly works better with full-array backlight systems, which will be used in a lot of UHD and HDR TVs.
Super! So This Is OLED for Less Money?
Not entirely. Color gamut is important, but it’s only one aspect of picture quality. Because these are LCD sets, they won’t have the blackest blacks, super-wide viewing angles, and amazing contrast of OLED. And while the extra brightness and saturation makes onscreen colors really pop, all that luminance may create light bleeding.
Some quantum dots also contain cadmium, which is toxic at high levels—think “factory emission” levels rather than “sealed tube or film in your TV” levels. Still, there are health and environmental concerns, especially if a bunch of quantum-dot TVs end up in landfills. The European Union restricts the use of cadmium in household appliances. Some quantum-dot producers are marketing their product as cadmium-free. QD Vision, which supplies quantum dots for TCL’s new flagship 4K TV, Sony’s well-reviewed 2013 Triluminos sets, and Philips and AOC monitors, still uses cadmium.
“There are only a couple of materials that deliver on the promise of quantum dots,” says QD Vision’s Volkmann. “The other is based on indium. Cadmium is superior with respect to delivering higher-quality color, meaning a broader color gamut. But also much more energy-efficient at converting blue light to other forms of light that allow you to fill out that spectrum. The folks making indium-based solutions like to paint cadmium as the bad guy… Cadmium is under observation by different regulatory agencies around the world, but it turns out indium is too.”
Nanosys, which produces both cadmium and cadmium-free quantum dots, agrees that cadmium-based dots are more efficient.
“Cadmium-based materials have a narrower spectral width,” says Nanosys’s Hartlove. “More pure color. And what that means is the other things the system has to do in order to keep that color pure, the burden on the rest of the system is reduced.”
Hartlove also says that cadmium may be a greener solution. The cadmium selenide crystal used in quantum dots isn’t as toxic as pure metallic cadmium, and the efficiency of their color-producing ways has benefits.
“The type of power we generate in the US from coal-based power plants throws cadmium into the atmosphere,” says Hartlove. “That’s one of the byproducts of burning coal. And you look at the net cadmium content over this whole lifecycle, and it turns out that cadmium sequestration is actually net better for the environment.”
Why Isn’t Everybody Calling It “Quantum Dot”?
Each manufacturer with a quantum-dot TV set seemingly has a different name for the technology. Samsung likes “nano-crystal semiconductors.” Sony has new Triluminos TVs that “incorporate the same benefits as quantum dots.” LG, TCL, Hisense, and Changhong are actually calling it quantum dot, which is nice.
“The term quantum dot is generic,” says Hartlove. “Each company kind of wants to grab this for their own and brand it their own way. That will probably lead to some consumer confusion… but I think most of the industry will converge on a way to describe this technology.”
There are slight differences between the technologies everyone’s using, but they’re variations on a theme. The differences center on whether the TVs are edge-lit or back-lit with quantum dots, and whether the systems use cadmium- or indium-based quantum dots.
Who Is Making Quantum Dots?
At this stage, three companies are the big players in the quantum-dot TV landscape.
QD Vision specializes in glass-tube “edge-lit” components, and its systems will be found in TCL TVs and monitors from Philips and AOC. It supplied the quantum-dot component for Sony’s 2013 Triluminos sets, but Sony recently ditched the company in favor of another.
Nanoco focuses on cadmium-free, film-based quantum dot systems. They have a licensing deal with Dow Chemical, and Dow is currently building a factory in South Korea to ramp up production of quantum-dot film. Nanoco’s cadmium-free technology will be found in LG’s quantum-dot TVs in 2015.
Nanosys is another film-based producer that has partnered with 3M on the film-sheet tech. It makes both cadmium-based and cadmium-free quantum dots. They are the company behind Amazon’s HDX 7 display and the Asus Zenbook NX500, and Samsung licenses the cadmium-free quantum-dot tech in its new SUHD 4K sets from Nanosys. Nanosys is also working with Panasonic, Hisense, TCL, Changhong, and Skyworth on future TVs.
When Can I Get One, and What Will It Cost?
The new TVs showcased at CES each year usually start hitting stores in the spring, but some higher-end models don’t arrive until the fall. That’s a little bit of a wait, but it’s probably for the best—there are UltraHD content-delivery complications to work out, anyway.
The TV we know the most about in terms of pricing is TCL’s 55-inch H9700, and we still don’t know much. It’s already available in China for around $2,000 U.S., and TCL representatives at CES hinted that it will be close to that mark when it hits the U.S.
Expect that to be at the low end of the quantum-dot price bracket; LG, Samsung, and Sony generally have pricey TVs, and similar 4K LCDs from last year—minus the quantum dots—went in the $2,000 to $3,000 range for a 55-incher. For this initial wave of quantum-dot TVs, most MSRPs will probably fall between $2,500 and $4,000 for a 55-inch 4K set.
Whether you are at work, school, home, or sports, it is essential to wear the best form of eye protection recommended by experts for specific situations. Exposure to a range of hazards, from sunlight and intense heat, to blood and chemical splashes, or dust and industrial debris, can cause serious eye injuries and permanent loss of vision.
For the best eye protection from the ultraviolet (UV) radiation of sunlight, the American Academy of Ophthalmology recommends oversized or wraparound sunglasses. They should be labeled 99 percent or 100 percent UV protection, or "UV400." These lenses are effective in absorbing UV-A and UV-B radiation. According to Gary Heiting, OD, of All About Vision, direct or indirect exposure to UV rays can damage your eyes and cause cataracts in the lenses, retinal damage, and other eye diseases.
It is important to protect your eyes from the sun not just during the summer but also during the winter, especially when you are on the ski slopes. Vision Care Specialists of Denver, Colorado notes ski goggles offer the best protection when skiing. They protect your eyes from UV radiation, as well as from intense glare, wind, and snow.
Protection From Lasers
Maximum eye protection is essential in all situations where lasers are used, such as manufacturing, graphic design, military and medical applications, and cosmetology procedures like laser hair removal. Special-purpose laser safety goggles provide the best form of protection. The American National Standards Institute (ANSI) sets the standards for eye protection during laser use in guideline ANSI Z136.1.
Lasers emit infrared, UV, and visible radiation, and a direct or reflected beam can penetrate and permanently damage the eye and cause blindness. Phillips Safety Products notes your laser safety goggles should protect your eyes from the specific radiation wavelengths of your laser system. The best choice of laser goggles also depends on:
- The maximum intensity of the laser beam of the system
- The purpose for which you are using the laser
- The environment in which you are working
The optical density (OD) of the lens in the goggles is also important - higher-OD lenses transmit less of the laser beam and therefore provide better protection for your eyes. The wavelengths and OD specifications are printed on the lens of the goggles.
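Optical density works on a logarithmic scale: each additional unit of OD cuts transmission by another factor of ten, since transmitted power = incident power x 10^(-OD). The minimal sketch below uses an arbitrary example beam; actual goggle selection must still match the laser's wavelength and follow the ANSI Z136.1 exposure limits.

```python
# Transmission through a filter of given optical density: T = 10 ** (-OD).
# The 500 mW beam power is an arbitrary example, not a recommendation.
def transmitted_mw(incident_mw, optical_density):
    return incident_mw * 10 ** (-optical_density)

beam_mw = 500.0
for od in (2, 4, 6):
    print(f"OD {od}: {transmitted_mw(beam_mw, od):.4f} mW gets through")
```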
Medical, Surgical and Lab Procedures
According to the National Institute for Occupational Safety and Health (NIOSH), safety goggles offer the best eye protection from splashes, splatter, and spray of blood, other body fluids, or chemicals. In medical, surgical, and lab procedures, and other risk situations, they reduce the risk of transmission of infection to the eye or permanent damage from chemicals.
These safety goggles may be vented to reduce fogging of the lenses. If so, ensure yours are indirectly rather than directly vented to lower the chance of fluids reaching the eyes through the vents. Your goggles can be worn over prescription glasses or contact lenses but should fit snugly around the eyes and across the brow. A pair of goggles can also be outfitted with prescription lenses.
A face shield that is tinted or heat-treated to prevent splatter is recommended as a secondary form of eye protection over goggles. It should protect the rest of the face from brow to chin and ear to ear. However, a face shield should not be used as the primary form of protection against fluid splashes, splatter, and spray.
Gardening Eye Protection
Multi-purpose goggles provide the best all-around protection during the chores of gardening, such as planting, weeding, or pruning. When using a lawn mower, hedger, trimmer, spade, or other gardening tools, choose a pair of goggles with impact-resistant lenses for protection from flying stones, stems, grass, and other debris. These goggles should meet ANSI Z87.1 standards for design, construction, and testing of the frames and lenses.
Tinted lenses add protection from UV exposure. Alternatively, wear your goggles over prescription or non-prescription sunglasses. Close-fitting goggles protect your eyes from dust, grass, and other airborne particles. Wear a face shield over your goggles to add secondary eye and face protection when spraying or using chemicals.
Carpentry and Woodworking
The appropriate form of eye protection for carpentry and craft woodworking and other do-it-yourself projects depends on the size and quantity of the debris generated.
According to Woodworking FAQ: The Workshop Companion, wear goggles as the primary eye protector when the work generates a lot of dust, or you use wood finishes and chemicals that could splash in your eyes. They also protect your eyes from wood chips that might project into your eyes, eye socket, or surrounding skin, and the lenses should be impact-resistant.
You can wear your goggles over prescription glasses. Tight fitting ones prevent dust from flying into and abrading your eyes, particularly important if you wear contact lenses. Add a face shield for further protection for eyes and face against debris and woodworking fluids. If you expect to have a lot of heavy dust and debris from sawing, sanding, chiseling, and chopping, consider also wearing a full face piece respirator to protect your lungs.
Safety glasses can protect your eyes if your work only produces small wood chips. Glasses with side shields provide added protection from debris entering your eyes from the side. Wrap around safety glasses can be converted to function like goggles with a soft plastic or rubber face seal insert. Safety glasses can be outfitted with either non-prescription or built-in prescription lenses.
Wear a face shield over the safety glasses as a secondary protector for your eyes and face when generating large wood chips or other airborne particles and for added defense against woodworking chemicals.
Industrial and Manufacturing Plants
Eye and face protection in the workplace must meet Occupational Safety and Health Administration (OSHA) regulations, developed to ensure a safe work environment. It is the responsibility of employers in industrial or manufacturing plants and other work sites to outfit their employees with the best form of eye protection suited for the job. Special-purpose safety goggles are the best primary eye protector from anticipated hazards generated in these work environment, such as:
- High-impact projectiles from pieces of a drill bit, screwdriver, grinding and other mill wheels, nails, and staples
- Metal grindings, rock, wood chips, dust, wool, or other fabric fibers
- Fumes and splashes from liquid or dry chemicals
- Sources of intense heat and light, such as welding
The lens and frame of the industrial-purpose goggles must meet OSHA and ANSI Z87.1 standards. Lenses made of polycarbonate or Trivex withstand high-impact threats from airborne projectiles better than those made of glass or plastic. All goggles should be protected from fogging, easy to adjust, and should fit comfortably to encourage 100 percent use by workers.
You can use safety glasses with eye shields if you determine the hazard is minor. For added protection for eyes and face, wear a face shield attached to a headpiece over goggles or safety glasses. The shield should also be made to the ANSI Z87.1 standard for optimum protection.
Welding Eye Protection
The high intensity light of the welding torch emits UV and infrared radiation and radiant heat, as well as sparks (welder's flash) that can damage the eye. Welding goggles, welding glasses, and welding helmets provide effective protection depending on the type of welding, according to the Occupational Safety and Health Administration (OSHA) guide (pages 13 to 15).
Electric (Arc) Welding
To protect the eyes and face from the sparks, molten metal, intense light and heat radiation, and other hazards of electric welding, OSHA recommends wearing tinted welding safety glasses or goggles under a welding helmet. To protect the eyes against burns, the tint should be the darkest appropriate shade to absorb the UV rays from the light intensity of the welding torch.
According to the American Optometric Association (AOA), welding helmets are fitted with a multi-layered faceplate with filtered lenses so the welder can see. The helmet itself does not protect from impact injuries and is only a secondary protection for the eyes and face.
Acetylene Torch Welding
Based on the OSHA guidelines, tinted welding goggles are protective for the less intense hazards of acetylene torch welding. Always wear a face shield designed for the task over the goggles for added protection for your face and eyes.
Disaster and Other Emergency Situations
A full face piece respirator is the best form of protection in disaster and other emergency situations when the lungs, as well as the eyes and face, are at risk, according to NIOSH recommendations. The face piece should be impact-resistant according to ANSI Z87 standards to protect against threats to the eyes from flying debris. In addition, if you must wear glasses, get prescription inserts made to fit your specific respirator, because ordinary glasses prevent the respirator from sealing properly to your face.
In fire, disaster rescue, and other significant emergency situations, a full-face respirator protects emergency responders' eyes and lungs from:
- Smoke, heat, and electrical arcs
- Dust and noxious gases
- Other fine particulates, such as concrete and metal
- Chemicals and body fluids
In a situation where oxygen is expected to be deficient - including non-emergency hazardous work environments - an air-supplying full face piece respirator is best.
For emergency situations where there is no risk of lung inhalation of noxious materials, appropriate safety goggles are an effective form of eye protection. They will protect the eyes from chemicals, dust, and impact from other debris at the site. These goggles should fit tightly over your prescription glasses or contact lenses. Wear them with a face shield to provide additional protection for your face in the event of chemical or body fluid spray or splash.
A Selection of Top-Rated Eyewear
Consider some of the following top-rated eyewear choices.
These inexpensive ATTCL brand sunglasses, sold on Amazon for about $17, fit the bill for oversize frames and full UV protection lenses - in a choice of five colors. Almost 90 percent of reviewers on Amazon give this product high ratings. If you prefer to try on your sunglasses first, look for these or similar styles in your department store.
The lightweight, wraparound Pyramex Intruder Safety Glasses with Clear Lens are listed by Safety Glasses USA as its top-selling safety glasses. The lenses are polycarbonate, protect from impacts and UV rays, and are ANSI Z87.1 compliant. These glasses are listed for $1.40 on the Safety Glasses USA website. A companion, tinted pair, the Pyramex Intruder Safety Glasses with Grey Lens, retails for $1.60.
Specific-Purpose Safety Goggles
The following safety goggles for specific purposes are good choices.
- These indirectly vented Chemical Splash Impact Goggles, from The Home Depot at only $3, are a cost-effective option for scientists and science students. Reviewers rate the goggles 4.2 out of 5 stars. The product meets ANSI Z87.1 standards and protects the eyes from chemicals and other liquid splashes and sprays, flying chips, and dust. It is also OSHA rated for eye protection during industrial tasks.
- The ESS Innerzone 1 Goggles for fire fighting and rescue sell for $56 on Safety Glasses USA. According to the website, the ESS Innerzone series, including the Innerzone 2 and Innerzone 3 goggles, are "used by the world's most elite fire fighting teams." The goggles are OSHA and ANSI Z87.1 compliant, filter out smoke, dust, and particles, and protect against ballistic impact. The anti-fog and anti-scratch, coated, polycarbonate lenses provide 100 percent UV radiation protection.
Multipurpose Safety Goggles
Consider the following multipurpose safety goggles.
- For inside work, the companion un-tinted Dewalt DPG82-11 Concealer Clear Anti-Fog Lens Dual Mold Safety Goggles offer the same protection as the Smoke Anti-Fog version against chemicals, dust, debris, wood chips, and other impact hazards. This product sells for about $12 on Amazon, where reviewers rate it 4.2 out of 5 stars as a multi-purpose eyewear. Best Consumer Reviews rate this product as the winner of its 2015 top-rated safety goggles. The polycarbonate lenses are impact-resistant and ANSI Z87.1 compliant.
The MSA Safety Works Clear Adjustable Face Shield retails for about $14 at Home Depot. Customers give the product a 4.5 out of 5 stars rating. The face shield has an adjustable headgear and is ANSI Z87.1 and OSHA compliant and provides secondary protection over safety glasses and goggles. It also protects against UV radiation.
The lightweight Hobart 770753 Pro welding helmet is rated by welding helmet expert.com as the top helmet with a 4.7 out of 5 stars rating. The auto darkening LCD technology lens in the large viewing area provides flexibility to work in varying light conditions. It is suitable for the professional welder, as well as the do-it-your-selfer. The helmet is available online for $161 at Cyber Weld.
Full Face Respirator
According to Best Consumer Reviews, the lightweight MSA Safety Works 10041139 Full Face Multi Purpose Respirator is the winner of top-rated consumer reviews for a respirator mask, with a 4.6 out of 5 stars rating. The respirator is NIOSH approved, protects from a variety of chemicals, noxious gases, foul odors, dust, and other particles, and is currently available on eBay for $102 to $136. The large lens in the face piece of this industrial grade respirator provides wearers a clear view.
Laser Safety Goggles
Because of the complex factors that go into choosing laser safety goggles, don't try to pick out one on your own. Unless you are knowledgeable about laser physics, rely on your workplace laser safety officer (LSO) to choose the best product for the type of system you are using, or ask another expert. Visit the Kentek laser online store to get an idea of the range of laser safety eyewear available.
Maximum Protection at All Times
Maximum eye protection is essential every time you are in a situation that puts your eyes at risk, even if you perceive the threat to be minor. Choose the best eye protection that is designed for a specific type or types of hazard, whether at work, home, or play.
Obesity has increased dramatically in the last few decades and affects over one third of the adult US population. The economic effect of obesity in 2005 reached a staggering sum of $190.2 billion in direct medical costs alone. Obesity is a major risk factor for a wide host of diseases. Historically, little was known regarding adipose and its major and essential functions in the body. Brown and white adipose are the two main types of adipose but current literature has identified a new type of fat called brite or beige adipose. Research has shown that adipose depots have specific metabolic profiles and certain depots allow for a propensity for obesity and other related disorders. The goal of this protocol is to provide researchers the capacity to identify and excise adipose depots that will allow for the analysis of different factorial effects on adipose; as well as the beneficial or detrimental role adipose plays in disease and overall health. Isolation and excision of adipose depots allows investigators to look at gross morphological changes as well as histological changes. The adipose isolated can also be used for molecular studies to evaluate transcriptional and translational change or for in vitro experimentation to discover targets of interest and mechanisms of action. This technique is superior to other published techniques due to the design allowing for isolation of multiple depots with simplicity and minimal contamination.
23 Related JoVE Articles!
Laser Microdissection Applied to Gene Expression Profiling of Subset of Cells from the Drosophila Wing Disc
Institutions: University of Naples.
The heterogeneous nature of tissues has proven to be a limiting factor in the amount of information that can be generated from biological samples, compromising downstream analyses. Considering the complex and dynamic cellular associations existing within many tissues, in order to recapitulate the in vivo interactions through molecular analysis one must be able to analyze specific cell populations within their native context. Laser-mediated microdissection can achieve this goal, allowing unambiguous identification and successful harvest of cells of interest under direct microscopic visualization while maintaining molecular integrity. We have applied this technology to analyse gene expression within defined areas of the developing Drosophila wing disc, which represents an advantageous model system to study growth control, cell differentiation and organogenesis. Larval imaginal discs are precociously subdivided into anterior and posterior, dorsal and ventral compartments by lineage restriction boundaries. Making use of the inducible GAL4-UAS binary expression system, each of these compartments can be specifically labelled in transgenic flies expressing a UAS-GFP transgene under the control of the appropriate GAL4-driver construct. In the transgenic discs, gene expression profiling of discrete subsets of cells can precisely be determined after laser-mediated microdissection, using the fluorescent GFP signal to guide the laser cut.
Among the variety of downstream applications, we focused on RNA transcript profiling after localised RNA interference (RNAi). With the advent of RNAi technology, GFP labelling can be coupled with localised knockdown of a given gene, allowing us to determine the transcriptional response of a discrete cell population to the specific gene silencing. To validate this approach, we dissected equivalent areas of the disc from the posterior (labelled by GFP expression) and the anterior (unlabelled) compartment upon regional silencing in the P compartment of an otherwise ubiquitously expressed gene. RNA was extracted from microdissected silenced and unsilenced areas and comparative gene expression profiling was determined by quantitative real-time RT-PCR. We show that this method can effectively be applied for accurate transcriptomics of subsets of cells within the Drosophila imaginal discs. Indeed, while massive disc preparation as a source of RNA generally assumes cell homogeneity, it is well known that transcriptional expression can vary greatly within these structures as a consequence of positional information. Using the localized fluorescent GFP signal to guide the laser cut, more accurate transcriptional analyses can be performed and profitably applied to disparate applications, including transcript profiling of distinct cell lineages within their native context.
Developmental Biology, Issue 38, Drosophila, Imaginal discs, Laser microdissection, Gene expression, Transcription profiling, Regulatory pathways , in vivo RNAi, GAL4-UAS, GFP labelling, Positional information
Functional Imaging of Brown Fat in Mice with 18F-FDG micro-PET/CT
Institutions: The Methodist Hospital Research Institute, Houston, The Methodist Hospital Research Institute, Houston.
Brown adipose tissue (BAT) differs from white adipose tissue (WAT) by its discrete location and a brown-red color due to rich vascularization and high density of mitochondria. BAT plays a major role in energy expenditure and non-shivering thermogenesis in newborn mammals as well as the adults 1
. BAT-mediated thermogenesis is highly regulated by the sympathetic nervous system, predominantly via β adrenergic receptor 2, 3
. Recent studies have shown that BAT activities in human adults are negatively correlated with body mass index (BMI) and other diabetic parameters 4-6
. BAT has thus been proposed as a potential target for anti-obesity/anti-diabetes therapy focusing on modulation of energy balance 6-8
. While several cold challenge-based positron emission tomography (PET) methods are established for detecting human BAT 9-13
, there is essentially no standardized protocol for imaging and quantification of BAT in small animal models such as mice. Here we describe a robust PET/CT imaging method for functional assessment of BAT in mice. Briefly, adult C57BL/6J mice were cold treated under fasting conditions for a duration of 4 hours before they received one dose of 18
F-Fluorodeoxyglucose (FDG). The mice were remained in the cold for one additional hour post FDG injection, and then scanned with a small animal-dedicated micro-PET/CT system. The acquired PET images were co-registered with the CT images for anatomical references and analyzed for FDG uptake in the interscapular BAT area to present BAT activity. This standardized cold-treatment and imaging protocol has been validated through testing BAT activities during pharmacological interventions, for example, the suppressed BAT activation by the treatment of β-adrenoceptor antagonist propranolol 14, 15
, or the enhanced BAT activation by β3 agonist BRL37344 16
. The method described here can be applied to screen for drugs/compounds that modulate BAT activity, or to identify genes/pathways that are involved in BAT development and regulation in various preclinical and basic studies.
Molecular Biology, Issue 69, Neuroscience, Anatomy, Physiology, Medicine, Brown adipose tissue, mice, 18F-Fluorodeoxyglucose, micro-PET, PET, CT, CT scan, tomography, imaging
Isolation and Enrichment of Human Adipose-derived Stromal Cells for Enhanced Osteogenesis
Institutions: Stanford University School of Medicine, Stanford University.
Bone marrow-derived mesenchymal stromal cells (BM-MSCs) are considered the gold standard for stem cell-based tissue engineering applications. However, the process by which they must be harvested can be associated with significant donor site morbidity. In contrast, adipose-derived stromal cells (ASCs) are more readily abundant and more easily harvested, making them an appealing alternative to BM-MSCs. Like BM-MSCs, ASCs can differentiate into osteogenic lineage cells and can be used in tissue engineering applications, such as seeding onto scaffolds for use in craniofacial skeletal defects. ASCs are obtained from the stromal vascular fraction (SVF) of digested adipose tissue, which is a heterogeneous mixture of ASCs, vascular endothelial and mural cells, smooth muscle cells, pericytes, fibroblasts, and circulating cells. Flow cytometric analysis has shown that the surface marker profile for ASCs is similar to that for BM-MSCs. Despite several published reports establishing markers for the ASC phenotype, there is still a lack of consensus over profiles identifying osteoprogenitor cells in this heterogeneous population. This protocol describes how to isolate and use a subpopulation of ASCs with enhanced osteogenic capacity to repair critical-sized calvarial defects.
Developmental Biology, Issue 95, CD90, Thy-1, sorting, positive selection, osteogenic, differentiation
Programming Stem Cells for Therapeutic Angiogenesis Using Biodegradable Polymeric Nanoparticles
Institutions: Stanford University , Stanford University .
Controlled vascular growth is critical for successful tissue regeneration and wound healing, as well as for treating ischemic diseases such as stroke, heart attack or peripheral arterial diseases. Direct delivery of angiogenic growth factors has the potential to stimulate new blood vessel growth, but is often associated with limitations such as lack of targeting and short half-life in vivo. Gene therapy offers an alternative approach by delivering genes encoding angiogenic factors, but often requires using a virus and is limited by safety concerns. Here we describe a recently developed strategy for stimulating vascular growth by programming stem cells to overexpress angiogenic factors in situ using biodegradable polymeric nanoparticles. Specifically, our strategy utilized stem cells as delivery vehicles by taking advantage of their ability to migrate toward ischemic tissues in vivo. Using the optimized polymeric vectors, adipose-derived stem cells were modified to overexpress an angiogenic gene encoding vascular endothelial growth factor (VEGF). We described the processes for polymer synthesis, nanoparticle formation, and transfecting stem cells in vitro, as well as methods for validating the efficacy of VEGF-expressing stem cells for promoting angiogenesis in a murine hindlimb ischemia model.
Issue 79, Stem Cells, animal models, bioengineering (general), angiogenesis, biodegradable, non-viral, gene therapy
Gene Transfer into Older Chicken Embryos by ex ovo Electroporation
Institutions: School of Medicine University of Rostock, School of Medicine University of Jena.
The chicken embryo provides an excellent model system for studying gene function and regulation during embryonic development. In ovo electroporation is a powerful method to over-express exogenous genes or down-regulate endogenous genes in vivo in chicken embryos [1]. Different structures such as DNA plasmids encoding genes [2-4], small interfering RNA (siRNA) plasmids [5], small synthetic RNA oligos [6], and morpholino antisense oligonucleotides [7] can be easily transfected into chicken embryos by electroporation. However, the application of in ovo electroporation is limited to embryos at early incubation stages (younger than stage HH20, according to Hamburger and Hamilton [8]), and there are some disadvantages for its application in embryos at later stages (older than stage HH22, approximately 3.5 days of development). For example, the vitelline membrane at later stages is usually stuck to the shell membrane, and opening a window in the shell causes rupture of the vessels, resulting in death of the embryos; older embryos are covered by vitelline and allantoic vessels, which make it difficult to access and manipulate the embryos; and older embryos move vigorously, making it difficult to control their orientation through a relatively small window in the shell.
In this protocol we demonstrate an ex ovo electroporation method for gene transfer into chicken embryos at late stages (older than stage HH22). For ex ovo electroporation, embryos are cultured in Petri dishes [9] and the vitelline and allantoic vessels are widely spread. Under these conditions, the older chicken embryos are easily accessed and manipulated. Therefore, this method overcomes the disadvantages of in ovo electroporation applied to older chicken embryos. Using this method, plasmids can be easily transfected into different parts of older chicken embryos [10-12].
Molecular Biology, Issue 65, Genetics, Developmental Biology, Gene transfer, gene function, electroporation, chicken, development
Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production as many technological obstacles to scale up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and monosaccharide had no effect on the protein production level. Infiltrating plants under 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with Agrobacterium laboratory strain GV3101 showed the highest protein production compared to Agrobacterium laboratory strains LBA4404 and C58C1 and wild-type Agrobacterium strains at6, at10, at77 and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of target protein (influenza virus hemagglutinin).
Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria
Dual Labeling of Neural Crest Cells and Blood Vessels Within Chicken Embryos Using ChickGFP Neural Tube Grafting and Carbocyanine Dye DiI Injection
Institutions: UCL Institute of Child Health, Queen Mary University of London, Barts and The London School of Medicine and Dentistry, Erasmus University Medical Center, Rotterdam.
All developing organs need to be connected to both the nervous system (for sensory and motor control) as well as the vascular system (for gas exchange, fluid and nutrient supply). Consequently both the nervous and vascular systems develop alongside each other and share striking similarities in their branching architecture. Here we report embryonic manipulations that allow us to study the simultaneous development of neural crest-derived nervous tissue (in this case the enteric nervous system) and the vascular system. This is achieved by generating chicken chimeras via transplantation of discrete segments of the neural tube, and associated neural crest, combined with vascular DiI injection in the same embryo. Our method uses transgenic chickGFP embryos for intraspecies grafting, making the transplant technique more powerful than the classical quail-chick interspecies grafting protocol used with great effect since the 1970s. ChickGFP-chick intraspecies grafting facilitates imaging of transplanted cells and their projections in intact tissues, and eliminates any potential bias in cell development linked to species differences. This method takes full advantage of the ease of access of the avian embryo (compared with other vertebrate embryos) to study the co-development of the enteric nervous system and the vascular system.
Developmental Biology, Issue 99, Intraspecies grafting, chimera, neural tube, vessel painting, carbocyanine dye, vascular network, transgenic GFP chicken, neural crest cells, enteric nervous system
Use of the TetON System to Study Molecular Mechanisms of Zebrafish Regeneration
Institutions: Ulm University.
The zebrafish has become a very important model organism for studying vertebrate development, physiology, disease, and tissue regeneration. A thorough understanding of the molecular and cellular mechanisms involved requires experimental tools that allow for inducible, tissue-specific manipulation of gene expression or signaling pathways. Therefore, we and others have recently adapted the TetON system for use in zebrafish. The TetON system facilitates temporally and spatially controlled gene expression, and we have recently used this tool to probe for tissue-specific functions of Wnt/beta-catenin signaling during zebrafish tail fin regeneration. Here we describe the workflow for using the TetON system to achieve inducible, tissue-specific gene expression in the adult regenerating zebrafish tail fin. This includes the generation of stable transgenic TetActivator and TetResponder lines, transgene induction and techniques for verification of tissue-specific gene expression in the fin regenerate. Thus, this protocol serves as a blueprint for setting up a functional TetON system in zebrafish and its subsequent use, in particular for studying fin regeneration.
Developmental Biology, Issue 100, Tetracycline-controlled transcriptional activation, TetON, zebrafish, Regeneration, fin, tissue-specific gene expression, doxycycline, cryosectioning, transgenic, Tol2, I-SceI, anesthesia
Efficient Agroinfiltration of Plants for High-level Transient Expression of Recombinant Proteins
Institutions: Arizona State University .
Mammalian cell culture is the major platform for commercial production of human vaccines and therapeutic proteins. However, it cannot meet the increasing worldwide demand for pharmaceuticals due to its limited scalability and high cost. Plants have been shown to be one of the most promising alternative pharmaceutical production platforms that are robust, scalable, low-cost and safe. The recent development of virus-based vectors has allowed rapid and high-level transient expression of recombinant proteins in plants. To further optimize the utility of the transient expression system, we demonstrate in this study a simple, efficient and scalable methodology to introduce target-gene-containing Agrobacterium into plant tissue. Our results indicate that agroinfiltration with both syringe and vacuum methods results in the efficient introduction of Agrobacterium into leaves and robust production of two fluorescent proteins, GFP and DsRed. Furthermore, we demonstrate the unique advantages offered by both methods. Syringe infiltration is simple and does not need expensive equipment. It also allows the flexibility to either infiltrate the entire leaf with one target gene, or to introduce genes of multiple targets on one leaf. Thus, it can be used for laboratory-scale expression of recombinant proteins as well as for comparing different proteins or vectors for yield or expression kinetics. The simplicity of syringe infiltration also suggests its utility in high school and college education for the subject of biotechnology. In contrast, vacuum infiltration is more robust and can be scaled up for commercial manufacture of pharmaceutical proteins. It also offers the advantage of being able to agroinfiltrate plant species that are not amenable to syringe infiltration, such as lettuce and Arabidopsis. Overall, the combination of syringe and vacuum agroinfiltration provides researchers and educators a simple, efficient, and robust methodology for transient protein expression. It will greatly facilitate the development of pharmaceutical proteins and promote science education.
Plant Biology, Issue 77, Genetics, Molecular Biology, Cellular Biology, Virology, Microbiology, Bioengineering, Plant Viruses, Antibodies, Monoclonal, Green Fluorescent Proteins, Plant Proteins, Recombinant Proteins, Vaccines, Synthetic, Virus-Like Particle, Gene Transfer Techniques, Gene Expression, Agroinfiltration, plant infiltration, plant-made pharmaceuticals, syringe agroinfiltration, vacuum agroinfiltration, monoclonal antibody, Agrobacterium tumefaciens, Nicotiana benthamiana, GFP, DsRed, geminiviral vectors, imaging, plant model
Rescue of Recombinant Newcastle Disease Virus from cDNA
Institutions: Icahn School of Medicine at Mount Sinai, Icahn School of Medicine at Mount Sinai, Icahn School of Medicine at Mount Sinai, University of Rochester.
Newcastle disease virus (NDV), the prototype member of the Avulavirus genus of the family Paramyxoviridae [1], is a non-segmented, negative-sense, single-stranded, enveloped RNA virus (Figure 1) with potential applications as a vector for vaccination and treatment of human diseases. In-depth exploration of these applications has only become possible after the establishment of reverse genetics techniques to rescue recombinant viruses from plasmids encoding their complete genomes as cDNA [2-5]. Viral cDNA can be conveniently modified in vitro by using standard cloning procedures to alter the genotype of the virus and/or to include new transcriptional units. Rescue of such genetically modified viruses provides a valuable tool to understand factors affecting multiple stages of infection, as well as allows for the development and improvement of vectors for the expression and delivery of antigens for vaccination and therapy. Here we describe a protocol for the rescue of recombinant NDVs.
Immunology, Issue 80, Paramyxoviridae, Vaccines, Oncolytic Virotherapy, Immunity, Innate, Newcastle disease virus (NDV), MVA-T7, reverse genetics techniques, plasmid transfection, recombinant virus, HA assay
Repair of a Critical-sized Calvarial Defect Model Using Adipose-derived Stromal Cells Harvested from Lipoaspirate
Institutions: Stanford University , Duke University , Saint Joseph Mercy Hospital, University of California, San Francisco , University of California, Los Angeles .
Craniofacial skeletal repair and regeneration offers the promise of de novo tissue formation through a cell-based approach utilizing stem cells. Adipose-derived stromal cells (ASCs) have proven to be an abundant source of multipotent stem cells capable of undergoing osteogenic, chondrogenic, adipogenic, and myogenic differentiation. Many studies have explored the osteogenic potential of these cells in vivo with the use of various scaffolding biomaterials for cellular delivery. It has been demonstrated that by utilizing an osteoconductive, hydroxyapatite-coated poly(lactic-co-glycolic acid) (HA-PLGA) scaffold seeded with ASCs, a critical-sized calvarial defect (a defect defined by its inability to undergo spontaneous healing over the lifetime of the animal) can effectively be repaired, showing robust osseous regeneration. This in vivo model demonstrates the basis of translational approaches aimed at regenerating bone tissue: the cellular component and the biological matrix. This method serves as a model for the ultimate clinical application of a progenitor cell toward the repair of a specific tissue defect.
Medicine, Issue 68, Stem Cells, Skeletal Tissue Engineering, Calvarial Defect, Scaffold, Tissue Regeneration, adipose-derived stromal cells
Generation of Human Adipose Stem Cells through Dedifferentiation of Mature Adipocytes in Ceiling Cultures
Institutions: IUCPQ Research Center, CHU de Québec Research Center, Laval University.
Mature adipocytes have been shown to reverse their phenotype into fibroblast-like cells in vitro through a technique called ceiling culture. Mature adipocytes can also be isolated from fresh adipose tissue for depot-specific characterization of their function and metabolic properties. Here, we describe a well-established protocol to isolate mature adipocytes from adipose tissues using collagenase digestion, and subsequent steps to perform ceiling cultures. Briefly, adipose tissues are incubated in a Krebs-Ringer-Henseleit buffer containing collagenase to disrupt tissue matrix. Floating mature adipocytes are collected on the top surface of the buffer. Mature cells are plated in a T25-flask completely filled with media and incubated upside down for a week. An alternative 6-well plate culture approach allows the characterization of adipocytes undergoing dedifferentiation. Adipocyte morphology drastically changes over time of culture. Immunofluorescence can be easily performed on slides cultivated in 6-well plates as demonstrated by FABP4 immunofluorescence staining. FABP4 protein is present in mature adipocytes but down-regulated through dedifferentiation of fat cells. Mature adipocyte dedifferentiation may represent a new avenue for cell therapy and tissue engineering.
Developmental Biology, Issue 97, Adipocyte, dedifferentiation, DFAT, collagenase, adipose tissue, cell biology, obesity
Isolation and Differentiation of Stromal Vascular Cells to Beige/Brite Cells
Institutions: University of California, San Francisco , University of Copenhagen, Denmark, National Institute of Nutrition and Seafood Research, Bergen, Norway.
Brown adipocytes have the ability to uncouple the respiratory chain in mitochondria and dissipate chemical energy as heat. Development of UCP1-positive brown adipocytes in white adipose tissues (so called beige or brite cells) is highly induced by a variety of environmental cues such as chronic cold exposure or by PPARγ agonists, therefore, this cell type has potential as a therapeutic target for obesity treatment. Although most immortalized adipocyte lines cannot recapitulate the process of "browning" of white fat in culture, primary adipocytes isolated from stromal vascular fraction in subcutaneous white adipose tissue (WAT) provide a reliable cellular system to study the molecular control of beige/brite cell development. Here we describe a protocol for effective isolation of primary preadipocytes and for inducing differentiation to beige/brite cells in culture. The browning effect can be assessed by the expression of brown fat-selective markers such as UCP1.
Cellular Biology, Issue 73, Medicine, Anatomy, Physiology, Molecular Biology, Surgery, Adipose Tissue, Adipocytes, Transcription Factors, Cell Differentiation, Obesity, Diabetes, brown adipose tissue, beige/brite cells, primary adipocytes, stromal-vascular fraction, differentiation, uncoupling protein 1, rosiglitazone, differentiation, cells, isolation, fat, animal model
Manual Isolation of Adipose-derived Stem Cells from Human Lipoaspirates
Institutions: Cytori Therapeutics Inc, David Geffen School of Medicine at UCLA, David Geffen School of Medicine at UCLA, David Geffen School of Medicine at UCLA, David Geffen School of Medicine at UCLA.
In 2001, researchers at the University of California, Los Angeles, described the isolation of a new population of adult stem cells from liposuctioned adipose tissue that they initially termed Processed Lipoaspirate Cells or PLA cells. Since then, these stem cells have been renamed as Adipose-derived Stem Cells or ASCs and have gone on to become one of the most popular adult stem cells populations in the fields of stem cell research and regenerative medicine. Thousands of articles now describe the use of ASCs in a variety of regenerative animal models, including bone regeneration, peripheral nerve repair and cardiovascular engineering. Recent articles have begun to describe the myriad of uses for ASCs in the clinic. The protocol shown in this article outlines the basic procedure for manually and enzymatically isolating ASCs from large amounts of lipoaspirates obtained from cosmetic procedures. This protocol can easily be scaled up or down to accommodate the volume of lipoaspirate and can be adapted to isolate ASCs from fat tissue obtained through abdominoplasties and other similar procedures.
Cellular Biology, Issue 79, Adipose Tissue, Stem Cells, Humans, Cell Biology, biology (general), enzymatic digestion, collagenase, cell isolation, Stromal Vascular Fraction (SVF), Adipose-derived Stem Cells, ASCs, lipoaspirate, liposuction
Scalable 96-well Plate Based iPSC Culture and Production Using a Robotic Liquid Handling System
Institutions: InvivoSciences, Inc., Gilson, Inc..
Continued advancement in pluripotent stem cell culture is closing the gap between bench and bedside for using these cells in regenerative medicine, drug discovery and safety testing. In order to produce stem cell derived biopharmaceutics and cells for tissue engineering and transplantation, a cost-effective cell-manufacturing technology is essential. Maintenance of pluripotency and stable performance of cells in downstream applications (e.g., cell differentiation) over time is paramount to large-scale cell production. Yet that can be difficult to achieve, especially if cells are cultured manually, where the operator can introduce significant variability and scale-up can be prohibitively expensive. To enable high-throughput, large-scale stem cell production and remove operator influence, novel stem cell culture protocols using a bench-top multi-channel liquid handling robot were developed that require minimal technician involvement or experience. With these protocols human induced pluripotent stem cells (iPSCs) were cultured in feeder-free conditions directly from a frozen stock and maintained in 96-well plates. Depending on cell line and desired scale-up rate, the operator can easily determine when to passage based on a series of images showing the optimal colony densities for splitting. Then the necessary reagents are prepared to perform a colony split to new plates without a centrifugation step. After 20 passages (~3 months), two iPSC lines maintained stable karyotypes, expressed stem cell markers, and differentiated into cardiomyocytes with high efficiency. The system can perform subsequent high-throughput screening of new differentiation protocols or genetic manipulation designed for 96-well plates. This technology will reduce the labor and technical burden to produce large numbers of identical stem cells for a myriad of applications.
Developmental Biology, Issue 99, iPSC, high-throughput, robotic, liquid-handling, scalable, stem cell, automated stem cell culture, 96-well
Human Brown Adipose Tissue Depots Automatically Segmented by Positron Emission Tomography/Computed Tomography and Registered Magnetic Resonance Images
Institutions: Vanderbilt University, Vanderbilt University School of Medicine, Vanderbilt University Medical Center, Vanderbilt University.
Reliably differentiating brown adipose tissue (BAT) from other tissues using a non-invasive imaging method is an important step toward studying BAT in humans. Detecting BAT is typically confirmed by the uptake of the injected radioactive tracer 18F-fluorodeoxyglucose (18F-FDG) into adipose tissue depots, as measured by positron emission tomography/computed tomography (PET-CT) scans after exposing the subject to a cold stimulus. Fat-water separated magnetic resonance imaging (MRI) has the ability to distinguish BAT without the use of a radioactive tracer. To date, MRI of BAT in adult humans has not been co-registered with cold-activated PET-CT. Therefore, this protocol uses 18F-FDG PET-CT scans to automatically generate a BAT mask, which is then applied to co-registered MRI scans of the same subject. This approach enables measurement of quantitative MRI properties of BAT without manual segmentation. BAT masks are created from two PET-CT scans: after exposure for 2 hr to either thermoneutral (TN) (24 °C) or cold-activated (CA) (17 °C) conditions. The TN and CA PET-CT scans are registered, and the PET standardized uptake and CT Hounsfield values are used to create a mask containing only BAT. CA and TN MRI scans are also acquired on the same subject and registered to the PET-CT scans in order to establish quantitative MRI properties within the automatically defined BAT mask. An advantage of this approach is that the segmentation is completely automated and is based on widely accepted methods for identification of activated BAT (PET-CT). The quantitative MRI properties of BAT established using this protocol can serve as the basis for an MRI-only BAT examination that avoids the radiation associated with PET-CT.
Medicine, Issue 96, magnetic resonance imaging, brown adipose tissue, cold-activation, adult human, fat water imaging, fluorodeoxyglucose, positron emission tomography, computed tomography
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90, zebrafish, kidney, nephron, nephrology, renal, regeneration, proximal tubule, distal tubule, segment, mesonephros, physiology, acute kidney injury (AKI)
Production and Use of Lentivirus to Selectively Transduce Primary Oligodendrocyte Precursor Cells for In Vitro Myelination Assays
Institutions: The University of Melbourne, The University of Melbourne.
Myelination is a complex process that involves both neurons and the myelin-forming glial cells: oligodendrocytes in the central nervous system (CNS) and Schwann cells in the peripheral nervous system (PNS). We use an in vitro myelination assay, an established model for studying CNS myelination in vitro. To do this, oligodendrocyte precursor cells (OPCs) are added to purified primary rodent dorsal root ganglion (DRG) neurons to form myelinating co-cultures. In order to specifically interrogate the roles that particular proteins expressed by oligodendrocytes exert upon myelination, we have developed protocols that selectively transduce OPCs using lentivirus overexpressing wild-type, constitutively active or dominant-negative proteins before they are seeded onto the DRG neurons. This allows us to specifically interrogate the roles of these oligodendroglial proteins in regulating myelination. The protocols can also be applied to the study of other cell types, thus providing an approach that allows selective manipulation of proteins expressed by a desired cell type, such as oligodendrocytes, for the targeted study of signaling and compensation mechanisms. In conclusion, combining the in vitro myelination assay with lentivirus-infected OPCs provides a strategic tool for the analysis of molecular mechanisms involved in myelination.
Developmental Biology, Issue 95, lentivirus, cocultures, oligodendrocyte, myelination, oligodendrocyte precursor cells, dorsal root ganglion neurons
Mosaic Zebrafish Transgenesis for Functional Genomic Analysis of Candidate Cooperative Genes in Tumor Pathogenesis
Institutions: Mayo Clinic College of Medicine, Center for Individualized Medicine, Tufts University School of Medicine, Mayo Clinic.
Comprehensive genomic analysis has uncovered surprisingly large numbers of genetic alterations in various types of cancers. To robustly and efficiently identify oncogenic “drivers” among these tumors and define their complex relationships with concurrent genetic alterations during tumor pathogenesis remains a daunting task. Recently, zebrafish have emerged as an important animal model for studying human diseases, largely because of their ease of maintenance, high fecundity, obvious advantages for in vivo imaging, high conservation of oncogenes and their molecular pathways, susceptibility to tumorigenesis and, most importantly, the availability of transgenic techniques suitable for use in the fish. Transgenic zebrafish models of cancer have been widely used to dissect oncogenic pathways in diverse tumor types. However, developing a stable transgenic fish model is both tedious and time-consuming, and it is even more difficult and more time-consuming to dissect the cooperation of multiple genes in disease pathogenesis using this approach, which requires the generation of multiple transgenic lines with overexpression of the individual genes of interest followed by complicated breeding of these stable transgenic lines. Hence, use of a mosaic transient transgenic approach in zebrafish offers unique advantages for functional genomic analysis in vivo. Briefly, candidate transgenes can be coinjected into one-cell-stage wild-type or transgenic zebrafish embryos and allowed to integrate together into each somatic cell in a mosaic pattern that leads to mixed genotypes in the same primarily injected animal. This permits one to investigate in a faster and less expensive manner whether and how the candidate genes can collaborate with each other to drive tumorigenesis. By transient overexpression of activated ALK in the transgenic fish overexpressing MYCN, we demonstrate here the cooperation of these two oncogenes in the pathogenesis of a pediatric cancer, neuroblastoma, which has resisted most forms of contemporary treatment.
Developmental Biology, Issue 97, zebrafish, animal model, mosaic transgenesis, coinjection, functional genomics, tumor initiation
Affinity-based Isolation of Tagged Nuclei from Drosophila Tissues for Gene Expression Analysis
Institutions: Purdue University.
Drosophila embryonic and larval tissues often contain a highly heterogeneous mixture of cell types, which can complicate the analysis of gene expression in these tissues. Thus, to analyze cell-specific gene expression profiles from Drosophila tissues, it may be necessary to isolate specific cell types with high purity and at sufficient yields for downstream applications such as transcriptional profiling and chromatin immunoprecipitation. However, the irregular cellular morphology in tissues such as the central nervous system, coupled with the rare population of specific cell types in these tissues, can pose challenges for traditional methods of cell isolation such as laser microdissection and fluorescence-activated cell sorting (FACS). Here, an alternative approach to characterizing cell-specific gene expression profiles using affinity-based isolation of tagged nuclei, rather than whole cells, is described. Nuclei in the specific cell type of interest are genetically labeled with a nuclear envelope-localized EGFP tag using the Gal4/UAS binary expression system. These EGFP-tagged nuclei can be isolated using antibodies against GFP that are coupled to magnetic beads. The approach described in this protocol enables consistent isolation of nuclei from specific cell types in the Drosophila larval central nervous system at high purity and at sufficient levels for expression analysis, even when these cell types comprise less than 2% of the total cell population in the tissue. This approach can be used to isolate nuclei from a wide variety of Drosophila embryonic and larval cell types using specific Gal4 drivers, and may be useful for isolating nuclei from cell types that are not suitable for FACS or laser microdissection.
Biochemistry, Issue 85, Gene Expression, nuclei isolation, Drosophila, KASH, GFP, cell-type specific
Applying an Inducible Expression System to Study Interference of Bacterial Virulence Factors with Intracellular Signaling
Institutions: Friedrich-Alexander-Universität, Friedrich-Loeffler-Institut, Universitätsklinikum Erlangen.
The technique presented here allows one to analyze at which step a target protein, or alternatively a small molecule, interacts with the components of a signaling pathway. The method is based, on the one hand, on the inducible expression of a specific protein to initiate a signaling event at a defined and predetermined step in the selected signaling cascade. Concomitant expression, on the other hand, of the gene of interest then allows the investigator to evaluate if the activity of the expressed target protein is located upstream or downstream of the initiated signaling event, depending on the readout of the signaling pathway that is obtained. Here, the apoptotic cascade was selected as a defined signaling pathway to demonstrate protocol functionality. Pathogenic bacteria, such as Coxiella burnetii, translocate effector proteins that interfere with host cell death induction in the host cell to ensure bacterial survival in the cell and to promote their dissemination in the organism. The C. burnetii effector protein CaeB effectively inhibits host cell death after induction of apoptosis with UV-light or with staurosporine. To narrow down at which step CaeB interferes with the propagation of the apoptotic signal, selected proteins with well-characterized pro-apoptotic activity were expressed transiently in a doxycycline-inducible manner. If CaeB acts upstream of these proteins, apoptosis will proceed unhindered. If CaeB acts downstream, cell death will be inhibited. The test proteins selected were Bax, which acts at the level of the mitochondria, and caspase 3, which is the major executioner protease. CaeB interferes with cell death induced by Bax expression, but not by caspase 3 expression. CaeB, thus, interacts with the apoptotic cascade between these two proteins.
Infection, Issue 100, Apoptosis, Bax, Caspase 3, Coxiella burnetii, Doxycycline, Effector protein, Inducible expression, stable cell line, Tet system, Type IV Secretion System
Principles of Site-Specific Recombinase (SSR) Technology
Institutions: Max Planck Institute for Molecular Cell Biology and Genetics, Dresden.
Site-specific recombinase (SSR) technology allows the manipulation of gene structure to explore gene function and has become an integral tool of molecular biology. Site-specific recombinases are proteins that bind to distinct DNA target sequences. The Cre/lox system was first described in bacteriophages during the 1980's. Cre recombinase is a Type I topoisomerase that catalyzes site-specific recombination of DNA between two loxP (locus of X-over P1) sites. The Cre/lox system does not require any cofactors. LoxP sequences contain distinct binding sites for Cre recombinases that surround a directional core sequence where recombination and rearrangement takes place. When cells contain loxP sites and express the Cre recombinase, a recombination event occurs. Double-stranded DNA is cut at both loxP sites by the Cre recombinase, rearranged, and ligated ("scissors and glue"). Products of the recombination event depend on the relative orientation of the asymmetric sequences.
SSR technology is frequently used as a tool to explore gene function. Here the gene of interest is flanked with loxP Cre target sites ("floxed"). Animals are then crossed with animals expressing the Cre recombinase under the control of a tissue-specific promoter. In tissues that express the Cre recombinase, it binds to the target sequences and excises the floxed gene. Controlled gene deletion allows the investigation of gene function in specific tissues and at distinct time points. Analysis of gene function employing SSR technology (conditional mutagenesis) has significant advantages over traditional knock-outs, where gene deletion is frequently lethal.
Cellular Biology, Issue 15, Molecular Biology, Site-Specific Recombinase, Cre recombinase, Cre/lox system, transgenic animals, transgenic technology
Assessing Species-specific Contributions To Craniofacial Development Using Quail-duck Chimeras
Institutions: University of California at San Francisco.
The generation of chimeric embryos is a widespread and powerful approach to study cell fates, tissue interactions, and species-specific contributions to the histological and morphological development of vertebrate embryos. In particular, the use of chimeric embryos has established the importance of neural crest in directing the species-specific morphology of the craniofacial complex. The method described herein utilizes two avian species, duck and quail, with remarkably different craniofacial morphology. This method greatly facilitates the investigation of molecular and cellular regulation of species-specific pattern in the craniofacial complex. Experiments in quail and duck chimeric embryos have already revealed neural crest-mediated tissue interactions and cell-autonomous behaviors that regulate species-specific pattern in the craniofacial skeleton, musculature, and integument. The great diversity of neural crest derivatives suggests significant potential for future applications of the quail-duck chimeric system to understanding vertebrate development, disease, and evolution.
Developmental Biology, Issue 87, neural crest, quail-duck chimeras, craniofacial development, epithelial-mesenchymal interactions, tissue transplants, evolutionary developmental biology
By Bo Yoo and Mauricio Narvaez
The amount of traffic in data centers has been increasing continuously. To accommodate this traffic, it is essential for data centers to be able to grow incrementally while avoiding bandwidth bottlenecks. One of the data center network topologies that has been proposed to facilitate high bandwidth usage is the full-bisection-bandwidth fat-tree topology, a hierarchical structure that assigns higher weight (in this case, more neighbors) to nodes closer to the root. The rigid structure required by the fat-tree topology restricts the granularity of data center growth, since the number of servers depends on the number of ports available per switch. Therefore, the authors of the paper Jellyfish: Networking Data Centers Randomly propose a new data center network topology: Jellyfish.
The Original Paper
Goals and Motivation
There is a market need for data centers to grow their capacity incrementally to handle the increasing amount of traffic. Since a fat-tree layout is dictated by the number of switch ports available, such a rigid structure inhibits incremental growth. The goal of the paper is to introduce a new data center network topology that allows smooth incorporation of additional servers for future growth. It is also important that this topology promotes efficient utilization of the network. The authors suggest a degree-bounded random graph topology among top-of-rack (ToR) switches, called Jellyfish, to achieve this, and show that this flexible structure allows easy incremental growth while being highly efficient.
In the paper, the authors first show that Jellyfish outperforms other proposed data center network topologies. Jellyfish supports 27% more servers at full capacity than a fat-tree topology of the same size, and it is highly failure resilient. Also, its network capacity is over 91% of that of the best-known, bandwidth-efficient degree-diameter graph. The degree-diameter network gives the best throughput but does not satisfy the incremental network growth criteria. The authors also report that Jellyfish generally has a shorter average path length between servers, which means that this random network is highly connected and therefore efficient. Bisection bandwidth is often considered a measurement of network capacity; it measures the lowest bandwidth spanning two subnetworks of the same size in a network. As shown in Figure 1 below (Figure 2.a in the original paper), Jellyfish networks can support more servers than fat-tree under the same conditions.
Figure 1. Number of servers Jellyfish and fat-tree networks can handle by the normalized bisection bandwidth
Jellyfish’s flexible structure also allows smooth incremental expandability. Their idea is that when one rack of servers is added, a random link in the existing network is removed and the two newly freed ports are linked to the new ToR switch, repeating as needed. The authors explain that because path lengths increase slowly, the bandwidth per server remains high even after large expansions. This shows that expansion is easy and can be done incrementally without significant bandwidth loss as data centers grow; a small sketch of this step follows.
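A minimal sketch of that expansion step, assuming the topology is kept as a dict mapping each switch to the set of its neighboring switches (the function and parameter names here are ours, not the paper's):
import random

def add_switch(adj, new_switch, network_ports):
    # Attach one new ToR switch: repeatedly pick a random existing link
    # (a, b), remove it, and wire both freed ports to the new switch.
    adj[new_switch] = set()
    while network_ports - len(adj[new_switch]) >= 2:
        candidates = [(a, b) for a in adj for b in adj[a]
                      if a < b and new_switch not in (a, b)
                      and new_switch not in adj[a]
                      and new_switch not in adj[b]]
        if not candidates:          # no breakable link left
            break
        a, b = random.choice(candidates)
        adj[a].remove(b); adj[b].remove(a)
        adj[a].add(new_switch); adj[b].add(new_switch)
        adj[new_switch].update((a, b))
    return adj

# Example: grow a toy 4-switch ring by one switch with 2 network-facing ports.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
add_switch(ring, 4, network_ports=2)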
Our Goals and Motivation
Our goal of this project is to first implement a degree bounded random graph Jellyfish network to simulate data center traffic. Then we plan to recreate Figure 2 below (Figure 9 in the original paper). This figure shows that because of Jellyfish’s highly interconnected structure, equal cost multi path (ECMP) routing algorithm does not allow full utilization of the links. Instead they show that k-shortest path routing works much better with Jellyfish. We chose this particular figure because by looking at the network utilization in different routing schemes we can learn these two routing algorithms in detail, compare the two and how one outperforms the other in the context of Jellyfish structure.
We also want to explore the network properties of Jellyfish. The authors of the paper discussed the high bisection bandwidth and short average shortest paths, but we plan to measure additional network properties that can explain the high throughput of Jellyfish. We will generate multiple Jellyfish topologies to measure the average shortest path, the diameter, and the global clustering coefficient of the networks. Since we expect Jellyfish to be highly interconnected, we expect to see a low diameter and a high clustering coefficient (which measures how close the network is to a full clique) across multiple iterations. We will also plot the distribution of betweenness centrality for all nodes in Jellyfish. For Jellyfish to have no bottleneck server and to utilize the network efficiently, it should have a uniform betweenness distribution and no node with unusually high betweenness centrality, such as node H in figure 3 below. Networks like figure 3 could still give a decent bisection bandwidth, because the bottleneck node with high betweenness centrality would be hidden inside one subnetwork, since the two subnetworks have to be of equal size. Looking at the betweenness centrality distribution of the whole network will allow us to see whether nodes like H exist in Jellyfish.
Figure 2. Figure we plan to reproduce as the main portion of our project
Figure 3. An example of a network with a high betweenness centrality node, H. Betweenness centrality measure how many shortest paths includes that particular node or how essential the node is in connecting the network together.
From the project groups who did this project during previous offerings of the course, we learned that Mininet is not a good platform for this project since it cannot handle enough servers for us to correctly simulate the traffic routing algorithms shown in figure 2 above (the main goal of our project). Therefore, we chose to implement everything in Python to generate random networks and then simulate network traffic to analyze the network structure and traffic routing algorithms.
We implemented a random network topology generator to create Jellyfish network structures. The algorithm is very simple, it takes in just three parameters: switch count, ports per switch, and rack height (how many ports on each switch are used for servers instead of other switches). It initializes all the nodes and adds random links until it fails (one of the two nodes is saturated) 100 times in a row – at that point we can assume the network is pretty saturated. Figure 4 below demonstrates two example graphs we generated with default parameters along with their network properties.
Figure 4. Random graphs generated with 686 nodes. Here we show the topology of 2 independent iterations. The network properties and the graphs are generated in Cytoscape from an edge list we output. The color of each node corresponds to its degree: blue nodes have lower degrees (min = 13) and red nodes have higher degrees (max = 29).
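A condensed sketch of that generator in Python (names are ours, and unlike the full script it does not track the server-facing ports explicitly):
import random

def jellyfish(num_switches, ports_per_switch, rack_height, max_fails=100):
    # Each switch reserves rack_height ports for servers; the rest are
    # available for random switch-to-switch links.
    degree_cap = ports_per_switch - rack_height
    adj = {s: set() for s in range(num_switches)}
    fails = 0
    while fails < max_fails:
        a, b = random.sample(range(num_switches), 2)
        if b in adj[a] or len(adj[a]) >= degree_cap or len(adj[b]) >= degree_cap:
            fails += 1              # saturated endpoint or duplicate link
            continue
        adj[a].add(b); adj[b].add(a)
        fails = 0
    return adj

adj = jellyfish(num_switches=686, ports_per_switch=24, rack_height=5)
One small difference from the description above: this sketch also counts picking an already-linked pair as a failed attempt, which still leaves the network saturated when the loop exits.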
Generating Figure 9
Figure 9 in the Jellyfish paper measures the number of paths every link appears on and displays this as a rank vs. number-of-paths plot (where rank runs from the link with the fewest paths to the link with the most). It displays this data for paths generated with ECMP 8-way, ECMP 64-way, and 8-shortest-path routing. The paths come from random permutation traffic: each server sends data to a random other server. We simplified this by sending traffic from switch n to switches n+1, n+2, …, n+(rackHeight). This assumes each server behind a switch contacts a server behind a different switch. The network was already random and the switch's number was completely unrelated to the structure of the network, so we can pick any (rackHeight) other switches and still get good data. To generate the set of paths, we implemented Yen's algorithm for k shortest paths and ran it once per pair of switches, for 64 paths. We then extracted ECMP from this by taking the first set of paths with the same length (capped at 8 and 64), and 8-shortest-paths by taking the first 8 paths in the original list. We then went through each set of paths to count how many times each individual link appeared, in a dictionary that mapped link id to number of occurrences (a sketch of this counting step is shown after Figure 5 below). Finally, we ordered the dictionary entries by value, generated the points (rank of link, number of paths), and fed this to matplotlib to produce the graph below. Note that each run generates a random graph, so the exact values may differ, but all runs show the same trend:
Figure 5. Our recreation of Figure 9 from the Jellyfish paper, using 686 servers with a port count of 24 per switch and a rack height of 5. The Jellyfish paper didn’t specify which parameters they used so we found the ones that created the most similar graph by trial and error.
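The counting step itself is simple once the path sets exist. Here is a sketch of how each series of (link rank, number of paths on that link) points can be produced, given a list of paths where each path is a list of switch ids (the helper names are ours):
from collections import Counter

def link_path_counts(paths):
    # Count how many of the given paths traverse each undirected link.
    counts = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            counts[tuple(sorted((u, v)))] += 1
    return counts

def rank_series(counts, total_links):
    # Links that appear on no path still get a rank, with a count of zero.
    used = sorted(counts.values())
    series = [0] * (total_links - len(used)) + used
    return list(enumerate(series))   # (rank of link, #paths on that link)
Running this once per routing scheme (ECMP-8, ECMP-64, 8-shortest-paths), with total_links taken from the generated topology, and plotting the three series with matplotlib gives the figure above.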
Challenges and Critiques
The largest challenge in reproducing the paper's results was the lack of detail the paper contained. It didn't include concrete numbers for rack height, switch size, etc., so we had to figure those out through trial and error to produce a graph that looked most similar. It didn't explain where the paths used in this calculation came from, other than saying "random permutation traffic", which took a while to decipher. Finally, we couldn't find any concrete description of what exactly goes into calculating ECMP. All papers were analyses of ECMP and its shortcomings, or vague descriptions that referenced ECMP; there was no official "ECMP paper" that outlined what exactly it was, and everything simply said it uses multiple paths of equal length. In our calculations, we interpreted ECMP to include up to N paths whose length is equal to the shortest path. Any more are excluded, and any fewer simply results in fewer paths. Even if there is 1 path of length 3 and 9 paths of length 4, our ECMP calculation returns the single 3-long path.
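To make that interpretation concrete, this is roughly the selection rule we apply to the (up to) 64 paths that Yen's algorithm returns for each switch pair, already sorted from shortest to longest (the helper name is ours):
def select_paths(k_shortest, scheme):
    # '8sp' keeps the first 8 paths regardless of length; 'ecmp8'/'ecmp64'
    # keep only paths tied with the shortest length, capped at 8 or 64.
    if scheme == '8sp':
        return k_shortest[:8]
    cap = 8 if scheme == 'ecmp8' else 64
    shortest_len = len(k_shortest[0])
    return [p for p in k_shortest if len(p) == shortest_len][:cap]
So with one 3-hop path and nine 4-hop paths, both ECMP variants return just the single 3-hop path, while '8sp' returns the 3-hop path plus seven of the 4-hop paths.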
To extend this project even further, we decided to explore the network properties of random (Jellyfish) networks in detail. In particular, we chose to calculate and plot the global/local clustering coefficients, diameter, average shortest paths, and betweenness centralities of Jellyfish networks. Due to the number of connections each switch can make (which comes from the number of ports on the switches), random generation under these hardware constraints seems to consistently produce networks with similar topological structure, which can explain why Jellyfish performs consistently even though it is created randomly. We ran all the experiments below with our default parameters (686 switches, a rack height of 12, and a switch size of 48).
The global clustering coefficient measures how close the network is to a full clique, or fully connected network. Simply put, it measures what fraction of the connected triplets of nodes in the network are closed triangles. The local clustering coefficient is a per-node measurement of how likely a node and its neighbors are to cluster together or form a full clique: for each node, out of all the edges that could exist among its neighbors (the nodes directly connected to the node in question), it gives the fraction that are actual edges. Originally we thought the network structure would look very close to a clique, since that is the topology that would give the best throughput. Interestingly, as shown in figure 6 below, we find that the global clustering coefficients of Jellyfish networks, while consistent, are pretty low, around 0.05. The local clustering coefficient distributions peak around the 0.01-0.03 range, and no node has a local clustering coefficient above 0.2. Considering that a full clique-like structure would have a clustering coefficient of 1, this shows that Jellyfish networks are actually quite different from cliques.
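For reference, both coefficients can be computed straight from the adjacency sets returned by the generator; a sketch (not the exact code we ran):
from itertools import combinations

def local_clustering(adj, v):
    # Fraction of the possible links among v's neighbors that actually exist.
    nbrs = list(adj[v])
    if len(nbrs) < 2:
        return 0.0
    present = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return present / (len(nbrs) * (len(nbrs) - 1) / 2)

def global_clustering(adj):
    # Closed triplets (each triangle counted three times) over all connected triplets.
    closed = triplets = 0
    for v in adj:
        d = len(adj[v])
        triplets += d * (d - 1) / 2
        closed += sum(1 for a, b in combinations(adj[v], 2) if b in adj[a])
    return closed / triplets if triplets else 0.0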
Diameter and average shortest path are other metrics that represent how well connected a network is. Instead of only considering direct links, these two look at the shortest paths between any two given nodes. The diameter is the longest shortest path in a network. To explain the performance of the Jellyfish network, we expected the diameter to be consistently small, so that traffic between any two nodes needs only a few hops, and the average shortest path to be even lower, so that in the majority of cases two nodes are connected by even fewer hops. As shown in figure 6 below, this is actually the case for Jellyfish: in 5 separate instances, all of the networks have a diameter of 4, while the average shortest path is lower, at just above 2.
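Both values come from an all-pairs breadth-first search over the unweighted switch graph; a compact sketch, assuming the graph is connected:
from collections import deque

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter_and_average(adj):
    diameter, total, pairs = 0, 0, 0
    for src in adj:
        for node, hops in bfs_distances(adj, src).items():
            if node != src:
                diameter = max(diameter, hops)
                total += hops
                pairs += 1
    # Each unordered pair is counted twice, which leaves the average unchanged.
    return diameter, total / pairs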
We briefly discussed betweenness centrality in the Goals and Motivation section, but we define how we calculated it more formally here. Betweenness centrality measures how often a node participates in a shortest path between two other nodes in the network. To calculate this, we first obtain the shortest paths (actual paths, not lengths) between all nodes; the betweenness centrality of a node is then the fraction of those paths that the node is part of without being the source or the target. We then normalize the values and plot the distribution over 5 iterations. Due to normalization we do see some nodes with extreme centrality values, but the distribution is skewed to the left and peaks sharply around 0.25-0.45, which means that there are more nodes with smaller betweenness centralities, as shown in Figure 7. This is what we expected to see, because it means that many nodes have similar betweenness centrality and only a few have very low or very high values. Many shortest paths in the network do not have to go through any particular node, which reduces the chance of a bottleneck and congestion at that node and helps to explain the performance of Jellyfish.
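Computing this for every pair is the expensive part (it is the bottleneck step mentioned in the setup notes below). The sketch here uses Brandes' algorithm for unweighted graphs, which yields the standard betweenness scores; our own script used a slower variant that enumerates the paths explicitly, and its normalization may differ:
from collections import deque

def betweenness(adj):
    # Brandes' algorithm: accumulate pair dependencies instead of
    # materializing every shortest path.
    bc = {v: 0.0 for v in adj}
    for s in adj:
        order = []
        preds = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    preds[w].append(u)
        delta = {v: 0.0 for v in adj}
        while order:
            w = order.pop()
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc   # halve for undirected graphs, then normalize as desired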
Although we thought about including the network properties of a full clique as a baseline, the values are trivial and were not worth the computing power to generate. A full clique has global/local clustering coefficients of 1 (for every node and for the entire network), a diameter of 1, and an average shortest path of 1, and betweenness centrality would not be meaningful because for any two nodes the shortest path is a direct link and never includes another node.
Figure 6. Global clustering coefficient (top left), local clustering coefficient (top right), diameter (bottom left), average shortest paths (bottom right) plot of 5 iterations of Jellyfish networks. The top left plot, bottom left plot, and bottom right plot simply plot the Global clustering coefficient, diameter, and average shortest paths values, respectively, and the top right plot is a distribution plot of local clustering coefficients where each bar color represents each iteration.
Figure 7. Distribution plot of betweenness centrality. Each color represent each iteration
In order to run our code on a Google Cloud VM and open the images, you must allow X11 forwarding from your local computer. The following directions are intended for Mac users. If you want to skip the X11 forwarding setup (you won’t be able to display the images from the terminal), simply use the “SSH” button provided by Google Cloud Platform when you create an instance.
1. Open a VM instance in Google Cloud Platform Console:
We used 8 vCPUs, 52 GB memory, and the Debian GNU/Linux 8 (jessie) image, with defaults for everything else. If you are not using X11 forwarding, open the instance using the SSH button provided on the VM page and move to step 8.
2. Install Google Cloud SDK in your local terminal:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
3. Generate a key-pair for authentication. In your local terminal:
ssh-keygen -t rsa -f ~/.ssh/my-ssh-key-jellyfish -C jellyfish #no passphrase
chmod 400 ~/.ssh/my-ssh-key-jellyfish
Go to the metadata page in Google Cloud Platform Console -> SSH Keys -> Edit -> Add item. Where it says “enter entire key data”, enter the output of the following command (the public key generated above):
cat ~/.ssh/my-ssh-key-jellyfish.pub
4. Make sure you have X11 downloaded in your local computer (https://support.apple.com/kb/DL1605?locale=en_US)
Add these three lines:
Restart the shell
5. Open your Google Cloud instance using the following command:
ssh -i ~/.ssh/my-ssh-key-jellyfish jellyfish@[EXTERNAL IP ADDRESS] #can be found on the Google Cloud Platform Page
6. Download xauth and edit /etc/ssh/sshd_config:
sudo apt-get install xauth
sudo vim /etc/ssh/sshd_config
Add these two lines:
On the same page make sure this line is uncommented:
sudo /etc/init.d/ssh restart # restart the shell
7. Relog-in to the instance using -X flag:
ssh -X -C -i ~/.ssh/my-ssh-key-jellyfish jellyfish@[EXTERNAL IP ADDRESS]
8. Install dependencies:
sudo apt-get update
sudo apt-get install git-core
sudo apt-get install python-matplotlib
sudo apt-get install build-essential checkinstall libx11-dev libxext-dev zlib1g-dev libpng12-dev libjpeg-dev libfreetype6-dev libxml2-dev
# the next one is only if you are using X11 forwarding
sudo apt-get install imagemagick
9. Clone our repository
10. To see detail of how to run our code run:
sh run.sh h
To run our code with default parameters without running network analysis
sh run.sh d
To run our code with default parameters and run the network analysis [full analysis]
sh run.sh n
Our script with the full network analysis will take 4-5 hours to run. Therefore, we recommend running the script in the background. To do so, use the following command instead. This command will forward all standard output to a ‘log.txt’ file. The betweenness centrality calculation is the bottleneck step; the rest of the script runs in less than 5 minutes.
screen python main.py -a True -f True -i 5 -t 100 # then in screen press control A control D to detach from screen
To list all running screen sessions
screen -ls
To reattach to a screen
screen -r [pid] # do not need to enter pid if there is only one process running
To run our code with smaller parameters and run the network analysis [fast]. This takes around 1 min to run.
sh run.sh s
To run our code with your own parameters directly, follow the instructions shown when you run
sh run.sh h
python main.py -h
11. Output explanations:
If you run our script without doing the network analyses it will only generate one figure:
figure_9.png # our recreated version of figure 9 in the original Jellyfish paper
If you run our script with the network analyses it will generate these text files and figures:
figure_9.png # our recreated version of figure 9 in the original Jellyfish paper
global_clusc_results.txt #numeric results of global clustering coefficients
Global_CC.png #graph of global clustering coefficient results
local_clusc_*_results.txt #numeric results of local clustering coefficient, where * represents the network number (iteration)
Local_CC.png #graph of local clustering coefficient results
diameter_results.txt #numeric results of diameter
Diameter.png #graph of diameter results
avg_shortest_paths_results.txt #numeric results of average shortest paths
Avg_Shortest_Paths.png #graph of average shortest paths results
betwc_*_results.txt #numeric results of betweenness centrality
Betweenness.png #graph of betweenness centrality results
If you are using X11 forwarding, you can use the following command to view the figures
display [filename] #e.g. figure_9.png
A. Singla, C. Hong, L. Popa, and P. B. Godfrey. Jellyfish: Networking Data Centers Randomly. In Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI 12), pages 225-238, San Jose, CA, 2012. USENIX.
F. E. Faisal and T. Milenkovic. Dynamic networks reveal key players in aging. Bioinformatics, 30(12):1721-1729, 2014.
The future of technology always has roots in the past. And the past is indeed long in the case of virtualization, a technology that is reshaping today's IT industry and will likely play a huge role in the building of next-generation data centers. Few people are more aware of that history than Jim Rymarczyk, who joined IBM as a programmer in the 1960s just as the mainframe giant was inventing virtualization.
Rymarczyk, still at Big Blue today as an IBM fellow and chief virtualization technologist, recalls using CP-67 software, one of IBM's first attempts at virtualizing mainframe operating systems. CP-67 and its follow-ups launched the virtualization market, giving customers the ability to greatly increase hardware utilization by running many applications at once. The partitioning concepts IBM developed for the mainframe eventually served as inspiration for VMware, which brought virtualization to x86 servers in 1999.
“Back in the mid-60s, everyone was using key punches and submitting batch jobs,” Rymarczyk says in a recent interview with Network World. “It was very inefficient and machines were quite expensive.”
The problem of implementing a time-sharing system that would let multiple users access the same computer simultaneously was not an easy one to solve. Most engineers were taking traditional batch operating systems and making them more interactive to let multiple users come into the system, but the operating system itself became extremely complex, Rymarczyk explains. IBM's engineering team in Cambridge, Mass., came up with a novel approach that gave each user a virtual machine (VM), with an operating system that doesn't have to be complex because it only has to support one user, he says.
The first stake in the ground was CP-40, an operating system for the System/360 mainframe that IBM's Robert Creasy and Les Comeau started developing in 1964 to create VMs within the mainframe. It was quickly replaced by CP-67, the second version of IBM's hypervisor, which Rymarczyk began using upon joining IBM's Cambridge operations in 1968. The early hypervisor gave each mainframe user what was called a Conversational Monitor System (CMS), essentially a single-user operating system. The hypervisor provided the resources while the CMS supported the time-sharing capabilities. CP-67 enabled memory sharing across VMs while giving each user his own virtual memory space.
Rymarczyk says he got to know several of the CP-67 developers and describes himself as one of their “guinea pigs.” But even in these early days of virtualization, the technology's benefits were clear.
“What was most impressive was how well it worked and how powerful it was,” Rymarczyk says. “It let you provide test platforms for software testing and development so that now all of that activity could be done so much more efficiently. It could be interactive too. You could be running a test operating system. When it failed you could look in virtual memory at exactly what was happening. It made debugging and testing much more effective.”
IBM's first hypervisors were used internally and made available publicly in a quasi-open source model. Virtualization was “an internal research project, experimental engineering and design,” Rymarczyk says. “It wasn't originally planned as a product.”
The hypervisor did become a commercially available product in 1972 with VM technology for the mainframe. But it was an important technology even before its commercial release, Rymarczyk says.
“In the late 1960s it very quickly became a critical piece of IT technology,” he says. “People were using it heavily to do interactive computing, to develop programs. It was a far more productive way to do it, rather than submit batch jobs.”
When Rymarczyk joined IBM on a full-time basis he was working on an experimental time-sharing system, a separate project that was phased out in favor of the CP-67 code base. CP-67 was more flexible and efficient in terms of deploying VMs for all kinds of development scenarios, and for consolidating physical hardware, he says.
While Rymarczyk didn't invent virtualization, he has played a key role in advancing the technology over the past four decades. A graduate of Massachusetts Institute of Technology in electrical engineering and computer science, Rymarczyk worked for IBM in Cambridge until 1974, when he transferred to the Poughkeepsie, N.Y., lab, where he stayed for two decades.
In the early 1990s, Rymarczyk helped develop Parallel Sysplex, an IBM technology that lets customers build clusters of as many as 32 mainframe systems to share workloads and ensure high availability. He was also one of the lead designers of Processor Resource/Systems Manager (PR/SM), which let users logically slice a single processor into multiple partitions.
In 1994, Rymarczyk transferred to IBM's lab in Austin, Texas, as part of an effort to bring mainframe technology and expertise to IBM Power systems. This helped spur the creation of a hypervisor for IBM's Power-based servers in 1999. Rymarczyk is still based in Austin, and has no plans to leave IBM.
As chief virtualization technologist, “my main focus now is looking at the bigger picture of IT complexity and cost, and how we can leverage virtualization as well as other technologies to get cost and complexity under control,” he says. “We just can't afford to keep doing IT the way we do it today.”
Rymarczyk watched with interest as VMware adapted the concepts behind IBM's virtualization technology to x86 systems. In some ways, VMware's task was more difficult than IBM's because the Intel and AMD x86 processors used in most corporate data centers were not built with virtualization in mind. With the mainframe, IBM has total control over both the hardware and virtualization software, but VMware had to overcome the idiosyncrasies of x86 hardware developed by other vendors.
Like IBM, “VMware is creating a virtual machine for every user. But they started before there was any hardware assist. It turns out the x86 architecture has some nasty characteristics,” Rymarczyk says. To run Windows in a VM on an x86 platform, VMware had to intercept “difficult” instructions and replace them, he says.
“The x86 architecture had some things that computer scientists would really frown upon,” he says. “Intel now has put in some hardware features to make it easier. They have started going down a similar path to what we did in the 1960s.”
While there was a clear need for virtualization on the mainframe in the 1960s, the idea of building hypervisors for new platforms was “effectively abandoned during the 1980s and 1990s when client-server applications and inexpensive x86 servers and desktops led to distributed computing,” according to a short history of virtualization written by VMware.
In the 1980s and early 1990s, x86 servers lacked the horsepower to run multiple operating systems, and they were so inexpensive that enterprises would deploy dedicated hardware for each application without a second thought, Rymarczyk says. But chip performance has increased so dramatically that the typical Windows machine needs less than 10% of the processing power actually delivered by a server today, he says.
That's one of the reasons x86 virtualization has become so important, but it still lags significantly behind the technology available on IBM's mainframes and Power systems, in Rymarczyk's opinion. One reason is that with mainframes and Power servers, virtualization isn't an optional add-on – it's part of the system's firmware. “It's sort of routine for customers on our Power servers to be running 40 or 50 virtual machines or LPARs [logical partitions] concurrently, and many of these virtual machines may be mission critical,” he says.
Simply creating VMs is just the tip of the iceberg, though. Rymarczyk says tomorrow's data center “needs robust I/O virtualization, which we've had on the mainframe for decades.” But he does credit VMware with being the first to introduce live migration, the ability to move a VM from one physical host to another without suffering downtime. Live migration is a key enabler of cloud computing because it helps ensure high availability and gives IT pros extra flexibility in the deployment of VMs.
While IBM is a major producer of x86 servers, Big Blue has no plans to develop its own x86 hypervisor. But IBM is trying to position itself as one of the leaders in using virtualization technology to make tomorrow's data center more scalable and efficient.
“You're going to see the hypervisor on x86 essentially become free and there will be multiple choices,” Rymarczyk says. “Open source, VMware, Microsoft, maybe even something from Intel that comes with the platform. There's little reason [for IBM] to invest in trying to make money by building a better [x86] hypervisor. Where the real opportunity exists in adding value for data centers is much higher up the stack.”
IBM and VMware have advanced similar concepts that leverage virtualization technologies to aggregate data center resources into small numbers of logical computing pools that can be managed from single consoles. VMware just announced vSphere, which it calls a “cloud operating system,” while Rymarczyk at IBM came up with “Ensembles.” Similar to Parallel Sysplex on the mainframe, Ensembles seeks to pool together compatible servers and automatically move virtual resources around the pool as needs change.
Rymarczyk is working with IBM's Tivoli software team to develop architectures that will lead to more dynamic and responsive data centers.
“Today's data center tends to be ad hoc and rigid, with lots of constraints,” Rymarczyk says. “We are working on development of architectures that will make the entire data center much simpler. It's largely management software that is going to make the difference.”
Creating A Ghost Train: A Visual Display for Preston Hall
Preston Hall is a 19th century mansion with a rich history dating back to England's early 1500s. After losing its owners during the English Civil War, these gorgeous grounds passed through years of building additions, decay and refurbishment before finally finding their glory in 1953 as a museum, one that now holds nearly 90,000 antiquities -- ranging from historic weapons, toys, and costumes to one of only three Georges de la Tour paintings in Britain today, The Dice Players.
Close to the industrial city of Middlesbrough, this Victorian beauty has seen centuries of change, but it was shipping magnate and Member of Parliament Robert Ropner who created the home befitting of his rising status in 1880s society. Just imagine the ghosts that must walk there, and the memories that echo through the hallways -- emanate from the relics.
We were approached earlier this year by a regular client, RS Displays, to create the audio visual displays for this new museum development in the North East of England.
The project had seven AV exhibits, but one in particular was more challenging and ambitious than the others. We were asked to create a ghost train that would smash out of a bookshelf within a room! The train had to be a replica of the first passenger carrying train in the world, one that had passed through the grounds on its maiden journey -- George Stephenson's Locomotion No.1.
The room was being dressed to re-create its original use; a library, and the books would have to be modelled in 3D space. The train also had to carry passengers, all of which would be dressed in period costume and makeup. The client also asked us to recommend appropriate hardware for playback.
After looking at some of the drawings that RS Displays design team had created, we set to work; travelling to a nearby railway museum to photograph a rebuilt model of the train. Using these photographs, we then started to construct the model in 3D space using LightWave 3D. Luckily, we also had some footage from 1925 that showed the (nearly wrecked) train and how it moved.
Photo of original Locomotion No.1 at Darlington Railway Museum.
Original concept design for Library room.
Screen grabs from LightWave 3D showing the train model.
Pollen Technical Train Loop. First stage animation prepared for client to show motion of train and its layering.
Once the modelling was well underway, we started filming the passengers. Rather than using actors, the client team wanted to use museum staff so that they could be seen in the exhibit whilst guiding visitors. The staff came to our studio, with costumes hired from a nearby theatre supplies company. We tried to position them at the appropriate angle and perspective so they could be composited into the moving train. For the shoot, a Sony PMW-EX3 was used, captured directly via HD-SDI into our main Blackmagic Design edit suite.
Once we had the footage for the passengers, we had the tricky job of capturing the driver at the correct angle and perspective, whilst making it appear he was sitting on the driver's seat. The shoot again took place in our studio, using the PMW-EX3 with the same capture into our Blackmagic Design suite. The footage was captured to a networked G-Tech RAID device, and then backed up to another machine's HDD.
Keying and compositing then took place using The Foundry's KEYLIGHT plugin within After Effects CS6. Masking was used for each character to ensure the key remained as clean as possible.
The first shots of the train after the material skins had been added. These were sent at regular intervals to the client team to ensure that we didn't waste valuable production time.
Once the client team had signed off the train and its motion, we started to compose the passengers and drivers. The train's motion was rendered from LightWave, taking around 36 hours. This was then imported to AE CS6, and the characters were tracked into the train's motion. After this portion of our work had been approved, we began colouring and adding lighting to the scene. The clients requested a cloudy, dark sky for the background, which we created in AE using Trapcode Particular.
The cloudy, dark sky was created in AE using Trapcode Particular.
Next we started to design the books. Unfortunately, we couldn't use skins from authentic books; we had to design them to look exactly the same as the printed themes for the Library.
Three different sizes of book were created; once the designs for the book skins were sent to us, we added them within LightWave.
Next, we gathered together sounds created from a train that was very similar in design and technology to Locomotion No.1 and used these to create the sound mix for the video.
After previous experiences with video walls on conference work, and the small space available for rear projection, we knew that a four-screen video wall was the best solution. The video wall would take up less space, would not need bulbs replacing every year, and would have no light drop off in a very bright environment. We recommended that RS Displays purchase the Samsung SM460UT-2. These are HD displays that come with inbuilt software that allows each screen to understand its position within a wall up to 4x4 in size.
RS Displays then purchased the brackets for the screens and built the supporting frame themselves; this was an impressive feat as the screens had to line up perfectly, with an access hatch underneath and have adequate strength. For playback, we decided to use a Frame Jazz media player which was capable of outputting 1080p images via HDMI, with a separate sound output via a stereo phono plug. The media player is programmed to play on a 3 minute loop, meaning that visitors sometimes don't get to see the motion.
The video on the four screen displays within RS Displays workshops before installation.
After installation, with the themed graphics for the library installed in the room.
After colour balancing had taken place on the video wall, the display fit in perfectly within the room.
After completion, the client was very impressed. Our workflow was quick and allowed us to send updates and still frames to the client team at regular intervals. Graphics and images were often sent to us in Mac format (our edit suites are all custom-built PCs) without any problems at all. Capturing was really easy using the Blackmagic HD Extreme 3D, and allowed us to review footage and make a quick keying pass before moving on to the next shot. The Adobe CS6 software was fantastic. We had used FCP and Avid before, but have gradually moved all of our suites to Adobe -- and the latest CS6 release has been great. The Dynamic Link tool is fantastic for use within post, allowing us to shuttle between Premiere, After Effects and Photoshop with a single file.
Pollen Studio's Ghost Train: a shorter loop of the full version, which runs at 3 minutes -- 2m40s of which is the static shot of books on the shelf, to fit in with the themed background.
Pollen Studio was started in 1973 as a sound recording studio. Since then, they have moved into moving image, hardware supply and consultancy, interactive production, app design and production, live event management and much more.
Agua Caliente, meaning hot water, is a unique 101-acre park with a perennial warm spring, located on the far northeast side of Tucson. Literally an oasis in the desert, Agua Caliente contains spring-fed ponds that support diverse wildlife and fish populations, as well as attract many of the 29 bat species known from Arizona. By partnering with our local host at Pima County Natural Resources, Parks and Recreation (NRPR), we will highlight park resources at Agua Caliente and demonstrate protocols for implementing bat inventory and monitoring programs with a special focus on western bat species and habitats.
The workshop combines indoor classroom lectures and discussions with outdoor field outings. Participants receive an introduction to the use of SonoBat software for conducing acoustic monitoring and inventories as well as a comprehensive understanding of common echolocation call characteristics used for species identification. Guided classroom demonstrations and hands-on experience with equipment in the field will acquaint participants with a full range of methods, techniques, and technologies available for acoustic analysis. See below for a complete list of lecture and discussion topics, demonstrations, and evening field activities. Daily goals and objectives for the course are described more fully at the bottom of this page. A detailed agenda will be provided to all registered participants prior to the course.
The SonoBat Field Techniques Workshop is open to biologists and naturalists from federal, state, or local agencies, college and/or graduate students, and other professionals or enthusiasts with a desire to learn more about full-spectrum echolocation recording and bat call analysis using SonoBat software.
One session: April 15-18, 2011 (Friday-Monday). Class size: Limited to 20 participants. Location: Agua Caliente Regional Park, Tucson, AZ
Joe Szewczak, B.S.E. (1980) Duke University, Ph.D. (1991) Brown University, is an Associate Professor at Humboldt State University in Arcata, CA. His research has investigated the physiological capabilities of bats and other small mammals, from cold hibernative torpor to the intense demands of flight and high altitude, and the physiological ecology of bats. His teaching includes Using SonoBat for Non-invasive Bat Monitoring for the University of California, Biology of the Chiroptera at Humboldt State University, and The Ecology and Conservation of California Bats through San Francisco State University. Joe has also taught acoustic monitoring workshops for BCI and other groups in California, Oregon, Arizona, Washington, South Dakota, Kentucky, and Pennsylvania. He is the developer of SonoBat software to analyze and interpret bat echolocation calls and is currently developing automated bird and bat acoustic monitoring and identification methods for the Department of Defense (SERDP) and other agencies.
Janet Tyburec, B.A. (1989) Trinity University, a full-time employee at Bat Conservation International, Inc. (BCI), from 1989 thru September 2002, has been involved in the structure and execution of training workshops since the inception of BCI's workshop efforts in 1992. She has been extensively trained by BCI founder, Merlin D. Tuttle. Over the years, she has personally taught over 1,500 wildlife biologists, land managers, and students of conservation in the course of presenting over 100 field workshops. She currently oversees all training and instruction at BCI's Arizona, California, Kentucky, and Pennsylvania locations. She continues to be involved with many aspects of BCI's workshop program and its growth as a contract employee, a position she has held from September 2002 to the present. She has also contracted with other federal and state agencies, including the USDA Forest Service, USDI National Park Service and the Department of Defense to conduct custom training workshops for directors, staff, seasonal employees, and volunteers.
John Chenger, president of Bat Conservation and Management, Inc. (BCM), has worked with the Pennsylvania Game Commission (PGC) to conduct cave and mine assessments and other bat inventories. He has also worked with BCI since 1997 to facilitate training workshops in Arizona, California, Kentucky, and Pennsylvania. He founded BCM in 1999 to address nuisance bat management issues by providing man-made roosts and performing bat-exclusion and bat- proofing services. His company has grown to include seasonal bat roost and habitat surveys, U.S. Fish and Wildlife Service (USF&WS) endangered species compliance inventories, acoustic monitoring studies, and large-scale migratory bat radio-tracking projects. His work has led him to develop and manufacture commercially available survey gear including mist net poles, portable triple-high mist-net sets, harp traps, and bat houses certified by BCI.
Lectures and demonstrations cover a full range of bat echolocation and acoustic monitoring subjects, with a focus on the use of SonoBat software for designing inventory and monitoring programs for bats. Topics will include:
Introduction to bat bio-acoustics, echolocation, and bat detectors
Hands on demonstration with available bat detector models
Bat detector use in the field for active and passive monitoring
Bat monitoring program designs and choosing the right bat detector for the job
Introduction to SonoBat software for recording and signal analysis
Call characteristics for bat identification on the basis of echolocation calls
Auto-classification using SonoBat 3.0, data handling, storage, and interpretation
Evening Field Practicals
Instructors will provide guided, hands-on demonstrations during evening and night-time field practicals. Participants will be split up into small groups for added opportunity for individual instruction. Topics will include:
Active monitoring using bat detectors, tips for following bats
Key morphological characteristics to help identify bats on the wing
Passive setups using bat detectors and digital audio recorders (e.g., Pettersson D240x and Samson Zoom)
Passive deployment of direct recording detectors (e.g., AR125, Pettersson D500x, SM2)
Implementing mobile acoustic transects
Addressing power, security, and weatherproofing for long-term, passive deployments
SonoBat Field Techniques Workshop
Location and Directions: Roy Drachman-Agua Caliente Regional Park, 12325 East Roger Road, Tucson, AZ 85749
Please see the orientation map of Tucson and the park here.
More detailed directions and maps are at: http://www.pima.gov/nrpr/eeduc/interpretive/aguacal.htm.
Dates and times: April 15 (Friday) thru April 18 (Monday). Check in starts at noon at the Agua Caliente Visitor Center. The first classroom session begins at 1 PM . Formal presentations will conclude by noon April 18.
Lodging Resources 2-1/2 to 5 miles (10 minutes) from workshop location; +/- $80 per night:
Comfort Suites at Sabino Canyon Tucson
7007 E. Tanque Verde. (800) 424-6423
Ramada Foothills Inn and Suites
6944 E. Tanque Verde Rd. (520) 886-9595
Molino Basin, Mt. Lemmon (Catalina Highway)
Saguaro National Park, East (back-country camping only)
Equipment: Participants need to bring appropriate field gear, including hiking boots, a headlamp with batteries, a personal pack, and a water bottle.
No prior experience with bat detectors or acoustic monitoring is required. Staff will provide several models of time-expansion and direct recording bat detectors, digital audio recorders, and demo copies of SonoBat 2.9 and 3.0 software for participant use during the course. Because we will not be handling bats during this course, rabies pre-exposure vaccination is not required. A complete list of what to bring and how to prepare for the course will be mailed to all registered participants prior to the start of the workshop. Participants should be prepared to bring the following to enhance their workshop experience:
Laptop (Windows XP, Vista, or 7; or Intel Mac OSX) with associated battery pack and/or power cables
Journal or binder for note-taking and storing handouts
Headlamp and other appropriate nighttime field gear
Memory stick 2GB or larger
(Optional) folding table and chair for nighttime recording sessions
Meals: Picnic dinners onsite on the 15th, 16th, and 17th are included with the registration fee. Please indicate below if you require vegetarian meals. All other meals are "on your own". Numerous restaurant lunch options are located nearby for the afternoon break.
Credit default swap
A credit default swap (CDS) is a financial swap agreement that the seller of the CDS will compensate the buyer (usually the creditor of the reference loan) in the event of a loan default (by the debtor) or other credit event. That is, the seller of the CDS insures the buyer against some reference loan defaulting. The buyer of the CDS makes a series of payments (the CDS "fee" or "spread") to the seller and, in exchange, receives a payoff if the loan defaults. It was invented by Blythe Masters from JP Morgan in 1994.
In the event of default the buyer of the CDS receives compensation (usually the face value of the loan), and the seller of the CDS takes possession of the defaulted loan. However, anyone can purchase a CDS, even buyers who do not hold the loan instrument and who have no direct insurable interest in the loan (these are called "naked" CDSs). If there are more CDS contracts outstanding than bonds in existence, a protocol exists to hold a credit event auction; the payment received is usually substantially less than the face value of the loan.
Credit default swaps have existed since 1994, and increased in use in the early 2000s. By the end of 2007, the outstanding CDS amount was $62.2 trillion, falling to $26.3 trillion by mid-year 2010 and reportedly $25.5 trillion in early 2012. CDSs are not traded on an exchange and there is no required reporting of transactions to a government agency. During the 2007–2010 financial crisis the lack of transparency in this large market became a concern to regulators as it could pose a systemic risk. In March 2010, the Depository Trust & Clearing Corporation (see Sources of Market Data) announced it would give regulators greater access to its credit default swaps database.
CDS data can be used by financial professionals, regulators, and the media to monitor how the market views credit risk of any entity on which a CDS is available, which can be compared to that provided by the Credit Rating Agencies. U.S. Courts may soon be following suit.
Most CDSs are documented using standard forms drafted by the International Swaps and Derivatives Association (ISDA), although there are many variants. In addition to the basic, single-name swaps, there are basket default swaps (BDSs), index CDSs, funded CDSs (also called credit-linked notes), as well as loan-only credit default swaps (LCDS). In addition to corporations and governments, the reference entity can include a special purpose vehicle issuing asset-backed securities.
Some claim that derivatives such as CDS are potentially dangerous in that they combine priority in bankruptcy with a lack of transparency. A CDS can be unsecured (without collateral) and be at higher risk for a default.
A CDS is linked to a "reference entity" or "reference obligor", usually a corporation or government. The reference entity is not a party to the contract. The buyer makes regular premium payments to the seller, the premium amounts constituting the "spread" charged by the seller to insure against a credit event. If the reference entity defaults, the protection seller pays the buyer the par value of the bond in exchange for physical delivery of the bond, although settlement may also be by cash or auction.
A default is often referred to as a "credit event" and includes such events as failure to pay, restructuring and bankruptcy, or even a drop in the borrower's credit rating. CDS contracts on sovereign obligations also usually include as credit events repudiation, moratorium and acceleration. Most CDSs are in the $10–$20 million range with maturities between one and 10 years. Five years is the most typical maturity.
An investor or speculator may “buy protection” to hedge the risk of default on a bond or other debt instrument, regardless of whether such investor or speculator holds an interest in or bears any risk of loss relating to such bond or debt instrument. In this way, a CDS is similar to credit insurance, although CDS are not subject to regulations governing traditional insurance. Also, investors can buy and sell protection without owning debt of the reference entity. These "naked credit default swaps" allow traders to speculate on the creditworthiness of reference entities. CDSs can be used to create synthetic long and short positions in the reference entity. Naked CDS constitute most of the market in CDS. In addition, CDSs can also be used in capital structure arbitrage.
A "credit default swap" (CDS) is a credit derivative contract between two counterparties. The buyer makes periodic payments to the seller, and in return receives a payoff if an underlying financial instrument defaults or experiences a similar credit event. The CDS may refer to a specified loan or bond obligation of a “reference entity”, usually a corporation or government.
As an example, imagine that an investor buys a CDS from AAA-Bank, where the reference entity is Risky Corp. The investor—the buyer of protection—will make regular payments to AAA-Bank—the seller of protection. If Risky Corp defaults on its debt, the investor receives a one-time payment from AAA-Bank, and the CDS contract is terminated.
If the investor actually owns Risky Corp's debt (i.e., is owed money by Risky Corp), a CDS can act as a hedge. But investors can also buy CDS contracts referencing Risky Corp debt without actually owning any Risky Corp debt. This may be done for speculative purposes, to bet against the solvency of Risky Corp in a gamble to make money, or to hedge investments in other companies whose fortunes are expected to be similar to those of Risky Corp (see Uses).
If the reference entity (i.e., Risky Corp) defaults, one of two kinds of settlement can occur:
- the investor delivers a defaulted asset to Bank for payment of the par value, which is known as physical settlement;
- AAA-Bank pays the investor the difference between the par value and the market price of a specified debt obligation (even if Risky Corp defaults there is usually some recovery, i.e., not all the investor's money is lost), which is known as cash settlement.
The "spread" of a CDS is the annual amount the protection buyer must pay the protection seller over the length of the contract, expressed as a percentage of the notional amount. For example, if the CDS spread of Risky Corp is 50 basis points, or 0.5% (1 basis point = 0.01%), then an investor buying $10 million worth of protection from AAA-Bank must pay the bank $50,000. Payments are usually made on a quarterly basis, in arrears. These payments continue until either the CDS contract expires or Risky Corp defaults.
All things being equal, at any given time, if the maturity of two credit default swaps is the same, then the CDS associated with a company with a higher CDS spread is considered more likely to default by the market, since a higher fee is being charged to protect against this happening. However, factors such as liquidity and estimated loss given default can affect the comparison. Credit spread rates and credit ratings of the underlying or reference obligations are considered among money managers to be the best indicators of the likelihood of sellers of CDSs having to perform under these contracts.
Differences from insurance
CDS contracts have obvious similarities with insurance, because the buyer pays a premium and, in return, receives a sum of money if an adverse event occurs.
However, there are also many differences, the most important being that an insurance contract provides an indemnity against the losses actually suffered by the policy holder on an asset in which it holds an insurable interest. By contrast a CDS provides an equal payout to all holders, calculated using an agreed, market-wide method. The holder does not need to own the underlying security and does not even have to suffer a loss from the default event. The CDS can therefore be used to speculate on debt objects.
The other differences include:
- The seller might in principle not be a regulated entity (though in practice most are banks);
- The seller is not required to maintain reserves to cover the protection sold (this was a principal cause of AIG's financial distress in 2008; it had insufficient reserves to meet the "run" of expected payouts caused by the collapse of the housing bubble);
- Insurance requires the buyer to disclose all known risks, while CDSs do not (the CDS seller can in many cases still determine potential risk, as the debt instrument being "insured" is a market commodity available for inspection, but in the case of certain instruments like CDOs made up of "slices" of debt packages, it can be difficult to tell exactly what is being insured);
- Insurers manage risk primarily by setting loss reserves based on the Law of large numbers and actuarial analysis. Dealers in CDSs manage risk primarily by means of hedging with other CDS deals and in the underlying bond markets;
- CDS contracts are generally subject to mark-to-market accounting, introducing income statement and balance sheet volatility while insurance contracts are not;
- Hedge accounting may not be available under US Generally Accepted Accounting Principles (GAAP) unless the requirements of FAS 133 are met. In practice this rarely happens.
- to cancel the insurance contract the buyer can typically stop paying premiums, while for CDS the contract needs to be unwound.
- The buyer takes the risk that the seller may default. If AAA-Bank and Risky Corp. default simultaneously ("double default"), the buyer loses its protection against default by the reference entity. If AAA-Bank defaults but Risky Corp. does not, the buyer might need to replace the defaulted CDS at a higher cost.
- The seller takes the risk that the buyer may default on the contract, depriving the seller of the expected revenue stream. More important, a seller normally limits its risk by buying offsetting protection from another party — that is, it hedges its exposure. If the original buyer drops out, the seller squares its position by either unwinding the hedge transaction or by selling a new CDS to a third party. Depending on market conditions, that may be at a lower price than the original CDS and may therefore involve a loss to the seller.
In the future, in the event that regulatory reforms require that CDS be traded and settled via a central exchange/clearing house, such as ICE TCC, there will no longer be 'counterparty risk', as the risk of the counterparty will be held with the central exchange/clearing house.
As is true with other forms of over-the-counter derivative, CDS might involve liquidity risk. If one or both parties to a CDS contract must post collateral (which is common), there can be margin calls requiring the posting of additional collateral. The required collateral is agreed on by the parties when the CDS is first issued. This margin amount may vary over the life of the CDS contract, if the market price of the CDS contract changes, or the credit rating of one of the parties changes. Many CDS contracts even require payment of an upfront fee (composed of "reset to par" and an "initial coupon.").
Another kind of risk for the seller of credit default swaps is jump risk or jump-to-default risk. A seller of a CDS could be collecting monthly premiums with little expectation that the reference entity may default. A default creates a sudden obligation on the protection sellers to pay millions, if not billions, of dollars to protection buyers. This risk is not present in other over-the-counter derivatives.
Sources of market data
Data about the credit default swaps market is available from three main sources. Data on an annual and semiannual basis is available from the International Swaps and Derivatives Association (ISDA) since 2001 and from the Bank for International Settlements (BIS) since 2004. The Depository Trust & Clearing Corporation (DTCC), through its global repository Trade Information Warehouse (TIW), provides weekly data but publicly available information goes back only one year. The numbers provided by each source do not always match because each provider uses different sampling methods. Daily, intraday and real time data is available from S&P Capital IQ through their acquisition of Credit Market Analysis in 2012.
According to DTCC, the Trade Information Warehouse maintains the only "global electronic database for virtually all CDS contracts outstanding in the marketplace."
The Office of the Comptroller of the Currency publishes quarterly credit derivative data about insured U.S commercial banks and trust companies.
Credit default swaps allow investors to speculate on changes in CDS spreads of single names or of market indices such as the North American CDX index or the European iTraxx index. An investor might believe that an entity's CDS spreads are too high or too low, relative to the entity's bond yields, and attempt to profit from that view by entering into a trade, known as a basis trade, that combines a CDS with a cash bond and an interest rate swap.
Finally, an investor might speculate on an entity's credit quality, since generally CDS spreads increase as credit-worthiness declines, and decline as credit-worthiness increases. The investor might therefore buy CDS protection on a company to speculate that it is about to default. Alternatively, the investor might sell protection if it thinks that the company's creditworthiness might improve. The investor selling the CDS is viewed as being “long” on the CDS and the credit, as if the investor owned the bond. In contrast, the investor who bought protection is “short” on the CDS and the underlying credit.
Credit default swaps opened up important new avenues to speculators. Investors could go long on a bond without any upfront cost of buying a bond; all the investor need do was promise to pay in the event of default. Shorting a bond faced difficult practical problems, such that shorting was often not feasible; CDS made shorting credit possible and popular. Because the speculator in either case does not own the bond, its position is said to be a synthetic long or short position.
For example, a hedge fund believes that Risky Corp will soon default on its debt. Therefore, it buys $10 million worth of CDS protection for two years from AAA-Bank, with Risky Corp as the reference entity, at a spread of 500 basis points (=5%) per annum.
- If Risky Corp does indeed default after, say, one year, then the hedge fund will have paid $500,000 to AAA-Bank, but then receives $10 million (assuming zero recovery rate, and that AAA-Bank has the liquidity to cover the loss), thereby making a profit. AAA-Bank, and its investors, will incur a $9.5 million loss minus recovery unless the bank has somehow offset the position before the default.
- However, if Risky Corp does not default, then the CDS contract runs for two years, and the hedge fund ends up paying $1 million, without any return, thereby making a loss. AAA-Bank, by selling protection, has made $1 million without any upfront investment.
Note that there is a third possibility in the above scenario; the hedge fund could decide to liquidate its position after a certain period of time in an attempt to realise its gains or losses. For example:
- After 1 year, the market now considers Risky Corp more likely to default, so its CDS spread has widened from 500 to 1500 basis points. The hedge fund may choose to sell $10 million worth of protection for 1 year to AAA-Bank at this higher rate. Therefore, over the two years the hedge fund pays the bank 2 * 5% * $10 million = $1 million, but receives 1 * 15% * $10 million = $1.5 million, giving a total profit of $500,000.
- In another scenario, after one year the market now considers Risky much less likely to default, so its CDS spread has tightened from 500 to 250 basis points. Again, the hedge fund may choose to sell $10 million worth of protection for 1 year to AAA-Bank at this lower spread. Therefore, over the two years the hedge fund pays the bank 2 * 5% * $10 million = $1 million, but receives 1 * 2.5% * $10 million = $250,000, giving a total loss of $750,000. This loss is smaller than the $1 million loss that would have occurred if the second transaction had not been entered into.
Transactions such as these do not even have to be entered into over the long-term. If Risky Corp's CDS spread had widened by just a couple of basis points over the course of one day, the hedge fund could have entered into an offsetting contract immediately and made a small profit over the life of the two CDS contracts.
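A minimal sketch of the unwind arithmetic in the two scenarios above (illustrative only; it ignores discounting, accrual conventions and counterparty risk, and the function name is ours):
def unwind_pnl(notional, bought_spread, sold_spread, bought_years=2, sold_years=1):
    # Protection bought for bought_years at bought_spread, then offset by selling
    # protection for sold_years at sold_spread (spreads as decimal fractions).
    premiums_paid = bought_years * bought_spread * notional
    premiums_received = sold_years * sold_spread * notional
    return premiums_received - premiums_paid
print(unwind_pnl(10_000_000, 0.05, 0.15))    # 500000.0   (spread widened to 1500 bp)
print(unwind_pnl(10_000_000, 0.05, 0.025))   # -750000.0  (spread tightened to 250 bp)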
Credit default swaps are also used to structure synthetic collateralized debt obligations (CDOs). Instead of owning bonds or loans, a synthetic CDO gets credit exposure to a portfolio of fixed income assets without owning those assets through the use of CDS. CDOs are viewed as complex and opaque financial instruments. An example of a synthetic CDO is Abacus 2007-AC1, which is the subject of the civil suit for fraud brought by the SEC against Goldman Sachs in April 2010. Abacus is a synthetic CDO consisting of credit default swaps referencing a variety of mortgage-backed securities.
Naked credit default swaps
In the examples above, the hedge fund did not own any debt of Risky Corp. A CDS in which the buyer does not own the underlying debt is referred to as a naked credit default swap, estimated to be up to 80% of the credit default swap market. There is currently a debate in the United States and Europe about whether speculative uses of credit default swaps should be banned. Legislation is under consideration by Congress as part of financial reform.
Critics assert that naked CDSs should be banned, comparing them to buying fire insurance on your neighbor’s house, which creates a huge incentive for arson. Analogizing to the concept of insurable interest, critics say you should not be able to buy a CDS—insurance against default—when you do not own the bond. Short selling is also viewed as gambling and the CDS market as a casino. Another concern is the size of the CDS market. Because naked credit default swaps are synthetic, there is no limit to how many can be sold. The gross amount of CDSs far exceeds all “real” corporate bonds and loans outstanding. As a result, the risk of default is magnified leading to concerns about systemic risk.
Financier George Soros called for an outright ban on naked credit default swaps, viewing them as “toxic” and allowing speculators to bet against and “bear raid” companies or countries. His concerns were echoed by several European politicians who, during the Greek Financial Crisis, accused naked CDS buyers of making the crisis worse.
Despite these concerns, Secretary of Treasury Geithner and Commodity Futures Trading Commission Chairman Gensler are not in favor of an outright ban on naked credit default swaps. They prefer greater transparency and better capitalization requirements. These officials think that naked CDSs have a place in the market.
Proponents of naked credit default swaps say that short selling in various forms, whether credit default swaps, options or futures, has the beneficial effect of increasing liquidity in the marketplace. That benefits hedging activities. Without speculators buying and selling naked CDSs, banks wanting to hedge might not find a ready seller of protection. Speculators also create a more competitive marketplace, keeping prices down for hedgers. A robust market in credit default swaps can also serve as a barometer to regulators and investors about the credit health of a company or country.
Despite assertions that speculators are making the Greek crisis worse, Germany's market regulator BaFin found no proof supporting the claim. Some suggest that without credit default swaps, Greece's borrowing costs would be higher. As of November 2011, Greek bonds had a bond yield of 28%.
A bill in the U.S. Congress proposed giving a public authority the power to limit the use of CDSs other than for hedging purposes, but the bill did not become law.
Credit default swaps are often used to manage the risk of default that arises from holding debt. A bank, for example, may hedge its risk that a borrower may default on a loan by entering into a CDS contract as the buyer of protection. If the loan goes into default, the proceeds from the CDS contract cancel out the losses on the underlying debt.
There are other ways to eliminate or reduce the risk of default. The bank could sell (that is, assign) the loan outright or bring in other banks as participants. However, these options may not meet the bank’s needs. Consent of the corporate borrower is often required. The bank may not want to incur the time and cost to find loan participants.
If both the borrower and lender are well-known and the market (or even worse, the news media) learns that the bank is selling the loan, then the sale may be viewed as signaling a lack of trust in the borrower, which could severely damage the banker-client relationship. In addition, the bank simply may not want to sell or share the potential profits from the loan. By buying a credit default swap, the bank can lay off default risk while still keeping the loan in its portfolio. The downside to this hedge is that without default risk, a bank may have no motivation to actively monitor the loan and the counterparty has no relationship to the borrower.
Another kind of hedge is against concentration risk. A bank’s risk management team may advise that the bank is overly concentrated with a particular borrower or industry. The bank can lay off some of this risk by buying a CDS. Because the borrower—the reference entity—is not a party to a credit default swap, entering into a CDS allows the bank to achieve its diversity objectives without impacting its loan portfolio or customer relations. Similarly, a bank selling a CDS can diversify its portfolio by gaining exposure to an industry in which the selling bank has no customer base.
A bank buying protection can also use a CDS to free regulatory capital. By offloading a particular credit risk, a bank is not required to hold as much capital in reserve against the risk of default (traditionally 8% of the total loan under Basel I). This frees resources the bank can use to make other loans to the same key customer or to other borrowers.
Hedging risk is not limited to banks as lenders. Holders of corporate bonds, such as banks, pension funds or insurance companies, may buy a CDS as a hedge for similar reasons. Pension fund example: A pension fund owns five-year bonds issued by Risky Corp with par value of $10 million. To manage the risk of losing money if Risky Corp defaults on its debt, the pension fund buys a CDS from Derivative Bank in a notional amount of $10 million. The CDS trades at 200 basis points (200 basis points = 2.00 percent). In return for this credit protection, the pension fund pays 2% of $10 million ($200,000) per annum in quarterly installments of $50,000 to Derivative Bank. There are two possible outcomes; a numerical sketch of both follows the list below.
- If Risky Corporation does not default on its bond payments, the pension fund makes quarterly payments to Derivative Bank for 5 years and receives its $10 million back after five years from Risky Corp. Though the protection payments totaling $1 million reduce investment returns for the pension fund, its risk of loss due to Risky Corp defaulting on the bond is eliminated.
- If Risky Corporation defaults on its debt three years into the CDS contract, the pension fund would stop paying the quarterly premium, and Derivative Bank would ensure that the pension fund is refunded for its loss of $10 million minus recovery (either by physical or cash settlement — see Settlement below). The pension fund still loses the $600,000 it has paid over three years, but without the CDS contract it would have lost the entire $10 million minus recovery.
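A minimal sketch of these cash flows, ignoring discounting and treating recovery as a single adjustable fraction (the helper function below is illustrative, not a standard pricing routine):
def pension_fund_outcome(notional, spread, years_paid, defaulted, recovery_rate=0.0):
    # Premiums paid to the protection seller so far, plus the CDS payout
    # (notional minus recovery) if the reference entity defaults.
    premiums_paid = years_paid * spread * notional
    cds_payout = notional * (1 - recovery_rate) if defaulted else 0.0
    return premiums_paid, cds_payout
# Risky Corp pays its bonds in full: $1,000,000 of premiums over five years, no payout.
print(pension_fund_outcome(10_000_000, 0.02, 5, defaulted=False))   # (1000000.0, 0.0)
# Risky Corp defaults after three years: $600,000 of premiums paid, $10M payout (zero recovery assumed).
print(pension_fund_outcome(10_000_000, 0.02, 3, defaulted=True))    # (600000.0, 10000000.0)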
In addition to financial institutions, large suppliers can use a credit default swap on a public bond issue or a basket of similar risks as a proxy for its own credit risk exposure on receivables.
Although credit default swaps have been highly criticized for their role in the recent financial crisis, most observers conclude that using credit default swaps as a hedging device has a useful purpose.
Capital Structure Arbitrage is an example of an arbitrage strategy that uses CDS transactions. This technique relies on the fact that a company's stock price and its CDS spread should exhibit negative correlation; i.e., if the outlook for a company improves then its share price should go up and its CDS spread should tighten, since it is less likely to default on its debt. However, if its outlook worsens then its CDS spread should widen and its stock price should fall.
Techniques reliant on this are known as capital structure arbitrage because they exploit market inefficiencies between different parts of the same company's capital structure; i.e., mis-pricings between a company's debt and equity. An arbitrageur attempts to exploit the spread between a company's CDS and its equity in certain situations.
For example, if a company has announced some bad news and its share price has dropped by 25%, but its CDS spread has remained unchanged, then an investor might expect the CDS spread to increase relative to the share price. Therefore, a basic strategy would be to go long on the CDS spread (by buying CDS protection) while simultaneously hedging oneself by buying the underlying stock. This technique would benefit in the event of the CDS spread widening relative to the equity price, but would lose money if the company's CDS spread tightened relative to its equity.
An interesting situation in which the inverse correlation between a company's stock price and CDS spread breaks down is during a Leveraged buyout (LBO). Frequently this leads to the company's CDS spread widening due to the extra debt that will soon be put on the company's books, but also an increase in its share price, since buyers of a company usually end up paying a premium.
Another common arbitrage strategy aims to exploit the fact that the swap-adjusted spread of a CDS should trade closely with that of the underlying cash bond issued by the reference entity. Misalignments in spreads may occur due to technical reasons such as:
- Specific settlement differences
- Shortages in a particular underlying instrument
- The cost of funding a position
- Existence of buyers constrained from buying exotic derivatives.
The difference between CDS spreads and asset swap spreads is called the basis and should theoretically be close to zero. Basis trades can aim to exploit any differences to make risk-free profit.
Forms of credit default swaps had been in existence from at least the early 1990s, with early trades carried out by Bankers Trust in 1991. J.P. Morgan & Co. is widely credited with creating the modern credit default swap in 1994. In that instance, J.P. Morgan had extended a $4.8 billion credit line to Exxon, which faced the threat of $5 billion in punitive damages for the Exxon Valdez oil spill. A team of J.P. Morgan bankers led by Blythe Masters then sold the credit risk from the credit line to the European Bank of Reconstruction and Development in order to cut the reserves that J.P. Morgan was required to hold against Exxon's default, thus improving its own balance sheet.
In 1997, JPMorgan developed a proprietary product called BISTRO (Broad Index Securitized Trust Offering) that used CDS to clean up a bank’s balance sheet. The advantage of BISTRO was that it used securitization to split up the credit risk into little pieces that smaller investors found more digestible, since most investors lacked EBRD's capability to accept $4.8 billion in credit risk all at once. BISTRO was the first example of what later became known as synthetic collateralized debt obligations (CDOs).
Mindful of the concentration of default risk as one of the causes of the S&L crisis, regulators initially found CDS's ability to disperse default risk attractive. In 2000, credit default swaps became largely exempt from regulation by both the U.S. Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC). The Commodity Futures Modernization Act of 2000, which was also responsible for the Enron loophole, specifically stated that CDSs are neither futures nor securities and so are outside the remit of the SEC and CFTC.
At first, banks were the dominant players in the market, as CDS were primarily used to hedge risk in connection with its lending activities. Banks also saw an opportunity to free up regulatory capital. By March 1998, the global market for CDS was estimated at about $300 billion, with JP Morgan alone accounting for about $50 billion of this.
The high market share enjoyed by the banks was soon eroded as more and more asset managers and hedge funds saw trading opportunities in credit default swaps. By 2002, investors as speculators, rather than banks as hedgers, dominated the market. National banks in the USA used credit default swaps as early as 1996. In that year, the Office of the Comptroller of the Currency measured the size of the market as tens of billions of dollars. Six years later, by year-end 2002, the outstanding amount was over $2 trillion.
Although speculators fueled the exponential growth, other factors also played a part. An extended market could not emerge until 1999, when ISDA standardized the documentation for credit default swaps. Also, the 1997 Asian Financial Crisis spurred a market for CDS in emerging market sovereign debt. In addition, in 2004, index trading began on a large scale and grew rapidly.
The credit default swap market more than doubled in size each year from $3.7 trillion in 2003. By the end of 2007, the CDS market had a notional value of $62.2 trillion. But notional amount fell during 2008 as a result of dealer "portfolio compression" efforts (replacing offsetting redundant contracts), and by the end of 2008 notional amount outstanding had fallen 38 percent to $38.6 trillion.
Explosive growth was not without operational headaches. On September 15, 2005, the New York Fed summoned 14 banks to its offices. Billions of dollars of CDS were traded daily but the record keeping was more than two weeks behind. This created severe risk management issues, as counterparties were in legal and financial limbo. U.K. authorities expressed the same concerns.
Market as of 2008
Since default is a relatively rare occurrence (historically around 0.2% of investment grade companies default in any one year), in most CDS contracts the only payments are the premium payments from buyer to seller. Thus, although the above figures for outstanding notionals are very large, in the absence of default the net cash flows are only a small fraction of this total: for a 100 bp = 1% spread, the annual cash flows are only 1% of the notional amount.
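The arithmetic in the paragraph above can be made concrete with a short example; the $10 million notional is an assumption chosen for illustration.

```python
# Premium cash flows on a CDS with no credit event, using the 100 bp
# spread from the text and an assumed $10 million notional.
notional = 10_000_000
spread = 0.01                       # 100 bp = 1% per year
annual_premium = notional * spread  # paid by the protection buyer
quarterly_payment = annual_premium / 4
print(annual_premium, quarterly_payment)   # 100000.0 25000.0
```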
Regulatory concerns over CDS
In the days and weeks leading up to Bear's collapse, the bank's CDS spread widened dramatically, indicating a surge of buyers taking out protection on the bank. It has been suggested that this widening was responsible for the perception that Bear Stearns was vulnerable, and therefore restricted its access to wholesale capital, which eventually led to its forced sale to JP Morgan in March. An alternative view is that this surge in CDS protection buyers was a symptom rather than a cause of Bear's collapse; i.e., investors saw that Bear was in trouble, and sought to hedge any naked exposure to the bank, or speculate on its collapse.
In September, the bankruptcy of Lehman Brothers caused close to $400 billion to become payable to the buyers of CDS protection referenced against the insolvent bank. However, the net amount that changed hands was around $7.2 billion. This difference is due to the process of 'netting'. Market participants co-operated so that CDS sellers were allowed to deduct from their payouts the inbound funds due to them from their hedging positions. Dealers generally attempt to remain risk-neutral, so that their losses and gains after big events offset each other.
Also in September, American International Group (AIG) required an $85 billion federal loan because it had been excessively selling CDS protection without hedging against the possibility that the reference entities might decline in value, which exposed the insurance giant to potential losses over $100 billion. The CDS on Lehman were settled smoothly, as was largely the case for the other 11 credit events occurring in 2008 that triggered payouts. And while it is arguable that other incidents would have been as bad or worse if less efficient instruments than CDS had been used for speculation and insurance purposes, the closing months of 2008 saw regulators working hard to reduce the risk involved in CDS transactions.
In 2008 there was no centralized exchange or clearing house for CDS transactions; they were all done over the counter (OTC). This led to subsequent calls for the market to open up in terms of transparency and regulation.
In November 2008 the Depository Trust & Clearing Corporation (DTCC), which runs a warehouse for CDS trade confirmations accounting for around 90% of the total market, announced that it would release market data on the outstanding notional of CDS trades on a weekly basis. The data can be accessed on the DTCC's website.
By 2010, Intercontinental Exchange, through its clearing subsidiaries ICE Trust in New York (launched in 2008) and ICE Clear Europe Limited in London, UK (launched in July 2009), had cleared more than $10 trillion in credit default swaps (Terhune Bloomberg Business Week 2010-07-29). [notes 1] Bloomberg's Terhune (2010) explained how investors seeking high-margin returns use CDS to bet against financial instruments owned by other companies and countries. Intercontinental's clearing houses guarantee every transaction between buyer and seller, providing a much-needed safety net that reduces the impact of a default by spreading the risk, and ICE collects on every trade (Terhune Bloomberg Business Week 2010-07-29). Brookings senior research fellow Robert E. Litan cautioned, however, that "valuable pricing data will not be fully reported, leaving ICE's institutional partners with a huge informational advantage over other traders", and called ICE Trust "a derivatives dealers' club" in which members make money at the expense of nonmembers (Terhune citing Litan in Bloomberg Business Week 2010-07-29; Litan, Derivatives Dealers' Club 2010). Litan conceded that "some limited progress toward central clearing of CDS has been made in recent months, with CDS contracts between dealers now being cleared centrally primarily through one clearinghouse (ICE Trust) in which the dealers have a significant financial interest" (Litan 2010:6). However, "as long as ICE Trust has a monopoly in clearing, watch for the dealers to limit the expansion of the products that are centrally cleared, and to create barriers to electronic trading and smaller dealers making competitive markets in cleared products" (Litan 2010:8).
In 2009 the U.S. Securities and Exchange Commission granted an exemption for Intercontinental Exchange to begin guaranteeing credit-default swaps. The SEC exemption represented the last regulatory approval needed by Atlanta-based Intercontinental. A derivatives analyst at Morgan Stanley, one of the backers for IntercontinentalExchange's subsidiary, ICE Trust in New York, launched in 2008, claimed that the "clearinghouse, and changes to the contracts to standardize them, will probably boost activity". IntercontinentalExchange's subsidiary, ICE Trust's larger competitor, CME Group Inc., hasn’t received an SEC exemption, and agency spokesman John Nester said he didn’t know when a decision would be made.
Market as of 2009
The early months of 2009 saw several fundamental changes to the way CDSs operate, resulting from concerns over the instruments' safety after the events of the previous year. According to Deutsche Bank managing director Athanassios Diplas, "the industry pushed through 10 years worth of changes in just a few months". By late 2008 processes had been introduced allowing CDSs that offset each other to be cancelled. Along with the termination of contracts that had recently paid out, such as those based on Lehman, this had by March reduced the face value of the market to an estimated $30 trillion.
The Bank for International Settlements estimates that outstanding derivatives total $708 trillion. U.S. and European regulators are developing separate plans to stabilize the derivatives market. Additionally, some globally agreed standards fell into place in March 2009, administered by the International Swaps and Derivatives Association (ISDA). Two of the key changes are:
1. The introduction of central clearing houses, one for the US and one for Europe. A clearing house acts as the central counterparty to both sides of a CDS transaction, thereby reducing the counterparty risk that both buyer and seller face.
2. The international standardization of CDS contracts, to prevent legal disputes in ambiguous cases where the payout is unclear.
Speaking before the changes went live, Sivan Mahadevan, a derivatives analyst at Morgan Stanley, one of the backers for IntercontinentalExchange's subsidiary, ICE Trust in New York, launched in 2008, claimed that
"A clearinghouse, and changes to the contracts to standardize them, will probably boost activity. ... Trading will be much easier. ... We'll see new players come to the market because they'll like the idea of this being a better and more traded product. We also feel like over time we'll see the creation of different types of products." (Mahadevan cited in Bloomberg 2009)
In the U.S., central clearing operations began in March 2009, operated by InterContinental Exchange (ICE). A key competitor also interested in entering the CDS clearing sector is CME Group.
In Europe, CDS Index clearing was launched by IntercontinentalExchange's European subsidiary ICE Clear Europe on July 31, 2009. It launched Single Name clearing in December 2009. By the end of 2009, it had cleared CDS contracts worth EUR 885 billion, reducing the open interest to EUR 75 billion.
By the end of 2009, banks had reclaimed much of their market share; hedge funds had largely retreated from the market after the crisis. According to an estimate by the Banque de France, by late 2009 the bank JP Morgan alone had about 30% of the global CDS market.
Government approvals relating to ICE and its competitor CME
The SEC's approval for ICE Futures' request to be exempted from rules that would prevent it clearing CDSs was the third government action granted to Intercontinental in one week. On March 3, its proposed acquisition of Clearing Corp., a Chicago clearinghouse owned by eight of the largest dealers in the credit-default swap market, was approved by the Federal Trade Commission and the Justice Department. On March 5, 2009, the Federal Reserve Board, which oversees the clearinghouse, granted a request for ICE to begin clearing.
Clearing Corp. shareholders including JPMorgan Chase & Co., Goldman Sachs Group Inc. and UBS AG, received $39 million in cash from Intercontinental in the acquisition, as well as the Clearing Corp.’s cash on hand and a 50–50 profit-sharing agreement with Intercontinental on the revenue generated from processing the swaps.
SEC spokesperson John Nester stated:
"For several months the SEC and our fellow regulators have worked closely with all of the firms wishing to establish central counterparties.... We believe that CME should be in a position soon to provide us with the information necessary to allow the commission to take action on its exemptive requests."
Other proposals to clear credit-default swaps have been made by NYSE Euronext, Eurex AG and LCH.Clearnet Ltd. Only the NYSE effort was available for clearing after starting on Dec. 22. As of Jan. 30, no swaps had been cleared by the NYSE's London-based derivatives exchange, according to NYSE Chief Executive Officer Duncan Niederauer.
Clearing house member requirements
Members of the Intercontinental clearinghouse ICE Trust (now ICE Clear Credit) in March 2009 had to have a net worth of at least $5 billion and a credit rating of A or better to clear their credit-default swap trades. Intercontinental said in its statement that all market participants, such as hedge funds, banks or other institutions, were eligible to become members of the clearinghouse as long as they met these requirements.
A clearinghouse acts as the buyer to every seller and seller to every buyer, reducing the risk of a counterparty defaulting on a transaction. In the over-the-counter market, where credit-default swaps are currently traded, participants are exposed to each other in case of a default. A clearinghouse also provides one location for regulators to view traders' positions and prices.
J.P. Morgan losses
In April 2012, hedge fund insiders became aware that the market in credit default swaps was possibly being affected by the activities of Bruno Iksil, a trader for J.P. Morgan Chase & Co. referred to as "the London whale" in reference to the huge positions he was taking. Heavy opposing bets against his positions are known to have been made by traders, including another branch of J.P. Morgan, which purchased the derivatives offered by J.P. Morgan in such high volume. The firm reported major losses of $2 billion in May 2012 in relation to these trades. The disclosure, which resulted in headlines in the media, did not reveal the exact nature of the trading involved, which remained in progress. The item traded, possibly related to CDX IG 9, an index based on the default risk of major U.S. corporations, has been described as a "derivative of a derivative".
Terms of a typical CDS contract
A CDS contract is typically documented under a confirmation referencing the credit derivatives definitions as published by the International Swaps and Derivatives Association. The confirmation typically specifies a reference entity, a corporation or sovereign that generally, although not always, has debt outstanding, and a reference obligation, usually an unsubordinated corporate bond or government bond. The period over which default protection extends is defined by the contract effective date and scheduled termination date.
The confirmation also specifies a calculation agent who is responsible for making determinations as to successors and substitute reference obligations (for example necessary if the original reference obligation was a loan that is repaid before the expiry of the contract), and for performing various calculation and administrative functions in connection with the transaction. By market convention, in contracts between CDS dealers and end-users, the dealer is generally the calculation agent, and in contracts between CDS dealers, the protection seller is generally the calculation agent.
It is not the responsibility of the calculation agent to determine whether or not a credit event has occurred; rather, that is a matter of fact which, pursuant to the terms of typical contracts, must be supported by publicly available information delivered along with a credit event notice. Typical CDS contracts do not provide an internal mechanism for challenging the occurrence or non-occurrence of a credit event and instead leave the matter to the courts if necessary, though actual instances of specific events being disputed are relatively rare.
CDS confirmations also specify the credit events that will give rise to payment obligations by the protection seller and delivery obligations by the protection buyer. Typical credit events include bankruptcy with respect to the reference entity and failure to pay with respect to its direct or guaranteed bond or loan debt. CDS written on North American investment grade corporate reference entities, European corporate reference entities and sovereigns generally also include restructuring as a credit event, whereas trades referencing North American high-yield corporate reference entities typically do not.
Finally, standard CDS contracts specify deliverable obligation characteristics that limit the range of obligations that a protection buyer may deliver upon a credit event. Trading conventions for deliverable obligation characteristics vary for different markets and CDS contract types. Typical limitations include that deliverable debt be a bond or loan, that it have a maximum maturity of 30 years, that it not be subordinated, that it not be subject to transfer restrictions (other than Rule 144A), that it be of a standard currency and that it not be subject to some contingency before becoming due.
The premium payments are generally quarterly, with maturity dates (and likewise premium payment dates) falling on March 20, June 20, September 20, and December 20. Due to the proximity to the IMM dates, which fall on the third Wednesday of these months, these CDS maturity dates are also referred to as "IMM dates".
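The standard payment schedule described above is easy to generate programmatically; the short sketch below does so for a given year, ignoring the business-day adjustments that real contracts apply.

```python
from datetime import date

def cds_payment_dates(year):
    """Standard quarterly CDS premium payment dates for one year:
    the 20th of March, June, September and December (business-day
    adjustments, which real contracts apply, are ignored here)."""
    return [date(year, month, 20) for month in (3, 6, 9, 12)]

print(cds_payment_dates(2009))
# [datetime.date(2009, 3, 20), datetime.date(2009, 6, 20),
#  datetime.date(2009, 9, 20), datetime.date(2009, 12, 20)]
```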
Credit default swap and sovereign debt crisis
The European sovereign debt crisis resulted from a combination of complex factors, including the globalisation of finance; easy credit conditions during the 2002–2008 period that encouraged high-risk lending and borrowing practices; the 2007–2012 global financial crisis; international trade imbalances; real-estate bubbles that have since burst; the 2008–2012 global recession; fiscal policy choices related to government revenues and expenses; and approaches used by nations to bail out troubled banking industries and private bondholders, assuming private debt burdens or socialising losses. The credit default swap market also revealed the beginning of the sovereign crisis.
Since December 1, 2011, the European Parliament has banned naked credit default swaps (CDS) on the debt of sovereign nations.
The definition of restructuring is quite technical but is essentially intended to respond to circumstances where a reference entity, as a result of the deterioration of its credit, negotiates changes in the terms of its debt with its creditors as an alternative to formal insolvency proceedings (i.e. the debt is restructured). During the 2012 Greek sovereign debt crisis, one important issue was whether the restructuring would trigger credit default swap (CDS) payments. European Central Bank and International Monetary Fund negotiators avoided these triggers as they could have jeopardized the stability of major European banks who had been protection writers. An alternative could have been to create new CDS which clearly would pay in the event of debt restructuring. The market would have paid the spread between these and old (potentially more ambiguous) CDS. This practice is far more typical in jurisdictions that do not provide protective status to insolvent debtors similar to that provided by Chapter 11 of the United States Bankruptcy Code. In particular, concerns arising out of Conseco's restructuring in 2000 led to the credit event's removal from North American high yield trades.
Physical or cash
As described in an earlier section, if a credit event occurs then CDS contracts can either be physically settled or cash settled.
- Physical settlement: The protection seller pays the buyer par value, and in return takes delivery of a debt obligation of the reference entity. For example, a hedge fund has bought $5 million worth of protection from a bank on the senior debt of a company. In the event of a default, the bank pays the hedge fund $5 million cash, and the hedge fund must deliver $5 million face value of senior debt of the company (typically bonds or loans, which are typically worth very little given that the company is in default).
- Cash settlement: The protection seller pays the buyer the difference between par value and the market price of a debt obligation of the reference entity. For example, a hedge fund has bought $5 million worth of protection from a bank on the senior debt of a company. This company has now defaulted, and its senior bonds are now trading at 25 (i.e., 25 cents on the dollar) since the market believes that senior bondholders will receive 25% of the money they are owed once the company is wound up. Therefore, the bank must pay the hedge fund $5 million × (100% − 25%) = $3.75 million.
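The two settlement mechanics in the list above reduce to simple arithmetic; the sketch below reproduces the $5 million examples from the text (the function names are illustrative only).

```python
# Settlement arithmetic for the $5 million examples in the list above.
def physical_settlement(notional):
    """Seller pays par in cash and receives deliverable debt of equal
    face value from the buyer."""
    return notional

def cash_settlement(notional, market_price):
    """Seller pays par minus the post-default market price of the debt
    (quoted as a fraction of par)."""
    return notional * (1 - market_price)

print(physical_settlement(5_000_000))     # 5000000: cash leg, against delivered bonds
print(cash_settlement(5_000_000, 0.25))   # 3750000.0, matching the example above
```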
The development and growth of the CDS market has meant that for many companies there is now a much larger outstanding notional of CDS contracts than the outstanding notional value of their debt obligations. (This is because many parties made CDS contracts for speculative purposes, without actually owning any debt that they wanted to insure against default.) For example, at the time it filed for bankruptcy on September 14, 2008, Lehman Brothers had approximately $155 billion of outstanding debt but around $400 billion notional value of CDS contracts had been written that referenced this debt. Clearly not all of these contracts could be physically settled, since there was not enough outstanding Lehman Brothers debt to fulfill all of the contracts, demonstrating the necessity for cash settled CDS trades. The trade confirmation produced when a CDS is traded states whether the contract is to be physically or cash settled.
When a credit event occurs on a major company on which many CDS contracts are written, an auction (also known as a credit-fixing event) may be held to facilitate settlement of a large number of contracts at once, at a fixed cash settlement price. During the auction process participating dealers (e.g., the big investment banks) submit prices at which they would buy and sell the reference entity's debt obligations, as well as net requests for physical settlement against par. A second-stage Dutch auction is held following the publication of the initial midpoint of the dealer markets and of the net open interest to deliver or be delivered actual bonds or loans. The final clearing point of this auction sets the final price for cash settlement of all CDS contracts; all physical settlement requests, as well as matched limit offers resulting from the auction, are then actually settled. According to the International Swaps and Derivatives Association (ISDA), which organised them, auctions have proved an effective way of settling the very large volume of outstanding CDS contracts written on companies such as Lehman Brothers and Washington Mutual. Commentator Felix Salmon, however, questioned in advance ISDA's ability to structure an auction, as defined to date, to set compensation associated with a 2012 bond swap in Greek government debt. For its part, ISDA, in the leadup to a 50% or greater "haircut" for Greek bondholders, issued an opinion that the bond swap would not constitute a default event.
|Date||Name||Final price as a percentage of par|
|2005-06-14||Collins & Aikman - Senior||43.625|
|2005-06-23||Collins & Aikman - Subordinated||6.375|
|2005-10-11||Delta Air Lines||18|
|2006-03-31||Dana Holding Corporation||75|
|2006-11-28||Dura - Senior||24.125|
|2006-11-28||Dura - Subordinated||3.5|
|2008-10-06||Fannie Mae - Senior||91.51|
|2008-10-06||Fannie Mae - Subordinated||99.9|
|2008-10-06||Freddie Mac - Senior||94|
|2008-10-06||Freddie Mac - Subordinated||98|
|2008-11-04||Landsbanki - Senior||1.25|
|2008-11-04||Landsbanki - Subordinated||0.125|
|2008-11-05||Glitnir - Senior||3|
|2008-11-05||Glitnir - Subordinated||0.125|
|2008-11-06||Kaupthing - Senior||6.625|
|2008-11-06||Kaupthing - Subordinated||2.375|
|2008-12-09||Masonite - LCDS||52.5|
|2008-12-17||Hawaiian Telcom - LCDS||40.125|
|2009-01-06||Tribune - CDS||1.5|
|2009-01-06||Tribune - LCDS||23.75|
|2009-01-14||Republic of Ecuador||31.375|
|2009-02-03||Millennium America Inc||7.125|
|2009-02-03||Lyondell - CDS||15.5|
|2009-02-03||Lyondell - LCDS||20.75|
|2009-02-05||Sanitec - 1st Lien||33.5|
|2009-02-05||Sanitec - 2nd Lien||4.0|
|2009-02-09||British Vita - 1st Lien||15.5|
|2009-02-09||British Vita - 2nd Lien||2.875|
|2009-04-21||Charter Communications CDS||2.375|
|2009-04-21||Charter Communications LCDS||78|
|2009-05-13||General Growth Properties||44.25|
|2009-06-09||HLI Operating Corp LCDS||9.5|
|2009-06-10||Georgia Gulf LCDS||83|
|2009-06-11||R.H. Donnelley Corp. CDS||4.875|
|2009-06-12||General Motors CDS||12.5|
|2009-06-12||General Motors LCDS||97.5|
|2009-06-18||JSC Alliance Bank CDS||16.75|
|2009-06-24||RH Donnelley Inc LCDS||78.125|
|2009-07-09||Six Flags CDS||14|
|2009-07-09||Six Flags LCDS||96.125|
|2009-11-10||METRO-GOLDWYN-MAYER INC. LCDS||58.5|
|2009-11-20||CIT Group Inc.||68.125|
|2009-12-16||NJSC Naftogaz of Ukraine||83.5|
|2010-01-07||Financial Guarantee Insurance Company (FGIC)||26|
|2010-04-15||McCarthy and Stone||70.375|
|2010-04-22||Japan Airlines Corp||20.0|
|2010-06-04||Ambac Assurance Corp||20.0|
|2010-07-15||Truvo Subsidiary Corp||3.0|
|2010-09-09||Truvo (formerly World Directories)||41.125|
|2010-09-21||Boston Generating LLC||40.75|
|2010-12-09||Anglo Irish Bank||18.25|
|2010-12-10||Ambac Financial Group||9.5|
|2011-11-29||Dynegy Holdings, LLC||71.25|
|2011-12-09||Seat Pagine Gialle||10.0|
|2012-02-22||Eastman Kodak Co||22.875|
|2012-03-29||ERC Ireland Fin Ltd||0.0|
|2012-05-09||Sino Forest Corp||29.0|
|2012-05-30||Houghton Mifflin Harcourt Publishing Co||55.5|
|2012-06-06||Residential Cap LLC||17.625|
|2015-02-19||Caesars Entmt Oper Co Inc||15.875|
|2015-03-05||Radio Shack Corp||11.5|
|2015-06-23||Sabine Oil Gas Corp||15.875|
|2015-09-17||Alpha Appalachia Hldgs Inc||6|
Pricing and valuation
There are two competing theories usually advanced for the pricing of credit default swaps. The first, referred to herein as the 'probability model', takes the present value of a series of cashflows weighted by their probability of non-default. This method suggests that credit default swaps should trade at a considerably lower spread than corporate bonds. Under the probability model, a credit default swap is priced using a model that takes four inputs:
- the "issue premium",
- the recovery rate (percentage of notional repaid in event of default),
- the "credit curve" for the reference entity and
- the "LIBOR curve".
If default events never occurred the price of a CDS would simply be the sum of the discounted premium payments. So CDS pricing models have to take into account the possibility of a default occurring some time between the effective date and maturity date of the CDS contract. For the purpose of explanation we can imagine the case of a one-year CDS with effective date $t_0$ with four quarterly premium payments occurring at times $t_1$, $t_2$, $t_3$, and $t_4$. If the nominal for the CDS is $N$ and the issue premium is $c$ then the size of the quarterly premium payments is $Nc/4$. If we assume for simplicity that defaults can only occur on one of the payment dates then there are five ways the contract could end:
- either it does not have any default at all, so the four premium payments are made and the contract survives until the maturity date, or
- a default occurs on the first, second, third or fourth payment date.
To price the CDS we now need to assign probabilities to the five possible outcomes, then calculate the present value of the payoff for each outcome. The present value of the CDS is then simply the present value of the five payoffs multiplied by their probability of occurring.
This is illustrated in the following tree diagram where at each payment date either the contract has a default event, in which case it ends with a payment of $N(1-R)$, shown in red, where $R$ is the recovery rate, or it survives without a default being triggered, in which case a premium payment of $Nc/4$ is made, shown in blue. At either side of the diagram are the cashflows up to that point in time with premium payments in blue and default payments in red. If the contract is terminated the square is shown with solid shading.
The probability of surviving over the interval $t_{i-1}$ to $t_i$ without a default payment is $p_i$ and the probability of a default being triggered is $1-p_i$. The calculation of present value, given discount factors $\delta_1$ to $\delta_4$, is then
|Description||Premium Payment PV||Default Payment PV||Probability|
|Default at time $t_1$||$0$||$N(1-R)\delta_1$||$1-p_1$|
|Default at time $t_2$||$-\frac{Nc}{4}\delta_1$||$N(1-R)\delta_2$||$p_1(1-p_2)$|
|Default at time $t_3$||$-\frac{Nc}{4}(\delta_1+\delta_2)$||$N(1-R)\delta_3$||$p_1 p_2 (1-p_3)$|
|Default at time $t_4$||$-\frac{Nc}{4}(\delta_1+\delta_2+\delta_3)$||$N(1-R)\delta_4$||$p_1 p_2 p_3 (1-p_4)$|
|No default||$-\frac{Nc}{4}(\delta_1+\delta_2+\delta_3+\delta_4)$||$0$||$p_1 p_2 p_3 p_4$|
The probabilities $p_1$, $p_2$, $p_3$, $p_4$ can be calculated using the credit spread curve. The probability of no default occurring over a time period from $t$ to $t+\Delta t$ decays exponentially with a time-constant determined by the credit spread, or mathematically $p = \exp(-s(t)\,\Delta t/(1-R))$, where $s(t)$ is the credit spread zero curve at time $t$. The riskier the reference entity the greater the spread and the more rapidly the survival probability decays with time.
To get the total present value of the credit default swap we multiply the probability of each outcome by its present value to give

$PV = (1-p_1)N(1-R)\delta_1 + p_1(1-p_2)\left[N(1-R)\delta_2 - \tfrac{Nc}{4}\delta_1\right] + p_1 p_2(1-p_3)\left[N(1-R)\delta_3 - \tfrac{Nc}{4}(\delta_1+\delta_2)\right] + p_1 p_2 p_3(1-p_4)\left[N(1-R)\delta_4 - \tfrac{Nc}{4}(\delta_1+\delta_2+\delta_3)\right] - p_1 p_2 p_3 p_4\,\tfrac{Nc}{4}(\delta_1+\delta_2+\delta_3+\delta_4)$
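A minimal Python sketch of this probability model, from the protection buyer's perspective, is given below. The flat spread, flat discount curve, recovery rate and notional are hypothetical inputs, and the function name and signature are illustrative only.

```python
import math

def cds_pv(notional, premium, spreads, discounts, recovery, dt=0.25):
    """Present value of a CDS to the protection buyer under the simple
    'probability model' above: defaults can occur only on the quarterly
    payment dates, and the survival probability for each period is
    p_i = exp(-s_i * dt / (1 - R)), derived from the credit spread curve.

    spreads   -- credit spread (as a decimal) for each period
    discounts -- risk-free discount factor for each payment date
    """
    pv = 0.0
    survival = 1.0                       # probability of reaching period i
    premiums_paid_pv = 0.0               # PV of premiums paid before period i
    quarterly = notional * premium * dt  # Nc/4 for dt = 0.25
    for s, d in zip(spreads, discounts):
        p_i = math.exp(-s * dt / (1 - recovery))      # survive period i
        default_leg = notional * (1 - recovery) * d   # N(1-R) * delta_i
        pv += survival * (1 - p_i) * (default_leg - premiums_paid_pv)
        premiums_paid_pv += quarterly * d
        survival *= p_i
    pv -= survival * premiums_paid_pv    # no-default branch: premiums only
    return pv

# Hypothetical inputs: flat 200 bp spread, 1% risk-free rate, 40% recovery,
# 100 bp contractual premium on a $10m notional.
spreads = [0.02] * 4
discounts = [math.exp(-0.01 * 0.25 * i) for i in range(1, 5)]
print(cds_pv(10_000_000, 0.01, spreads, discounts, 0.40))
```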
In the 'no-arbitrage' model proposed by both Duffie and Hull-White, it is assumed that there is no risk-free arbitrage. Duffie uses the LIBOR as the risk free rate, whereas Hull and White use US Treasuries as the risk free rate. Both analyses make simplifying assumptions (such as the assumption that there is zero cost of unwinding the fixed leg of the swap on default), which may invalidate the no-arbitrage assumption. However the Duffie approach is frequently used by the market to determine theoretical prices.
Under the Duffie construct, the price of a credit default swap can also be derived by calculating the asset swap spread of a bond. If a bond has a spread of 100, and the swap spread is 70 basis points, then a CDS contract should trade at 30. However, there are sometimes technical reasons why this will not be the case, and this may or may not present an arbitrage opportunity for the canny investor. The difference between the theoretical model and the actual price of a credit default swap is known as the basis.
Critics of the huge credit default swap market have claimed that it has been allowed to become too large without proper regulation and that, because all contracts are privately negotiated, the market has no transparency. Furthermore, there have been claims that CDSs exacerbated the 2008 global financial crisis by hastening the demise of companies such as Lehman Brothers and AIG.
In the case of Lehman Brothers, it is claimed that the widening of the bank's CDS spread reduced confidence in the bank and ultimately gave it further problems that it was not able to overcome. However, proponents of the CDS market argue that this confuses cause and effect; CDS spreads simply reflected the reality that the company was in serious trouble. Furthermore, they claim that the CDS market allowed investors who had counterparty risk with Lehman Brothers to reduce their exposure in the case of their default.
Credit default swaps have also faced criticism that they contributed to a breakdown in negotiations during the 2009 General Motors Chapter 11 reorganization, because certain bondholders might benefit from the credit event of a GM bankruptcy due to their holding of CDSs. Critics speculate that these creditors had an incentive to push for the company to enter bankruptcy protection. Due to a lack of transparency, there was no way to identify the protection buyers and protection writers.
It was also feared at the time of Lehman's bankruptcy that the $400 billion notional of CDS protection which had been written on the bank could lead to a net payout of $366 billion from protection sellers to buyers (given the cash-settlement auction settled at a final price of 8.625%) and that these large payouts could lead to further bankruptcies of firms without enough cash to settle their contracts. However, industry estimates after the auction suggest that net cashflows were only in the region of $7 billion, because many parties held offsetting positions. Furthermore, CDS deals are marked-to-market frequently. This would have led to margin calls from buyers to sellers as Lehman's CDS spread widened, reducing the net cashflows on the days after the auction.
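The gross figure quoted above follows directly from the auction price; the short check below simply restates the numbers from the text.

```python
# Back-of-the-envelope check of the Lehman figures quoted above:
# roughly $400bn of protection written, auction final price of 8.625.
notional_written = 400e9           # gross notional of CDS referencing Lehman
final_price = 0.08625              # auction final price as a fraction of par
gross_payout = notional_written * (1 - final_price)
print(f"{gross_payout / 1e9:.1f} billion")   # 365.5 billion, i.e. about $366bn
```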
Senior bankers have argued that the CDS market functioned remarkably well during the financial crisis; that CDS contracts have been acting to distribute risk just as was intended; and that it is not CDSs themselves that need further regulation but the parties who trade them.
Some general criticism of financial derivatives is also relevant to credit derivatives. Warren Buffett famously described derivatives bought speculatively as "financial weapons of mass destruction." In Berkshire Hathaway's annual report to shareholders in 2002, he said, "Unless derivatives contracts are collateralized or guaranteed, their ultimate value also depends on the creditworthiness of the counterparties to them. In the meantime, though, before a contract is settled, the counterparties record profits and losses—often huge in amount—in their current earnings statements without so much as a penny changing hands. The range of derivatives contracts is limited only by the imagination of man (or sometimes, so it seems, madmen)."
To hedge the counterparty risk of entering a CDS transaction, one practice is to buy CDS protection on one's counterparty. The positions are marked-to-market daily and collateral passes from buyer to seller or vice versa to protect both parties against counterparty default, but money does not always change hands due to the offset of gains and losses by those who had both bought and sold protection. Depository Trust & Clearing Corporation, the clearinghouse for the majority of trades in the US over-the-counter market, stated in October 2008 that once offsetting trades were considered, only an estimated $6 billion would change hands on October 21, during the settlement of the CDS contracts issued on Lehman Brothers' debt, which amounted to somewhere between $150 billion and $360 billion.
Despite Buffett's criticism of derivatives, in October 2008 Berkshire Hathaway revealed to regulators that it had entered into at least $4.85 billion in derivative transactions. Buffett stated in his 2008 letter to shareholders that Berkshire Hathaway has no counterparty risk in its derivative dealings because Berkshire requires counterparties to make payments when contracts are initiated, so that Berkshire always holds the money. Berkshire Hathaway was a large owner of Moody's stock during the period that it was one of two primary rating agencies for subprime CDOs, a form of mortgage security derivative dependent on the use of credit default swaps.
The monoline insurance companies got involved with writing credit default swaps on mortgage-backed CDOs. Some media reports have claimed this was a contributing factor to the downfall of some of the monolines. In 2009 one of the monolines, MBIA, sued Merrill Lynch, claiming that Merrill had misrepresented some of its CDOs to MBIA in order to persuade MBIA to write CDS protection for those CDOs.
During the 2008 financial crisis, counterparties became subject to a risk of default, amplified with the involvement of Lehman Brothers and AIG in a very large number of CDS transactions. This is an example of systemic risk, risk which threatens an entire market, and a number of commentators have argued that size and deregulation of the CDS market have increased this risk.
For example, imagine that a hypothetical mutual fund had bought some Washington Mutual corporate bonds in 2005 and decided to hedge its exposure by buying CDS protection from Lehman Brothers. After Lehman's default, this protection was no longer active, and Washington Mutual's sudden default only days later would have led to a massive loss on the bonds, a loss that should have been insured by the CDS. There was also fear that Lehman Brothers and AIG's inability to pay out on CDS contracts would lead to the unraveling of the complex interlinked chain of CDS transactions between financial institutions. So far this does not appear to have happened, although some commentators have noted that because the total CDS exposure of a bank is not public knowledge, the fear that one could face large losses or possibly even default themselves was a contributing factor to the massive decrease in lending liquidity during September/October 2008.
Chains of CDS transactions can arise from a practice known as "netting". Here, company B may buy a CDS from company A with a certain annual premium, say 2%. If the condition of the reference company worsens, the risk premium rises, so company B can sell a CDS to company C with a premium of say, 5%, and pocket the 3% difference. However, if the reference company defaults, company B might not have the assets on hand to make good on the contract. It depends on its contract with company A to provide a large payout, which it then passes along to company C.
The problem arises if one of the companies in the chain fails, creating a "domino effect" of losses. For example, if company A fails, company B will default on its CDS contract to company C, possibly resulting in bankruptcy, and company C will potentially experience a large loss due to the failure to receive compensation for the bad debt it held from the reference company. Even worse, because CDS contracts are private, company C will not know that its fate is tied to company A; it is only doing business with company B.
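A toy sketch of this chain follows; the 2% and 5% premiums are the figures from the text, while the notional and the 40% recovery rate are assumptions chosen for illustration.

```python
# Toy model of the chain: B buys protection from A at 2% and sells the
# same protection to C at 5%, pocketing 3% per year while nothing
# defaults, but relying on A's payment to fund its own payout to C.
def b_net_position(notional, reference_defaults, a_performs,
                   premium_from_c=0.05, premium_to_a=0.02, recovery=0.40):
    carry = notional * (premium_from_c - premium_to_a)   # B's annual net carry
    if not reference_defaults:
        return carry
    payout_to_c = notional * (1 - recovery)
    received_from_a = payout_to_c if a_performs else 0.0  # A's failure breaks the chain
    return carry + received_from_a - payout_to_c

print(b_net_position(10_000_000, reference_defaults=False, a_performs=True))   # 300000.0
print(b_net_position(10_000_000, reference_defaults=True, a_performs=False))   # -5700000.0
```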
As described above, the establishment of a central exchange or clearing house for CDS trades would help to solve the "domino effect" problem, since it would mean that all trades faced a central counterparty guaranteed by a consortium of dealers.
Tax and accounting issues
The U.S. federal income tax treatment of CDS is uncertain (Nirenberg and Kopp 1997:1, Peaslee & Nirenberg 2008-07-21:129 and Brandes 2008). [notes 2] Commentators have suggested that, depending on how they are drafted, they are either notional principal contracts or options for tax purposes (Peaslee & Nirenberg 2008-07-21:129), but this is not certain. There is a risk of having CDS recharacterized as different types of financial instruments because they resemble put options and credit guarantees. In particular, the degree of risk depends on the type of settlement (physical/cash and binary/FMV) and trigger (default only/any credit event) (Nirenberg & Kopp 1997:8). And, as noted below, the appropriate treatment for Naked CDS may be entirely different.
If a CDS is a notional principal contract, pre-default periodic and nonperiodic payments on the swap are deductible and included in ordinary income. If a payment is a termination payment, or a payment received on a sale of the swap to a third party, however, its tax treatment is an open question. In 2004, the Internal Revenue Service announced that it was studying the characterization of CDS in response to taxpayer confusion. As the outcome of its study, the IRS issued proposed regulations in 2011 specifically classifying CDS as notional principal contracts, and thereby qualifying such termination and sale payments for favorable capital gains tax treatment. These proposed regulations—which are yet to be finalized—have already been subject to criticism at a public hearing held by the IRS in January 2012, as well as in the academic press, insofar as that classification would apply to Naked CDS.
The thrust of this criticism is that Naked CDS are indistinguishable from gambling wagers, and thus give rise in all instances to ordinary income, including to hedge fund managers on their so-called carried interests, and that the IRS exceeded its authority with the proposed regulations. This is evidenced by the fact that Congress confirmed that certain derivatives, including CDS, do constitute gambling when, in 2000, to allay industry fears that they were illegal gambling, it exempted them from “any State or local law that prohibits or regulates gaming.” While this decriminalized Naked CDS, it did not grant them relief under the federal gambling tax provisions.
The accounting treatment of CDS used for hedging may not parallel the economic effects and instead, increase volatility. For example, GAAP generally require that CDS be reported on a mark to market basis. In contrast, assets that are held for investment, such as a commercial loan or bonds, are reported at cost, unless a probable and significant loss is expected. Thus, hedging a commercial loan using a CDS can induce considerable volatility into the income statement and balance sheet as the CDS changes value over its life due to market conditions and due to the tendency for shorter dated CDS to sell at lower prices than longer dated CDS. One can try to account for the CDS as a hedge under FASB 133 but in practice that can prove very difficult unless the risky asset owned by the bank or corporation is exactly the same as the Reference Obligation used for the particular CDS that was bought.
A new type of default swap is the "loan only" credit default swap (LCDS). This is conceptually very similar to a standard CDS, but unlike "vanilla" CDS, the underlying protection is sold on syndicated secured loans of the Reference Entity rather than the broader category of "Bond or Loan". Also, as of May 22, 2007, for the most widely traded LCDS form, which governs North American single name and index trades, the default settlement method for LCDS shifted to auction settlement rather than physical settlement. The auction method is essentially the same that has been used in the various ISDA cash settlement auction protocols, but does not require parties to take any additional steps following a credit event (i.e., adherence to a protocol) to elect cash settlement. On October 23, 2007, the first ever LCDS auction was held for Movie Gallery.
Because LCDS trades are linked to secured obligations with much higher recovery values than the unsecured bond obligations that are typically assumed the cheapest to deliver in respect of vanilla CDS, LCDS spreads are generally much tighter than CDS trades on the same name.
- Bucket shop (stock market)
- Constant maturity credit default swap
- Credit default option
- Credit default swap index
- CUSIP Linked MIP Code, reference entity code
- Inside Job (2010 film), an Oscar-winning documentary film about the financial crisis of 2007–2010 by Charles H. Ferguson
- Recovery swap
- Toxic security
- Intercontinental Exchange
- Intercontinental Exchange's closest rival among credit default swap (CDS) clearing houses, CME Group (CME), cleared $192 million in comparison to ICE's $10 trillion (Terhune Bloomberg Business Week 2010-07-29).
- The link is to an earlier version of this paper.
- Simkovic, Michael, "Leveraged Buyout Bankruptcies, the Problem of Hindsight Bias, and the Credit Default Swap Solution", Columbia Business Law Review (Vol. 2011, No. 1, pp. 118), 2011.
- Pollack, Lisa (January 5, 2012). "Credit event auctions: Why do they exist?". FT Alphaville. Retrieved January 5, 2012.
- "Chart; ISDA Market Survey; Notional amounts outstanding at year-end, all surveyed contracts, 1987–present" (PDF). International Swaps and Derivatives Association (ISDA). Retrieved April 8, 2010.
- ISDA 2010 MID-YEAR MARKET SURVEY. Latest available a/o 2012-03-01.
- "ISDA: CDS Marketplace :: Market Statistics". Isdacdsmarketplace.com. December 31, 2010. Retrieved March 12, 2012.
- Kiff, John; Jennifer Elliott; Elias Kazarian; Jodi Scarlata; Carolyne Spackman (November 2009). "Credit Derivatives: Systemic Risks and Policy Options" (PDF). International Monetary Fund: IMF Working Paper (WP/09/254). Retrieved April 25, 2010.
- Weistroffer, Christian; Deutsche Bank Research (December 21, 2009). "Credit default swaps: Heading towards a more stable system" (PDF). Deutsche Bank Research: Current Issues. Retrieved April 15, 2010.
- Simkovic, Michael, Secret Liens and the Financial Crisis of 2008.
- Sirri, Erik, Director, Division of Trading and Markets U.S. Securities and Exchange Commission. "Testimony Concerning Credit Default Swaps Before the House Committee on Agriculture October 15, 2008". Retrieved April 2, 2010.
- Partnoy, Frank; David A. Skeel, Jr. (2007). "The Promise And Perils of Credit Derivatives". University of Cincinnati Law Review. 75: 1019–1051. SSRN .
- "Media Statement: DTCC Policy for Releasing CDS Data to Global Regulators". Depository Trust & Clearing Corporation. March 23, 2010. Retrieved April 22, 2010.
- Michael Simkovic (2016). Adler, Barry, ed. Making Fraudulent Transfer Law More Predictable, in Handbook on Corporate Bankruptcy. Edward Elgar. SSRN .
- Mengle, David. "Credit Derivatives: An Overview" (PDF). Economic Review (FRB Atlanta), Fourth Quarter 2007. 92 (4). Retrieved 13 January 2016.
- International Swaps and Derivatives Association, Inc. (ISDA). "24. Product description: Credit default swaps". Retrieved March 26, 2010.
ISDA is the trade group that represents participants in the privately negotiated derivatives industry
- Federal Reserve Bank of Atlanta (April 14, 2008). "Did You Know? A Primer on Credit Default Swaps". Financial Update. 21 (2). Retrieved March 31, 2010.
- Kopecki, Dawn; Shannon D. Harrington (July 24, 2009). "Banning ‘Naked’ Default Swaps May Raise Corporate Funding Costs". Bloomberg. Retrieved March 31, 2010.
- Leonard, Andrew (April 20, 2010). "Credit default swaps: What are they good for?". Salon.com. Salon Media Group. Retrieved April 24, 2010.
- CFA Institute. (2008). Derivatives and Alternative Investments. pg G-11. Boston: Pearson Custom Publishing. ISBN 0-536-34228-8.
- Cox, Christopher, Chairman, U.S. Securities and Exchange Commission. "Testimony Concerning Turmoil in U.S. Credit Markets: Recent Actions Regarding Government Sponsored Entities, Investment Banks and Other Financial Institutions". Senate Committee on Banking, Housing, and Urban Affairs. September 23, 2008. Retrieved March 17, 2009.
- Garbowski, Mark (October 24, 2008). "United States: Credit Default Swaps: A Brief Insurance Primer". Retrieved November 3, 2008.
like insurance insofar as the buyer collects when an underlying security defaults ... unlike insurance, however, in that the buyer need not have an "insurable interest" in the underlying security
- Morgenson, Gretchen (August 10, 2008). "Credit default swap market under scrutiny". Retrieved November 3, 2008.
If a default occurs, the party providing the credit protection — the seller — must make the buyer whole on the amount of insurance bought.
- Frielink, Karel (August 10, 2008). "Are credit default swaps insurance products?". Retrieved November 3, 2008.
If the fund manager acts as the protection seller under a CDS, there is some risk of breach of insurance regulations for the manager.... There is no Netherlands Antilles case law or literature available which makes clear whether a CDS constitutes the ‘conducting of insurance business’ under Netherlands Antilles law. However, if certain requirements are met, credit derivatives do not qualify as an agreement of (non-life) insurance because such an arrangement would in those circumstances not contain all the elements necessary to qualify it as such.
- Kramer, Stefan (April 20, 2010). "Do We Need Central Counterparty Clearing of Credit Default Swaps?" (PDF). Retrieved April 3, 2011.
- Gensler, Gary, Chairman Commodity Futures Trading Commission (March 9, 2010). "Keynote Address of Chairman Gary Gensler, OTC Derivatives Reform, Markit’s Outlook for OTC Derivatives Markets Conference" (PDF). Archived from the original (PDF) on May 27, 2010. Retrieved April 25, 2010.
- "Surveys & Market Statistics". International Swaps and Derivatives Association (ISDA). Retrieved April 20, 2010.
- "Regular OTC Derivatives Market Statistics". Bank for International Settlements. Retrieved April 20, 2010.
- "Trade Information Warehouse Reports". Depository Trust & Clearing Corporation (DTCC). Retrieved April 20, 2010.
- "S&P Capital IQ Announces Acquisition of Credit Market Analysis Limited". S&P Capital IQ. Retrieved July 2, 2012.
- "The Trade Information Warehouse (Warehouse) is the market's first and only centralized global repository for trade reporting and post-trade processing of OTC credit derivatives contracts". Depository Trust & Clearing Corporation. Retrieved April 23, 2010.
- "Publications: OCC's Quarterly Report on Bank Derivatives Activities". Office of the Comptroller of the Currency. Retrieved April 20, 2010.
- Lucas, Douglas; Laurie S. Goodman; Frank J. Fabozzi (May 5, 2006). Collateralized Debt Obligations: Structures and Analysis, 2nd Edition. John Wiley & Sons Inc. p. 221. ISBN 978-0-471-71887-1.
- "SEC charges Goldman Sachs with fraud in subprime case". USA Today. April 16, 2010. Retrieved April 27, 2010.
- Litan, Robert E. (April 7, 2010). "The Derivatives Dealers’ Club and Derivatives Markets Reform: A Guide for Policy Makers, Citizens and Other Interested Parties" (PDF). Brookings Institution. Retrieved April 15, 2010.
- Buiter, Willem (March 16, 2009). "Should you be able to sell what you do not own?". Financial Times. Retrieved April 25, 2010.
- Munchau, Wolfgang. "Time to outlaw naked credit default swaps". Financial Times. Retrieved April 24, 2010.
- Leopold, Les (June 2, 2009). The Looting of America: How Wall Street's Game of Fantasy Finance Destroyed Our Jobs, Our Pensions, and Prosperity, and What We Can Do About It. Chelsea Green Publishing. ISBN 978-1-60358-205-6. Retrieved April 24, 2010.
- Soros, George (March 24, 2009). "Opinion: One Way to Stop Bear Raids". Wall Street Journal. Retrieved April 24, 2010.
- Moshinsky, Ben; Aaron Kirchfeld (March 11, 2010). "Naked Swaps Crackdown in Europe Rings Hollow Without Washington". Bloomberg. Retrieved April 24, 2010.
- Jacobs, Stevenson (March 10, 2010). "Greek Debt Crisis Is At The Center Of The Credit Default Swap Debate". Huffington Post. Retrieved April 24, 2010.
- "E.U. Derivatives Ban Won’t Work, U.S. Says". New York Times. March 17, 2010. Retrieved April 24, 2010.
- Kern, Steffen; Deutsche Bank Research (March 17, 2010). "Short Selling" (PDF). Research Briefing. Retrieved April 24, 2010.
- "Greece Govt Bond 10 Year Acting as Benchmark". Bloomberg.com. March 8, 2012. Retrieved March 12, 2012.
- "Bill H.R. 977". govtrack.us. Retrieved March 15, 2011.
- "OCC 96-43; OCC Bulletin; Subject: Credit Derivatives; Description: Guidelines for National Banks" (txt). Office of the Comptroller of the Currency. August 12, 1996. Retrieved April 8, 2010.
- McDermott, Robert. "The Long Awaited Arrival of Credit Derivatives". Derivatives Strategy, December/January 1997. Retrieved 8 April 2010.
- Miller, Ken (Spring 2009). "Using Letters Of Credit, Credit Default Swaps And Other Forms of Credit Enhancements in Net Lease Transactions" (PDF). Virginia Law & Business Review. 4 (1): 69–78, 80. Retrieved April 15, 2010.
the use of an exotic credit default swap (called a Net Lease CDS), which effectively hedges tenant credit risk but at a substantially higher price than a vanilla swap.
- "Archived copy" (PDF). Archived from the original (PDF) on June 26, 2010. Retrieved 2016-02-08. Chatiras, Manolis, and Barsendu Mukherjee. Capital Structure Arbitrage: Investigation using Stocks and High Yield Bonds. Amherst, MA: Center for International Securities and Derivatives Markets, Isenberg School of Management, University of Massachusetts, Amherst, 2004. Retrieved March 17, 2009.
- Smithson, Charles; David Mengle (Fall 2006). "The Promise of Credit Derivatives in Nonfinancial Corporations (and Why It’s Failed to Materialize)" (PDF). Journal of Applied Corporate Finance. 18 (4): 54–60. doi:10.1111/j.1745-6622.2006.00111.x. Retrieved April 8, 2010.
- Tett, Gillian (2009). Fool's Gold: How Unrestrained Greed Corrupted a Dream, Shattered Global Markets and Unleashed a Catastrophe. Little Brown. pp. 48–67, 87, 303. ISBN 978-0-349-12189-5.
- Philips, Matthew (September 27, 2008). "The Monster That Ate Wall Street". Newsweek. Retrieved April 7, 2010.
- Lanchester, John (June 1, 2009). "Outsmarted". New Yorker. Retrieved April 7, 2010.
- Tett, Gillian. "The Dream Machine: Invention of Credit Derivatives". Financial Times. March 24, 2006. Retrieved March 17, 2009.
- Lanchester, John (June 1, 2009). "Outsmarted". New Yorker. Retrieved April 7, 2010.[dead link]
- Simon, Ellen (October 20, 2008). "Meltdown 101: What are credit default swaps?". USA Today. Retrieved April 7, 2010.
- "Remarks by Chairman Alan Greenspan Risk Transfer and Financial Stability To the Federal Reserve Bank of Chicago's Forty-first Annual Conference on Bank Structure, Chicago, Illinois (via satellite) May 5, 2005". Federal Reserve Board. May 5, 2005. Retrieved April 8, 2010.
- McDermott, Robert. "The Long Awaited Arrival of Credit Derivatives, December–January 1997". Derivatives Strategy. Retrieved 8 April 2010.
The lack of standardized documentation for credit swaps, in fact, could become a major brake on market expansion.
- Ranciere, Romain G. (April 2002). "Credit Derivatives in Emerging Markets" (PDF). IMF Policy Discussion Paper. Retrieved April 8, 2010.
- "ISDA Market Survey, Year-End 2008". Isda.org. Retrieved August 27, 2010.
- Atlas, Riva D. (September 16, 2005). "Trying to Put Some Reins on Derivatives". New York Times. Retrieved April 8, 2010.
- Weithers, Tim. "Credit Derivatives, Macro Risks, and Systemic Risks" (PDF). Economic Review (FRB Atlanta), Fourth Quarter 2007. 92 (4): 43–69. Retrieved April 9, 2010.
- "The level of outstanding credit-derivative trade confirmations presents operational and legal risks for firms" (PDF). Financial Risk Outlook 2006. The Financial Services Authority. Retrieved April 8, 2010.
- "Default Rates". Efalken.com. Retrieved August 27, 2010.
- Colin Barr (March 16, 2009). "The truth about credit default swaps". CNN / Fortune. Retrieved March 27, 2009.
- "Bad news on Lehman CDS". Ft.com. October 11, 2008. Retrieved August 27, 2010.
- "Testimony Concerning Turmoil in U.S. Credit Markets: Recent Actions Regarding Government Sponsored Entities, Investment Banks and Other Financial Institutions (Christopher Cox, September 23, 2008)". Sec.gov. September 23, 2008. Retrieved August 27, 2010.
[Range map: the emu inhabits the areas shaded red.]
The emu (Dromaius novaehollandiae) is the second-largest living bird by height, after its ratite relative, the ostrich. It is endemic to Australia where it is the largest native bird and the only extant member of the genus Dromaius. The emu's range covers most of mainland Australia, but the Tasmanian, Kangaroo Island and King Island subspecies became extinct after the European settlement of Australia in 1788. The bird is sufficiently common for it to be rated as a least-concern species by the International Union for Conservation of Nature.
Emus are soft-feathered, brown, flightless birds with long necks and legs, and can reach up to 1.9 metres (6.2 ft) in height. Emus can travel great distances, and when necessary can sprint at 50 km/h (31 mph); they forage for a variety of plants and insects, but have been known to go for weeks without eating. They drink infrequently, but take in copious amounts of water when the opportunity arises.
Breeding takes place in May and June, and fighting among females for a mate is common. Females can mate several times and lay several clutches of eggs in one season. The male does the incubation; during this process he hardly eats or drinks and loses a significant amount of weight. The eggs hatch after around eight weeks, and the young are nurtured by their fathers. They reach full size after around six months, but can remain as a family unit until the next breeding season. The emu is an important cultural icon of Australia, appearing on the coat of arms and various coins. The bird features prominently in Indigenous Australian mythology.
Emus were first reported as having been seen by Europeans when explorers visited the western coast of Australia in 1696. This was during an expedition led by Dutch captain Willem de Vlamingh who was searching for survivors of a ship that had gone missing two years earlier. The birds were known on the eastern coast before 1788, when the first Europeans settled there. The birds were first mentioned under the name of the "New Holland cassowary" in Arthur Phillip's Voyage to Botany Bay, published in 1789 with the following description:
This is a species differing in many particulars from that generally known, and is a much larger bird, standing higher on its legs and having the neck longer than in the common one. Total length seven feet two inches. The bill is not greatly different from that of the common Cassowary; but the horny appendage, or helmet on top of the head, in this species is totally wanting: the whole of the head and neck is also covered with feathers, except the throat and fore part of the neck about half way, which are not so well feathered as the rest; whereas in the common Cassowary the head and neck are bare and carunculated as in the turkey.
The plumage in general consists of a mixture of brown and grey, and the feathers are somewhat curled or bent at the ends in the natural state: the wings are so very short as to be totally useless for flight, and indeed, are scarcely to be distinguished from the rest of the plumage, were it not for their standing out a little. The long spines which are seen in the wings of the common sort, are in this not observable,—nor is there any appearance of a tail. The legs are stout, formed much as in the Galeated Cassowary, with the addition of their being jagged or sawed the whole of their length at the back part.
The species was named by ornithologist John Latham in 1790 based on a specimen from the Sydney area of Australia, a country which was known as New Holland at the time. He collaborated on Phillip's book and provided the first descriptions of, and names for, many Australian bird species; Dromaius comes from a Greek word meaning "racer" and novaehollandiae is the Latin term for New Holland, so the name can be rendered as "fast-footed New Hollander". In his original 1816 description of the emu, the French ornithologist Louis Jean Pierre Vieillot used two generic names, first Dromiceius and later Dromaius. It has been a point of contention ever since as to which name should be used; the latter is more correctly formed, but the convention in taxonomy is that the first name given to an organism stands, unless it is clearly a typographical error. Most modern publications, including those of the Australian government, use Dromaius, with Dromiceius mentioned as an alternative spelling.
The emu was long classified, with its closest relatives the cassowaries, in the family Casuariidae, part of the ratite order Struthioniformes. However, an alternative classification was proposed in 2014 by Mitchell et al., based on analysis of mitochondrial DNA. This splits off the Casuariidae into their own order, the Casuariformes, and includes only the cassowaries in the family Casuariidae, placing the emus in their own family, Dromaiidae.
Two different Dromaius species were present in Australia at the time of European settlement, and one additional species is known from fossil remains. The insular dwarf emus, D. n. baudinianus and D. n. minor, originally present on Kangaroo Island and King Island respectively, both became extinct shortly after the arrival of Europeans. D. n. diemenensis, another insular dwarf emu from Tasmania, became extinct around 1865. However, the mainland subspecies, D. n. novaehollandiae, remains common. The population of these birds varies from decade to decade, largely being dependent on rainfall; in 2009, it was estimated that there were between 630,000 and 725,000 birds. Emus were introduced to Maria Island off Tasmania, and Kangaroo Island off the coast of South Australia, during the 20th century. The Maria Island population died out in the mid-1990s. The Kangaroo Island birds have successfully established a breeding population.
In 1912, the Australian ornithologist Gregory M. Mathews recognised three living subspecies of emu, D. n. novaehollandiae (Latham, 1790), D. n. woodwardi Mathews, 1912 and D. n. rothschildi Mathews, 1912. However, the Handbook of the Birds of the World argues that the last two of these subspecies are invalid; natural variations in plumage colour and the nomadic nature of the species make it likely that there is a single race in mainland Australia. Examination of the DNA of the King Island emu shows this bird to be closely related to the mainland emu and hence best treated as a subspecies.
The etymology of the common name "emu" is uncertain, but is thought to have come from an Arabic word for large bird that was later used by Portuguese explorers to describe the related cassowary in Australia and New Guinea. Another theory is that it comes from the word "ema", which is used in Portuguese to denote a large bird akin to an ostrich or crane. In Victoria, some terms for the emu were Barrimal in the Dja Dja Wurrung language, myoure in Gunai, and courn in Jardwadjali. The birds were known as murawung or birabayin to the local Eora and Darug inhabitants of the Sydney basin.
The emu is the second tallest bird in the world, only being exceeded in height by the ostrich; the largest individuals can reach up to 150 to 190 cm (59 to 75 in) in height. Measured from the bill to the tail, emus range in length from 139 to 164 cm (55 to 65 in), with males averaging 148.5 cm (58.5 in) and females averaging 156.8 cm (61.7 in). Emus are the fourth or fifth heaviest living bird after the two species of ostrich and two larger species of cassowary, weighing slightly more on average than an emperor penguin. Adult emus weigh between 18 and 60 kg (40 and 132 lb), with an average of 31.5 and 37 kg (69 and 82 lb) in males and females, respectively. Females are usually slightly larger than males and are substantially wider across the rump.
Although flightless, emus have vestigial wings, the wing chord measuring around 20 cm (8 in), and each wing having a small claw at the tip. Emus flap their wings when running, perhaps as a means of stabilising themselves when moving fast. They have long necks and legs, and can run at speeds of 48 km/h (30 mph) due to their highly specialised pelvic limb musculature. Their feet have only three toes and a similarly reduced number of bones and associated foot muscles; emus are unique among birds in that their gastrocnemius muscles in the back of the lower legs have four bellies instead of the usual three. The pelvic limb muscles of emus contribute a similar proportion of the total body mass as do the flight muscles of flying birds. When walking, the emu takes strides of about 100 cm (3.3 ft), but at full gallop, a stride can be as long as 275 cm (9 ft). Its legs are devoid of feathers and underneath its feet are thick, cushioned pads. Like the cassowary, the emu has sharp claws on its toes which are its major defensive attribute, and are used in combat to inflict wounds on opponents by kicking. The toe and claw total 15 cm (6 in) in length. The bill is quite small, measuring 5.6 to 6.7 cm (2.2 to 2.6 in), and is soft, being adapted for grazing. Emus have good eyesight and hearing, which allows them to detect threats at some distance.
The neck of the emu is pale blue and shows through its sparse feathers. They have grey-brown plumage of shaggy appearance; the shafts and the tips of the feathers are black. Solar radiation is absorbed by the tips, and the inner plumage insulates the skin. This prevents the birds from overheating, allowing them to be active during the heat of the day. A unique feature of the emu feather is the double rachis emerging from a single shaft. Both of the rachis have the same length, and the texture is variable; the area near the skin is rather furry, but the more distant ends resemble grass. The sexes are similar in appearance, although the male's penis can become visible when he urinates and defecates. The plumage varies in colour due to environmental factors, giving the bird a natural camouflage. Feathers of emus in more arid areas with red soils have a rufous tint while birds residing in damp conditions are generally darker in hue. The juvenile plumage develops at about three months and is blackish finely barred with brown, with the head and neck being especially dark. The facial feathers gradually thin to expose the bluish skin. The adult plumage has developed by about fifteen months.
The eyes of an emu are protected by nictitating membranes. These are translucent, secondary eyelids that move horizontally from the inside edge of the eye to the outside edge. They function as visors to protect the eyes from the dust that is prevalent in windy arid regions. Emus have a tracheal pouch, which becomes more prominent during the mating season. At more than 30 cm (12 in) in length, it is quite spacious; it has a thin wall, and an opening 8 centimetres (3 in) long.
Distribution and habitat
Once common on the east coast of Australia, emus are now uncommon there; by contrast, the development of agriculture and the provision of water for stock in the interior of the continent have increased the range of the emu in arid regions. Emus live in various habitats across Australia both inland and near the coast. They are most common in areas of savannah woodland and sclerophyll forest, and least common in heavily populated districts and arid areas with annual precipitation of less than 600 millimetres (24 in). Emus predominately travel in pairs, and while they can form large flocks, this is an atypical social behaviour that arises from the common need to move towards a new food source. Emus have been shown to travel long distances to reach abundant feeding areas. In Western Australia, emu movements follow a distinct seasonal pattern – north in summer and south in winter. On the east coast their wanderings seem to be more random and do not appear to follow a set pattern.
Behaviour and ecology
Emus are diurnal birds and spend their day foraging, preening their plumage with their beak, dust bathing and resting. They are generally gregarious birds apart from the breeding season, and while some forage, others remain vigilant to their mutual benefit. They are able to swim when necessary, although they rarely do so unless the area is flooded or they need to cross a river.
Emus begin to settle down at sunset and sleep during the night. They do not sleep continuously but rouse themselves several times during the night. When falling asleep, emus first squat on their tarsi and enter a drowsy state during which they are alert enough to react to stimuli and quickly return to a fully awakened state if disturbed. As they fall into deeper sleep, their neck droops closer to the body and the eyelids begin to close. If there are no disturbances, they fall into a deeper sleep after about twenty minutes. During this phase, the body is gradually lowered until it is touching the ground with the legs folded underneath. The beak is turned down so that the whole neck becomes S-shaped and folded onto itself. The feathers direct any rain downwards onto the ground. It has been suggested that the sleeping position is a type of camouflage, mimicking a small mound. Emus typically awake from deep sleep once every ninety minutes or so and stand upright to feed briefly or defecate. This period of wakefulness lasts for ten to twenty minutes, after which they return to slumber. Overall, an emu sleeps for around seven hours in each twenty-four-hour period. Young emus usually sleep with their neck flat and stretched forward along the ground surface.
The vocalisations of emus mostly consist of various booming and grunting sounds. The booming is created by the inflatable throat pouch; the pitch can be regulated by the bird and depends on the size of the aperture. Most of the booming is done by females; it is part of the courtship ritual, is used to announce the holding of territory and is issued as a threat to rivals. A high-intensity boom is audible 2 kilometres (1.2 mi) away, while a low, more resonant call, produced during the breeding season, may at first attract mates and peaks while the male is incubating the eggs. Most of the grunting is done by males. It is used principally during the breeding season in territorial defence, as a threat to other males, during courtship and while the female is laying. Both sexes sometimes boom or grunt during threat displays or on encountering strange objects.
On very hot days, emus pant to maintain their body temperature, their lungs work as evaporative coolers and, unlike some other species, the resulting low levels of carbon dioxide in the blood do not appear to cause alkalosis. For normal breathing in cooler weather, they have large, multifolded nasal passages. Cool air warms as it passes through into the lungs, extracting heat from the nasal region. On exhalation, the emu's cold nasal turbinates condense moisture back out of the air and absorb it for reuse. As with other ratites, the emu has great homeothermic ability, and can maintain this status from −5 to 45 °C (23 to 113 °F). The thermoneutral zone of emus lies between 10 and 30 °C (50 and 86 °F).
As with other ratites, emus have a relatively low basal metabolic rate compared to other types of birds. At −5 °C (23 °F), the metabolic rate of an emu sitting down is about 60% of that when standing, partly because the lack of feathers under the stomach leads to a higher rate of heat loss when standing from the exposed underbelly.
Emus forage in a diurnal pattern and eat a variety of native and introduced plant species. The diet depends on seasonal availability, with such plants as Acacia, Casuarina and grasses being favoured. They also eat insects and other arthropods, including grasshoppers and crickets, beetles, cockroaches, ladybirds, bogong and cotton-boll moth larvae, ants, spiders and millipedes. This provides a large part of their protein requirements. In Western Australia, food preferences have been observed in travelling emus; they eat seeds from Acacia aneura until the rains arrive, after which they move on to fresh grass shoots and caterpillars; in winter they feed on the leaves and pods of Cassia and in spring, they consume grasshoppers and the fruit of Santalum acuminatum, a sort of quandong. They are also known to feed on wheat, and any fruit or other crops that they can access, easily climbing over high fences if necessary. Emus serve as an important agent for the dispersal of large viable seeds, which contributes to floral biodiversity. One undesirable effect of this occurred in Queensland in the early twentieth century when emus fed on the fruit of prickly pears in the outback. They defecated the seeds in various places as they moved around, and this led to a series of campaigns to hunt emus and prevent the seeds of the invasive cactus from being spread. The cacti were eventually controlled by an introduced moth (Cactoblastis cactorum) whose larvae fed on the plant, one of the earliest examples of biological control.
Small stones are swallowed to assist in the grinding up and digestion of the plant material. Individual stones may weigh 45 g (1.6 oz) and the birds may have as much as 745 g (1.642 lb) in their gizzards at one time. They also eat charcoal, although the reason for this is unclear. Captive emus have been known to eat shards of glass, marbles, car keys, jewellery and nuts and bolts.
Emus drink infrequently but ingest large amounts when the opportunity arises. They typically drink once a day, first inspecting the water body and surrounding area in groups before kneeling down at the edge to drink. They prefer being on firm ground while drinking, rather than on rocks or mud, but if they sense danger, they often stand rather than kneel. If not disturbed, they may drink continuously for ten minutes. Due to the scarcity of water sources, emus are sometimes forced to go without water for several days. In the wild, they often share water holes with kangaroos, other birds and animals; they are wary and tend to wait for the other animals to leave before drinking.
Emus form breeding pairs during the summer months of December and January and may remain together for about five months. During this time, they stay in an area a few kilometres in diameter and it is believed they find and defend territory within this area. Both males and females put on weight during the breeding season, with the female becoming slightly heavier at between 45 and 58 kg (99 and 128 lb). Mating usually takes place between April and June; the exact timing is determined by the climate as the birds nest during the coolest part of the year. During the breeding season, males experience hormonal changes, including an increase in luteinising hormone and testosterone levels, and their testicles double in size.
Males construct a rough nest in a semi-sheltered hollow on the ground, using bark, grass, sticks and leaves to line it. The nest is almost always a flat surface rather than a segment of a sphere, although in cold conditions the nest is taller, up to 7 cm (2.8 in) tall, and more spherical to provide some extra heat retention. When other material is lacking, the bird sometimes uses a spinifex tussock a metre or so across, despite the prickly nature of the foliage. The nest can be placed on open ground or near a shrub or rock. The nest is usually placed in an area where the emu has a clear view of its surroundings and can detect approaching predators.
Female emus court the males; the female's plumage darkens slightly and the small patches of bare, featherless skin just below the eyes and near the beak turn turquoise-blue. The colour of the male's plumage remains unchanged, although the bare patches of skin also turn light blue. When courting, females stride around, pulling their neck back while puffing out their feathers and emitting low, monosyllabic calls that have been compared to drum beats. This calling can occur when males are out of sight or more than 50 metres (160 ft) away. Once the male's attention has been gained, the female circles her prospective mate at a distance of 10 to 40 metres (30 to 130 ft). As she does this, she looks at him by turning her neck, while at the same time keeping her rump facing towards him. If the male shows interest in the parading female, he will move closer; the female continues the courtship by shuffling further away but continuing to circle him.
If a male is interested, he will stretch his neck and erect his feathers, then bend over and peck at the ground. He will circle around and sidle up to the female, swaying his body and neck from side to side, and rubbing his breast against his partner's rump. Often the female will reject his advances with aggression, but if amenable, she signals acceptance by squatting down and raising her rump.
Females are more aggressive than males during the courtship period, often fighting for access to mates, with fights among females accounting for more than half the aggressive interactions during this period. If females court a male that already has a partner, the incumbent female will try to repel the competitor, usually by chasing and kicking. These interactions can be prolonged, lasting up to five hours, especially when the male being fought over is single and neither female has the advantage of incumbency. In these cases, the females typically intensify their calls and displays.
The sperm from a mating is stored by the female and can suffice to fertilise about six eggs. The pair mate every day or two, and every second or third day the female lays one of a clutch of five to fifteen very large, thick-shelled, green eggs. The shell is around 1 mm (0.04 in) thick, but rather thinner in northern regions according to indigenous Australians. The eggs are on average 13 cm × 9 cm (5.1 in × 3.5 in) and weigh between 450 and 650 g (1.0 and 1.4 lb). The maternal investment in the egg is considerable, and the proportion of yolk to albumen, at about 50%, is greater than would be predicted for a precocial egg of this size. This probably relates to the long incubation period which means the developing chick must consume greater resources before hatching. The first verified occurrence of genetically identical avian twins was demonstrated in the emu. The egg surface is granulated and pale green. During the incubation period, the egg turns dark green, although if the egg never hatches, it will turn white from the bleaching effect of the sun.
The male becomes broody after his mate starts laying, and may begin to incubate the eggs before the clutch is complete. From this time on, he does not eat, drink, or defecate, and stands only to turn the eggs, which he does about ten times a day. He develops a brood patch, a bare area of wrinkled skin which is in intimate contact with the eggs. Over the course of the eight-week incubation period, he will lose a third of his weight and will survive on stored body fat and on any morning dew that he can reach from the nest. As with many other Australian birds, such as the superb fairywren, infidelity is the norm for emus, despite the initial pair bond: once the male starts brooding, the female usually wanders off, and may mate with other males and lay in multiple nests; thus, as many as half the chicks in a brood may not be fathered by the incubating male, or even by either parent, as emus also exhibit brood parasitism.
Some females stay and defend the nest until the chicks start hatching, but most leave the nesting area completely to nest again; in a good season, a female emu may nest three times. If the parents stay together during the incubation period, one stands guard over the eggs while the other drinks and feeds within earshot. If the guarding bird perceives a threat during this period, it lies down on top of the nest and tries to blend in with the similar-looking surrounds, then suddenly stands up to confront and scare off the intruder if it comes close.
Incubation takes 56 days, and the male stops incubating the eggs shortly before they hatch. The temperature of the nest rises slightly during the eight-week period. Although the eggs are laid sequentially, they tend to hatch within two days of one another, as the eggs that were laid later experienced higher temperatures and developed more rapidly. During the process, the precocial emu chicks need to develop a capacity for thermoregulation. During incubation, the embryos are kept at a constant temperature but the chicks will need to be able to cope with varying external temperatures by the time they hatch.
Newly hatched chicks are active and can leave the nest within a few days of hatching. They stand about 12 cm (5 in) tall at first, weigh 0.5 kg (17.6 oz), and have distinctive brown and cream stripes for camouflage, which fade after three months or so. The male guards the growing chicks for up to seven months, teaching them how to find food. Chicks grow very quickly and are fully grown in five to six months; they may remain with their family group for another six months or so before they split up to breed in their second season. During their early life, the young emus are defended by their father, who adopts a belligerent stance towards other emus, including the mother. He does this by ruffling his feathers, emitting sharp grunts, and kicking his legs to drive off other animals. He can also bend his knees to crouch over smaller chicks to protect them. At night, he envelops his young with his feathers. As the young emus cannot travel far, the parents must choose an area with plentiful food in which to breed. In captivity, emus can live for upwards of ten years.
There are few native natural predators of emus still alive. Early in its species history it may have faced numerous terrestrial predators now extinct, including the giant lizard Megalania, the thylacine, and possibly other carnivorous marsupials, which may explain their seemingly well-developed ability to defend themselves from terrestrial predators. The main predator of emus today is the dingo, which was originally introduced by Aboriginals thousands of years ago from a stock of semi-domesticated wolves. Dingoes try to kill the emu by attacking the head. The emu typically tries to repel the dingo by jumping into the air and kicking or stamping the dingo on its way down. The emu jumps as the dingo barely has the capacity to jump high enough to threaten its neck, so a correctly timed leap to coincide with the dingo's lunge can keep its head and neck out of danger.
Despite the potential predator-prey relationship, the presence of predaceous dingoes does not appear to heavily influence emu numbers, with other natural conditions just as likely to cause mortality. Wedge-tailed eagles are the only avian predators capable of attacking fully grown emus, though they are most likely to take small or young specimens. The eagles attack emus by swooping down rapidly at high speed and aiming for the head and neck. In this case, the emu's jumping technique, as employed against the dingo, is not useful. The eagles try to target the emu in open ground so that it cannot hide behind obstacles. Under such circumstances, the emu can only run in a chaotic manner, changing direction frequently to try to evade its attacker. Other raptors, monitor lizards, introduced red foxes, feral and domestic dogs, and feral pigs occasionally feed on emu eggs or kill small chicks.
Emus can suffer from both external and internal parasites, but under farmed conditions are more parasite-free than ostriches or rheas. External parasites include the louse Dahlemhornia asymmetrica and various other lice, ticks, mites and flies. Chicks sometimes suffer from intestinal tract infections caused by coccidian protozoa, and the nematode Trichostrongylus tenuis infects the emu as well as a wide range of other birds, causing haemorrhagic diarrhoea. Other nematodes are found in the trachea and bronchi; Syngamus trachea causing haemorrhagic tracheitis and Cyathostoma variegatum causing serious respiratory problems in juveniles.
Relationship with humans
Emus were used as a source of food by indigenous Australians and early European settlers. Emus are inquisitive birds and have been known to approach humans if they see unexpected movement of a limb or piece of clothing. In the wild, they may follow and observe people. Aboriginal Australians used a variety of techniques to catch the birds, including spearing them while they drank at waterholes, catching them in nets, and attracting them by imitating their calls or by arousing their curiosity with a ball of feathers and rags dangled from a tree. The pitchuri thornapple (Duboisia hopwoodii), or some similar poisonous plant, could be used to contaminate a waterhole, after which the disoriented emus were easy to catch. Another stratagem was for the hunter to use a skin as a disguise, and the birds could be lured into a camouflaged pit trap using rags or imitation calls. Aboriginal Australians only killed emus out of necessity, and frowned on anyone who hunted them for any other reason. Every part of the carcass had some use; the fat was harvested for its valuable, multiple-use oil, the bones were shaped into knives and tools, the feathers were used for body adornment and the tendons substituted for string.
The early European settlers killed emus to provide food and used their fat for fuelling lamps. They also tried to prevent them from interfering with farming or invading settlements in search of water during drought. An extreme example of this was the Emu War in Western Australia in 1932. Emus flocked to the Chandler and Walgoolan area during a dry spell, damaging rabbit fencing and devastating crops. An attempt to drive them off was mounted, with the army called in to dispatch them with machine guns; the emus largely avoided the hunters and won the battle. Emus are large, powerful birds, and their legs are among the strongest of any animal and powerful enough to tear down metal fencing. The birds are very defensive of their young, and there have been two documented cases of humans being attacked by emus.
In the areas in which it was endemic, the emu was an important source of meat to Aboriginal Australians. They used the fat as bush medicine and rubbed it into their skin. It served as a valuable lubricant, was used to oil wooden tools and utensils such as the coolamon, and was mixed with ochre to make the traditional paint for ceremonial body adornment. Their eggs were also foraged for food.
An example of how the emu was cooked comes from the Arrernte of Central Australia who called it Kere ankerre:
Emus are around all the time, in green times and dry times. You pluck the feathers out first, then pull out the crop from the stomach, and put in the feathers you've pulled out, and then singe it on the fire. You wrap the milk guts that you've pulled out into something [such as] gum leaves and cook them. When you've got the fat off, you cut the meat up and cook it on fire made from river red gum wood.
The birds were a food and fuel source for early European settlers, and are now farmed, in Australia and elsewhere, for their meat, oil and leather. Commercial emu farming started in Western Australia around 1970. The commercial industry in the country is based on stock bred in captivity, and all states except Tasmania have licensing requirements to protect wild emus. Outside Australia, emus are farmed on a large scale in North America, with about 1 million birds in the US, as well as in Peru and China, and to a lesser extent in some other countries. Emus breed well in captivity, and are kept in large open pens to avoid the leg and digestive problems that arise from inactivity. They are typically fed on grain supplemented by grazing, and are slaughtered at 15 to 18 months.
The Salem district administration in India advised farmers in 2012 not to invest in the emu business which was being heavily promoted at the time; further investigation was needed to assess the profitability of farming the birds in India. In the United States, it was reported in 2013 that many ranchers had left the emu business; it was estimated that the number of growers had dropped from over five thousand in 1998 to one or two thousand in 2013. The remaining growers increasingly rely on sales of oil for their profit, although, leather, eggs, and meat are also sold.
Emus are farmed primarily for their meat, leather, feathers and oil, and 95% of the carcass can be used. Emu meat is a low-fat product (less than 1.5% fat), and is comparable to other lean meats. Most of the usable portions (the best cuts come from the thigh and the larger muscles of the drum or lower leg) are, like other poultry, dark meat; emu meat is considered for cooking purposes by the US Food and Drug Administration to be a red meat because its red colour and pH value approximate that of beef, but for inspection purposes it is considered to be poultry. Emu fat is rendered to produce oil for cosmetics, dietary supplements, and therapeutic products. The oil is obtained from the subcutaneous and retroperitoneal fat; the macerated adipose tissue is heated and the liquefied fat is filtered to get a clear oil. This consists mainly of fatty acids of which oleic acid (42%), linoleic and palmitic acids (21% each) are the most prominent components. It also contains various anti-oxidants, notably carotenoids and flavones.
There is some evidence that the oil has anti-inflammatory properties; however, there have not yet been extensive tests, and the US Food and Drug Administration regards pure emu oil as an unapproved drug and highlighted it in a 2009 article entitled "How to Spot Health Fraud". Nevertheless, the oil has been linked to the easing of gastrointestinal inflammation, and tests on rats have shown that it has a significant effect in treating arthritis and joint pain, more so than olive or fish oils. It has been scientifically shown to improve the rate of wound healing, but the mechanism responsible for this effect is not understood. A 2008 study claimed that emu oil has a better anti-oxidative and anti-inflammatory potential than ostrich oil, and linked this to emu oil's higher proportion of unsaturated to saturated fatty acids. While there are no scientific studies showing that emu oil is effective in humans, it is marketed and promoted as a dietary supplement with a wide variety of claimed health benefits. Commercially marketed emu oil supplements are poorly standardised.
Emu leather has a distinctive patterned surface, due to a raised area around the feather follicles in the skin; the leather is used in such items as wallets, handbags, shoes and clothes, often in combination with other leathers. The feathers and eggs are used in decorative arts and crafts. In particular, emptied emu eggs have been engraved with portraits, similar to cameos, and scenes of Australian native animals. Mounted Emu eggs and emu-egg containers in the form of hundreds of goblets, inkstands and vases were produced in the second half of the nineteenth century, all richly embellished with images of Australian flora, fauna and indigenous people by travelling silversmiths, founders of a ‘new Australian grammar of ornament’. They continued longstanding traditions that can be traced back to the European mounted ostrich eggs of the thirteenth century and Christian symbolism and notions of virginity, fertility, faith and strength. For a society of proud settlers who sought to bring culture and civilisation to their new world, the traditional ostrich-egg goblet, freed from its roots in a society dominated by court culture, was creatively made novel in the Australian colonies as forms and functions were invented to make the objects attractive to a new, broader audience. Significant designers Adolphus Blau, Julius Hogarth, Ernest Leviny, Julius Schomburgk, Johann Heinrich Steiner, Christian Quist, Joachim Matthias Wendt, William Edwards and others had the technical training on which to build flourishing businesses in a country rich in raw materials and a clientele hungry for old-world paraphernalia.
The emu has a prominent place in Australian Aboriginal mythology, including a creation myth of the Yuwaalaraay and other groups in New South Wales who say that the sun was made by throwing an emu's egg into the sky; the bird features in numerous aetiological stories told across a number of Aboriginal groups. One story from Western Australia holds that a man once annoyed a small bird, who responded by throwing a boomerang, severing the arms of the man and transforming him into a flightless emu. The Kurdaitcha man of Central Australia is said to wear sandals made of emu feathers to mask his footprints. Many Aboriginal language groups throughout Australia have a tradition that the dark dust lanes in the Milky Way represent a giant emu in the sky. Several of the Sydney rock engravings depict emus, and the birds are mimicked in indigenous dances.
The emu is popularly but unofficially considered a faunal emblem – the national bird of Australia. It appears as a shield bearer on the Coat of Arms of Australia alongside the red kangaroo and, as part of the Arms, also appears on the Australian 50 cent coin. It has featured on numerous Australian postage stamps, including a pre-federation New South Wales 100th Anniversary issue from 1888, which featured a 2 pence blue emu stamp, a 36 cent stamp released in 1986, and a $1.35 stamp released in 1994. The hats of the Australian Light Horse are decorated with emu feather plumes.
Trademarks of early Australian companies using the emu included Webbenderfer Bros frame mouldings (1891), Mac Robertson Chocolate and Cocoa (1893), Dyason and Son Emu Brand Cordial Sauce (1894) and James Allard Pottery Wares (1906); the rope manufacturer G. Kinnear and Sons Pty. Ltd. still uses it on some of its products.
There are around six hundred gazetted places in Australia with "emu" in their title, including mountains, lakes, hills, plains, creeks and waterholes. During the 19th and 20th centuries, many Australian companies and household products were named after the bird. In Western Australia, Emu beer has been produced since the early 20th century and the Swan Brewery continues to produce a range of beers branded as "Emu". The quarterly peer-reviewed journal of the Royal Australasian Ornithologists Union, also known as Birds Australia, is entitled Emu: Austral Ornithology.
Status and conservation
In his Handbook to the Birds of Australia, first published in 1865, John Gould lamented the loss of the emu from Tasmania, where it had become rare and has since become extinct; he noted that emus were no longer common in the vicinity of Sydney and proposed that the species be given protected status. In the 1930s, emu killings in Western Australia peaked at 57,000, and culls were also mounted in Queensland during this period due to rampant crop damage. In the 1960s, bounties were still being paid in Western Australia for killing emus, but since then, wild emus have been granted formal protection under the Environment Protection and Biodiversity Conservation Act 1999. Their occurrence range is between 4,240,000 and 6,730,000 km² (1,640,000–2,600,000 sq mi), and a 1992 census suggested that their total population was between 630,000 and 725,000. As of 2012, the International Union for Conservation of Nature considers their population trend to be stable and assesses their conservation status as being of least concern. The isolated emu population of the New South Wales North Coast Bioregion and Port Stephens is listed as endangered by the New South Wales Government.
Although the population of emus on mainland Australia is thought to be higher now than it was before European settlement, some local populations are at risk of extinction. The threats faced by emus include the clearance and fragmentation of areas of suitable habitat, deliberate slaughter, collisions with vehicles and predation of the eggs and young.
- Patterson, C.; Rich, Patricia Vickers (1987). "The fossil history of the emus, Dromaius (Aves: Dromaiinae)". Records of the South Australian Museum. 21: 85–117.
- BirdLife International (2012). "Dromaius novaehollandiae". IUCN Red List of Threatened Species. IUCN. Retrieved 14 July 2015.
- Davies, S.J.J.F. (2003). "Emus". In Hutchins, Michael (ed.). Grzimek's Animal Life Encyclopedia. 8 Birds I Tinamous and Ratites to Hoatzins (2nd ed.). Farmington Hills, Michigan: Gale Group. pp. 83–87. ISBN 978-0-7876-5784-0.
- Brands, Sheila (14 August 2008). "Systema Naturae 2000 / Classification, Dromaius novaehollandiae". Project: The Taxonomicon. Archived from the original on 10 March 2016. Retrieved 14 July 2015.
- "Names List for Dromaius novaehollandiae (Latham, 1790)". Department of the Environment, Water, Heritage and the Arts. Archived from the original on 14 July 2015. Retrieved 14 July 2015.
- Robert, Willem Carel Hendrik (1972). The explorations, 1696-1697, of Australia by Willem De Vlamingh. Philo Press. p. 140. ISBN 978-90-6022-501-1.
- Eastman, p. 5.
- Gould, John (1865). Handbook to the Birds of Australia. 2. London. pp. 200–203.
- Philip, Arthur (1789). The voyage of Governor Phillip to Botany Bay. London: Printed by John Stockdale. pp. 271–272.
- Latham, John (1790). Index Ornithologicus, Sive Systema Ornithologiae: Complectens Avium Divisionem In Classes, Ordines, Genera, Species, Ipsarumque Varietates (Volume 2) (in Latin). London: Leigh & Sotheby. p. 665.
- Gotch, A.F. (1995) . "16". Latin Names Explained. A Guide to the Scientific Classifications of Reptiles, Birds & Mammals. Facts on File. p. 179. ISBN 978-0-8160-3377-5.
- Vieillot, Louis Jean Pierre (1816). Analyse d'une nouvelle ornithologie élémentaire, par L.P. Vieillot. Deteville, libraire, rue Hautefeuille. pp. 54, 70.
- Alexander, W.B. (1927). "Generic name of the Emu". Auk. 44 (4): 592–593. doi:10.2307/4074902. JSTOR 4074902.
- Christidis, Les; Boles, Walter (2008). Systematics and Taxonomy of Australian Birds. Csiro Publishing. p. 57. ISBN 978-0-643-06511-6.
- Tudge, Colin (2009). The Bird: A Natural History of Who Birds Are, Where They Came From, and How They Live. Random House Digital. p. 116. ISBN 978-0-307-34204-1.
- Mitchell, K.J.; Llamas, B.; Soubrier, J.; Rawlence, N.J.; Worthy, T.H.; Wood, J.; Lee, M.S.Y.; Cooper, A. (2014). "Ancient DNA reveals elephant birds and kiwi are sister taxa and clarifies ratite bird evolution" (PDF). Science. 344 (6186): 898–900. Bibcode:2014Sci...344..898M. doi:10.1126/Science.1251981. hdl:2328/35953. PMID 24855267.
- Boles, Walter (6 April 2010). "Emu". Australian Museum. Retrieved 18 July 2015.
- Heupink, Tim H.; Huynen, Leon; Lambert, David M. (2011). "Ancient DNA suggests dwarf and 'giant' emu are conspecific". PLoS ONE. 6 (4): e18728. Bibcode:2011PLoSO...618728H. doi:10.1371/journal.pone.0018728. PMC 3073985. PMID 21494561.
- "Emu Dromaius novaehollandiae". BirdLife International. Retrieved 26 June 2015.
- Williams, W.D. (2012). Biogeography and Ecology in Tasmania. Springer Science & Business Media. p. 450. ISBN 978-94-010-2337-5.
- Frith, Harold James (1973). Wildlife conservation. Angus and Robertson. p. 308.
- Mathews, Gregory M. (1912). "Class: Aves; Genus Dromiceius". Novitates Zoologicae. XVIII (3): 175–176.
- "Emu (South Eastern): Dromaius novaehollandiae [novaehollandiae or rothschildi] (= Dromaius novaehollandiae novaehollandiae) (Latham, 1790)". Avibase. Retrieved 5 September 2015.
- "Emu (Northern): Dromaius novaehollandiae novaehollandiae (woodwardi) (= Dromaius novaehollandiae woodwardi) Mathews, 1912". Avibase. Retrieved 5 September 2015.
- "Emu (South Western): Dromaius novaehollandiae rothschildi Mathews, 1912". Avibase. Retrieved 5 September 2015.
- Bruce, M.D. (1999). "Common emu (Dromaius novaehollandiae)". In del Hoyo, J.; Elliott, A.; Sargatal, J. (eds.). Handbook of the Birds of the World Alive. Lynx Edicions. ISBN 978-84-87334-25-2.(subscription required)
- Gill, Frank; Donsker, David (eds.). "Subspecies Updates". IOC World Bird List, v 5.2. Retrieved 14 July 2015.
- McClymont, James R. "The etymology of the name 'emu'". readbookonline.net. Archived from the original on 21 April 2015. Retrieved 5 August 2015.
- Mathew, John (1899). Eaglehawk and crow a study of the Australian aborigines including an inquiry into their origin and a survey of Australian languages. Рипол Классик. p. 159. ISBN 978-5-87986-358-1.
- Troy, Jakelin (1993). The Sydney language. Canberra: Jakelin Troy. p. 54. ISBN 978-0-646-11015-8.
- Gillespie, James; Flanders, Frank (2009). Modern Livestock & Poultry Production. Cengage Learning. p. 908. ISBN 978-1-4283-1808-3.
- Davies, Stephen (2002). Ratites and Tinamous. ISBN 978-0-19-854996-3.
- Eastman, p. 6.
- Patak, A.E.; Baldwin, J. (1998). "Pelvic limb musculature in the emu Dromaius novaehollandiae (Aves : Struthioniformes: Dromaiidae): Adaptations to high-speed running". Journal of Morphology. 238 (1): 23–37. doi:10.1002/(SICI)1097-4687(199810)238:1<23::AID-JMOR2>3.0.CO;2-O. PMID 9768501.
- Eastman, p. 9.
- Eastman, p. 7.
- "Emus vs. Ostriches". Wildlife Extra. Archived from the original on 18 July 2015. Retrieved 19 July 2015.
- Maloney, S.K.; Dawson, T.J. (1995). "The heat load from solar radiation on a large, diurnally active bird, the emu (Dromaius novaehollandiae)". Journal of Thermal Biology. 20 (5): 381–387. doi:10.1016/0306-4565(94)00073-R.
- Eastman, pp. 5–6.
- Eastman, p. 23.
- Coddington, Catherine L.; Cockburn, Andrew (1995). "The mating system of free-living emus". Australian Journal of Zoology. 43 (4): 365–372. doi:10.1071/ZO9950365.
- Davies, S.J.J.F. (1976). "The natural history of the emu in comparison with that of other ratites". In Firth, H.J.; Calaby, J.H. (eds.). Proceedings of the 16th international ornithological congress. Australian Academy of Science. pp. 109–120. ISBN 978-0-85847-038-5.
- Ekesbo, Ingvar (2011). Farm Animal Behaviour: Characteristics for Assessment of Health and Welfare. CABI. pp. 174–190. ISBN 978-1-84593-770-6.
- Immelmann, K. (1960). "The sleep of the emu". Emu. 60 (3): 193–195. doi:10.1071/MU960193.
- Maloney, S.K.; Dawson, T.J. (1994). "Thermoregulation in a large bird, the emu (Dromaius novaehollandiae)". Comparative Biochemistry and Physiology B. 164 (6): 464–472. doi:10.1007/BF00714584.
- Maloney, S.K.; Dawson, T.J. (1998). "Ventilatory accommodation of oxygen demand and respiratory water loss in a large bird, the emu (Dromaius novaehollandiae), and a re-examination of ventilatory allometry for birds". Physiological Zoology. 71 (6): 712–719. doi:10.1086/515997. PMID 9798259.
- Maloney, Shane K. (2008). "Thermoregulation in ratites: a review". Australian Journal of Experimental Agriculture. 48 (10): 1293–1301. doi:10.1071/EA08142.
- Barker, R.D.; Vertjens, W.J.M. (1989). The Food of Australian Birds: 1 Non-Passerines. CSIRO Australia. ISBN 978-0-643-05007-5.
- Eastman, p. 44.
- Powell, Robert (1990). Leaf and branch: Trees and tall shrubs of Perth. Department of Conservation and Land Management. p. 197. ISBN 978-0-7309-3916-0.
- Eastman, p. 31.
- McGrath, R.J.; Bass, D. (1999). "Seed dispersal by emus on the New South Wales north-east coast". Emu. 99 (4): 248–252. doi:10.1071/MU99030.
- "The prickly pear story" (PDF). Department of Employment, Economic Development and Innovation, State of Queensland. 2015. Retrieved 21 July 2015.
- Eastman, p. 15.
- Malecki, I.A.; Martin, G.B.; O'Malley, P.J.; Meyer, G.T.; Talbot, R.T.; Sharp, P.J. (1998). "Endocrine and testicular changes in a short-day seasonally breeding bird, the emu (Dromaius novaehollandiae), in southwestern Australia". Animal Reproduction Science. 53 (1–4): 143–155. doi:10.1016/S0378-4320(98)00110-9. PMID 9835373.
- Eastman, p. 24.
- Patodkar, V.R.; Rahane, S.D.; Shejal, M.A.; Belhekar, D.R. (2011). "Behavior of emu bird (Dromaius novaehollandiae)". Veterinary World. 2 (11): 439–440.
- Campbell, Bruce; Lack, Elizabeth (2013). A Dictionary of Birds. Bloomsbury Publishing. p. 179. ISBN 978-1-4081-3839-7.
- Dzialowski, Edward M.; Sotherland, Paul R. (2004). "Maternal effects of egg size on emu Dromaius novaehollandiae egg composition and hatchling phenotype". Journal of Experimental Biology. 207 (4): 597–606. doi:10.1242/jeb.00792.
- Bassett, S.M.; Potter, M.A.; Fordham, R.A.; Johnston, E.V. (1999). "Genetically identical avian twins". Journal of Zoology. 247 (4): 475–478. doi:10.1111/j.1469-7998.1999.tb01010.x.
- Eastman, p. 25.
- Royal Australasian Ornithologists' Union (1956). The Emu. The Union. p. 408.
- Taylor, Emma L.; Blache, Dominique; Groth, David; Wetherall, John D.; Martin, Graeme B. (2000). "Genetic evidence for mixed parentage in nests of the emu (Dromaius novaehollandiae)". Behavioral Ecology and Sociobiology. 47 (5): 359–364. doi:10.1007/s002650050677. JSTOR 4601755.
- Eastman, p. 26.
- Reader's Digest Complete Book of Australian Birds. Reader's Digest Services. 1978. ISBN 978-0-909486-63-1.
- Eastman, p. 27.
- Eastman, p. 29.
- Caughley, G.; Grigg, G.C.; Caughley, J.; Hill, G.J.E. (1980). "Does dingo predation control the densities of kangaroos and emus?". Australian Wildlife Research. 7: 1–12. CiteSeerX 10.1.1.534.9972. doi:10.1071/WR9800001.
- Olsen, Peggy (2005). Wedge-tailed Eagle (Australian Natural History Series). CSIRO Publishing. ISBN 978-0-643-09165-8.
- Nemejc, Karel; Lukešová, Daniela (2012). "The parasite fauna of ostriches, emu and rheas". Agricultura Tropica et Subtropica. 54 (1): 45–50. doi:10.2478/v10295-012-0007-6.
- Eastman, p. 63.
- ""Emu War" defended". The Argus. 19 November 1932. p. 22. Retrieved 19 July 2015.
- "Attacked by an emu". The Argus. 10 August 1904. p. 8. Retrieved 15 July 2015.
- "Victoria". The Mercury. 24 March 1873. p. 2. Retrieved 15 July 2015.
- Eastman, pp. 62–64.
- Clarke, P.A. (2018). "Aboriginal foraging practices and crafts involving birds in the post-European period of the Lower Murray, South Australia". Transactions of the Royal Society of South Australia. 142 (1): 1–26.
- Turner, Margaret–Mary (1994). Arrernte Foods: Foods from Central Australia. Alice Springs, Northern Territory: IAD Press. p. 47. ISBN 978-0-949659-76-7.
- Nicholls, Jason (1998). Commercial emu raising : using cool climate forage based production systems : a report for the Rural Industries Research and Development Corporation. Barton, A.C.T. : Rural Industries Research and Development Corp. ISBN 978-0-642-57869-3. Archived from the original on 15 July 2015. Retrieved 15 July 2015.
- "Ratites (Emu, Ostrich, and Rhea)". United States Department of Agriculture. 2 August 2013. Retrieved 15 July 2015.
- Davis, Gary S. (29 May 2007). "Commercial Emu Production". North Carolina Cooperative Extension Service. Retrieved 30 July 2015.
- Saravanan, L. (21 April 2012). "Don't invest in Emu farms, say Salem authorities". The Times of India. Retrieved 15 July 2015.
- Robbins, Jim (7 February 2013). "Ranchers find hope in flightless bird's fat". The New York Times. Retrieved 8 February 2013.
- Howarth, Gordon S.; Lindsay, Ruth J.; Butler, Ross N.; Geier, Mark S. (2008). "Can emu oil ameliorate inflammatory disorders affecting the gastrointestinal system?". Australian Journal of Experimental Agriculture. 48 (10): 1276–1279. doi:10.1071/EA08139.
- Yoganathan, S.; Nicolosi, R.; Wilson, T.; Handelman, G.; Scollin, P.; Tao, R.; Binford, P.; Orthoefer, F. (2003). "Antagonism of croton oil inflammation by topical emu oil in CD-1 mice". Lipids. 38 (6): 603–607. doi:10.1007/s11745-003-1104-y. PMID 12934669.
- Kurtzweil, Paula (25 February 2010). "How to Spot Health Fraud". Drugs. U.S. Food and Drug Administration. Retrieved 15 July 2015.
- Bennett, Darin C.; Code, William E.; Godin, David V.; Cheng, Kimberly M. (2008). "Comparison of the antioxidant properties of emu oil with other avian oils". Australian Journal of Experimental Agriculture. 48 (10): 1345–1350. doi:10.1071/EA08134.
- Politis, M.J.; Dmytrowich, A. (1998). "Promotion of second intention wound healing by emu oil lotion: comparative results with furasin, polysporin, and cortisone". Plastic and Reconstructive Surgery. 102 (7): 2404–2407. doi:10.1097/00006534-199812000-00020. PMID 9858176.
- Whitehouse, M.W.; Turner, A.G.; Davis, C.K.; Roberts, M.S. (1998). "Emu oil(s): A source of non-toxic transdermal anti-inflammatory agents in aboriginal medicine". Inflammopharmacology. 6 (1): 1–8. doi:10.1007/s10787-998-0001-9. PMID 17638122.
- "Kalti Paarti - Carved emu eggs". National Museum of Australia. Retrieved 15 July 2015.
- Jonathan Sweet, ‘Belonging before Federation: Design and Identity in Colonial Australian Gold and Silver’, in Brian Hubber (ed.), All that Glitters: Australian Colonial Gold and Silver from the Vizard Foundation, exhibition catalogue, Geelong Regional Art Gallery, Geelong, 2001, p. 15.
- John B Hawkins, 19th Century Australian Silver, Antique Collectors’ Club, Woodbridge, UK, 1990, vol. 1, p. 22–6; Eva Czernis-Ryl (ed.), Australian Gold & Silver, 1851–1900, exhibition catalogue, Powerhouse Museum, Sydney, 1995.
- Dirk Syndram & Antje Scherner (ed.), Princely Splendor: The Dresden Court 1580–1620, exhibition catalogue, Metropolitan Museum of Art, New York, 2006, pp. 87–9.
- Joylon Warwick James, ‘A European Heritage: Nineteenth-Century Silver in Australia’, The Silver Society Journal, 2003, pp. 133–7
- Terence Lane, ‘Australian Silver in the National Gallery of Victoria’, Art Bulletin, vol. 18, 1980–81, pp. 379–85
- Judith O’Callaghan (ed.), The J. and J. Altmann Collection of Australian Silver, exhibition catalogue, National Gallery of Victoria, Melbourne, 1981.
- Eichberger, D. (1988). Patterns of Domestication.
- Dixon, Roland B. (1916). "Australia". Oceanic Mythology. Bibliobazaar. pp. 270–275. ISBN 978-0-8154-0059-2.
- Eastman, p. 60.
- Norris, R. P., & Hamacher, D. W. (2010). Astronomical symbolism in Australian Aboriginal rock art. arXiv preprint arXiv:1009.4753.
- Norris, R. (2008). Emu Dreaming:[The Milky Way and other heavenly bodies have been inspiration for a rich Aboriginal culture.]. Australasian Science, 29(4), 16.
- Norris, Ray P.; Hamacher, Duane W. (2010). "Astronomical Symbolism in Australian Aboriginal Rock Art". Rock Art Research. 28 (1): 99. arXiv:1009.4753. Bibcode:2011RArtR..28...99N.
- Eastman, p. 62.
- Robin, Libby, 1956-; Joseph, Leo; Heinsohn, Robert; ProQuest (Firm) (2009), Boom & bust : bird stories for a dry country, CSIRO Pub, ISBN 978-0-643-09709-4CS1 maint: multiple names: authors list (link)
- "Australia's National Symbols". Department of Foreign Affairs and Trade. Retrieved 15 July 2015.
- "Fifty cents". Royal Australian Mint. 2010. Retrieved 18 July 2015.
- "Emu Stamps". Bird stamps. Birdlife International. Retrieved 18 July 2015.
- "Tabulam and the Light Horse Tradition". Australian Light Horse Association. 2011. Retrieved 18 July 2015.
- Marti, S. (2018). “The Symbol of Our Nation”: The Slouch Hat, the First World War, and Australian Identity. Journal of Australian Studies, 42(1), 3-18.
- Cozzolino, Mimmo; Rutherford, G. Fysh (Graeme Fysh), 1947- (2000), Symbols of Australia (20th anniversary ed.), Mimmo Cozzolino, p. 62, ISBN 978-0-646-40309-0CS1 maint: multiple names: authors list (link)
- "Place Names Search Result". Geoscience Australia. 2004. Archived from the original on 9 December 2012. Retrieved 18 July 2015.
- Spiller, Geoff; Norton, Suzanna (2003). Micro-Breweries to Monopolies and Back: Swan River Colony Breweries 1829-2002. Western Australian Museum. ISBN 978-1-920843-01-4.
- "Emu: Austral Ornithology". Royal Australasian Ornithologists' Union. 2011. Retrieved 18 July 2015.
- "Emu set for television comeback". BBC News. 8 June 2006. Retrieved 18 July 2015.
- "Introducing LiMu Emu and Doug, the Dynamic Duo of the Insurance World Starring in New Liberty Mutual Ad Campaign" (Press release). Liberty Mutual Insurance. 25 February 2019. Retrieved 11 July 2019.
- "Emu population in the NSW North Coast Bioregion and Port Stephens LGA". New South Wales: Office of Environment and Heritage. 22 October 2012. Retrieved 15 July 2015.
For the next month, you can view the BBC TV documentary Scotland’s Treasures on iPlayer.
I was up in Aberdeen yesterday, interviewing for an education project themed around the Deskford carnyx. As part of my preparation I was reading up on the Deskford find as well as on carnyxes generally, and some ideas crystallised in my mind about this object specifically, as well as about the whole theme of reconstructing archaeological objects more generally. And the recreation of ancient music is perhaps the most difficult strand of reconstructing ancient objects, because the musical instrument is not merely a decorative item or a functioning tool, but is the living substrate of a whole other creative art, i.e. music making.
I was chatting with Maura Uí Chróinín in Kilkenny, about the “BC/AD” music-archaeology theme of this year’s Galway Early Music Festival, and she made the point that most music archaeologists seem to work on their own, outside of both the musical and the archaeological mainstream. The reasons for this are obvious enough, since archaeologists most often don’t have music training and musicians don’t have archaeological background, and so the majority of scholars on both sides feel un-qualified to judge or participate in music-archaeology work.
The late Iron age object from Deskford (my photo shown on the right, in the NMS) was excavated in the 19th century and so is, by modern standards, poorly recorded and conserved. It is in the form of a sheet bronze hollow boar’s head, and has with it a number of associated sheet bronze items which seem to form the palate of the boar’s mouth, its lower jaw, and a circular plate which is often assumed to have closed the open back of the head. The original descriptions also mention a wooden tongue mounted on springs but these are lost.
Early suggestions about its function included the possibility that it was a headdress. In 1959, Stuart Piggot published a paper suggesting it may have been the bell of a distinctive type of Iron Age long trumpet called the carnyx. At that date, the carnyx was known from classical art and literature, and Piggot drew attention to a lost example excavated at Tattershall, England, in the 18th century.
Piggot's article included a speculative reconstruction of the Deskford boar's head mounted on a long vertical tube, and despite his reservations and cautions, this image and the idea of the only extant carnyx surviving from North-East Scotland captured the public imagination. In the 1990s, John Purser led a team to build a working reconstruction of the boar's head as a long trumpet bell, following Piggot's drawing. This modern carnyx has been played extensively by trombonist John Kenny – I remember seeing him play it at a concert in Edinburgh some years ago.
In all this excitement, people forget that Piggot's suggestion was just that – a speculative suggestion made at a time when very little was actually known about the carnyx. Now we have a lot more information available, especially since the publication of detailed information on the set of almost complete carnyxes excavated in 2004 in Tintinac in France. Looking over the depictions, the Tintinac examples (illustration left from Wikipedia) and the River Witham drawing published by Piggot, I see a number of important features that could be said to characterise the carnyx. The tube tapers along its whole length like a horn, and flares gently but markedly towards the animal head, which is not separate in shape but forms a smooth continuation of the bore flare. The animal mouth is wide open, not constricting the bell of the instrument. In contrast, the Deskford head tapers the other way, severely constricting the bell of the reconstructed instrument – a recent acoustical study notes that it acts like a "trombone mute". Also, the use of the circular dished plate to close the back of the boar's head requires a thin tube, with a sudden step in profile as the tube meets the head. Again this has an adverse effect on the harmonicity of the instrument, in contrast to the smooth expansion of the other extant and depicted carnyxes.
These considerations alone make me instantly very suspicious of this idea, that the Deskford head represents the remains of a musical instrument. I can see no specific evidence to support this interpretation and I can see a number of problems, ways in which the Deskford head is markedly different in form from all of the other extant and depicted carnyxes. I would go as far as to say, the Deskford boar’s head is not a carnyx.
A number of descriptions of the reconstructed Deskford carnyx are at pains to point out that it involves a large amount of interpretative or newly-invented design, but that nonetheless it represents a fascinating working instrument that can "result in instruments capable of playing a valuable role in the musical culture of the present day" (M. Campbell & J. Kenny, Acoustical and musical properties of the Deskford Carnyx reconstruction, Proceedings of the Acoustics 2012 Nantes Conference). This is the rub – you invent a new instrument, give it an ancient name and hang it on an ancient cultural icon or artefact, and so set off in a new direction. This is not music archaeology; this is modernist cultural creativity, re-imagining ancient symbols for new purposes. If the purpose was really to get the ancient carnyx up and running, then there are the Tintinac examples ready to be exactly replicated; compared to that, a new instrument using a copy of the Deskford boar's head as its bell has virtually no archaeological or music-archaeological value. Clearly it is not intended to do music-archaeology work; instead it is designed and produced for present-day national-cultural reasons, to provide a newly-invented iconic "ancient" Scottish sight and sound.
We are not so far away from the invention of the gut-strung lever harp in the 1890s, and the neglect of the historical Gaelic harp…
One final thought: many modern depictions or recreations of carnyxes emphasise their long S shape, with a vertical tube topped by a 90 degree bend to hold the animal head, and with another 90 degree bend at the bottom to hold the mouthpiece horizontal while the tube is vertical. It seems to me that none of the ancient carnyxes had this 90 degree bend at the bottom – some may have had an oblique mouthpiece cut in the lower end of the vertical tube, but the normal arrangement seems to have been a plain mouthpiece on the end of the long tube, as seen on the Tintinac example illustrated above. So the player has to tip their head right back and blow almost vertically into the instrument. A very different playing position, with all its implications for sound production!
Following on from the interlace on the caskets I posted yesterday, here is a whalebone gaming piece found in a cave on the Isle of Rum. The Museum suggests it is 15th or early 16th century.
Again the style of the interlace carving is reminiscent of the pillar carving on the Trinity and Queen Mary harps – the interlace in low relief over and under against a recessed ground, tightly knotted, with parallel incised stripes emphasising the turn of the ribbons. Compare especially this panel on the Trinity harp forepillar:
The gaming piece is a bit wobbly in its execution, but then so too is the interlace on the Trinity harp. However the thing about the gaming piece that really got me is the weird asymmetry. I have rotated my photo to show it with the axis of symmetry vertical; however it does not have a horizontal symmetry. The pattern of the top half is quite elegant and interesting, but if mirrored in the bottom half it would not give a single endless line. Perhaps the artist saw this and made one fewer edge loop, so crossing over two of the ribbons. However this also has the effect of creating two closed circles in the lower half. We see similar closed circles on the Trinity pillar. Look at how the end circles on the Trinity pillar do not close but loop back on each other. This is similar to how the two circles in the upper half of the gaming piece are not closed.
I have to say that no matter how I turn and manipulate the gaming piece in my mind, it is not as elegant a composition as the panel on the Trinity harp pillar!
In the National Museum today I looked at the two whalebone caskets. They are about 15th century in date, from the West Highlands – similar caskets appear on the stone slabs such as the one from Keills, and it has been suggested that the caskets were used to store documents such as charters and land-grants.
I was interested to look at the interlace panels; they seem similar to the interlace on the Trinity College harp forepillar. There is also some similarity with the small sections of interlace on the Queen Mary harp forepillar.
I was in the NMS yesterday and amongst other things I looked at the shrine of the Guthrie bell. This is a medieval silver confection which encases an early medieval iron bell – no-one seems to know which early saint the bell belonged to, but the silver decoration was made and applied in the West Highlands in mid-late medieval times.
My photo shows a late 15th or early 16th century figure of a West Highland bishop, and beside him some embossed silver panels of decoration which are a good match for the forepillar vines on the Queen Mary harp. The inscription is upside down and says "Iohannes Alexan/dri me fieri fecit". I am not sure who John mac Alex was, though these are common names amongst the Lords of the Isles, who are likely patrons for the remodelling of the shrine in the late 15th century.
I am playing two special events in the gallery beside the Queen Mary harp in the National Museum of Scotland, Chambers St. Edinburgh, as part of the 26 Treasures series. The Queen Mary harp is treasure no. 8, and Sara Sheridan is the writer who has been assigned this treasure.
On Saturday 3rd December 2011, I am playing at the launch event, in the gallery beside the Queen Mary harp, four 20 minute performances at 12.30, 1.30, 2.30 & 3.30pm.
And on Wednesday 14th December 2011, at 10.30am, I am working with Sara Sheridan to present a childrens’ storytelling event about the Queen Mary harp.
In the new museum there is a small gallery of musical instruments, and I was delighted to see a tonkori on display. I have never seen one before, but some years ago a friend in Japan sent me a copy of Oki, Tonkori. This very interesting string music, on an instrument with only 5 or so notes, has to my ear many parallels with the indigenous European string arts such as lyre, jouhikko, kantele, and harp. I think the musical patterns are surprisingly reminiscent of the music in Robert ap Huw's manuscript.
On Tuesday I visited the perhaps misleadingly titled “Lewis Chessmen” exhibition in Edinburgh. This gaming-piece here is particularly interesting – the curators suggest it is a locally-made replacement piece. The decoration on the chair back matches that on the Queen Mary harp and on other 15th century West Highland sculpture. It’s British Museum 1831,1101.82
What is GNU/Linux?
The open-source Linux operating system, for all its complications and confusing nomenclature, spans a universe of alternatives to Windows and macOS worth exploring.
Most consumers can, with a little effort, name two desktop and laptop operating systems: Microsoft Windows and Apple's macOS. Few have ever considered any of the open-source alternatives found under the umbrella of GNU/Linux, though some may have done so without even knowing it—Google's Chrome OS uses the Linux kernel. To be honest, aside from the Chromebook platform, GNU/Linux systems are typically not best for people who rely on big-name software or don't like dabbling with a customizable, hands-on interface. However, if you're looking for a change of pace, don't want to pay for your software, and don't mind rolling up your sleeves, switching to GNU/Linux may not only be worthwhile, but make you a convert for life. This guide for nontechnical users will show you how.
What Are UNIX, Linux, and GNU?
Before diving headfirst into the wonky world of GNU/Linux systems, it's important to understand how they came about and some of the terms you may encounter while researching and using them. I'll start with a brief history of the big three: UNIX, Linux, and GNU.
UNIX is a proprietary, command-line-based operating system originally developed by Dennis Ritchie and Ken Thompson (among others) at AT&T's Bell Labs in the late 1960s and early 1970s. UNIX is coded almost entirely in the C programming language (also invented by Ritchie) and was originally intended to be used as a portable and convenient OS for programmers and researchers. As a result of a long and complicated legal history involving AT&T, Bell Labs, and the federal government, UNIX and UNIX-like operating systems grew in popularity, as did Thompson's influential philosophy of a modular, minimalist approach to software design.
During this period, Richard Stallman launched the GNU Project with the goal of creating "an operating system that is free software." GNU, confusingly, stands for "GNU's Not UNIX." This project is responsible for the UNIX-like GNU OS. Stallman also launched the related Free Software Foundation (FSF) on the principle that "any user can study the source code, modify it, and share the program" for any participating software.
I'll go deeper into what makes up an operating system in a minute, but the plot thickened when, essentially, the GNU Project's own kernel, a very important low-level component called GNU Hurd, did not fully materialize. This is where Linux, a kernel developed by Linus Torvalds among others, entered the picture. According to GNU:
"Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system."
GNU purists argue that references to Linux as the complete operating system that exists today should instead be written as GNU/Linux, in acknowledgment of the pair's symbiotic relationship. Others tend to focus on the fact that Linux (with no prefix) has become a more mainstream term and the logic behind the GNU/Linux nomenclature could expand ad nauseam to GNU/Linux/Windowing System Name/Desktop Environment Name/Etc. For the purpose of this guide, I'll use GNU/Linux.
Other UNIX-like operating system options exist too, notably FreeBSD and Qubes OS, which work with their own kernels and software. The histories of these projects could fill many books, but this brief summation should be enough to contextualize some terms you may come across.
What Is a Distro?
The modern operating systems we use every day, such as Windows and macOS, are made of many, many different (and very technical) components, including kernels that help software communicate with hardware and desktop environments or user interfaces that you see on screen. A detailed explanation of how all the modules work is beyond the scope of this article.
Try thinking about, for example, how moving the mouse translates to the cursor moving across the screen or how a file is stored on your solid-state drive. Seemingly simple tasks are actually immensely complex when you understand all the components in play and how quickly modern computers can perform these actions. Windows and macOS are designed to operate with as little friction as possible, as users simply don't need to understand how things work behind the scenes. In other words, everything beneath the graphical user interface (GUI) is functionally irrelevant to most use cases.
Now, let's move to GNU/Linux distros. A distro (short for distribution) is best thought of as a neatly wrapped package of the core software components that make up a GNU/Linux operating system. Consider distros like Ubuntu, Mint, PCLinuxOS, or Puppy as roughly the functional equivalent of Windows and macOS.
A typical GNU/Linux distribution includes the Linux kernel; GNU tools and libraries; a windowing system for displaying windows on screen and interacting with input devices; a desktop environment; and additional parts. Some of the most common desktop environments are GNU's GNOME, KDE's Plasma, MATE, and XFCE. Different flavors of distros use different desktop environments—fancier or leaner, more or less like Microsoft Windows, or whatever—but the core components of the OS are the same.
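If you're curious which of these components a machine is actually running, a few read-only terminal commands will tell you. This is just an illustrative sketch: lsb_release may need to be installed separately on some distros, and the XDG_CURRENT_DESKTOP variable is only set when a desktop session is running.

    # Show the Linux kernel version in use.
    uname -r

    # Show which distribution and release you are running (if lsb_release is installed).
    lsb_release -a

    # Show which desktop environment the current session is using.
    echo "$XDG_CURRENT_DESKTOP"

None of these commands change anything on the system, so they make a safe first experiment with the terminal.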
A software firm or organization typically packages all these parts and creates an ISO file (technically, the compressed image of an installation CD-ROM or DVD), which users can download and install on their computers. For example, Canonical is the company that manages the release of the popular distro Ubuntu; Microsoft and Apple function in a similar role when releasing new versions of Windows or macOS. If you're skilled enough, you can cherry-pick components and package a distro of your own, but we won't get into that here.
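Because anyone can host an ISO file, it's worth confirming that your download matches the checksum published on the distro's official site before you install from it. A minimal sketch, assuming the downloaded file is named ubuntu.iso (substitute the real file name and compare against the hash on the download page):

    # Compute the SHA-256 checksum of the downloaded image.
    sha256sum ubuntu.iso

    # Compare the output by eye with the checksum on the download page;
    # if they differ, the file is corrupt or has been tampered with.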
As mentioned, Apple's and Microsoft's platforms are just as complex. MacOS is built on a UNIX-like hybrid kernel called XNU (X is not UNIX), a graphical user interface called Aqua, and a GUI shell called Finder. Windows 10 is a member of the Windows NT family, using a hybrid kernel and the Windows Shell GUI. (On a side note, Microsoft now includes a full Linux kernel in Windows 10, which confuses things a bit.) Chrome OS is based on Chromium OS and the Linux kernel.
The takeaway here is that even though you may think of Windows and macOS as monolithic, they have just as many moving parts. The difference is that you'll rarely if ever encounter their complexities, while even the most user-friendly distros are not as seamless.
You might also come across the terms upstream and downstream when reading about the relationship of one distro to another. Sticking with Ubuntu as an example, that distro is downstream from another popular release called Debian. Quoting Ubuntu's website, it "builds on the Debian architecture and infrastructure and collaborates widely with Debian developers." In other words, Canonical makes fixes and changes to Debian's packages based on its own software philosophy and deploys those to its users (sometimes sending changes back upstream to Debian).
Some Popular Desktop Distros
There are tons of different GNU/Linux distros and it would be difficult to catalog all of them. Some are designed for usability, others for privacy, and still others for programmers or for speedy performance on minimal or obsolete hardware. Some serve narrower purposes, such as the Raspberry Pi's Raspbian, tailored to that board's modest hardware, and LibreELEC, designed to be just enough OS for running the home multimedia platform Kodi.
Here's a quick list of some popular desktop distros:
- Arch Linux is an independent, lightweight distro with a default command-line interface. It has an excellent support community and associated Wiki.
- Debian is one of the oldest distros and many other distros are forks of it. Debian sticks closely to the free software movement and community.
- Elementary OS is an Ubuntu-based distro. Its desktop environment, called Pantheon, is designed above all to be user-friendly.
- Fedora is sponsored by Red Hat, a subsidiary of IBM. It's a polished distro focused on integrating new features.
- KDE Neon is managed by the developers of the KDE desktop environment. Though technically not a complete distro, it is available as an installable image. Its focus is putting the latest KDE software on top of a stable Ubuntu build.
- Mint (not to be confused with Peppermint) is another Ubuntu-based distro with a strong user and developer community and particularly good support for multimedia.
- Pop!_OS is a distro published by System 76, a GNU/Linux laptop and desktop vendor. The distro strives for a minimalist aesthetic and to enable clean workflows.
- PureOS is a Debian-based OS distributed by hardware manufacturer Purism. It has the distinction of being endorsed by the GNU Project, since it exclusively supports free software.
Are there systems for handheld and other non-desktop devices that use the Linux kernel? Absolutely. LineageOS, /e/, Plasma Mobile, PureOS, LibremOS, and Ubuntu Touch (now run by the UBports community) are just a few examples.
Some Advantages and Disadvantages of GNU/Linux Systems
I would be remiss not to state that running a GNU/Linux system is not like running macOS or Windows. Simple tasks don't always work as you'd expect. For instance, installing programs is not always straightforward even if you use the distro's built-in app store, which might not have the latest versions of various programs. For such tasks, you need to be willing to at least learn the basics of the terminal or typed command-line interface.
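As a taste of what that involves, here is a hedged sketch of installing and removing a program from the terminal on a Debian- or Ubuntu-based distro; vlc is just an example package name, and other distro families use different package managers such as dnf or pacman.

    # Refresh the list of packages available from the distro's repositories.
    sudo apt update

    # Install a program by its package name (VLC media player, for example).
    sudo apt install vlc

    # Remove it again if you change your mind.
    sudo apt remove vlc

The same update-install-remove pattern applies to most package managers, even though the exact command names differ between distro families.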
Scanners, multifunction printers, and other peripherals present challenges, too, as driver installations are not as straightforward or as easy to troubleshoot. Be prepared to spend a lot of time relearning how to do basic tasks in new ways and to search for solutions in various forums scattered across the web. If you get frustrated easily with technology, GNU/Linux systems are not the best fit.
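When a peripheral refuses to work, the usual first step is to confirm whether the system can see the device at all. The commands below are a minimal, read-only sketch; the tools are preinstalled on most desktop distros, and the output will vary from machine to machine.

    # List PCI devices such as graphics cards and network adapters.
    lspci

    # List USB devices such as printers, scanners, and webcams.
    lsusb

    # Show the most recent kernel messages, which often reveal driver problems.
    sudo dmesg | tail -n 20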
Ultimately, whether an open-source OS is the correct choice for you depends on how you use your computer. If you're a programmer, you may prefer a stable, stripped-back design. If you have a spare or older PC lying around the house, a GNU/Linux distro might give it new life.
One major reason you might consider using a GNU/Linux distro is that many are technically free, although you should certainly contribute what you can to the community that maintains your distro of choice. Although the price of the operating system isn't something you typically consider when buying a PC, it might be a factor if you are building your own desktop. You can buy a Windows 10 Home license, but that will run you at least $139. You can't even get macOS unless you pay for Apple hardware.
Another temptation for some users is the customizability and flexibility of open-source GNU/Linux systems. As stated, many distros support different desktop environments, each of which can offer a fresh interface. Still another draw is long-term support and stability. Many distro developers maintain releases for years and don't require you to update if you don't want to. This helps maintain consistency and ensures fewer breakdowns. The move from Windows 7 to Windows 8 is surely a strong enough example of the perils of changing too many things too quickly.
You may also appreciate one of the philosophies that guide many GNU/Linux projects. You'll hear such terms as Libre (free as in freedom, not cost); FOSS (Free and Open Source); and FLOSS (Free/Libre and Open Source) for different ideologies. GNU offers a more in-depth explanation of the different camps within the free software movement.
However, for students and home users with little or no technical expertise who simply don't want to be bothered with unforeseen complications, there's absolutely no shame in wanting a computer that makes your life easier. For these users, Windows and macOS are much more familiar and thus more intuitive, and troubleshooting most problems can be done without needing Command Prompt or Terminal respectively.
I haven't even mentioned another big consideration: whether the apps you use on a daily basis are available. Microsoft Office, for instance, is not, though the company did recently release a public preview of Microsoft Teams for GNU/Linux. Nor are Adobe's Creative Cloud apps. Of course, you can use alternatives such as LibreOffice for document creation; GIMP, Inkscape, and Krita for creative work; and DarkTable for photo editing. In my experience, however, these apps are arguably not quite as capable or seamless as their better-known rivals.
It's not all a lost cause, though. You can still get popular browsers such as Chrome, Firefox, and Tor; communication tools such as Signal and Slack; security software such as VPNs and password managers; and multimedia essentials such as VLC Player. If you want the most available programs, pick a popular distro, such as one based on Debian. Steam is available, and a growing number of games support GNU/Linux either natively or with help from Steam's Proton tool, though Blizzard's Battle.Net, Epic's Game Store, and EA's Origin are unavailable.
What Devices Can I Get with GNU/Linux?
While you can install GNU/Linux manually on many laptops and desktops, it can be a chore. Unfortunately, you probably can't walk into a brick-and-mortar store and find macOS and Windows alternatives other than Chromebooks. Probably the closest thing to a mainstream GNU/Linux device, the Raspberry Pi, starts at only $35 and targets enthusiasts and programmers who need a low-cost functioning computer for development.
If you're looking for something from a more familiar manufacturer, the Dell XPS 13 Developers Edition is likely your best bet. It ships with Ubuntu 18.04 and is an impressive piece of hardware (the Windows version of the same laptop is a PCMag Editors' Choice). You can also find some Lenovo and HP business laptops with Mint Cinnamon installed. Many distros also offer a list of devices that are certified for them, in case you want to verify that an install will work on a PC you already own. This is a more cost-effective route if you don't want to buy a new machine; an older or secondhand laptop will suit you just as well, since GNU/Linux systems aren't typically resource hogs.
Some Linux-friendly boutique manufacturers include Pine64 (PineBook), Purism (Librem laptops), System 76 (desktops and laptops), and ThinkPenguin (desktops and laptops). These tend to cost less than comparable Windows and macOS systems.
Several of these providers also sell phones with alternative OSes; for example, Pine64 offers the PinePhone and Purism has the Librem 5. Customers in Europe can buy several refurbished phones with /e/ preinstalled. It's possible to load one of these operating systems onto an existing device, but it's an even geekier job than converting a laptop or desktop. Check the OS vendor's site to see if it is compatible.
How Do I Get Started?
Let's say GNU/Linux intrigues you and you want to try out a distro for yourself. For many of the below scenarios, you'll need to reformat a flash drive or CD. It's also critical to back up any data on your PC before you change any drive partitions. Here are four potential perspectives and recommendations for how to proceed:
I'm a newbie and just want to see what GNU/Linux is like.
Virtualization is your friend. You should install your distro of choice inside Oracle's free VM VirtualBox. This way, you can boot into your regular OS as normal and launch a GNU/Linux distro in a window or full screen, as long as you allocate sufficient RAM and storage to the sandboxed OS. Whichever distro you install will work in the VirtualBox as if it was a native installation and can be deleted at any time.
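VirtualBox is normally driven through its graphical wizard, but it also ships with a command-line tool called VBoxManage. The sketch below shows roughly what creating and starting a test VM from an already-downloaded ISO looks like; the VM name, memory size, disk size, and file names are arbitrary examples, and the GUI wizard achieves the same result with fewer chances for typos.

    # Create and register an empty virtual machine.
    VBoxManage createvm --name "linux-test" --ostype Ubuntu_64 --register

    # Give it memory (in MB) and CPU cores; adjust for your hardware.
    VBoxManage modifyvm "linux-test" --memory 4096 --cpus 2

    # Create a virtual disk and attach it, along with the installer ISO.
    VBoxManage createmedium disk --filename linux-test.vdi --size 25000
    VBoxManage storagectl "linux-test" --name "SATA" --add sata
    VBoxManage storageattach "linux-test" --storagectl "SATA" --port 0 --device 0 --type hdd --medium linux-test.vdi
    VBoxManage storageattach "linux-test" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ubuntu.iso

    # Boot the VM; the distro's installer should appear in a new window.
    VBoxManage startvm "linux-test"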
Alternatively, you can boot into some distros directly from a USB stick (or bootable CD) without actually installing them. This method doesn't require Oracle's VM VirtualBox or for you to make any changes to your hardware configuration, though the software will run a little slower than it would from a hard drive or SSD. For instance, Ubuntu provides guides for creating bootable media for both Mac and Windows systems.
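For illustration only, on a machine that already runs Linux (or macOS), an ISO can be written to a USB stick with dd. The device name /dev/sdX below is a placeholder you must replace after checking lsblk; pointing dd at the wrong device will silently overwrite that drive, so graphical tools such as Rufus or balenaEtcher are safer for first-timers.

    # Identify the USB stick first; check the sizes and labels carefully.
    lsblk

    # Write the ISO to the stick, then flush buffers before unplugging it.
    sudo dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress
    sync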
I want to use GNU/Linux regularly or semi-regularly alongside Windows or macOS.
Some people need to run both a GNU/Linux distro and one of the big two operating systems at the same time, whether for development work, support for enterprise applications, or external device compatibility. Or maybe you're simply testing whether you can make the switch from one to the other. (If you're a Chromebook user seeking an alternative to Chrome OS, see our step-by-step guide on how to install Linux on a Chromebook.)
Installing GNU/Linux in a dual-boot configuration alongside Windows or macOS is not too much more complicated than the first two methods, with the main difference being that you are actually installing the full OS on your system and will need to carve out a portion of your hard drive or SSD for it. Deleting a distro running via VirtualBox and reclaiming the virtual drive is an easier process than removing and cleaning up a disk partition with a full OS installed.
You may come across other annoyances, too. For example, once you install the secondary OS, you must deal with a bootloader or start menu (usually GRUB) at launch. Getting all your drivers to work properly can prove troublesome as well, and transferring files between systems is rarely straightforward.
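Two commands come up constantly when untangling a dual-boot setup. This is a hedged example for Debian/Ubuntu-family distros; other families call grub-mkconfig directly.

    # Show every disk and partition the kernel can see, with filesystems and mount points.
    lsblk -f

    # Rescan installed operating systems and regenerate the GRUB boot menu.
    sudo update-grub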
GNU/Linux is superior to all other OSes.
If you're all set on using a GNU/Linux distro and nothing else, your easiest option is to buy a dedicated laptop or desktop from one of the hardware vendors mentioned above. This route is more straightforward than installing a distro in a dual-boot configuration, as you don't have to partition a hard drive for two operating systems.
You can also install GNU/Linux over an existing OS, wiping out the previous platform. The potential drawback is that you'll have to configure the operating system yourself. Drivers might not work out of the box. Support or help for the problems you encounter may be buried deep in online forum threads. On the bright side, you will likely very quickly (by necessity) learn a lot more about computing than from using any other OS and hardware combination.
Stability is paramount and I don't like change.
Some people prefer, or even depend on, constancy. Indeed, one GNU/Linux benefit mentioned above is that you can get stable or long-term releases of many popular distros and not worry about regularly needing to install major updates.
This characteristic makes GNU/Linux particularly suitable for entities that require the utmost stability, such as government agencies and research labs. In fact, at one point the White House, the U.S. Department of Defense, and the FAA all moved to Red Hat Linux.
No End in Sight
This guide is just a brief introduction to the world of GNU/Linux systems. Feel free to explore these systems on your own or stay tuned for upcoming features and how-to articles. GNU/Linux systems occupy an important place in the computing world and many more users could find they fit their needs better than Windows and macOS equivalents.
Fluorescently labelled bowel cells, Image © Dow et al/Cell 2015
Bowel cancer’s origins can often be traced back to just a single faulty gene: APC – short for adenomatous polyposis coli.
The potent effects of damage to this critical stretch of our DNA were first uncovered back in the 1980s, when our scientists helped track it down and link it to bowel cancer.
The gene turned out to be what’s known as a ‘tumour suppressor’, which normally (as the name suggests) protects our cells from becoming cancerous.
This crucial role is reflected in the fact that it’s the first domino to fall in an estimated eight out of 10 cases of bowel cancer. Faults in the gene switch it off, leaving once-healthy cells exposed to a relentless barrage of signals telling them to keep growing – ultimately leading to the formation of a tumour.
Since APC’s discovery, scientists have been trying to decode these signals. The ultimate goal is to find ways to target them with treatments, tackling bowel cancer at its very root.
Today, things took a big step forwards: an international team of scientists – writing in the journal Cell – has offered the first answers to a question that has been on scientists’ minds for some time: what happens if you switch APC back on again?
And, tantalisingly, their answers point to potential new ways to develop treatments for bowel cancer.
Off or on?
Picking apart exactly how the loss of APC leads to bowel cancer has been no easy task, according to Professor Owen Sansom, whose team at the Cancer Research UK Beatson Institute is also studying these signals.
“Scientists across the globe have spent a lot of time removing APC from cells in the lab and in animal models to study how bowel cancer develops,” he says.
This painstaking work has revealed a critical role for APC in controlling a second molecule, called β-catenin, which can be found inside our cells.
Without APC, a cell produces too much β-catenin, and can’t control where the molecule should be held inside the cell. This leaves the β-catenin free to enter the cell’s nucleus, where it switches on other genes that trigger rapid cell growth.
This relay of events is known as the Wnt signalling pathway, and it plays a crucial role in the early stages of bowel cancer development.
Because the misfiring Wnt pathway appears to be so important, a global research effort has homed in on finding ways to target these signals for treatment.
But there have been a few questions left unanswered.
As a tumour develops, its cells acquire more and more genetic damage, further fuelling their growth and helping them survive. In the case of bowel cancer there are two genetic ‘dominos’ that usually fall soon after APC: KRAS and p53.
Until now, it’s been unclear if it’s these additional faults that keep the cancer cells growing, or whether the initial inactivation of APC still has a role to play later in the disease’s development.
“An important question we haven’t been able to answer is whether or not bowel cancer cells retain their dependency on APC loss to keep growing,” says Sansom.
And, according to him, this has largely been down to a technical challenge. “We’re really good at removing these genes and proteins, but not so good at putting them back into cells afterwards,” he says.
But the latest study has addressed this question for the first time – with striking results.
The researchers, based at the Memorial Sloan Kettering Cancer Center in New York, developed genetically engineered mice in which they could carefully control the activity of APC in cells lining the bowel. The mice were also genetically tagged with fluorescent markers, allowing the researchers to follow the development of their bowel cells in different circumstances.
They also carried out similar experiments in tiny lab-grown pockets of bowel tissue called ‘organoids’ (see image, right).
Similar experiments have been done before; that’s how we know these faults, in this order, are so important in bowel cancer development. But what’s special about this new study is that the researchers found a way to switch APC back on again – something that hasn’t been possible before.
When they did this they saw a very striking response: the tumours shrank, and then disappeared.
And, even 30 weeks later, the disease hadn’t come back. It was as if the disease had been simply switched off.
Science fact or science fiction?
On the surface, these results allude to a really powerful way of making these bowel tumours shrink and disappear: just switch APC back on again.
Unfortunately it’s not that simple. Almost all drugs in use today – from aspirin to Zytiga – work by switching overactive biological processes off, rather than switching on new or missing ones. The ability to re-engineer a working form of the APC gene into cells in a cancer patient is – currently – more in the realm of science fiction than clinical reality. And it’s likely to remain so for some time.
But, as Sansom points out, what these results do show us is that research is heading in the right direction – especially in the development of potential new drugs for bowel cancer.
Because when APC is switched off, other things do get switched on: β-catenin levels rise and Wnt signals fire. And this means there might be targets for drugs somewhere among these processes.
“These are really exciting results,” says Sansom. “What they show is that, at least in the case of this new lab model, later stages of bowel cancer are still dependent on APC being switched off.”
“This reinforces the idea that targeting the Wnt pathway, which gets activated when cells lose APC, could be a really powerful way of tackling the disease.”
The researchers themselves suggest a particular type of drug that can trigger the destruction of β-catenin even when APC isn’t around. There are plenty of researchers trying to develop these drugs – called Tankyrase inhibitors – and they are just one example of potential targets that hit the Wnt pathway.
From one gene to many
The next steps are to better understand how this dependency on APC loss plays out in people with bowel cancer, rather than in the lab.
But these findings, plus the addition of a new laboratory model with which to study bowel cancer, could prove crucial in tracking down the best targets to shut off the Wnt pathway in patients.
But there’s a wider point here. Nearly three decades ago, scientists discovered a single gene with the potential to fuel the development of bowel cancer, and beating the disease seemed – suddenly – a much simpler task. But the intervening years have shown us that even a single type of cancer can be viewed as multiple diseases when the layers of genetic complexity stack up.
The sometimes-baffling ways that new gene faults appear, accumulate and diversify as a tumour develops is one of the biggest challenges facing researchers. And in the context of this so-called ‘tumour heterogeneity’, searching for and exploiting the underlying, original faults causing the disease has never seemed so urgent.
Today, the legacy of that 80s research comes into stark focus. The APC gene not only lies at the origins of bowel cancer, but beyond that too – exposing a crucial, fundamental weakness, and a new way to find out how to target it.
Dow, L., O'Rourke, K., Simon, J., Tschaharganeh, D., van Es, J., Clevers, H., & Lowe, S. (2015). Apc Restoration Promotes Cellular Differentiation and Reestablishes Crypt Homeostasis in Colorectal Cancer. Cell, 161(7), 1539–1552. DOI: 10.1016/j.cell.2015.05.033
This is a guest post by Patrick Egan (Pádraig Mac Aodhgáin), a researcher and musician from Ireland, former Kluge Center Fellow in Digital Studies and currently on a Fulbright Tech Impact scholarship. He recently submitted his PhD in digital humanities with ethnomusicology to University College Cork. Patrick’s interests over the past number of years have focused on ways to creatively use descriptive data from archival collections.
As the Irish saying goes: “an rud is annamh is iontach,” “what’s seldom is wonderful.”
Irish America is often thought about in terms of waves of emigration to urban centers such as New York, Boston, or Chicago, evidenced by recordings made in those cities by musicians such as the great uileann piper (Irish bagpipe player) Patsy Touhey and fiddle virtuoso Michael Coleman. But the American Folklife Center at the Library of Congress is home to a diverse array of recordings that illuminate intriguing histories from all over North America. Very often these musicians lived in remote places such as Nova Scotia, Montana, West Virginia, or the Central Valley area of California.
Surprising Stories of Musicians in Fascinating Times
As a Kluge Fellow in Digital Studies and a Fulbright Tech Impact scholar, I have been given a unique opportunity by the Library to delve into these collections, revealing the music, songs, and stories of everyday people who lived through fascinating times, from the beginnings of recorded sound in the early 1900s, to the Depression era of the 1930s, and the folk revival of the 1970s. Some of the stories of these musicians have been captured, preserved, and digitized for all to hear.
Take, for instance, John Harrington, an accordion player from Butte, Montana who grew up in a mining town called Mercur City in Utah. John’s bohemian lifestyle challenges many narratives of migration that are sometimes taken for granted in Irish America, and hearing about his experiences was illuminating.
John was born in Utah in 1903, moved to Butte in 1911, and then to Ireland in 1919, a crucial time in Irish history. For eight years he lived in West Cork. In 1927 he relocated to New York and worked on the 8th Avenue subway. When World War II broke out he worked at a shipyard in California, and then he finally relocated back to Butte. John made an album of his music in 1999 at the age of about 96, and lived until he was 100 years old!
It is stories like this that accompany thousands of recordings of Irish traditional music at the American Folklife Center. They shine a light on "parallel worlds" through which musicians, singers, and dancers emerged in Irish America. With these parallel worlds comes an amazing diversity of repertoires, recording situations, and stories of unsung performers.
I have been working on a project entitled “Connections in Sound” since January, which focuses on experimental ways to bring these archived audio materials together, to reveal hidden treasures, and to unite tunes, songs, and dances using digital tools.
Why digital? Why now?
Internet communities and online resources of Irish traditional music have grown steadily over the past thirty years or so. Websites such as www.irishtune.info and www.itma.ie are making the music, songs, and dances available for people to access and learn.
Understanding this trend and the position of the archive is the central focus of my research, looking at Irish traditional music in America in particular. Even though an archive of Irish traditional music doesn’t yet exist in America, the American Folklife Center contains a sizeable and highly diverse collection of material, making it possible to link up multiple versions of tunes and songs in Irish traditional music.
Collaborators with LC Labs have provided expert knowledge on how to harness these recordings with state of the art digital infrastructures to bring the music together and connect it to online resources. Making these connections will allow us to create an interconnected web of useful resources.
The Challenges of Bringing Diverse Collections Together in One Dataset and Representing Them Online
A number of challenges arise when creating digital representations of audio material and connecting them on the web. For example, when entering musicians into the dataset, it was discovered that a number of them had not officially recorded or published their music or work. In these cases, the musicians had no “authority files” created for them, leaving them underrepresented.
Take John Harrington, mentioned above. John was an amateur collector of cultural heritage materials. He donated collection materials to libraries during his lifetime, and so has been given a "name authority", a web resource that is unique to him. A name authority not only gives an artist a unique ID on the web, in the form of its own page, but also provides biographical details and other information that allow us to find out more about that person.
For some performers, this is not the case. Take for example Mae Mulcahy, a concertina player, housewife, mother, and also a well-known member of the community in Butte at the time she was recorded by Gary Stanton for the Montana Folklife Survey in 1979. Without an authority file, however, there is no identifier or webpage that can be used for her; this needs to be created if future discoveries of her performances are to be linked together.
Exploring these issues is important to the Connections in Sound project, as many instances like this example occur throughout the dataset. Ultimately, this gives us insight into what it means to engage in digital activity.
An event will take place at noon on Thursday August 29th, where Patrick Egan will be in conversation with staff from the American Folklife Center addressing the progress of his project, Connections in Sound, and discussing the audio collections that contain Irish traditional music. He will also present some digital visualizations and digital infrastructures that he is using for linking music recordings, and finish with a performance of Irish traditional music with local DC musicians.
Fluorocarbon refrigerants are synthetic chemicals which usually have a high global warming potential, and some still have the potential to cause damage to the ozone layer as well if released to the atmosphere.
Climate change is an increasingly important global concern with far-reaching effects. The heating, ventilation, air conditioning, and refrigeration (HVAC&R) industry is devoting significant effort to reducing the environmental impacts of HVAC&R systems. Discussions about climate impact are often limited to the GWP of the fluids used, but this is far too restrictive, as it does not take into account the real emissions of fluids, and it ignores indirect emissions, especially those related to energy use over the lifetime of the equipment.
A US manufacturer exhibited a ductless air-cooled water chiller using Opteon XL55, a lower GWP replacement for R410A, at the 2016 AHR Expo which opened on 25 January 2016.
This practical guide is designed to improve the skills and knowledge of professionals in the refrigeration sector who need to be certified in accordance with the requirements of the EEU (Eurasian Economic Union). The guide contains basic information on refrigeration equipment and the main components of the refrigeration system. It also provides information on commercial, industrial and mobile air conditioners, transport refrigeration, brazing of refrigeration system pipes, and related topics.
The successes of the Consumer Goods Forum’s five-year commitment to replace HFCs in refrigeration systems are recorded in a new publication.
A third or more of annual food production is never eaten, yet more than 50% of the wasted food could have its shelf-life extended by the cold chain.
Chemours is predicting that R449A, the lower GWP alternative refrigerant for R404A, will be adopted in more than 1,000 supermarket systems by the end of next year.
With the US facing restrictions on higher GWP refrigerants, ASHRAE is looking to modify its safety standard to incorporate “mildly flammable” A2L gases.
The European Commission is to refer Germany to the Court of Justice of the EU for its failure to apply the MAC Directive.
New initiatives on reducing the consumption of HFC refrigerants are expected to save the emission of more than 1 billion tonnes of CO2 equivalent by 2025.
Although global availability is still limited, as of March this year sales of air conditioners using the “mildly flammable” refrigerant R32 had already passed 3 million.
Daikin has approved the use of “alternative materials for jointing and piping” on its air conditioning and heating systems.
A team at the University of Wisconsin-Madison is working with new materials in an effort to produce a viable energy efficient 3D-printed heat exchanger.
In a move to encourage the adoption of R32 refrigerant, Daikin is to offer rival manufacturers worldwide free access to its patents.
Green Point, Bitzer’s compressor remanufacturing operation, is expanding its service in the UK to include other makers of compressor.
The cooling demands of large hyper data centres are set dramatically change the market for precision cooling over the next five to 10 years.
First it went tiny with the mini-VRV, now Daikin has launched an “invisible” VRV air conditioning system.
A Trane air-cooled chiller employing a new low-GWP refrigerant replacement for R410A is to be presented at the IIR conference in Yokohama.
HFO-1336mzz(Z) is to be promoted as a potential working fluid for high temperature heat pumps at this week’s ICR 2015 conference in Japan.
The American musician and soundscape ecologist Bernie Krause once said, "While a picture might be worth a thousand words, a soundscape is worth a thousand pictures." The deeply emotional and vastly expressive nature of sound makes it a powerful art form. While sound art is still a relatively young discipline, it's exploded in popularity in recent decades. But how can you tell whether a work falls under this category? It's tricky, especially since the definition of sound art isn't clear-cut and doesn't necessarily include every piece of art that makes noise. Like much contemporary art, sound art is interdisciplinary, spanning diverse genres from installation, film, and experimental music to interactive technology and spoken-word poetry—the commonality being that sound is employed as the primary medium. And while other art might utilize sound or music in the background, sound art tends to treat sound as an art experience in and of itself.
To better contextualize this art genre, let's briefly review its history. It seems the actual term was first documented in 1983 in the catalog for the show "Sound/Art" at SculptureCenter in New York City. However, the movement has roots in Dadaist, Surrealist, and Fluxus performances, as well as early 20th century pure noise experimental music. The American composer John Cage ushered in a new era of sound art with his seminal piece 4'33" in 1952. The musical composition, performed by pianist David Tudor, was four minutes and 33 seconds of absolute silence, and it completely transformed the way people thought about music and sound. An upsurge of sound installations in the 1960s continued to question traditional sonic practices, and in the 1970s artists explored these themes in new ways with the advancement of electronic/computer-generated sounds.
Today, the distinction between art, music, and sound has never been messier. Just look at David Byrne's sound installation Playing the Building. Originally commissioned in 2005, the piece saw the former Talking Heads singer repurpose spaces within buildings to create sounds that resemble musical instruments. He produced these sounds by blowing air through pipes, banging metal rods against columns, and fastening vibrating motors to beams. In her essay "Thinking Critically About Sound Art," experimental-music writer Geeta Dayal stresses that we should continue to question what sound art is. She writes, "It is by constantly questioning and arguing for art's value that we begin to understand art, and ourselves. The transitory, elusive, sometimes baffling nature of sound is part of its enduring mystery and power."
Here are 9 artists (in no particular order) making serious noise in this medium:
CAMILLE NORMENT
Born 1970, Silver Spring, MD. Lives and works in Oslo, Norway
Crucial to Camille Norment’s work is the notion of cultural psychoacoustics, which Norment defines as “the investigation of socio-cultural phenomena through sound and music—particularly instances of sonic and social dissonance." Her work examines sound as a force over the body, mind, and society. She works with recorded sound, installation, drawing, and performance—including performing within a trio comprised of Norwegian hardingfele, electric guitar, and glass armonica. "Each of these instruments was once banned in fear of the psychological, social, or sexual power their sound was thought to have over the body, and the challenge they represented to social control." Norment represented Norway in the 2015 Venice Biennial with her work Rapture—a three-part project entailing an installation, sonic performances, and a publication that explores themes like sound and the body, censorship and repression, and national identity. Look out for her upcoming solo exhibition at the Logan Center in Chicago and a commissioned project for the DIA Foundation.
CHRISTINE SUN KIM
Born 1980, Orange County, CA. Lives and works in Berlin, Germany
Christine Sun Kim’s mission is to “unlearn sound etiquette.” Deaf since birth, the artist is disconnected from sound as most people experience it, and is interested in deconstructing the societal conventions surrounding it. In 2013, Kim’s work was included in MoMA’s first major exhibition of sound art, Soundings: A Contemporary Score. While her projects usually incorporate audio components, sound is used in her practice as something to be objectified and displayed in a new light. Currently, you can view her work at the Whitney Biennial, which consists of six large-scale charcoal drawings depicting “degrees of deaf rage” while navigating the hearing-centric world. The humorous but poignant series will make you seriously check your hearing privilege.
ZIMOUN
Born 1977, Bern, Switzerland. Lives and works in Bern, Switzerland
Self-taught Swiss artist Zimoun explores mechanical rhythm and flow in prepared systems through installations that incorporate everyday industrial objects. For example, one of his minimalist kinetic structures, 435 prepared dc-motors, 2030 cardboard boxes 35x35x35cm, consisted of cardboard boxes, wires, and DC motors. The boxes, which hung from the ceiling and were set in motion by the motors, gently collided against one another in a rhythmic motion, creating a sound reminiscent of the soundscapes in Godfrey Reggio’s film Koyaanisqatsi. Zimoun has exhibited internationally, including recent solo exhibitions at the Museum of Contemporary Art Busan in Korea and the Museum of Contemporary Art MAC in Santiago, Chile.
CHRISTIAN MARCLAY
Born 1955, San Rafael, CA. Lives and works in London, UK.
Christian Marclay’s works transform sound and music into a physical form through a variety of media such as photography, sculpture, music, video and collage. The Swiss-American artist began exploring sound and art in 1979 through turntable performances, and is credited with inventing “turntablism,” or altering sounds using multiple turntables. Perhaps his best-known piece is the critically acclaimed video installation The Clock (2011)—a 24-hour long montage depicting thousands of different clocks while utilizing sound and music to outline the passage of time. Marclay’s work has influenced musicians, artists, and thinkers; and he’s collaborated with numerous musicians including John Zorn, Elliott Sharp, Zeena Parkins and Sonic Youth.
MARINA ROSENFELD
Born 1968, New York, NY. Lives and works in Queens, NY.
Since the early ‘90s, Marina Rosenfeld’s work has been at the forefront of innovative practices in sound and art. Her art focuses on creating transformative musical situations through amplified sound, bodies, and space. And like Christian Marclay, she’s performed as an experimental turntablist. Rosenfeld has created conceptual orchestras of single instruments, chamber and choral works, and acclaimed performances. Her recent work Music Stands (2019), which was on view at The Artists Institute in New York, is an installation of three metal frameworks that support microphones and speakers. Through modulated sounds, recordings of Rosenfeld’s own voice, and reflective panels, the artist highlights the volatile physicality of electronic sound.
LAWRENCE ABU HAMDAN
Born 1985, Amman, Jordan. Lives and works in Beirut, Lebanon.
Instead of a “private eye,” Lawrence Abu Hamdan refers to himself as a “private ear.” His art employs sound to investigate human rights issues and law, while examining the political repercussions of listening. This year, Abu Hamdan was nominated for the Turner Prize for his exhibition Earwitness Theatre (2018) and his performance After SFX (2018). Both projects reference his 2016 project with Amnesty International and Forensic Architecture, where Abu Hamdan was asked to create an audio investigation into the Syrian regime prison Saydnaya. The prison, inaccessible to outside witnesses and highly restricted, has executed an estimated 13,000 people since 2011. Earwitness Theatre presented an expanded library of sound effects specific to the investigation of earwitness testimony, and After SFX is a performance comprising the sounds, voices, and texts originating from objects included in Earwitness Theatre.
MSHR
Based in Brooklyn, NY
Birch Cooper and Brenna Murphy make up the art collective MSHR (pronounced mesher), whose work combines digital sculpture, analog circuitry and ceremonial performance. Exhibitions typically include sculptural instruments installed to create immersive light-soundscapes that are trippy to say the least. The duo explains that their sculptural, musical, and electronic work largely inform each other, creating a “meta-form” that is their collaborative practice. Their work appeared in MoMA PS1’s 2017 exhibition Past Skin, the Rubin Museum’s 2018 show The World is Sound, and the Sonic Arcade exhibition at the Museum of Arts and Design from September 2017 to February 2018.
RAVEN CHACON
Born 1977, Fort Defiance, AZ. Lives and works in Albuquerque, NM.
Composer of chamber music and a solo performer of noise music, Raven Chacon is one of the most profiled Native American artists currently working in either genre. (He also used to be a member of the artist collective Postcommodity; you might remember their video installation documenting the fence along the Mexican/US border in the 2017 Whitney Biennial.) Some other places he’s exhibited or performed include documenta 14, REDCAT, Musée d’art Contemporain de Montréal, and he’s this year’s inaugural artist in Bemis Center’s Sound Art + Experimental Music Program, where he’ll produce a new sound installation. The installation will explore the role of sound at Standing Rock using field recordings Chacon took during the 2016 protests there. Chacon also co-curated Bemis Center’s summer exhibition Inner Ear Vision: Sound as Medium, which is on view now.
SAMSON YOUNG
Born 1979, Hong Kong. Lives and works in Hong Kong
Sound, either physically or thematically, permeates all of the work of artist and composer Samson Young, who works across media to challenge conventional associations with objects, stories, and spaces. Chosen to represent Hong Kong at the 2017 Venice Biennale, Young’s work in the show Risers—a colorful installation playing appropriated pop songs—critiqued charity singles recorded by celebrity musicians to raise money for victims of natural disasters and other causes. The artist revealed that he’s always felt uneasy about these singles due to their implied imperialism and the fact that the global music industry profits off of them. On May 22nd, Young was named winner of a Prix Ars Electronica Award of Distinction, under the digital musics and sound art category. The winning work, Muted Situation #22: Muted Tchaikovsky’s 5th (2018), is a video and sound installation that positions muting as a suppression of dominant voices, and a way to uncover the marginalized.
It has recently been announced that the World Health Organisation is proposing – finally – to remove transgender identity and gender dysphoria from its list of mental health disorders.
The list, known as the ICD-10, describes gender dysphoria as “the urge to belong to the opposite sex that may include surgical procedures to modify the sex organs in order to appear as the opposite sex”.
Calls for the WHO to revisit this have increased in recent years and most recently since new research has confirmed that transgender and non-binary people experience disproportionately high rates of social rejection and are more likely to be victims of violence. One such study, published in the medical periodical The Lancet, argued that “the conceptualisation of transgender identity as a mental disorder has contributed to precarious legal status, human rights violations, and barriers to appropriate health care among transgender people.” The psychiatrists behind the study recommended removing “categories related to transgender identity from the classification of mental disorders, in part based on the idea that these conditions do not satisfy the definitional requirements of mental disorders…[after considering] whether distress and impairment, considered essential characteristics of mental disorders, could be explained by experiences of social rejection and violence rather than being inherent features of transgender identity” they concluded that there was a need to declassify gender dysphoria and transgender identity as mental disorders and to instead seek ways to “increase access to appropriate services and reduce the victimisation of transgender people.”
The human cost of conflating identity with disorder
Professor Geoffrey Reed, the study’s senior author, said: “The definition of transgender identity as a mental disorder has been misused to justify denial of health care and contributed to the perception that transgender people must be treated by psychiatric specialists, creating barriers to health care services.
“The definition has even been misused by some governments to deny self-determination and decision-making authority to transgender people in matters ranging from changing legal documents to child custody and reproduction.”
Dr Rebeca Robles, the study’s lead investigator, added: “Rates of experiences related to social rejection and violence were extremely high in this study, and the frequency with which this occurred within participants own families is particularly disturbing.”
The WHO is reportedly considering declassification when it next reviews its list of mental and behavioural disorders in two years’ time. Work on this – which will be known as the ICD-11 – has taken some time, and the list has not been updated since the 1980s. There have so far been no objections from within the WHO to the calls to change the classification of transgender identity. There appears to be recognition that the existing classification reinforces stigma while doing nothing to alleviate the problems of rejection and distress many transgender and non-binary people experience. All this is naturally positive.
“A diagnosis – but not a mental health diagnosis”
Perhaps less positive is the suggestion that while the ICD-11 will declassify transgender identity as a mental disorder, it will still list it in a different part of the document, potentially under conditions related to sexual health. New York psychiatrist Dr Jack Drescher, a member of the ICD-11 working group, explains: “So they’ll be diagnoses, but they won’t be mental disorder diagnoses.” Glad that’s been cleared up.
It is proposed that the new ICD-11 will refer to “gender incongruence” as “characterized by a marked and persistent incongruence between an individual’s experienced gender and the assigned sex, which often leads to a desire to ‘transition’, in order to live and be accepted as a person of the experienced gender, through hormonal treatment, surgery or other health care services to make the individual’s body align, as much as desired and to the extent possible, with the experienced gender. The diagnosis cannot be assigned prior to the onset of puberty.” So while it has been declassified as a mental health condition, it is still likely to remain a clinical diagnosis.
While declassification will be welcome, not to mention overdue, it will represent the beginning of a process rather than an end in itself. There can be little doubt that being labelled mentally disordered will always be stigmatising and dehumanising; however, it’s not only the ICD-11 categorisations that need to be challenged but also the culture of medicalisation behind them. Declassifying the “mental disorder” element in a well-meaning but misguided attempt to strip away stigma – an attempt that itself speaks volumes about the way mental health continues to be treated – is not enough while the same identity continues to be perceived as something that is in some way “wrong”. We need to move away from the language of “disorder” altogether.
This is vitally important, especially as many transgender and non-binary people receive deficient treatment in the NHS. If we’re serious about tackling discrimination, we need a change of language – and a corresponding change in culture.
Medicalisation – part of the problem?
There is also a tendency in our scientific world to over-medicalise everything, and consequently there will be those who feel that transgender people are actually best served through a system that provides them with psychological care and institutional support. One such voice, American psychiatrist Paul McHugh, goes so far as to suggest transgender people’s real difficulty is “underlying psycho-social troubles”, which constitute “a mental disorder that deserves understanding, treatment and prevention”. He is not alone.
These voices may be arguing against a growing consensus, but they underline the reality that the arguments must move beyond the purely medical. After all, it wasn’t long ago that homosexual people were seen as being mentally disordered, and it wasn’t intellectual medical arguments that brought about greater social acceptance for our gay and lesbian brothers and sisters. Yes, no longer classifying transgender people as mentally disordered will, at a stroke, cease to mark them in the way they have been marked for decades. It will also mean governments that have used the WHO’s inadvertent support to justify their denial of rights and protections to transgender and non-binary people may have to reconsider their actions.
The medical arguments label, analyse, consider data and seek to offer scientific explanations. All that can be helpful. But what they’re less good at is recognising that transgender experiences are incredibly varied, as are the “treatments” transgender people want. They don’t generally treat individuals as individuals, but as some kind of homogenous group with a shared identity. Ultimately, why should it be up to the medical profession to decide who has a valid gender identity and who doesn’t?
And that’s the real issue – who has the right to determine who is and who isn’t a particular gender? Who has the right to deny people the right to identify with any gender or none? While declassifying gender dysphoria as a mental health condition represents a powerful statement and an overdue step forward, the real solution lies in improving social awareness, with education rooted in the experiences of transgender and non-binary people.
Lesley has often been asked when she knew she first felt like a girl, a woman, and she can’t answer the question. She explains: “I have never felt like a girl or a woman; I simply was a girl, and, later, a woman. Nor can I explain how a child, albeit a very bright child, can get their head around the way I was and live comfortably with the dichotomy of being a girl while living as a boy.
“I had problems enough as a child and this wasn’t one of them. My father had a problem with my problem, and he had a drink problem, and the toxic mixture would bubble into regular physical violence, intimidation, and humiliation for me. I was clever too, I knew it, I wasn’t shy about it and that was a problem; I liked “snob music” and that was a problem. I was a problem. I would “end up a bloody pervert, like that one on the telly”, and I have no idea which pervert I was destined to be like. I never worried myself about that. My mother exploited my strange malady in another more sinister and sickening way, and that remains a problem for me.
“For a long time, my biggest issue was that I lived contentedly in a boys’ world while something in my manner seemed “girlish” to my parents, but I WAS a tomboy. I lived my boys’ life fully in character. I climbed trees better than any other child my age; I was up for every mad, crazy adventure going. I hung about the fringes of my older brother’s delinquent gang. I stole. I lied. I cheated. The police dragged me home from time to time, and I was charged with petty offences.
“The discovery that my conduct and my dilemma might be explained by my being a tomboy was a great relief. So I am a girl! That’s great!”
When Lesley first heard this expression, and discovered its meaning, a great weight was lifted from her shoulders. Puberty was hell, but she did all she could to survive, and be a girl in a boys’ life – not a boys’ body, or a man’s body. It annoys her still to hear the phrase “a woman trapped in a man’s body”. She says: “I was a girl, a woman, and my body was my own. Don’t get me wrong; my body was pretty well screwed up, but it was mine, and my body and my psyche lived in joyous confusion together for most of my life. We still do. I don’t have a female body. Nor can I ever have one. My breast and genital surgeries were deeply moving experiences for me, but I don’t have a new body. I have my body, the same body I always had – altered but still me, still a woman, still the same woman I always was. I had to put my hand up to a ‘mental health disorder’ to be allowed certain treatment, but I was never convinced of the correctness of this.”
Andrew, who is non-binary, admits they’ve “never felt particularly male…from as young as I can remember I always wanted to do ‘girly’ things and struggled to adapt to societal gender roles. In my teens, things happened to my body physically that don’t really happen to boys. So while I didn’t have a woman’s body I certainly didn’t have a typically male one either. For all the arguments about gender being a psycho-social expression of identity or a social construct, in my case there was undoubtedly a biological basis.
“The picture was complicated further by my sexual orientation (Andrew is bisexual) and the fact that in the Hebrides during the 1990s there were few opportunities to openly and positively discuss my gender identity. Seeing a doctor about this was difficult to say the least – I was also concerned about anything I said being connected to my mental health problem (Andrew experienced depression at this time). So I hid it, tried to make sense of it alone, sometimes even ashamed of who I was. Later, working in mental health, I became more aware of non-binary and transgender identities but also discovered the stigma behind them…there’s no doubt that revisiting the ICD and reclassifying gender dysphoria can help tackle this. But it isn’t enough by itself.”
The new ICD-11 is expected to change the status of transwomen like Lesley. She says: “I will no longer be seen to suffer from ‘gender dysphoria’ – a very vague mental health condition. It would seem that I have had it exchanged for a ‘condition related to sexual health’ – namely ‘gender incongruence’ . . . Hm! At least I’m not crazy! I still have ‘a condition’ and I am still suitable for treatment.
“When I first engaged in gender politics and activism, I met women like me who hated the term transsexual. It was the use of “sexual” that was a problem. We didn’t have a “sexual problem”; we had a “gender issue” and I remain conflicted about that. Transgender is a dreary term, and I have never liked it. Is it an umbrella term? Am I transgender(ed)? My feeling is that I am transsexual. My body, my own woman’s body, clearly bore the evidence of male sexual characteristics. I lived my life as a man. I have children to whom I am a father. My gender, my certainty that I am a woman, has never changed.
“I emerged from hiding. I nailed my colours to the mast. I am the woman I always was. I have never changed gender. My body is my own, it is a woman’s body; it always was.
“Not in this respect.”
What types of barcodes are there?
Barcodes are broken down into two groups: 1-D and 2-D barcodes. 1-D barcodes can encode only a limited amount of information (usually up to around 14 alphanumeric characters), but they can be scanned with less expensive scanners. 2-D barcodes have the ability to hold thousands of times more information, depending on which 2-D format you choose. With the increased information storage of 2-D barcodes, they can be used to store biometrics, images and more. There is a wide variety of 1-D and 2-D barcodes to choose from. Some are open standards and formats and some are closed and proprietary. The following are some examples of different barcode types:
1. Code 39 - Code 39 is one of the most common 1-D barcodes and is sometimes referred to as the "3 of 9 code", "USS Code 39", "Code 3/9", or "Type 39". The 3 of 9 barcode is capable of storing the characters 0-9, A-Z, and some special characters, and can typically store from 8 to 11 characters. The barcode gets its name because each character is encoded using 3 wide and 6 narrow elements (for a total of 9).
2. Universal Product Code (UPC) Barcode - The Universal Product Code (UPC) is very recognizable in the United States and Canada because it is the standard barcode used on almost all products and allows products to be checked out and tracked by point-of-sale systems. The code only stores numeric digits (see the check-digit sketch after this list).
3. Maxicode - Maxicode is a 2-D barcode that can store up to 100 characters. This code was developed by United Parcel Service (UPS) and is printed on their shipping labels to help track packages. The barcode is distinctive because of the bull's-eye in the center of the code, which allows the reader to find the code quickly and read it even if moving rapidly and in any position.
4. PDF 417 - PDF 417 is a 2-D barcode that allows up to 1,800 characters to be stored. This code also allows linking of more than one PDF 417 barcode to have even more storage.
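A quick way to see how the numeric digits of a UPC fit together is the check-digit rule that scanners use to catch misreads. The following Python sketch is a generic illustration of the standard UPC-A calculation, not code taken from ColorID's software; the 11-digit example number is arbitrary.

```python
def upc_a_check_digit(first_11_digits: str) -> int:
    """Compute the 12th (check) digit of a UPC-A barcode from its first 11 digits."""
    if len(first_11_digits) != 11 or not first_11_digits.isdigit():
        raise ValueError("expected exactly 11 numeric digits")
    digits = [int(d) for d in first_11_digits]
    odd_sum = sum(digits[0::2])   # digits in positions 1, 3, 5, ... (0-based even indices)
    even_sum = sum(digits[1::2])  # digits in positions 2, 4, 6, ...
    total = 3 * odd_sum + even_sum
    return (10 - total % 10) % 10

print(upc_a_check_digit("03600029145"))  # -> 2, so the full 12-digit code is 036000291452
```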
ColorID provides ID badging software that can print a variety of types of barcodes for your ID badges. Our software packages allow you to choose from over 20 different types of 1D and 2D barcodes. We can also supply you with a number of barcode readers and ID printers capable of printing barcodes on your badges.
In this week's blog, ColorID weighs in on the new Datacard SD260 printer. If you'd like to learn more about this new line of printers, contact ColorID today toll free at (888) 682-6567 or visit us online at www.colorid.com/.
Datacard's new SD260 printer will replace the SP35 series printer. This unit comes in a compact size with some advanced standard features. This printer is a simplex (single-sided) printer with the capability to do manual duplex (dual-sided) printing. There are several upgrades available, including: magnetic stripe encoder, smart card personalization, 100-card output hopper and optional security lock.
The SD260 prints incredibly fast, high-quality direct-to-card images without card jams. Datacard's quoted print speed for full color simplex printing is around 18 seconds a card. When we tested this unit we came up with very similar print speeds. This makes the SD260 considerably faster than any other full color, simplex printer on the market. Datacard has also included a new technology on its input hopper called "TruePick". With this technology it can pick cards (standard and thin) every time with no adjustments.
The SD260 also comes with several eco-friendly features including: Energy Star qualifications, biodegradable supply cores, recyclable supply materials and a separate power-down button. To our knowledge this is the only Energy Star rated ID badge printer on the market.
The new LCD screen has soft touch controls (similar to an iPod glass screen). The unit also comes standard with USB & Ethernet. This printer has the capability to be a high volume printer; however, its main drawback is its inability to print dual-sided cards or apply lamination. The new system worked successfully through all of ColorID's trials and we believe this will be a great improvement over the SP35.
Throughout the card printing process, the SD260 printer showed outstanding print speed, no operating temperature issues, and error-free operation through 250 card prints. Datacard includes a 30-month depot warranty with the printer, which is 6 months longer than that of the previous SP35 printer.
Below are some of the SD260's specifications and options available:
Today, anyone under the age of 25 would likely have no idea that it was not that many years ago that the checkout clerk punched information into a big machine for every item that was purchased at their local grocery store. Today we quickly move through the checkout line with our products being scanned and product prices jumping up on a screen in front of us. How did we move from the old cash register to the modern scanner systems we use today?
In 1948 a graduate student named Bernard Silver overheard someone ask if there was a way to identify and track grocery products. Silver told his friend Norman Woodland about this challenge and the two of them began to investigate methods and systems to identify products at checkout. They tried several ideas before coming upon the idea of a barcode-type system that could be printed on products and read by a scanner. They came up with two barcode patterns: one with vertical bars and one that used a circular pattern. They were issued patent 2,612,994 for their invention in 1952. Later this patent was sold to RCA. Woodland was hired by IBM, where he encouraged IBM to further develop his ideas.
In 1959 David Collins, who worked at Sylvania, developed a barcode-like system to identify rail cars. His system used reflective materials arranged in stripes and affixed to the sides of rail cars. This system was tested by different railways during the '60s and was ultimately made a standard by the Association of American Railroads. However, due to technical problems and a lack of enthusiasm from the railways, the idea was abandoned.
In 1966 the National Association of Food Chains held a meeting to discuss whether there was any way to automatically identify grocery items at checkout. Representatives from RCA attended the meeting, and since they had purchased the Silver and Woodland patent they proposed a system based on the patent's circular barcode. RCA pushed their circular barcode idea over the following years as many other companies got involved. At another meeting in 1971 RCA demonstrated their circular barcode. Also attending this meeting were representatives from IBM. After the meeting one of the IBM marketing people realized that one of the two original inventors of the RCA patent, Woodland, worked for IBM in North Carolina. They quickly started a project headed up by Woodland to create their own barcode system. The circular RCA barcode had serious problems with smearing when being applied to products. The IBM barcode was just vertical bars and did not suffer from the same problems.
At 8:01 AM on June 26, 1974 at Marsh's Supermarket in Troy, Ohio a pack of Juicy Fruit gum was scanned using the newly created IBM barcode standard. This barcode became what we know today as the Universal Product Code (UPC Code) that can be found on almost every product in the stores today. The receipt for this first barcode scanned package of gum is in the Smithsonian Museum.
It took many more years for this new barcode system to be widely adopted by the grocery industry, because the grocery stores did not want to invest in scanners until the products were all labeled with the new barcodes, and the manufacturers did not want to invest in the labeling equipment until a significant number of stores had scanners. Because of this "chicken or the egg" dilemma the new barcode grocery idea almost died. However, by the early '80s the new barcode system had gained a foothold and became an industry standard.
Today barcodes are on everything from our ID Cards to our toothpaste packaging.
ColorID can supply you with a number of barcode readers and ID printers that are capable of printing barcodes on your badges.
Whatever Happened to the Ozone Hole?
An environmental success story.
If you were around in the 1980s, you probably remember the lurking fear of an ominous hole in the sky. In the middle of the decade scientists discovered that a giant piece of the ozone layer was disappearing over Antarctica, and the situation threatened us all. The news media jumped on the story. The ozone layer is like the earth’s sunscreen: without it ultraviolet rays from the sun would cause alarming rates of skin cancer and could even damage marine food chains. And it turns out we were causing the problem.
Today, more than three decades after the initial discovery, the ozone hole in Antarctica is finally on the road to recovery. How did we do it? This environmental success story gives us a glimpse into what happens when scientists, industry, the public, and the government all work together to manage a problem that threatens all of us. Happy Earth Day!
To research this episode we read Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming by Naomi Oreskes and Erik Conway. We read, listened to, and used excerpts from an oral history with chemist Mario Molina that was conducted by the Science History Institute’s Center for Oral History. We also interviewed atmospheric chemist Susan Solomon at MIT in 2016.
These are the archival news clips we used as they appear in the episode:
Dow, David; Quinn, Jane Bryant; Rather, Dan. “Ozone Layer,” CBS Evening News. Aug 15, 1986.
Hager, Robert; Seigenthaler, John. “Ozone Layer,” NBC Evening News. Dec 3, 2000.
Gibson, Charles; Blakemore, Bill. “Environment/Ozone Layer,” ABC Evening News. Aug 22, 2006.
Reasoner, Harry; Stout, Bill. “Supersonic Transport Vs. Concorde,” CBS Evening News. Jan 1, 1969.
Quinn, Jane Bryant; Rather, Dan. “Ozone Layer Depletion,” CBS Evening News. Oct 20, 1986.
Chancellor, John; Neal, Roy. “Special Report (Ozone),” NBC Evening News. Sep 24, 1975.
Benton, Nelson; Cronkite, Walter. “Ozone/Fluorocarbons/National Academy of Sciences Study,” CBS Evening News. Sept 14, 1976.
Brokaw, Tom; Hager, Robert. “Assignment Earth (Ozone Layer),” NBC Evening News. Feb 3, 1992.
Whatever Happened to the Ozone Hole?
>>Distillations sound collage>>
CBS Evening News. Aug 15, 1986: Miles above Antarctica a strange and disturbing process has been measured since the late 70s. It is a growing hole in the ozone layer, that gas shroud that screens the earth from the worst dangers of ultraviolet radiation.
Alexis: Hi, I’m Alexis Pedrick.
Lisa: And I’m Lisa Berry Drago, and this is Distillations, coming to you from the Science History Institute.
Alexis: Each episode of Distillations takes a deep dive into a moment of science-related history in order to shed some light on the present. Today we’re talking about the ozone hole, in the first installment of a three-part series about environmental success stories.
Lisa: If you were a kid in the 80s like we were, you probably remember the lurking fear of the ominous hole in the sky. Maybe you didn’t understand it, but if you were like us, it freaked you out.
NBC Evening News. Dec 3, 2000: Without ozone, the sun's ultraviolet rays would shine through unfiltered, dramatically increasing cases of skin cancer, and eye damage, and damage to the entire food chain.
Alexis: So Lisa, tell me about your childhood and the ozone hole.
Lisa: I remember feeling a lot of guilt about McDonald’s. I think it wasn’t that we were worried about what was inside the container yet, but we were very worried about the Styrofoam containers.
Alexis: Exactly! I think I had that same helpless, scared feeling. I remember doing a lot of projects in school about how to save the environment. But it was always things like, ‘don’t run the water while brushing your teeth.’ But that did not seem to match up with the fact that the hole in the ozone was going to like open up and murder us all. I felt all this panic, it sounds like you felt all this panic, but then it sort of just…like…faded away. Disappear. And people kind of stopped talking about it. And then this happened in 2006:
ABC Evening News. Aug 22, 2006: Some good environmental news tonight, scientists report that the ozone layer in the Earth's atmosphere seems, finally, to be on the road to recovery.
Bill Blakemore: It'll begin to decrease, starting about 2018 or so, and by 2070 the ozone hole should be fully recovered.
Alexis: So now it’s 2018, the year the ozone hole is supposed to start closing, at least according to that ABC news clip, and here we are, living our lives…not in a Mad Max film, which is personally how I thought it was going to go.
Lisa: So maybe you’re wondering, how did we go from global freak-out to actually finding a solution? Well, it turns out the ozone hole story is a very good and very rare example of an environmental success story. It gives us a glimpse into what happens when scientists, industry, the public, and the government all work together to manage a problem that threatens everyone.
Alexis: So today we’re going to tell you how to solve any environmental problem in five easy steps. Number one: figure out the problem.
Lisa: Number two: get evidence.
Alexis: Number three: inform the public.
Lisa: Number four: get industry onboard.
Alexis: Number five: implement policy.
Lisa: And we’ll call this one step five and a half: after you’re successful, make sure you continue monitoring the issue and adjust the regulations as needed. Easy, right?
Chapter 1: Figure out the problem
Alexis: So we are at step number one: figure out the problem.
Lisa: The ozone hole story all started around in 1970 with a kind of wild idea to make supersonic jets:
CBS Evening News. Jan 1, 1969: Theoretically it will fly faster than any other passenger plane.
Sound effects of Concorde pilot pit: … Concorde 9180…
CBS Evening News. Jan 1, 1969: At nearly three times the speed of sound, eighteen hundred miles an hour.
Lisa: When the idea for these planes was, ahem, in the air, there was some concern that their emissions could damage the ozone layer. In the end it wasn’t a huge issue, but people started thinking about what could hurt the ozone layer, and that led to the discovery that some seemingly innocuous things could actually do it serious harm.
Mario Molina: Spray cans got a bad name at that time.
Alexis: That’s chemist Mario Molina, in an oral history interview conducted by the Science History Institute in 2013. In 1973 Molina was a young post-doc at UC Irvine, working in the chemistry lab of F. Sherwood Rowland. They’d been studying a set of common industrial chemicals called chlorofluorocarbons.
CBS Evening News. Aug 15, 1986: Chlorofluorocarbons, CFC's, gasses used in a variety of household products, including plastic foams.
CBS Evening News. Oct 20, 1986: 50 years ago, CFC's were a miracle of modern technology from the DuPont Company, turning the home ice box into a safe and efficient refrigerator.
Lisa: CFCs were used in things like air conditioners, refrigerators, and aerosol cans. Molina and Rowland discovered that CFCs had the potential to destroy the ozone layer, and in 1974 they published a paper in the academic science journal Nature about it. So we already knew that there were CFC's in the atmosphere. What was new was the idea that the ultraviolet rays from the sun would decompose these CFCs and the resulting chemicals would deplete the ozone. Responses to the articles from scientists familiar with the field were supportive, but there was some backlash from those outside of it.
Molina: Some thought maybe we were exaggerating or just trying to make noise.
It was an unusual thing to talk about: invisible gases, invisible rays, but eventually the media sort of picked it up.
NBC Evening News. Sep 24, 1975: Some scientists have theorized that fluorocarbon gases from spray cans and refrigerants have floated up to the stratosphere to react chemically with ozone. The government has mounted an all- out campaign to prove or disprove the theory.
Alexis: Congress acted quickly.
Lisa: Which sounds crazy, right?!
Alexis: I know. It’s not a combination of words we’re used to hearing anymore. But they took it really seriously and asked the National Academy of Sciences to look into the issue.
CBS Evening News. Sept 14, 1976: The National Academy of Sciences today confirmed that fluorocarbons in aerosol sprays weaken the Earth's protective ozone layer in the atmosphere. Spray cans, which use fluorocarbon gasses as propellant are the principal offenders in the National Academy report. The report says their use must be regulated and perhaps banned in some cases to protect the earth’s ozone layer.
Alexis: Not surprisingly, there was significant pushback from the CFC industry.
CBS Evening News. Sept 14, 1976: The industry claims that fluorocarbons are an 8 billion dollar a year business that employs more than a million people. The industry also claims that there are no adequate substitutes and that it will take years to research and develop alternatives.
Lisa: They challenged Molina and Rowland’s theory every step of the way. They formed a resistance campaign, and their star witness was a British professor of theoretical mechanics named Richard Scorer. He repeatedly denounced the ozone depletion hype with phrases like “pompous claptrap.” Eventually the LA Times exposed him as being a “scientific hired gun.”
Alexis: In 1977 three federal agencies, the FDA, the EPA, and the Consumer Product Safety Commission, announced they would phase out and ultimately ban CFC propellants by 1979. But Americans had been watching the news, and they were worried. And they’d already begun phasing them out themselves.
Lisa: It seemed like the problem was under control, until something happened in 1985 that shocked the world. And the chapter of the story that unfolded during our childhoods was about to begin. I am talking about step two: gather your evidence.
Chapter 2: Get evidence (Antarctica)
CBS Evening News. Aug 15, 1986: Scientists are worried by a mysterious massive hole high above the ice.
David Dow: In Antarctica, up to half the ozone is being depleted each year over an area the size of the United States.
CBS Evening News. Aug 15, 1986: (Susan Solomon) Nobody predicted this, it's kind of like a bomb falling out of the sky.
Lisa: In 1985 the British Antarctic Survey confirmed what their satellites had actually detected years before: there was a massive hole in the ozone above the South Pole.
Alexis: Now, dear listeners, you might be wondering, if it was already detected years before why didn’t we find out about it sooner?
Lisa: Ah, yes. Interesting story: the satellite had been recording ozone levels that were so low that they’d been catalogued as mistakes. But these were not mistakes. The ozone was disappearing over a huge area. Until this point ozone depletion was mostly theoretical. It was something that wouldn’t happen for a long time. But suddenly it was actually happening, and scientists leapt into action to try to figure it out.
CBS Evening News. Aug 15, 1986: A research team is leaving Los Angeles tonight, bound for the South Pole. Within weeks, scientists will send balloons into the Antarctic stratosphere in search of a final answer. The ozone hole, a consequence of man or nature.
Susan Solomon: When I saw the British paper I started thinking about what could possibly account for this incredible phenomenon that we just didn't anticipate at all.We started thinking about, "Hey, could we actually go down there and measure some stuff? Actually figure out whether this chemistry is what's responsible or not or some other chemistry?"
Lisa: Atmospheric chemist Susan Solomon led the expedition to Antarctica when she was just 30 years old. The 16-person team took the first chemical measurements of the stratosphere on the continent, trying to figure out what was causing the hole. This is Solomon in 1986 on CBS news, before the expedition.
CBS Evening News. Aug 15, 1986: (Susan Solomon) We don't yet know whether or not what is happening down there has anything to do with mankind. It may be a completely natural phenomenon.
David Dow: It may be, but some scientists, including Solomon, are looking hard at another long debated possibility. Chlorofluorocarbons, CFC's.
Solomon: Antarctica really is very, very cold so it truly is the coldest place on Earth. You know, they open the door of the airplane and you just think, "Okay, it's hitting my face and it's hurting my nose and this air is just so ferocious. Maybe I'll just stay in my room the whole time, you know?
Lisa: But she didn’t, of course. In fact, Solomon and her team took measurements around the clock, often in extreme conditions.
Solomon: Taking measurements in the dark, going up to the roof, pointing the mirror, almost getting blown off the roof by the strong winds that sometimes came up was pretty exciting stuff.
Alexis: Solomon had theorized that extreme temperatures were partly to blame for the ozone hole. Antarctica is so cold it has clouds in its stratosphere, which is really unusual. The Arctic also has them occasionally, but everywhere else on earth is too dry and warm.
Solomon: I realized they might be driving a very different chemistry there than we have anyplace else in the world. I did some studies of what that chemistry might do and then I wrote a scientific paper presenting the theory that reactions on the polar stratosphere clouds combining with the man-made chlorine that we've pumped up over the past decades might be enough to produce the Antarctic ozone hole and that turned out to be the right answer.
Alexis: The expedition found the smoking gun. It proved Mario Molina and Sherry Rowland’s theory. Then two other studies confirmed what Solomon’s team found.
Solomon: Three independent data sets can't all be getting the wrong answer. At some point it just becomes silly to say that's not the answer.
Chapter 3: Inform the public
Lisa: Doing the science was only part of the battle. There was another crucial step in solving this problem:
Alexis: Right, letting the public know.
Lisa: It’s so important, but it’s also much more difficult than it sounds. The news moves fast. And science—science goes at it’s own pace.
Solomon: Science wants to make very sure that everything gets properly peer-reviewed and that we really take our time producing an answer that has been checked and
rechecked by other people which is absolutely very, very important. On the other hand even in those days the demand for the information was huge because of the potential importance of it. We immediately got all kinds of requests from reporters and requests to testify. We just tried to do a balancing act.
NBC Evening News. Feb 3, 1992: Today, scientists who recently returned from the Antarctic told Congress that the ozone layer is still disappearing at an alarming rate. Dr. Susan Solomon led the expedition, and today before a House sub-committee reported results.
Solomon in NBC Evening News. Feb 3, 1992: We observed a 35 percent decrease of the total ozone overhead. Three different and independent sets of observation showed this change. I think we will eventually see large-scale depletions of the ozone layer at other latitudes.
Lisa: Solomon, Molina, and Rowland all had to confront the fact that experts in their positions weren’t accustomed to speaking directly to the public or speaking before congress. There was then and there is still now a common idea among scientists that the facts should speak for themselves. But the facts themselves, let alone the methods used to obtain those facts are not always clear or easy to interpret. And on their own they are not always enough to sway public opinion.
Solomon: I did not have then and I don't have now a good formula for how science deals with that. I think the problem we have is that bad information spreads faster than good information. Bad information doesn't have to be peer-reviewed so stuff goes flying around the Internet immediately and creates all kinds of misimpressions or fears or misunderstandings that sometimes take many, many years to unravel and sort out.
Alexis: In the 1970s Mario Molina and Sherry Rowland had a hard time navigating how to talk to the public.
Molina: A sort criticism at that time was if you are in academics, you don’t publish in the newspapers. We had a few colleagues that actually did like to publish first in the newspapers. And they were not very highly regarded as scientists, although in society they were better known.
Alexis: But Molina and Rowland felt their findings were important enough they had to break with tradition.
Molina: So that’s when slowly we decided, hey, this looks serious enough. We have to make an effort to learn to communicate with the media. If this is real, we should do something about it.
Alexis: But it was a steep learning curve to figure out how to actually do it. After they published that first article about CFCs, they decided they had to get the story out beyond a peer-reviewed journal that only scientists would read.
Molina: So we sort of naively organized a press conference.
Alexis: They brought in scientists to explain how the atmosphere works, then others to talk about the measurements that were being taken. But by the time Molina and Rowland explained their findings almost all of the reporters had left.
Molina: Because that’s not the way [laughs] press conferences work with the media. You have to come up with a punch line at the very beginning.
Alexis: Even though they weren’t great at it, Susan Solomon thinks Molina was right: talking to the public is crucial.
Solomon: Because nothing happens without public understanding. I think there's a role for science, there's a role for technology, but in terms of making a change at a large scale, even when it comes to things like chlorofluorocarbons, public understanding is absolutely critical.
Alexis: So if you use me and Lisa as case studies, the media coverage definitely worked in swaying public opinion.
CBS Evening News. Oct 20, 1986: CFC's are all around us, and in some surprising places. For example, every time you crumble a Big Mac package you may be venting CFC's into the atmosphere.
Alexis: Did little three-year-old Lisa see this CBS story in 1986? And begin fearing takeout containers?
Lisa: We’ll never know for sure, but it seems plausible.
Lisa: It’s not enough to make people understand the consequences of a problem. They need to be able to perceive the problem itself. And luckily, 1980s news footage was showing viewers the problem. They could actually visualize the ozone hole.
CBS Evening News. Aug 15, 1986: In enhanced satellite photos, it appears as a series of colored splotches over the South Pole.
Solomon: The nice thing, in some ways, about the way that the ozone hole unfolded was that people could actually see the images of what was happening and those were very, very powerful. The film clips that you can see of the ozone hole develop and grow are pretty tangible, imaginable.
Chapter 4: Implement policy, get industry onboard.
Alexis: Ok, so people were seeing the problem. We’ve gathered our evidence. Now its time to move on to step four and five. Get industry onboard and implement policy.
Lisa: The U.S., Canada, and Norway had already banned CFCs in aerosol cans back in 1978, so the discovery of the ozone hole in 1985 came as a shock.
CBS Evening News. Oct 20, 1986: Most Americans thought we got rid of the ozone problem years ago, when the government banned the use of chlorofluorocarbons in aerosol sprays. But the problem did not go away, it got worse.
Lisa: At this point CFC use was still growing in Europe and in the Soviet Union and it became clear that this was a global problem in need of a global solution. After Antarctica, policy happened fast. In 1987, almost every country in the world signed on to the Montreal Protocol. The treaty’s purpose was to protect the ozone by phasing out CFCs and other ozone-depleting chemicals. Within 15 years we went from the basic scientific understanding of a problem to implementing policy to address it. The speed of the response was unprecedented.
Alexis: Solomon says that the treaty was so successful because the proposed solutions were practical, not overly idealistic. Industries needed to find replacements for CFCs, and for some things it was really easy. Like electronic chips. Up until this point CFCs were the way to clean them. But there was a surprising and environmental friendly solution.
Solomon: People actually recognized that you could clean some types of chips pretty well with lemon juice. Pretty easy, huh?
Alexis: But other things had more at stake: take for example fire extinguishers. They were made with another ozone-depleting chemical called Halon, and there were no easy substitutes for it. So phasing out these chemicals had to happen more slowly, with help from scientists and economic advisors.
Solomon: Then, the governments could choose, for example, to take longer with the chemicals for fires extinguishers. They could choose to make an exemption for medical asthma inhalers and things like that.
Lisa: Chemical companies like DuPont also had to get on board. And let’s take a second and realize how hard it is for a major industry to shift gears like that. It requires people at every level from stakeholders, and board members to people at the factory floor, to buy into the changes and support them. It’s a huge investment. It’s a huge undertaking. And a change in infrastructure.
Alexis: And on top of that all, these changes had to be industry-wide.
Solomon: Some of the big chemical companies actually had very good scientists working in those companies. And they were huge, in my opinion, at bringing the chemical industry into the discussion in a very fair and positive way. I really wish more industries would do that.
Chapter 5: What does success look like?
Alexis: The Montreal Protocol went into effect on January 1st, 1989, ending a tumultuous decade. But despite the phasing out of CFCs and other ozone-depleting chemicals, the hole over Antarctica continued to grow throughout the 90s and into the 21st century.
Solomon: So in the old days when you'd look at it your heart would kind of sink along with the ozone because it started dropping and then it just dropped like a rock through the whole decade of the '80s.
Alexis: The paperwork had been signed, but CFCs have a lifespan of 50 to a hundred years.
Solomon: So the chlorofluorocarbons from your grandmother's refrigerator that she got rid of in 1975, some of it is still in the atmosphere depleting the ozone layer.
Alexis: But the hard work eventually started to pay off. And finally, in 2006:
ABC Evening News. Aug 22, 2006: Some good environmental news tonight, scientists report that the ozone layer in the Earth's atmosphere seems, finally, to be on the road to recovery. The hopeful update comes two decades after one gutsy woman made the world pay attention to a potentially catastrophic problem.
Lisa: Scientists estimate that the hole will fully close around 2065—eighty years after it was discovered.
Alexis: So the ozone hole story really was an environmental success story. It did involve the five steps: figure out the problem, get evidence, tell the public, get industry onboard, and pass some regulations. But it also needed other things.
Lisa: It really helped that the problem had what Susan Solomon calls the 3 p’s: The problem is personal, it’s perceptible, and the solutions are practical. The ozone hole felt like a personal problem because hair spray cans, refrigerator coolant—all things that were woven into the fabric of everyday life. The problem was perceptible because people were seeing in the news every night. Scientists were reaching out and helping to visualize their data. And the solution, the Montreal Protocol, was very practical. It’s a living document that even has opportunities to reassess and rework the problem every few years.
Alexis: So the hole over Antarctica is closing, yes. But there’s ozone depletion in other places, especially around the equator where a lot more people live. And we got other environmental issues like climate change. Yes, it’s only five easy steps, but we’d have to be willing to repeat these five easy steps over and over again and make changes that aren’t just small, personal things like hair spray or refrigerator coolant, but actually huge lifestyle changes on a major scale. It’s actually not as easy as we made it sound.
Lisa: Still, Susan Solomon is relatively optimistic about humanity’s ability to manage severe environmental problems.
Solomon: We've done that with the issue of pesticides. We've done it pretty well with the issue of smog. We've done it with acid rain. We've done it with chlorofluorocarbons. I mean, gee, we've got quite a string of successes going here, people. I think there's plenty of reasons for hope from history.
Alexis: Distillations is more than a podcast. We’re also a multimedia magazine.
Lisa: You can find our videos, our blog, and our print stories at Distillations DOT org.
Alexis: And you can also follow the Science History Institute on Facebook, Twitter, and Instagram.
Lisa: This episode was produced by Mariel Carr and Rigo Hernandez.
Alexis: And our theme music was composed by Zach Young.
Lisa: Special thanks to the Science History Institute’s oral history program for sharing the Mario Molina interview. You can find our vast collection of interviews with leading figures in chemistry, chemical engineering, life sciences, and related fields at O-H DOT SCIENCE HISTORY DOT ORG.
Lisa: For Distillations I’m Lisa Berry Drago.
Alexis: And I’m Alexis Pedrick.
Both: Thanks for listening!
After a century of relative stability, the current decade is seeing a dramatic pace of change in the generation, distribution, and consumption of electricity. Driven by technology and ambitious policy objectives, this rapid evolution has begun to stretch the electricity system in fundamental ways. In order to support the new capabilities (e.g. distributed generation, demand participation, etc.) being introduced, the electricity system needs a ubiquitous layer of information for situational awareness, coordination, and control. Developments in information technology have enabled the weaving of a “packet grid” that supports information flows independently of the “electron grid”—this combination is generally referred to as the smart grid.
As of today, most utilities still rely on legacy communication networks that were purpose-built for the support of individual applications. An integrated network architecture will be required to meet the evolving needs of the electricity system while maintaining reliability, security, and performance. In order to make effective decisions, power sector leaders will need a basic understanding of smart grid network technologies. This paper outlines some of the fundamentals of communication networks for the smart grid. The content comes from extensive ad hoc reading of Google queries, Wikipedia, and a book written by Budka, Deshpande, and Thottan from Alcatel-Lucent Bell Labs. For a more in-depth understanding, reading this book is strongly recommended.
Linking networked devices to grid applications
An integrated communication network for the electricity system is an essential backbone for many different grid applications. With the roll-out of a new generation of devices such as smart meters, sensors, and controllers, the amount of data that needs to be collected, transmitted, analyzed, and acted upon is growing exponentially. Some devices such as smart meters only need to communicate a few times per hour. Other devices, however, need to communicate more than 100 times per second and require nearly-instantaneous responses to their measurements.
Source: Alcatel-Lucent Bell Labs (modified)
Although modern technologies can easily handle smart grid requirements reliably and securely, most of the communication networks in place today were deployed by utilities decades ago. Furthermore, new infrastructure deployments are expensive and in many cases the economic benefits will take some time to be realized. Nonetheless, smart grid applications promise to revolutionize the electricity system through better efficiency, reliability, resilience, and the integration of distributed and/or intermittent resources.
How do networks transfer data?
Communication networks are used to transfer information (data) between endpoints through an interconnected grouping of paths (links) and intersections (nodes), from a source to a destination (endpoints). For the most part, data flows between nodes in both directions. Each node is capable of receiving and forwarding data over links with other nodes or endpoints. Each link has a maximum data rate (bits per second), with nodes often storing data within their buffer capacity.
Generic communication network schematic
Source: Alcatel-Lucent Bell Labs
At the most basic level, data is encoded in a stream of “0”s and “1”s (bits)—a string of 8 bits is a byte. Bytes are assembled into “units” of data transfer called protocol data units (PDU). Once sent by an endpoint into a network, a PDU is commonly referred to as a packet. Generally, packets are composed of a header containing overhead information such as the destination address, a payload with the actual data being transferred, and sometimes a trailer containing additional overhead. The format and maximum size of each packet are specified by the network.
Protocol data unit (PDU)
Source: Alcatel-Lucent Bell Labs
Communication networks provide data transfer service between endpoints to support applications across locations. For example, an email server communicates with a computer to send and receive emails. If the amount of data transferred from the source to the destination endpoints exceeds the maximum packet size, the data message is divided (or packetized) into parts that the network can accommodate. The packets are then transferred to the intended destination endpoint and reassembled by following a set of rules.
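To make the packetizing step concrete, here is a minimal sketch of splitting a message into numbered packets and reassembling them. The 1,500-byte maximum payload and the 4-byte header layout (sequence number plus total count) are illustrative assumptions, not part of any particular standard.

```python
# Minimal packetization sketch: split a message into numbered packets
# and reassemble them. The 4-byte header layout (sequence number and
# total packet count) is an illustrative assumption, not a real protocol.
import struct

MAX_PAYLOAD = 1500  # assumed maximum payload size in bytes

def packetize(message: bytes) -> list[bytes]:
    chunks = [message[i:i + MAX_PAYLOAD] for i in range(0, len(message), MAX_PAYLOAD)]
    total = len(chunks)
    # Header: 2-byte sequence number + 2-byte total packet count.
    return [struct.pack("!HH", seq, total) + chunk for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[bytes]) -> bytes:
    # Packets may arrive out of order; sort by sequence number before joining.
    ordered = sorted(packets, key=lambda p: struct.unpack("!HH", p[:4])[0])
    return b"".join(p[4:] for p in ordered)

message = b"x" * 4000
packets = packetize(message)
assert reassemble(packets[::-1]) == message  # survives out-of-order delivery
print(f"{len(message)} bytes split into {len(packets)} packets")
```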
Network protocols establish the rules of communication between endpoints and nodes in the network. These rules define the way endpoints are addressed, how packets are routed through the network, and data processes such as packetizing. Protocols are organized as a hierarchy of layers, each serving a distinct function and existing to serve the layer above it. Data is packetized from one layer to the next, with the payloads from each layer representing fragments that are reassembled into packets of the layer above.
Hierarchy of network layers
Source: Alcatel-Lucent Bell Labs
As per the Open System Interconnection (OSI) reference model, there are seven layers of network communication protocols. For each link between two nodes, communication happens at each respective layer. Multiple disparate media can be used for communication on successive links, in which case separate L1 and L2 modules are required to support link connections at each node.
| Layer | Function |
| --- | --- |
| L7: Application layer | High-level APIs, including resource sharing, remote file access, directory services, and virtual terminals |
| L6: Presentation layer | Translation of data between a networking service and an application, including character encoding, data compression, and encryption/decryption |
| L5: Session layer | Managing communication sessions, i.e. continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes |
| L4: Transport layer | Reliable transmission of data segments between points on a network, including segmentation, acknowledgement, and multiplexing |
| L3: Network layer | Structuring and managing a multi-node network, including addressing, routing, and traffic control |
| L2: (Data) Link layer | Reliable transmission of data frames between two nodes connected by a physical layer |
| L1: Physical layer | Transmission and reception of raw bit streams over a physical medium |
The L1 physical (PHY) layer is the physical communication medium used for the transfer of data as a stream of “0”s and “1”s. This medium can be a twisted pair of wires (e.g. phone line, CAT 5), coaxial cables (e.g. TV cable), optical fiber, power lines, or air (i.e. wireless). Bits are typically encoded using a modulated sinusoidal signal that is decoded by the receiving device. Transmission of data over optical fiber is done via lasers or light-emitting diodes (LED) intermittently injecting photons that travel through glass fibers over long distances at the speed of light. Power line communication (PLC) uses power lines themselves to transmit information. PLC has traditionally been limited to low data rates, however new standards are being developed to support better performance. Serial communication refers to a set of early legacy standards that are still used in many utility deployments for low data rate applications and transfer bits asynchronously a few at a time rather than maintaining a synchronized clock for sending and receiving of data. For wireless signals (e.g. 3G, LTE, RAN), the signal is usually a combination of many frequencies that occupies a specified bandwidth within the wireless communication spectrum.
The L2 link layer is used to establish a link between two nodes and reliably deliver a frame of data between the L2 entities at each node. Each L2 frame is a packet with a header, a payload, and a trailer that includes a checksum (CRC) for error detection. An acknowledgement procedure allows frames to be retransmitted if the source does not receive confirmation of successful delivery within a specified time interval. The L2 layer also supports traffic control and sequencing for the transmission of data frames between nodes. Network service providers (NSP) have traditionally offered Frame Relay services allowing endpoints connected to Frame Relay service nodes to establish L2 virtual connections (VC) with each other. The NSP then manages the Frame Relay network and transmits data frames between service nodes on behalf of customers.
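The checksum idea behind L2 error detection can be sketched with Python's built-in CRC-32. Real link layers such as Ethernet define their own frame layouts; this only illustrates the mechanism of appending a CRC trailer and verifying it on receipt.

```python
# Sketch of L2-style error detection: append a CRC-32 trailer to a frame
# and verify it on receipt. Real link layers define their own frame
# formats; this only illustrates the checksum mechanism.
import struct
import zlib

def build_frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return payload + struct.pack("!I", crc)  # 4-byte CRC trailer

def check_frame(frame: bytes) -> bool:
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) & 0xFFFFFFFF == struct.unpack("!I", trailer)[0]

frame = build_frame(b"voltage=121.4;breaker=closed")
assert check_frame(frame)                       # clean frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert not check_frame(corrupted)               # a single bit flip is detected
```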
The L3 network layer controls the operation of packet transmission by assigning addresses to nodes and routing frames along physical paths. The L3 layer also groups clusters of nodes and endpoints into subnetworks that have a unique common address referred to as the subnet’s gateway. L3 traffic is exchanged between subnets and across networks through routers at each node that maintain routing tables which map the address of each destination endpoint and help direct packets to the next router along the path. To prevent packet non-deliveries due to router or network link failures, L3 entities manage router buffers, periodically detect failures, and modify routing tables according to routing protocols (e.g. Open Shortest Path First, or OSPF) that recalculate optimal paths using metrics such as minimum number of nodes (hops) or delays. Another important function of the network layer is to provide quality of service (QoS), which we describe later in its own section.
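As a rough illustration of how a link-state protocol like OSPF derives a routing table, the sketch below runs Dijkstra's algorithm over a toy topology and returns the next hop toward each destination. The node names and link costs are invented.

```python
# Sketch of shortest-path route computation, as an OSPF-like link-state
# protocol would perform it, using Dijkstra's algorithm. The topology and
# link costs below are invented for illustration.
import heapq

links = {  # node -> {neighbor: link cost}
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}

def routing_table(source: str) -> dict[str, str]:
    """Return a table mapping each destination to the next hop from `source`."""
    dist = {source: 0}
    next_hop: dict[str, str] = {}
    heap = [(0, source, "")]  # (cost so far, node, first hop on the path)
    while heap:
        cost, node, hop = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, link_cost in links[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                first = hop or neighbor  # neighbors of the source are their own first hop
                next_hop[neighbor] = first
                heapq.heappush(heap, (new_cost, neighbor, first))
    return next_hop

print(routing_table("A"))  # {'B': 'B', 'C': 'B', 'D': 'B'}
```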
The L4 transport layer ensures that messages are delivered without errors, in the correct sequence, and with no duplications or losses. It relieves the higher layer protocols from any concern over the transfer of data across the network. Unlike lower-level connectionless protocols that only require a secure link between immediately adjacent nodes, the transport layer is often supported by a connection-oriented protocol that provides end-to-end connections between endpoints across the network. The L4 layer provides message segmentation for the higher layers by splitting large packets into smaller units that the network layer can handle and reassembling them at the destination. The transport layer also inserts control information in its packet headers for traffic sequencing, provides acknowledgements for reliable end-to-end message delivery, and enables the multiplexing of several message streams. The role of the L4 transport protocol depends on the type of service that is provided by the lower layers (e.g. VC, checksum). If the network layer is unreliable, for example, the transport layer will need to include extensive error detection and recovery.
The L5 session layer supports the establishment, maintenance, and termination of sessions (i.e. connections) between two application processes running on different machines connected to endpoints across the network. The L5 layer functions allow application processes to communicate over the network through security clearances, name recognition, logging, etc.
The L6 presentation layer acts as a translator between the application layer and the network. It formats the data to be presented to the application layer and provides data conversion, data compression, and data encryption.
The L7 application layer serves as the network interface for users and applications. It contains a variety of common functions such as resource sharing, remote file access, directory services, electronic messaging (e.g. email), etc.
For communication between two endpoints, the L1 physical and the L7 application layer are always present. The combination of layers between L1 and L7, which is referred to as the protocol stack, can either span all layers or none at all—in which case the endpoints would be directly connected to each other over a single link. In many network configurations, the L5 session layer and L6 presentation layer are not present. Some protocols also span several layers or include certain features of a given layer while excluding others.
Common smart grid networking protocols
RS-232-C is a L1 standard for serial communication used in data transmission for legacy applications (e.g. power substation). It defines the signals connecting between data terminal equipment (e.g. computer terminal) and data communication equipment (e.g. modem). The standard defines the electrical characteristics, timing, and meaning of signals. Due to the large size of connectors and low data rates, RS-232-C has been replaced in computers by newer standards such as universal serial bus (USB). Nonetheless, the standard is still widely used in industrial and scientific applications.
Time division multiplexing (TDM) is a L1 protocol that allows a single physical medium to simultaneously transmit data on several channels by rapidly alternating between different streams of data and recomposing the data for each channel on the receiving end. The most basic unit of the TDM data rate (64 kbps, kilobits per second) is called digital signal zero (DS0). A collection of 24 DS0s is a transmission system one (T1) and a collection of 28 T1s is a T3, which has a data rate of 45 Mbps. In fiber optic networks, wave division multiplexing (WDM) is a L1 protocol that makes use of different colors to create simultaneous transmission channels, thus dramatically increasing the data transmission capacity of a single optical fiber.
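The TDM hierarchy arithmetic can be checked directly. The 8 kbps of framing overhead per T1 (one framing bit per 193-bit frame, at 8,000 frames per second) is why a T1 runs at 1.544 Mbps rather than exactly 24 × 64 kbps.

```python
# TDM hierarchy arithmetic: 24 DS0 channels plus 8 kbps of framing
# overhead make a T1; 28 T1s (plus additional framing) make a T3.
DS0_KBPS = 64
T1_FRAMING_KBPS = 8            # one framing bit per 193-bit frame, 8,000 frames/s

t1_kbps = 24 * DS0_KBPS + T1_FRAMING_KBPS
print(f"T1 = {t1_kbps} kbps")                                   # 1544 kbps = 1.544 Mbps

t3_payload_kbps = 28 * t1_kbps
print(f"28 x T1 payload = {t3_payload_kbps / 1000:.3f} Mbps")    # ~43.2 Mbps
# The nominal T3 rate of ~45 Mbps (44.736 Mbps) includes further
# multiplexing and framing overhead on top of the 28 T1 payloads.
```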
Synchronous optical network (SONET) is a very high data rate (up to 40 Gbps) L1 TDM protocol originally developed for voice communication over optical fiber networks. Outside of North America, its equivalent is called Synchronous Digital Hierarchy (SDH). SONET signaling and frame formats have also been implemented for communication over other media (e.g. wireless microwaves). The basic unit of communication for SONET is called synchronous transmission system one (STS1) with a data rate of 51.84 Mbps. The L2 protocol for SONET is typically packets over SONET/SDH (POS). POS employs the point-to-point protocol (PPP), a common L2 protocol that can simultaneously support multiple L3 network protocols but requires a duplex circuit (i.e. two separate one-way channels operating simultaneously in opposite directions) to operate.
Ethernet is an L2 protocol that is very common in local area networks (LAN) and has evolved to support higher data rates over longer distances such as metropolitan area networks (MAN) (e.g. through optical fiber). Typically, the Ethernet protocol is coupled with the media access control (MAC) protocol, a sublayer of the L2 link layer. The MAC sublayer acts as an interface with the L1 physical layer by emulating full-duplex (dual one-way) communication on half-duplex (single one-way) channels. In addition to receiving and transmitting frames of data, the MAC sublayer assigns addresses to each device connected on the network. MAC addresses, which are assigned to each device at the time of manufacture and thus globally unique, are only addressable within a subnetwork by the local router—a network layer is required to transmit data beyond the LAN. Ethernet networks are composed of Ethernet switches connected together in a spanning tree structure that ensures a loop-free topology. In order to transmit data, Ethernet uses a carrier sensing process whereby each switch (station) continuously monitors each of its connections for frames being transmitted through the broadcast domain. All stations in the domain receive all transmissions simultaneously (multiple access); however, only the station whose MAC address matches that of a transmission retains the frame for processing or forwarding. When more than one station transmits a frame at the same time, collision detection occurs and each station reattempts its transmission after a random delay. Ethernet switches use MAC address learning to maintain a table mapping sources and destinations, thus eliminating the need for a broadcast when frames can be directly forwarded.
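MAC address learning can be captured in a toy sketch: the switch records the port on which each source address was last seen, forwards on a single port when the destination is known, and floods to all other ports otherwise. The port numbers and (shortened) addresses below are invented.

```python
# Toy sketch of Ethernet MAC address learning. The switch notes which port
# each source MAC was last seen on; known destinations are forwarded on one
# port, unknown destinations are flooded to all other ports.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table: dict[str, int] = {}    # MAC address -> port

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> set[int]:
        self.mac_table[src_mac] = in_port       # learn/refresh the source
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}    # forward on the known port only
        return self.ports - {in_port}           # flood everywhere else

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame("aa:01", "bb:02", in_port=1))  # unknown dst -> flood {2, 3, 4}
print(sw.handle_frame("bb:02", "aa:01", in_port=3))  # dst already learned -> {1}
print(sw.handle_frame("aa:01", "bb:02", in_port=1))  # now known -> {3}
```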
Internet Protocol (IP) is the most widely implemented L3 network layer protocol. Although the Internet is based on IP, the protocol is also used on networks unrelated to the Internet. IP was developed to connect any two endpoints that have at least one or more networks providing a data path between them. IP can operate independently of the underlying physical media, L1 layers, and L2 layers that make up the network interconnections. An IPv4 address is a combination of four 8-bit numbers separated by periods (a.b.c.d, where 0≤a,b,c,d≤255), for a total of 2^32 (4.3 billion) addresses globally regulated by the Internet Assigned Numbers Authority (IANA). Every entity (node or endpoint) that is addressable in an IP network must have a unique IP address. IANA has reserved a subset of approx. 17 million addresses that can be used redundantly by devices within different subnetworks, as long as those devices do not connect directly to the Internet (i.e. without going through a router). Due to the rapid growth of networking over the past two decades, it became evident long ago that IPv4’s 32-bit address space would rapidly become inadequate to support the growing demand for IP addresses—by the Internet-of-Things (IoT), for example. To address this concern, IPv6 was developed in the mid-1990s with 128-bit IP addresses that can support up to 2^128 (about 3.4 × 10^38) unique addresses. Although new network products have begun to support IPv6, deployed networks are taking a long time to migrate—it should be noted that IPv6 can embed IPv4 addresses (e.g. IPv4-mapped addresses) to ease migration, although the two protocols are not directly interoperable.
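Python's standard-library `ipaddress` module can illustrate the address-space arithmetic and the reserved private ranges mentioned above; the specific addresses used below are just examples.

```python
# Illustrating IPv4/IPv6 address space and private (reserved) ranges
# with Python's standard-library ipaddress module.
import ipaddress

print(2 ** 32)    # IPv4 address space:  4,294,967,296 (~4.3 billion)
print(2 ** 128)   # IPv6 address space: ~3.4e38

# RFC 1918 private ranges can be reused inside different subnetworks
# because they are never routed on the public Internet.
for net in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    print(net, ipaddress.ip_network(net).num_addresses)

meter = ipaddress.ip_address("10.12.34.56")          # example address, not a real device
print(meter.is_private)                               # True
print(ipaddress.ip_address("2001:db8::1").version)    # 6 (IPv6 documentation prefix)
```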
Transmission Control Protocol (TCP) is an L4 transport layer that is defined over IP and often referred to as TCP over IP (TCP/IP). TCP is a connection-oriented protocol that provides reliable, sequenced, and error-checked delivery of a stream of packets between application endpoints. TCP also handles the segmentation of messages into packets on behalf of the application layer. Due to network traffic and congestion, IP packets can be lost, duplicated, or delivered out of order. Under TCP, the destination endpoint provides acknowledgement for each packet successfully delivered by the source endpoint. The source endpoint keeps track of each packet sent; if a positive acknowledgement is not received within a set time, the packet is retransmitted. TCP is optimized for accurate and reliable delivery of packets rather than timely delivery, thus incurring relatively long delays. Due to high overhead, TCP is referred to as a heavyweight protocol. Similar to TCP/IP, User Datagram Protocol (UDP) is an L4 transport layer that is defined over IP. However, UDP has little overhead (lightweight) and does not guarantee the delivery of packets (connectionless). UDP is often used for applications such as voice that are highly time-sensitive and would not benefit from the retransmission of lost packets.
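A minimal loopback sketch shows UDP's fire-and-forget character: no handshake, acknowledgement, or retransmission happens unless the application adds it. The port is chosen by the operating system and the payload is arbitrary.

```python
# Minimal loopback sketch of UDP's connectionless, fire-and-forget delivery.
# There is no handshake, acknowledgement, or retransmission: if the datagram
# were lost, the sender would never know unless it implemented its own checks.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))             # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"meter-reading:412.7kWh", addr)   # no connection is established

data, peer = receiver.recvfrom(2048)
print(data, "from", peer)

sender.close()
receiver.close()
```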
MPLS network schematic
Source: Alcatel-Lucent Bell Labs
Multiprotocol Label Switching (MPLS) is often considered a “layer 2.5” protocol, as it performs the functions of L1 and L2 while also including features that are typical of L3. MPLS services allow a single network to satisfy different types of traffic by emulating many L1 and L2 protocols including T1, PPP, Frame Relay, and Ethernet. The MPLS protocol does not have any sublayers of its own (i.e. L1 or L2) and is agnostic to the L1 and L2 layer protocols used for connectivity. An MPLS network is composed of three types of nodes: customer edge routers (CE) are outside endpoints connecting into the network, provider edge routers (PE) are MPLS network endpoints, and provider routers (P) are intermediate nodes for data transmission within the MPLS network. The MPLS packet header, which is inserted between the L2 header and the payload, designates a MPLS label for each packet. MPLS routers maintain routing tables for each label and forward packets along predetermined label-switched paths (LSP) according to their label mappings. All packets entering a given path follow the same sequence of routers to their next destination. There can be several different LSPs defined between two endpoints and each LSP is unidirectional—for two-way communication, two LSPs must be defined in opposite directions and each can go through a different set of routers. LSPs can be defined or removed from the network at any time according to the network’s label distribution protocol (LDP), which also determines network changes due to router or link failures. MPLS provides extended QoS functionality and supports guaranteed minimum data rates for given LSPs through the resource reservation protocol (RSVP). It is expected that MPLS will replace legacy L2 protocols such as Frame Relay.
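The label-swapping idea can be modeled with a toy forwarding table: each router looks up only the incoming label, swaps it according to its table, and forwards, so the full IP header is not examined at every hop. The labels and the three-router path below are invented for illustration.

```python
# Toy model of MPLS label switching along one unidirectional LSP.
# Each router looks up only the incoming label, swaps it, and forwards;
# the labels and three-router topology are invented for illustration.
label_tables = {
    "PE1": {"ingress": ("P1", 101)},   # PE1 pushes label 101 toward P1
    "P1":  {101: ("P2", 202)},         # P1 swaps 101 -> 202, forwards to P2
    "P2":  {202: ("PE2", 303)},        # P2 swaps 202 -> 303, forwards to PE2
    "PE2": {303: ("egress", None)},    # PE2 pops the label and delivers the packet
}

def forward(packet: str) -> None:
    router, label = "PE1", "ingress"
    while True:
        next_hop, new_label = label_tables[router][label]
        print(f"{router}: label {label!r} -> {new_label!r}, send to {next_hop}")
        if next_hop == "egress":
            print(f"delivered: {packet}")
            return
        router, label = next_hop, new_label

forward("SCADA poll for substation 12")
```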
Long-Term Evolution (LTE) is a fourth-generation (4G) wireless communication standard developed by 3GPP and commonly used in smartphones. LTE has been widely deployed and can transmit data at high data rates (up to 100 Mbps) with low latency (less than 100 milliseconds). The wireless antenna base station for an LTE network is called the evolved Node B (eNB) and establishes the connection with user equipment through the radio access network (RAN). LTE uses a L1 protocol called orthogonal frequency-division multiplexing (OFDM) to enable simultaneous two-way communication through frequency-division duplexing (FDD), whereas each mobile device’s uplink (UL) transmitter and downlink (DL) receiver operate at different carrier frequencies. LTE also supports time-division duplexing (TDD), whereas the UL and DL are separated by the allocation of different time slots within the same frequency band. LTE also employs multiple input and multiple output (MIMO) to multiply the capacity of individual radio links by using multiple transmit and receive antennas to exploit multi-path propagation of signals. At the L2 link layer, LTE has a MAC sublayer that performs channel mapping, data handling, and ciphering to prevent the unauthorized acquisition of data. In addition, the LTE L2 also includes a radio link control (RLC) sublayer that performs packet segmentation and reassembly, transfer acknowledgement, and flow control between entities. At the L3 layer, LTE supports an end-to-end IP connection and the eNB base station itself is an IP node. The eNB is the bridge between user devices and evolved packet core (EPC), which serves as the gateway between the wireless network and the broader IP network. The EPC handles packet routing and forwarding, IP address allocation, access equipment authentication, QoS, and more.
Radio frequency mesh (RF-mesh) networks are ad hoc communication networks composed of radio nodes (i.e. wireless routers) organized in a mesh topology. For smart grid applications (e.g. smart meters), the predominant RF-mesh standard is Zigbee, which was developed and specified by an alliance of smart grid companies. RF-mesh networks are supported at L1 and L2 by the IEEE 802.15.4 standard for low data rate wireless networks. Zigbee enhances the 802.15.4 standard by adding L3 networking and security functions required for smart grid applications. RF-mesh works by building a multihop network that dynamically establishes connections between neighboring nodes. When a node connects to the mesh network, it begins to exchange data frames with the other nodes in the network over an air interface and the mesh protocol routes each message to its destination node. Due to constraints in transmission power or physical obstacles, the RF range between nodes may be limited—to support a larger and more reliable network, stand-alone data forwarders (DF) can be deployed to extend RF range and reduce the number of “hops” required. DFs are more effective when mounted at a proper height (e.g. on a power pole) to maintain a clearer line of sight. As with Ethernet communication, a logical mesh must be created so that each node receives data frame broadcasts according to the spanning tree. Each node either retains the frame (if it is addressed to itself) or forwards it toward its destination. Because RF-mesh networks communicate wirelessly over the unlicensed spectrum, other wireless traffic may cause interference with the RF-mesh. Such interference can be mitigated by using frequency hopping spread spectrum technology that spans multiple channels. The usable data rate for RF-mesh depends on the data rate supported by the radio, the number of hops between a node and the destination, the protocols used over the radio broadcast, and packet overheads.
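A rough back-of-the-envelope estimate can illustrate how hop count and overhead eat into the usable rate. The radio rate, header fraction, and the simple "divide by hops" rule below are assumptions for illustration only, not figures from any standard.

```python
# Rough back-of-the-envelope estimate of usable RF-mesh throughput.
# The radio rate, per-frame overhead fraction, and hop penalty are
# assumptions for illustration, not figures from any standard.
RADIO_KBPS = 250          # assumed raw 802.15.4-class radio rate
HEADER_FRACTION = 0.25    # assumed share of each frame spent on headers/MAC

def usable_kbps(hops: int) -> float:
    # Each extra hop retransmits the same frame over the shared channel,
    # so end-to-end throughput roughly divides by the hop count.
    return RADIO_KBPS * (1 - HEADER_FRACTION) / hops

for hops in (1, 2, 4, 8):
    print(f"{hops} hop(s): ~{usable_kbps(hops):.0f} kbps usable")
```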
Substation: SCADA, DNP3, and teleprotection
Supervisory Control and Data Acquisition (SCADA) systems have been used by utilities since the 1960s to proactively monitor and control grid operations. The SCADA master control is typically located in the utility’s centralized data and control center (DCC) and connected through the communication network to remote terminal units (RTU) deployed within each of the transmission and distribution substations. At each substation, one or more human–machine interface (HMI) and engineering workstations may be installed for local access to substation functions such as manual control and device configuration.
Primary equipment inside the substation enables SCADA to control substation operations in real-time. To protect the circuit from short circuits, ground faults, or other anomalies, there is switchgear in place to trip the circuit in case of failures. Switches are operated manually, circuit breakers (CB) trip automatically when there is a failure, and reclosers have the capability to switch back on without manual intervention. Other primary equipment includes current transformers (CT), voltage transformers (VT), and voltage regulators.
Secondary equipment supports control functions and reports the measurement of voltages (V), currents (A), power (W), and reactive power (VAR), as well as the status of various substation systems. Bay controllers and relays (low-power devices) receive control signals from the DCC through the RTU and actuate substation elements. The RTU also collects measurements, alarms, and other information from secondary equipment and forwards it to the DCC. Because of the large number of devices connected through individual serial connections, the result is a complex pair-wise copper wiring mesh.
Legacy substation schematic
Source: Alcatel-Lucent Bell Labs
Recently, utilities have begun modernizing their SCADA systems with substation automation by deploying microprocessor-based intelligent electronic devices (IED) to replace conventional equipment such as CTs, VTs, RTUs, bay controllers, and relays. A single IED may support functions formerly supported by multiple conventional devices in the substation, thus reducing the required number of devices and interconnections. These modern devices introduce new substation functions, simplify operations, improve performance, support newer communication protocols, and reduce costs. Utilities are also implementing new transmission management system (TMS) and distribution management system (DMS) applications that take advantage of the improved monitoring and control functions of the IEDs.
IEC 61850 is a comprehensive set of standards for utility substation systems specifying object models that characterize substation equipment and communication. As a basis for multivendor interoperability, a configuration language is defined to allow standards-based tools for SCADA operations and maintenance functions. With IEC 61850, all substation devices are based on IEDs that support one or more functions including switchgear, CT, VT, bay controller, and relay. According to the number and location of switchgear and transformers in the substation, one or more Ethernet LAN-based process busses are deployed to support local interconnections of IEDs. The station bus provides connections between IEDs and other systems in the substation (e.g. HMI)—process busses connect into the station bus, which is also an Ethernet LAN. The station bus connects to the SCADA master controller in the DCC through the IP router connected to the station LAN. An important aspect of IEC 61850 is the definition of the generic substation events (GSE), which provide a fast (within 4 ms) and reliable mechanism for generating event notifications within a substation.
Modern substation schematic
Source: Alcatel-Lucent Bell Labs
Distributed Network Protocol 3 (DNP3) is becoming the most prevalent SCADA communication protocol and replacing many traditional protocols. DNP3, which is still evolving, was developed by the DNP3 user group, an organization with members representing utilities and SCADA product vendors. Despite the word “distributed”, DNP3 is used by utility SCADA systems for both transmission and distribution, as well as in other industries such as water and gas supply. With the evolution of IEC 61850, rather than going through the RTU each IED can communicate directly with the SCADA master over DNP3 through the IP network. During the transition to IEDs, serial connections are tunneled using IP—once IEDs are fully deployed and all devices are communicating directly with the DCC over DNP3, the RTUs can be removed. Connectivity to conventional equipment may be maintained for a period of time by connecting it to IEDs that support those devices. DNP3 provides for periodic polling of substation devices (typically every 2-5 seconds) by the SCADA master control for measurement data (e.g. voltage, frequency, accumulated kWh), status information (e.g. switch on/off), and special-purpose data (e.g. temperature, wind speed). In addition to responding to data requests, IEDs also asynchronously send information on substation events (e.g. circuit failure) as they occur. Based on data received from the substation and other available information, the SCADA master sends control signals to IEDs/RTUs (e.g. disconnect a switch, change taps on a voltage regulator). To ensure that time stamps are accurate, DNP3 supports synchronization of IED/RTU clocks.
The DNP3 protocol contains its own application, transport, and data link layers—from the perspective of the network, together these layers form an application layer riding over TCP or UDP. DNP3’s application layer breaks messages into fragments, the transport layer breaks fragments into packets, and the link layer adds its header on each packet to form a frame. To be compatible with L1 layers such as RS-232-C, DNP3 is defined over serial physical layer connections that can be emulated (i.e. with MPLS) and has an end-to-end data link layer. DNP3 uses the substation’s existing IP network (defined over the station bus’ Ethernet LAN) and features an L4 data connection management layer that allows it to run over TCP/IP or UDP/IP. The substation’s IP router communicates with the DCC through the IP network.
Teleprotection occurs when protection relays at different substations need to communicate with each other to locate faults in the circuit and disconnect faulty transmission lines by tripping a circuit breaker in either or both substations. For example, the distance relay at substation A may detect that there is a fault based on CT/VT instrumentation and send a permissive signal through the communication line to the relay in substation B. If the relay at substation A also receives a reciprocal signal from substation B, the transmission line is tripped at substation A. Within each substation, the relay that reports the fault may be different from the relay that sends the trip signal to the circuit breaker. Because faults in high-voltage lines can lead to severe power outages and danger, they must be cleared quickly and reliably. Teleprotection has very stringent delay requirements of less than 8 ms for relay communication between substations and less than 4 ms within each substation. The required bit error rate of less than 10^-6 (one in one million) often necessitates multiple communication paths between substations. Given the stringent delay requirements, protection equipment for two adjoining substations is usually directly connected over SONET/SDH, Ethernet, wireless microwave, or PLC.
Other substation applications include closed-circuit television (CCTV) and mobile workforce communication (MWC).
Transmission: WASA&C, FACTS, and DLR
Wide area situational awareness and control (WASA&C) refers to the near real-time monitoring and control of transmission system operations across interconnected utility grids and over large geographical areas. Until recently, utilities have mostly relied on their respective transmission management systems (TMS) and SCADA to monitor and control their respective power grids. In comparison with TMS and SCADA, WASA&C is dramatically more granular (60x per second versus once every 2-5 seconds), gathers more measurements (phase angle rather than just voltage or current), monitors across many utilities, and has synchronized time stamps. In order to measure phasor values (amplitude and phase) for voltage and current across the network at extremely high frequencies (up to 120x per second for 60Hz lines), WASA&C employs sophisticated IEDs called phasor measurement units (PMU). All PMU measurements are time-stamped using a clock that is synchronized to the global positioning system (GPS). With synchronized time-stamping, PMUs are called synchrophasors.
Following the blackout of 2003, the Department of Energy (DOE) came together with North American utilities and regulators to define a specification for the North American Synchrophasor Initiative network (NASPInet). Still today, instead of using it for wide area grid control, utilities are simply forwarding PMU data to their respective independent system operators (ISO) and regional transmission organizations (RTO). NASPInet aims to create a network infrastructure for secure, reliable, and high-performance communication between synchrophasors and WASA&C applications deployed at utility DCCs, ISOs, and RTOs. The WASA&C applications at a utility DCC will eventually need to process data received from many thousands of PMUs, including its own and those deployed by other utilities. Phasor data concentrators (PDC) are deployed at each substation containing PMUs and consolidate PMU data as needed—historical PMU data is stored in archives. At its core, NASPInet is a data bus that provides the secure, reliable, high-performance communication network and centralized services necessary for PMU traffic across utilities and monitoring centers. Integral to NASPInet are phasor gateways, which connect utility and monitoring center networks to the data bus. For secure data communication, a decentralized information-sharing network architecture called SeDAX is being considered for the data bus implementation. Performance requirements for NASPInet are categorized from class A through E, in descending order of stringency. Class A, which is used for wide area voltage and reactive power control, requires very high data rates (up to 120x per second for 60Hz lines), very high reliability (99.9999%, or unavailable less than 32 seconds per year), and very low latency (less than 50 ms delay). Class C, in contrast, can tolerate fewer than 30 measurements per second with 99.99% (53 minutes per year) reliability and network delay of 1 second.
Source: Alcatel-Lucent Bell Labs
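The availability figures quoted above for the NASPInet classes translate directly into allowed downtime per year, as the small check below shows.

```python
# Converting availability percentages into allowed downtime per year,
# matching the figures quoted for the NASPInet performance classes.
SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_seconds(availability_pct: float) -> float:
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

print(f"99.9999%: {downtime_seconds(99.9999):.0f} s/year")       # ~32 seconds
print(f"99.99%:   {downtime_seconds(99.99) / 60:.0f} min/year")   # ~53 minutes
```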
Flexible AC transmission systems (FACTS) are used to regulate reactive power (VAR) by controlling the reactance of capacitors along transmission lines with thyristors, which are semiconductor devices that act as “valves” for capacitors. FACTS allows capacitors to dampen voltage and power transients by dynamically changing their reactance. IEDs deployed in transmission substations to support the FACTS also provide monitoring and control functions that improve transmission power flow.
Dynamic line rating (DLR) allows transmission lines to increase their capacity by monitoring environmental conditions using IEDs deployed at transmission towers. Because heat directly affects line resistance and transmission losses, the actual capacity of transmission lines at any given moment is sensitive to environmental conditions. Generally, transmission line capacity ratings (i.e. maximum current) are based on the worst possible conditions. By using IEDs to measure environmental factors in real-time (e.g. ambient temperature, wind, solar radiation, ice accumulation, etc.), DLR has the potential to provide an additional 10–15 % transmission capacity 95 % of the time and 20–25 % more transmission capacity 85 % of the time—without costly investments in transmission upgrades. The parameters monitored by DLR IEDs also help compute power line sag to improve reliability and safety.
Given the significant number of IEDs that need to be monitored over large geographies, wireless networks with wide area coverage are the most appropriate communication medium for IEDs deployed at transmission towers. Intermediate data concentrators may be used at substations to connect with the centralized network.
Distribution: DMS, DA, VVWC, and AMI
Distribution management systems (DMS) are collections of applications that act as a decision support system to assist the DCC and field personnel with the monitoring and control of the distribution system. The main purpose of a DMS is to improve reliability and quality of service by regulating voltage magnitudes and frequencies, reducing outages, etc. By accessing various data sources such as SCADA and OMS, DMSs integrate real-time information on a single console at the DCC. Outage management systems (OMS) complement manual customer reports of power outages with automated applications such as customer information systems (CIS) and geographical information systems (GIS) to rapidly and accurately detect outages due to extreme weather, technical failures, human error, or intrusion. Distribution operation modeling and analysis (DOMA) monitors real-time distribution system flows and simulates future scenarios (i.e. look-ahead and “what if”) to support grid operators with analysis for decision-making.
Distribution automation (DA) refers to the acquisition of measurement data and control of feeder devices through IEDs connected to those devices. DA extends IEC 61850 substation automation to the automation of feeder devices. For DA, each feeder device must be connected to an IED to support measurement, monitoring, and control functions. Depending on the available communication technology, a DA data concentrator can be deployed at the substation to collect IED data for the DA master and relay commands and polls back to feeder device IEDs. Reclosers are circuit breakers that monitor the feeder and are triggered when the current exceeds a certain threshold. Shortly after being triggered, reclosers automatically attempt to reconnect the circuit several times before concluding that the fault is permanent, after which they have to be operated manually. Switches are used to manually sectionalize (on-site or via executed commands) faulty feeder sections to divert power until repairs are completed. Capacitor banks are used to dynamically control reactive power (VAR) and maintain the power factor (PF) as close to 1 as possible. For dynamic VAR control, real-time electric measurements are used to rapidly connect or disconnect portions of capacitor banks on the feeder. Deploying capacitors close to inductive loads is more effective for reducing VAR than centralizing them in substations. Distribution transformer loads and other measurements (e.g. internal temperature), although typically not monitored but rather estimated from customer meter data, are becoming increasingly important to the efficiency (e.g. proper asset sizing) and reliability (e.g. failure predictions) of the distribution system. Synchrophasors deployed along distribution feeders are becoming increasingly valuable for better state estimation and accurate monitoring of power quality (consistent sinusoid of voltage frequency and elimination of harmonics) in distribution systems, particularly where there are large-scale installations of distributed generation (DG) such as solar photovoltaic.
Distribution automation schematic
Source: Alcatel-Lucent Bell Labs
The Volt, VAR, Watt Control (VVWC) function ensures that various electric quantities remain within acceptable operational ranges by regulating voltage (V), adjusting reactive power (VAR), and controlling the power (W) delivered through the grid. VVWC functions are required for both transmission and distribution systems and may be integrated between the two. At the distribution level, VVWC functions coordinate with demand response (DR) to control power (e.g. reducing demand through voltage control). The function also controls capacitor banks in substations and distributed energy resources (DER) for generation and storage. VVWC may collect data from various sources including SCADA, DA, DER, and AMI. Upon processing, the VVWC function sends control messages to IEDs at distribution substations, feeder systems, and DERs.
Advanced metering infrastructure (AMI) refers to the network infrastructure that connects smart meters deployed on customer property, including intermediate network elements supporting connection with the meter data management system (MDMS) located at the DCC. According to July 2014 estimates from EPRI, there were 50 million smart meters deployed across the United States, covering 46% of U.S. households. A large number of utility functions are supported by AMI measurements, including operational functions such as automated meter reading (AMR), demand forecasting, DA, VVWC, and outage management, as well as business functions such as customer billing and revenue protection.
Smart meters provide periodic interval measurements, typically once every 15 or 60 minutes, as well as threshold alarms when a measurement (e.g. voltage) exceeds or falls below a pre-set value. Measurements for 3-phase meters are provided per phase, per line, and for the entire 3-phase connection. For high-voltage lines, meter connections may require CTs and VTs. Customers with on-site DG can either report the net energy flow with a single meter or use two separate meters for power consumption and production. In addition to measurements, AMI supports remote meter maintenance functions such as disconnection and reconnection, registration with the MDMS (e.g. after an outage), and firmware updates.
Smart meter measurements
| Measurement | Unit / symbol | Type |
| --- | --- | --- |
| Average reactive power | VAR | Average |
| Reactive power consumption | VARh | Cumulative |
| Power factor (VAR) | \|cos φ\| | Instantaneous |
| Phase angle (φ, degrees) | sin(φ) | Instantaneous |
| Pricing mode | TOU/CPP/RTP | Selection set |
Meter data management systems (MDMS) import, validate, cleanse, and process the large quantities of data delivered by smart meters, provide long-term data storage, and support customer billing and analysis. AMI data is transmitted from smart meters through their communication interfaces (either integrated or attached to meters) and carried through the neighborhood area network (NAN), which is most commonly based on RF-mesh over the unlicensed wireless spectrum or PLC through the distribution transformer (see above sections on layers and standards). Meter data concentrators are used to support communication between clusters of meters and the MDMS in the DCC, either over RF-mesh (up to several thousand meters) or PLC (limited within the secondary of the distribution transformer). Meters and the MDMS communicate over an end-to-end IP connection and each meter is IP-addressable by the MDMS.
AMI network schematic
Source: Alcatel-Lucent Bell Labs (modified)
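To make the MDMS role concrete, here is a sketch of the kind of validation and aggregation it performs on interval data: flagging missing 15-minute intervals and rolling readings up to a total. The readings themselves are invented.

```python
# Sketch of MDMS-style interval data validation and aggregation:
# flag missing 15-minute intervals and roll readings up to a total.
# The interval readings below are invented.
from datetime import datetime, timedelta

INTERVAL = timedelta(minutes=15)

readings = {  # timestamp -> kWh consumed during the interval
    datetime(2014, 7, 1, 0, 0): 0.21,
    datetime(2014, 7, 1, 0, 15): 0.19,
    # 00:30 interval missing (e.g. a lost NAN packet)
    datetime(2014, 7, 1, 0, 45): 0.25,
}

start = datetime(2014, 7, 1, 0, 0)
expected = [start + i * INTERVAL for i in range(4)]   # first hour only, for brevity

missing = [t for t in expected if t not in readings]
total_kwh = sum(readings.get(t, 0.0) for t in expected)

print("missing intervals:", [t.strftime("%H:%M") for t in missing])   # ['00:30']
print(f"hourly total: {total_kwh:.2f} kWh")                            # 0.65 kWh
```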
When AMI solutions are vendor-proprietary, a product-specific meter management system called the head end communicates with meter data concentrators over the IP network using a standard protocol such as extensible markup language (XML)—in such cases, the MDMS cannot communicate directly with meters and must send AMI commands through the head end. Although most AMI deployments to date have been based on proprietary solutions, interoperability standards are being developed that support direct meter-MDMS connections and are expected to make it more prevalent. For example, AMI solution vendors are already adopting the ANSI C12.22 standard, which defines the L7 application layer for meter-MDMS communication and supports different network configurations for L1 through L4. In addition, the ANSI C12.19 standard provides common tables (data structures) for transferring data between meters and the MDMS.
Behind-the-meter: HAN, HEMS, and DER
Home area networks (HAN) are LANs that facilitate communication among digital devices present inside or within the close vicinity of a home. The ability for smart devices to interact enables functions for home automation that improve quality of life, enhance home security, and increase energy efficiency. Within the home, HANs use a combination of twisted wires (e.g. phone line, CAT 5), coaxial cables, Wi-Fi LAN, fiber optics, RF-mesh (e.g. Zigbee), and PLC through home electrical wiring (e.g. HomePlug). Outside the home, HANs connect to the utility communication network through the Internet via the home Wi-Fi network and PLC or RF-mesh via the smart meter.
Home area network schematic
Source: Alcatel-Lucent Bell Labs (modified)
Home energy management systems (HEMS) connect to the HAN and manage the home’s energy consumption (e.g. lights, appliances, HVAC), storage (e.g. EVs, storage batteries), and generation (e.g. solar, fossil-powered backup). Although newer appliances may already be equipped with integrated control and communication functions, existing appliances within a home (e.g. thermostats, lights, washer/dryer) may need external monitoring and control devices for measurements, on/off control, and other functions such as price-responsiveness. The HEMS monitors energy consumption/supply and may control the operation of appliances based on user settings or energy management services from the utility company or other third parties via the utility EMS (UEMS), either through the smart meter or via the Internet. The HEMS may also provide smartphone access to devices connected to the HAN for user monitoring and control from outside the home.
Distributed energy resources (DER) are sources of electricity that are connected to distribution feeders and located close to consumption loads. DERs include demand-side management (DSM) as well as electricity generation and storage. Although individually small, DERs can be aggregated into virtual power plants (VPP) to provide meaningful amounts of power necessary to meet regular demand. Distributed energy resource management systems (DERMS) are software application platforms that can be used to manage and coordinate DERs.
Automated demand response (ADR) uses the HEMS to automate demand response (DR) via real-time communication with the UEMS. When the system operator needs to dispatch DR in response to a scarcity condition, the UEMS sends a control signal to the HEMS—either in the form of a higher real-time price or a quantity of curtailment. The HEMS then controls net consumption by shutting down appliances (or reducing their consumption) and using available electricity generation/storage resources. If possible, the HEMS may also reschedule consumption by some appliances to future periods (i.e. off-peak). OpenADR standards have been developed for communication based on IP connectivity (i.e. via the Internet) between customers’ HEMS and the UEMS of utilities, ISOs, or third party energy service providers. Alternatively, ADR signals may be sent through the smart meter via the NAN (i.e. RF-mesh or PLC). Due to security concerns, utilities tend to favor IP connections augmented with a security apparatus that limits HEMS access to only the required UEMS systems.
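The response logic on the HEMS side can be sketched schematically: when a price or curtailment signal arrives, the most deferrable loads are shed first. The event fields, appliance priorities, and the price threshold below are invented for illustration and do not follow the actual OpenADR message schema.

```python
# Schematic of a HEMS reacting to a demand-response signal. The event
# fields and appliance priorities are invented for illustration and do
# not follow the actual OpenADR message schema.
dr_event = {"type": "price", "price_per_kwh": 0.45, "duration_min": 60}

appliances = [  # (name, load in kW, priority: higher = shed first)
    ("ev_charger", 7.2, 3),
    ("water_heater", 4.5, 2),
    ("hvac", 3.0, 1),
    ("refrigerator", 0.2, 0),   # never shed
]

PRICE_THRESHOLD = 0.30   # assumed $/kWh above which the HEMS curtails load

def curtailment_plan(event: dict) -> list[str]:
    if event["type"] == "price" and event["price_per_kwh"] > PRICE_THRESHOLD:
        # Shed the most deferrable (highest-priority) loads first.
        ordered = sorted(appliances, key=lambda a: -a[2])
        return [name for name, _, prio in ordered if prio > 0]
    return []

print(curtailment_plan(dr_event))   # ['ev_charger', 'water_heater', 'hvac']
```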
Distributed generation (DG) is customer-sited electricity generation equipment (e.g. solar PV, fuel cells) that injects its excess power into the distribution grid. Although some DG may directly produce AC power (e.g. wind, fossil-based), sources that produce DC output (e.g. solar PV, batteries) need DC-to-AC inverters to convert their production into an AC sinusoid that is matched and synchronized (amplitude, frequency, and phase) with the distribution system. For safety and grid stability, DG sources that are more than a few hundred kW need remote monitoring and control to ensure that they stay within their operational limits for voltage (e.g. 15 %), frequency (e.g. within 0.5 Hz for a 60 Hz system), voltage flicker, power factor (e.g. within 0.85 lagging and 0.85 leading), and harmonics. In the event of a short circuit or ground fault, circuit breakers are needed to trip the connection between the DG and the grid. Due to their intermittent nature, DG sources impose system balancing constraints on the grid that require ancillary services such as reactive power supply, voltage regulation, and frequency regulation—although DG sources can provide such services (i.e. through smart inverters), their intermittency makes them unreliable. When DG is unintentionally disconnected from the grid, it must discontinue power production—even just locally—to prevent unintentional islanding, which raises safety and synchronization concerns.
Distributed storage (DS) refers to devices such as chemical batteries, flywheels, supercapacitors, and pumped hydro that can store electricity received from the grid (charge) and deliver stored electricity to the grid (discharge) when called upon. The performance of DS devices is based on AC-to-AC efficiency (ratio of energy discharged to energy received from the grid), real-time response (speed of adjustment to changes in load), power rating (power discharge capacity in Watts), and discharge time (maximum length of time the DS can discharge at its power rating until empty). Safety, performance, and reliability concerns for DS are similar to those of DG. Electric vehicles (EV) that plug into the grid are a form of DS that can use excess charge to manage peak demand through electric vehicle supply equipment (EVSE) that allows energy service providers to track EV battery charge/discharge from any power outlet and manage financial transactions accordingly.
Microgrids refer to collections of individual consumers within a building, campus, or community that are interconnected with at least one shared DG source. A microgrid forms an autonomous power system that is capable of voluntary islanding to independently provide a minimum level of service (i.e. lighting, security, elevators) during a utility grid power outage. Microgrid energy management systems (MEMS) are used to manage electricity operations within the microgrid as well as energy transactions and interconnection with the utility grid. All microgrid devices need to be able to communicate with the MEMS, generally through PLC or RF-mesh. The utility may want to deploy IEDs at the microgrid feeder interconnection for monitoring and control through the DMS.
Integration: core-edge architecture
The core-edge architecture is a smart grid communication network design developed by Alcatel-Lucent Bell Labs. The network design is application-centric (i.e. begins with the end in mind) and integrated end-to-end, which contrasts with the traditional utility practice of deploying purpose-built disparate networks. The architecture’s core network, which may interconnect hundreds of routers, is generally deployed in parts of the utility service area where DCCs, company headquarters, and business offices are located. Remote endpoints at the edge of the grid connect to the core network over separate connections. The core-edge network handles all communication between application endpoints and connects with entities from external networks such as wholesale energy markets, bulk generation, and third-party service providers.
Source: Alcatel-Lucent Bell Labs (modified)
Wide area network (WAN) refers to the core network that is the backbone for communication across the utility service area. The WAN is composed of an interconnection of WAN routers (WR), whereby all network endpoints connect to WRs (whether directly or through intermediate nodes). Traffic between pairs of endpoints is routed through the end-to-end IP connection by the respective WRs in the WAN. In cases where the network implementation cannot support application requirements (e.g. teleprotection cannot tolerate delays), direct connections between application endpoints may be allowed. In addition to providing connectivity with application endpoints, WRs provide traffic aggregation and route data over the WAN towards the destination endpoint. To ensure network reliability, there must be at least two separate physical paths between every pair of WRs. Interior routers (IR) help the implementation of path redundancy and shorter paths between pairs of WRs. While WRs and IRs are often conveniently located at existing facilities such as DCCs and substations, additional locations may be necessary depending on the overall network design. WANs can be deployed over fiber infrastructure (point-to-point Ethernet over SONET/SDH) and microwave infrastructure (usually in combination with fiber for WAN extension). Utilities can use existing fiber and microwave asset deployments for their WAN and, if necessary, either build additional assets or contract with NSPs for leased TDM lines or shared services such as Frame Relay, Metro Ethernet, Virtual Private LAN Service (VPLS), and Virtual Private Routed Network (VPRN).
Field area network (FAN) refers to the wireless and wireline connections that support communication between the WAN and remote endpoints or CRs. Cluster routers (CR) aggregate data locally for collocated endpoints, thus enabling one single FAN connection to connect multiple endpoints with the WAN. Endpoints in a FAN are intelligent devices that can be located at substations (e.g. CR, SCADA IEDs, phasor data concentrators, meter data concentrators, DA concentrators, CCTV), on distribution feeders (e.g. DA IEDs, DER IEDs, meter data concentrators), and at customer locations (e.g. AMI, HEMS). There are numerous FAN networking technologies available, including optical fiber (WDM), leased TDM lines, wireless LTE, PLC, Frame Relay, Metro Ethernet, and MPLS services, among others. As with WANs, FAN connections can either be utility-owned or NSP-provided.
Performance: latency and QoS
Because grid operations need control actions to be taken in a timely manner, each utility application has absolute requirements for the overall latency (delay) it can tolerate from the communication network. In teleprotection, for example, any delay longer than a few milliseconds between the moment when a fault is detected to the tripping of the circuit breaker is unacceptable. Other applications (e.g. AMI) have much more tolerance for higher latencies.
Delay and priority requirements for smart grid applications
| Application function | Delay allowance (ms, one-way) | QoS priority (0 = highest) |
| --- | --- | --- |
| PMU measurements (class A) | 20 | 12 |
| SCADA and DA measurements | 100 | 25 |
| Critical AMI (e.g. VVWC) | 250 | 40 |
| DMS and TMS applications | 250 | 65 |
| Priority AMI (e.g. ADR, black start) | 300 | 70 |
| PMU (other than class A) | 500 | 80 |
| Normal AMI (meter readings) | 1,000 | 85 |
| Outage management system | 1,000 | 90 |
| Best effort (default) | 2,000 | 100 |
Quality of service (QoS) refers to the level of preferential treatment that the network gives to packets of certain priority applications over those of others. Smart grid applications that have stringent delay requirements (e.g. teleprotection) need to share network resources that were sized to support average data rates with many other applications. As a result, QoS is especially important during times when network links and routers face congestion due to heavy data traffic. When the network exceeds its capacity to transfer, store, and buffer data, low-QoS applications experience delivery delays and packet losses. QoS information is stored within the packet header, usually on the L3 network layer (e.g. IP, MPLS) in the form of a type of service (TOS) byte. According to their pre-configured per-hop behavior (PHB), network routers may provide preferential treatment for higher-QoS packets (based on the TOS byte) by forwarding them before other packets. Packets without a TOS byte are assigned to the best effort (default) class of service, which is lowest-priority and has no guarantee of network resources. MPLS provides extended QoS functionality and supports guaranteed minimum data rates for given LSPs through the resource reservation protocol (RSVP).
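On most platforms (Linux in particular), an application can request a class of service by setting the TOS byte on its socket; whether routers honor the marking depends entirely on their configured per-hop behavior. The sketch below marks a UDP socket with the Expedited Forwarding code point (DSCP 46); the destination address and port are placeholder examples.

```python
# Marking outgoing packets with a DSCP code point by setting the IP TOS byte
# on a socket. Routers may or may not honor the marking, depending on their
# configured per-hop behavior. DSCP 46 (Expedited Forwarding) is shown here.
import socket

DSCP_EF = 46                 # Expedited Forwarding, for low-latency traffic
tos_byte = DSCP_EF << 2      # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
sock.sendto(b"pmu-frame", ("192.0.2.10", 4713))   # example address/port (TEST-NET-1)
sock.close()
```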
Reliability: network availability and ownership
Network availability (reliability) is dependent on the frequency of failures of the nodes and the links in the network, the redundancy of paths between endpoints on the network, and the self-healing capabilities of routing protocols. High end-to-end network reliability requires utility-grade equipment that is hardened to withstand extreme conditions. For wireless networks, communication over licensed spectrum is more reliable than over unlicensed spectrum, which suffers from higher interference from other users. Underground cabling is often more reliable than above-ground cabling; however, underground faults take longer to repair. Ensuring that there are multiple physical paths (either parallel or separate) between endpoints improves reliability. For example, each endpoint or CR should have multiple FAN connections with the WAN. Routing protocols such as MPLS fast reroute (FRR) provide for fast recovery after failures by configuring and maintaining physically separate backup paths in routing tables across the network.
Network ownership of communication network infrastructure is an important consideration for reliability. Utility-owned networks benefit from guaranteed performance (e.g. latency requirements for teleprotection), the ability to meet more stringent availability requirements (most NSP services guarantee no more than 99.99% uptime, compared with the 99.9999% required by critical smart grid applications), and regulatory compliance with standards such as the North American Electric Reliability Corporation’s (NERC) critical infrastructure protection (CIP) requirements. NSP networks, on the other hand, benefit from frequent technology upgrades (new capabilities), state-of-the-art expertise (best-practice personnel), and lower cost. The final solution is often a mix of utility-owned and NSP networks (e.g. for new applications), depending upon the willingness of NSPs to provide service-level agreements with penalties to ensure the performance, reliability, and security of critical applications.
Security: zones and network elements
In order to keep the smart grid safe from security threats, it is necessary to minimize the attack surface, increase the amount of effort required to compromise the network, and decrease the detection and response time. Safety protections are needed at the device level (e.g. partitioning of systems, recovery from attacks), the system level (e.g. physical security, access control), and the organizational level (e.g. policies, mechanisms, procedures). In addition to separating the operational grid network from the utility business network, the security architecture for smart grid communication networks should be divided into multiple security zones (e.g. transmission, distribution SCADA, distribution non-SCADA, enterprise, external networks). Depending upon the criticality of the applications within each zone, different levels of security requirements apply.
There are several types of network security elements in place to protect smart grid communication networks. Access control lists (ACL) in routers monitor IP headers in every packet to filter unwanted data traffic based on source endpoints, destination endpoints, and other criteria. Unified threat management (UTM) is a collection of network security products that includes functions such as deep packet inspection and behavior-based threat detection algorithms. UTM encompasses a range of stand-alone devices such as firewalls (FW), intrusion prevention systems (IPS), and intrusion detection systems (IDS). UTM devices are deployed at substations, DCCs, and other locations throughout the network. For additional security, data encryption protocols such as IP security (IPsec) and transport layer security (TLS) may be implemented on different layers between application endpoints. MPLS also facilitates network security by providing endpoints within the MPLS service with complete separation from endpoints that are not defined over the MPLS infrastructure. Due to the limitations of existing network security technologies, scalable secure transport protocol (SSTP) was developed as a secure end-to-end protocol for smart grid application security that is lightweight, agnostic to underlying protocols, and scalable.
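The first-match logic an ACL applies to packet headers can be sketched in a few lines of Python. The rules below (a hypothetical SCADA front-end subnet permitted to reach a substation RTU subnet, with an implicit deny for everything else) are illustrative only and are not drawn from any particular router configuration.

```python
import ipaddress
from typing import NamedTuple

class AclRule(NamedTuple):
    action: str     # "permit" or "deny"
    src: str        # source prefix, e.g. "10.10.1.0/24" (hypothetical)
    dst: str        # destination prefix
    protocol: str   # "tcp", "udp", or "any"

# Hypothetical policy: only the SCADA front-end subnet may reach the RTU subnet;
# the final catch-all rule models the implicit deny at the end of an ACL.
RULES = [
    AclRule("permit", "10.10.1.0/24", "10.20.0.0/16", "tcp"),
    AclRule("deny",   "0.0.0.0/0",    "0.0.0.0/0",    "any"),
]

def acl_decision(src_ip: str, dst_ip: str, protocol: str) -> str:
    """Return the action of the first rule that matches the packet headers."""
    for rule in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.src)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule.dst)
                and rule.protocol in ("any", protocol)):
            return rule.action
    return "deny"  # no rule matched

print(acl_decision("10.10.1.5", "10.20.3.7", "tcp"))    # permit
print(acl_decision("203.0.113.9", "10.20.3.7", "udp"))  # deny
```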
Conclusion: looking towards the future
Over the past few years, smart grid deployments have received a boost through incentives and regulation from governments across the world. In the United States, as of July 2014 there were 50 million smart meters deployed, covering 46% of households. In addition to AMI, smart grid programs have also led to the evolution of network technologies such as RF mesh and PLC. Interoperability standards are increasingly the norm for solution vendors, and wireless broadband NSP services are becoming more widely adopted for FANs.
MPLS networking provides utilities with a secure way to seamlessly support legacy protocols over an integrated communication network, easing the transition towards modern architectures without discarding existing infrastructure investments. Broader QoS implementation can improve network performance while reducing the cost of network expansions.
After a century of relative stability, the current decade is seeing a dramatic pace of change in the generation, distribution, and consumption of electricity. Driven by technology and ambitious policy objectives, this rapid evolution has begun to stretch the electricity system in fundamental ways. Modernizing the communication network infrastructure is the first of many steps in addressing the challenges of this new age of electricity operations.
Boston Blog Posts
Boston's Great Elm
Boston's Great Elm was a famous tree that stood in the middle of the Boston Common for about 200 years. It finally came down on February 15, 1876, when it was destroyed by a huge gale. The tree was a popular spot for people to visit in the 1800s, and it was part of Boston's early history.
Some Notes about Boston's Historic Tree
Nobody knows when the tree was planted, but rumors suggest that it was planted around the time of King Philip's War, circa 1670, by Capt. Daniel Henchman.
In colonial times, the Boston Common grounds were mostly a cow pasture. Cows used to lie down underneath the tree to get out of the summer sun.
The Boston Elm was one of three trees shown on an early map of Boston engraved in 1722. The Boston Elm would be the last of the three to fall.
Some people confuse the Boston Elm with the famous Liberty Tree. Actually, the Liberty Tree wasn't in the Boston Common. The Liberty Tree was in Hanover Square, which is currently the corner of Essex and Washington Streets.
The "Great Elm" was known as "Boston's Oldest Inhabitant."
Early town records indicate that Quakers and "witches" were hanged from its branches. Mary Dyer was one of the colonial Americans hanged from the tree.
The Sons of Liberty hung lanterns on it in the evenings during festive occasions.
Citizens used to gather near the tree to protest the British occupation of the town.
During the early days of the American Revolution, British soldiers camped underneath the Elm when they occupied Boston.
In 1825, the tree was measured at 65 feet in height with a circumference of 24 feet 2 inches.
Before the tree came down, a small fence was built around it to prevent people from climbing the tree.
The tree was in tough shape after a storm damaged it in 1860. It was a strong gale in a storm on February 15, 1876, that finally brought it down.
After the Boston Elm tree was taken down, a sizable relic was given to the Children's Museum in Jamaica Plain. Note: this is not the Boston Children's Museum, which did start out in Jamaica Plain, but not until 20 years later.
A chair made of the wood from the Boston Elm is in the rare book room in the Boston Public Library. Worth checking out if you're interested in Colonial Boston history.
Finding the Boston Elm Marker
The Boston Great Elm was located in the middle of the Boston Common, between the Boston Common Visitors Center and Frog Pond. The marker is in the ground; the center of the marker is green, surrounded by light brown cement. (See the picture above.)
From the ‘Park Street’ T stop, head towards the Brewer Fountain, then head towards the Boston Public Garden. You’ll be walking on the “Mayor’s Walk.” In about 100 yards, you’ll come to a walkway intersection. Take a short walk up the grassy hill and you’ll see the marker for the “Great Elm Site."
The Marker reads:
Site of the Great Elm
Here the Sons of Liberty Assembled
Here Jesse Lee, Methodist Pioneer,
Preached in 1790.
The landmark of the Common, the Elm
blew down in 1876.
Placed by the
N.E. Methodist Historical Society.
X Doesn't Mark the Spot
The King's Chapel Burying Ground is the oldest graveyard in Boston. Founded in 1630, at the time of the settlement of Boston, it was the only graveyard in the city for its first 30 years. The graveyard is not affiliated with any church; it just happens to be next to King's Chapel. The chapel was built in 1688, 53 years after the graveyard was established.
- The City of Boston has always owned the graveyard.
- In 1896, a subway ventilation shaft was put in the southwest corner when Boston's subway system became the first in the country. Many gravestones had to be relocated elsewhere in the cemetery.
- There are 600 gravestones and 29 tabletop tombs marking more than 1,000 people buried in the graveyard.
- The last person to be buried in the graveyard was Rhys William, in 2003. He was a minister at the First and Second Church in Boston.
Some Notable People Buried Here
- Sir Isaac Johnson - owned the land and used it for his vegetable garden before it became a burying ground; he was the first person to be buried here, in 1630.
- John Winthrop - Massachusetts' first Governor
- William Dawes - Paul Revere's compatriot on his ride to Lexington in 1775 (remains were removed in 1882)
- Reverend John Cotton - a powerful religious leader in seventeenth-century Boston
- Hezekiah Usher - the colonies' first printer and publisher
- Mary Chilton - who many believe was the first woman to step off the Mayflower.
The Legend of Captain Kidd at the Kings Chapel Burial grounds
Captain William Kidd was a Scottish sailor who was tried and executed for piracy after returning from a voyage to the Indian Ocean.
In 1697, Captain William Kidd was asked by Richard Coote, the colonial governor of the Massachusetts Bay colony, to catch pirates. Some say that Kidd might have gone to the dark side, and by 1698 he was accused of piracy himself. He was tricked into coming to Boston for clemency and to prove his innocence.
On July 6, 1699, Kidd was arrested. He spent a year at the Stone Prison, much of the time in solitary confinement. In early 1700, he was brought to England for inquest and trial. On May 9, 1701, he was found guilty of murder and of multiple counts of piracy. He was sentenced to death and hanged on May 23, 1701.
Where is Captain Kidd's Buried?
It is very unclear where Captain Kidd's body ended up after the execution.
Some legends say that his body was brought back to Massachusetts and buried at the King's Chapel Burying Ground. His ghost is supposed to haunt King's Chapel at midnight on Halloween.
Go to the cemetery at midnight, preferably when the moon is dark. Tap softly on one of the headstones three times, and whisper "Captain Kidd, Captain Kidd, for what were you hanged?" And in the dark of the night, Captain Kidd will answer . .
Aside from there being no marker at the King's Chapel Burying Ground, there is no record in the record books of Captain Kidd's body coming to America. There is no proof to the story that Captain Kidd's body is at the King's Chapel Burying Ground. I did read some stories that claim the British just dumped his body in the ocean.
So when you visit the Kings Chapel Burial Grounds on a tour and they mention the Legend of Captain Kidd, you can be sure that it's not true.
Oh, in Massachusetts it's illegal to go to a cemetery after sundown. So the only person answering your tap will be a policeman asking you to leave the cemetery.
Rivalry (Boston and New York)
When you think of the Boston vs New York rivalry, what comes to your mind?
Red Sox vs Yankees? Patriots vs Jets?
Long before the Babe was traded to the Yankees, there was an engineering rivalry between the two cities. They were both in a race to see which one would be the first to build a successful underground subway system.
It all started after the great blizzard of 1888 hit the Northeast, crippling all transportation for many weeks. Politicians in both cities wanted a solution similar to what London was implementing at the time: they wanted to put some of the existing transportation structure underground.
After the legislature approved $5,000,000 for the project, construction began on March 28, 1895. Most of the early construction happened around the Boston Common. There were numerous delays, including issues with finding lots of unmarked graves around the Central Burying Ground. The tunnels were not as deep as the ones in London, as the theory was that buildings would hold up better with tunnels that were not dug too deep. The Boston Elevated Railway was the main company that undertook the main part of the project.
The first subway cars left Tremont Station at 8am on September 1st, 1897. Today the MBTA operates 4 Subway lines and 12 commuter rail lines covering 1,193 miles.
While New York wasn't the first subway system, it would be the largest. The New York legislature approved $35,000,000 for initial construction. Once the final plans were in place, construction began on March 24, 1900.
Operation of the subway began at 2:37 pm on October 27, 1904, with the opening of all stations from City Hall to 145th Street on the West Side Branch. Today the system has 26 lines and 468 stations in operation; the longest line, the 8th Avenue "A" Express train, stretches more than 32 miles, from the northern tip of Manhattan to the far southeast corner of Queens.
There's a lot more to talk about the original subway rivalry. Way too much for a weekly Blog posting.
You can read a lot more detail about all the drama that took place during the rush to be the first in The Race Underground: Boston, New York, and the Incredible Rivalry That Built America's First Subway by Doug Most.
Leif Ericsson Statue
There are many monuments and statues around Commonwealth Avenue in Boston. Near the Charlesgate overpass stands a life size statue of Leif Erikson.
Many people believe that Leif Ericsson, a Norse explorer, was the first European to step on North American soil, in the year 1000. This theory was made popular in 1838 when accounts of the journey were translated into English. Once Americans learned about his adventures, Leif became popular.
In 1887, Boston philanthropist Eben N. Horsford commissioned the statue. According to various newspaper articles published at the time, the location was selected because that is where he believed the keel of Erikson's ship grated on the shore of Boston's Back Bay. This was the first Leif Ericsson statue in America.
The statue was created by Anne Whitney, a notable Boston sculptor who also created a duplicate Leif Ericsson statue for Juneau Park in Milwaukee. If you look at the Leif Erikson statue's left foot you can see Anne Whitney's name and the date of the work (1885). Next to the name is SC, which stands for sculptor.
The statue has the following inscription on the front:
In runic letters, which were used to write various Germanic languages before the adoption of the Latin alphabet:
Leif, son of Erik the Red
The inscription on the back, which is slightly hard to read, says:
On the right side of the statue is a bronze plaque showing the Ericsson crew landing on the rocky shore.
On the left side of the statue is a bronze plaque showing the crew sharing the story of the discovery.
You can find the statue on the Commonwealth Ave. Mall, near Charlesgate East. The best time to see the statue is late afternoon so you can get an unshaded picture of the statue.
Anne Whitney's name on the Leif Ericsson statue.
Here are some notes on things that make Fenway Park in Boston, Massachusetts, a special place to visit.
Game Time - Gates open 1½ hours before game time. Season ticket holders and Red Sox Nation members may enter at Gate C 2½ hours before each game.
Teddy Ballgame's Seat - Seat 21 in Row 37 of Section 42 of the bleachers marks the spot where, in 1946, Ted Williams knocked the longest in-park home run in the park's history. The ball ended up landing in and ruining the straw hat of Joe Boucher, a Yankee fan. The seat where Joe Boucher sat was 502 feet from home plate. The red seat back was installed in 1984 by then Red Sox owner Haywood Sullivan.
Morse Code - The initials TAY and JRY -- for Tom Yawkey and Jean Yawkey -- appear in Morse code in two vertical stripes on the scoreboard.
Pesky's Pole - Just one of many examples of Fenway's uniqueness is the right field foul pole, which is placed closer than in most big-league stadiums at 302 feet. It was officially designated Pesky's pole on September 27, 2006, which was Pesky's 87th birthday. There is a commemorative plaque at the base of the pole.
Manually Operated Scoreboard - The only one left in the Majors; the game's score is kept by two operators who sit inside the Green Monster and monitor the game by radio. The numbers used on the scoreboard are 13-by-16 inch plates that weigh about 2 pounds.
The Monster's Ladder - There is a ladder 13 feet up the wall in left center. In the past, it was used by groundskeepers to fetch balls hit into the net over the giant green wall, but now, with the four new rows of seats on top of the Green Monster, its function is obsolete. If a ball should hit the ladder, the ball is in play; there have been three known inside-the-park home runs as a result of hitting the Monster Ladder.
Day Game Seats - During daylight games, bleacher sections 34 and 35 are blocked off to provide a solid batter's eye backdrop for the hitters.
TV Seats - If you're looking for the seats that get you on Television during a game, you'll want to sit in section Field Box 35 rows 4 and 5. There is a Field Box Usher that will check your tickets to prevent people from seat squatting in this area. In addition, the camera tends to catch people sitting over Field Box 58 and 57.
Catching a Foul Ball - In all the years that I have been going to Fenway, there are a few sections where I keep seeing foul balls land. The best sections for catching a foul ball are Lodge Box 112 (Rows A-D) for right-handed hitters and Lodge Box 147 (Rows A-D) for left-handed hitters.
No matter where you sit if you can see the field, the ball can get to you. Just remember to be alert at all times during gameplay.
Obstructed View Seats - Being one of the oldest parks in baseball does have one drawback, obstructed view seats. These are the worst seat locations in Fenway Park. You won't be able to see the batter or the pitcher.
Sound of Music - Music is a big part of the game at Fenway Park. Immediately after a volunteer yells out "Play Ball" the song "Play Ball" by J. Bristol is played through the park. During the game, the Red Sox hitters get to pick the music as they walk to the batter's box.
In the middle of the 7th inning the crowd will sing "Take me Out to the Ballgame." At the middle of the 8th inning, diehard Red Sox fans start singing "Sweet Caroline."
When the Red Sox win, the following songs are played throughout the stadium: "Tessie" by the Dropkick Murphys, "Dirty Water" by The Standells and "Joy to the World" by Three Dog Night.
Support the Outside Vendors - Since the 1990's the Boston City Council has been slowly phasing out the cart vendors around Fenway Park. The vendors will be allowed to continue to operate until they die or retire, but their operating permits will not be allowed to pass on to anyone else.
Currently there are 16 outside vendors around the park. Once they leave, Aramark will be the sole vendor around Fenway Park.
Personally, I like getting the peanuts from Nicholas "Nicky" Jacobs, who sells them at his family's cart by Gate A. The family has been selling peanuts at the same spot since 1912.
Fenway First Timer Perks: If you have never been to Fenway Park before make sure that you stop into one of the Fan Services booths -- located at Gate E, Gate D, and Gate B -- to receive your "First Timers" fan items.
When they check your ticket, simply ask the directions to get your Fenway First Timer Perk!
Huntington Avenue Ground
If you like this post, check out my post about the location of Huntington Avenue Grounds; it's the ballpark the Red Sox used before moving to Fenway Park.
Discover the Boston Marathon Finish Lines
In its 120-year history, the Boston Marathon has had 4 different finish line locations. Here's some information about each of the finish lines:
The goal of today's post is to help people find the location of the major Boston Marathon finish lines.
1897- 1898 - The Early Years
The exact locations of the first two Boston Marathon finish lines were never recorded. This is because the final part of the marathon involved running a lap around the Irvington Oval, a running track near Copley Square.
The winner of the first Boston Marathon was J.J. McDermott of the Pastime Athletic Club of New York, he was given an ovation as he went around the Irvington oval track.
Today, there are many Boston Marathon symbols in Copley Square to remember those that accepted the challenge to run the race. The memorabilia is located where some historians believe the first finish line would have been.
Finding the Finish Line Today: Visit Copley Square and in the area near the BosTix Booth is where you'll see Boston Marathon markers. The four brown metal poles in the area are similar to ones that the Boston Athletic Association (BAA) used as the finish line in the early days of the Boston Marathon. (42.3501,-71.0767)
1899 - 1964 - Exeter Street Years
In 1899, the BAA moved the finish line to be next to the organization's headquarters on Exeter Street. That location today is the main branch of the Boston Public Library.
The marathon's last mile was a bit different than it is today; back then, runners would go further down Commonwealth Avenue and then turn right onto Exeter Street for the final leg of the marathon. The finish line was near the back of the Lenox Hotel, just before Blagden Street.
How you can find the finish line: The finish line was located next to the Lenox Hotel on Exeter Street. Based on pictures and videos of the 1960 marathon, it looks like the finish line was between the City Table entrance and the back of the Lenox building. On Exeter Street, there is a separation in the pavement, and that is about where I believe the finish line was located. Exeter Street was paved over long after the 1964 marathon, so you won't find any indication of the previous finish line; I don't believe the road separation itself has anything to do with it. (42.3488, -71.0794)
1965 - 1985 - The Prudential Years
When the Prudential Insurance Company became a major sponsor, the BAA changed the finish line to be in front of the Prudential Center Plaza. The change took effect the same weekend that the Prudential Center opened for the first time.
The official race ended on Ring Road, but it's not the same Ring Road that you know of today. Between 1965 and 1988, there was a North Ring Road that ran parallel to Boylston Street and the Hynes Civic Auditorium. This is where the Boston Marathon finish line was from 1965 to 1985 - about 300 yards from the intersection of Hereford Street and Boylston Street.
Some of the Notable finishes at the Prudential Finish Line:
- 1972 - The BAA officially recognized women runners
- Bill Rodgers wins 3 straight Boston Marathons (1978, 1979, 1980)
- 1982 - Alberto Salazar beats Beardsley by 2 seconds.
Finding the Finish Line Today: The finish line disappeared when Ring Road was removed in 1988 to make room for the Hynes Convention Center. The finish line location was right at the base of the Prudential Plaza, just about where the Quest Eternal sculpture was located. The Prudential Plaza is currently going through major renovation and the Quest Eternal statue has been removed. To see where it was, simply stand by the Boston Marathon RunBase store and look over to the Prudential Building. (42.3486,-71.083)
1985 - Present - The John Hancock Years
In the mid-1980s the BAA had trouble keeping elite runners from skipping Boston for other marathons. Boston certainly had the name and history, but other marathons offered better incentives to run their races. The BAA decided to commercialize the Boston Marathon and make the race a professional event in an effort to keep pace with the other major marathons.
The Prudential Insurance Company withdrew its sponsorship in protest.
In September 1985, the BAA announced a 10-year, $10 million sponsorship deal with the John Hancock Mutual Life Insurance Co. The agreement named Hancock as the race's major corporate sponsor, and the race would now pay a cash prize: $250,000 for the first year. The new cash prize matched similar prizes offered by the New York and Chicago marathons.
As a result of the change of sponsorship, the finish line was moved to be near the John Hancock building.
Finding the Finish Line Today: You can find the current Boston Marathon finish line right in front of the Boston Public Library. The finish line road paint is now visible year round. (42.3498, -71.0788)
Things to do with a Preschooler in Boston
As a longtime Bostonian, I thought I'd put together a list of Boston places that I would like to show my 5-year-old daughter. These are places that she would have fun seeing in Boston. Many of these should be familiar to most Bostonians, but I am sure there are some surprises in the list.
During the past couple of years she has done many of these things, but there are a few that she should do over and over again.
List of 10 unique things to Do with Preschoolers around Boston Before They Grow Up
A fun day running around the museum exploring all the exhibits. Going up the steep stairs at the Omni Theater gives them a hint that the movie they are about to watch will be unlike anything they have ever seen.
Watching a Red Sox game on a hot summer day at Fenway Park. Arrive early to watch batting practice and walk around the park. Don't forget pictures with Wally! Don't forget to get your "First Timers" fan items at one of the Service Booths at Gate E, Gate D, and Gate B!
Spend a summer day at the Boston Common; there's plenty to do at the playground, fly a kite, get wet at Frog Pond, and throw around the frisbee. Enjoy a nice family day playing in the oldest park in the country. Did you know that George Washington walked around the park? At the Public Garden, everyone can enjoy a nice ride on the Swan Boats, sit on one of the Make Way for Ducklings statues (figuring out the names of each), and smell the spring flowers.
In Maynard, Massachusetts: one of the oldest continuously running ice cream stands in New England.
Enjoy the view of Boston from high above. "Can you see your House? How about the Baseball field?"
An opportunity to explore an old castle in Boston? Who wouldn't want to do that? Let them go explore and have fun. It's a good place to watch airplanes arriving at and leaving Logan Airport.
Fun times exploring one of Boston's Island. Pack a lunch, and get the boat to Thompson Island.
A New England classic, watch the reenactment of the Minuteman in Lexington and Concord.
Enjoy some of the Halloween adventure in Witch Country. The children will have fun dressing up in costume and enjoying the festivities in downtown Salem. Visit in early October for smaller crowds.
Drive-in movie theaters are getting rare, and the one in Mendon is really nice. Get some popcorn and have a nice evening watching a movie.
The Hurricane Simulator at the Ecotarium is a pretty cool experience for a preschooler.
Do you know of any other places I could take my daughter in Boston so she has a memorable childhood? Let me know!
New Back Bay Video Display
Today I noticed a new video display above the Track 5/7 Commuter Rail station exit way:
Tracks 5 and 7 are for the Framingham/Worcester Line trains at the Back Bay train station. The exit takes riders to the other side of Dartmouth Street, right next to the Copley Place Mall.
The MBTA also replaced the old 'Back Bay' sign and clearly indicated that this is not an entrance way. I have noticed that the door downstairs has been closed on a number of occasions which prevents commuters from entering the tracks from the exit door.
The video display was probably put in sometime during the past weekend. (I go by this exit every day and today it was the first time it caught my eye.)
Happy Evacuation Day!
On this day in 1901, the City of Boston officially celebrated Evacuation Day for the first time. This is the description in Massachusetts law of how the sitting governor should handle March 17 every year; it was officially enacted into law in 1941:
Section 12K. The governor shall annually issue a proclamation setting apart March seventeenth as Evacuation Day and recommending that it be observed by the people with appropriate exercises in the public schools and otherwise, as he may see fit, to the end that the first major military victory in the war for American independence, namely, the evacuation of Boston by the British, may be perpetuated.
The True Meaning of Evacuation Day
During the Revolutionary War, General Washington was struggling to dislodge the British troops that had occupied Boston since 1768. In the pre-dawn hours of March 5, 1776, Washington's troops made a strategic move to take control of Dorchester Heights in South Boston, which overlooked the entire British fleet. Colonel Henry Knox's men had recently hauled cannons captured at Fort Ticonderoga in New York all the way to Boston. That morning, the British awoke to find the cannons aimed straight at them, and they were forced to evacuate the town a few days later, on March 17. This was a turning point in the war.
How did Evacuation Day become a Boston Holiday?
Boston Pilot and the Eliot School rebellion
The earliest mention of making March 17 an Evacuation Day holiday came in 1859. That's when the Boston Pilot suggested it during the Eliot School incident (Eliot School rebellion).
This was when Thomas J. Whall, a Catholic, refused to recite the Protestant version of the Ten Commandments. As a result of his refusal, he was suspended from school. The Boston Pilot, which led the fight for the young Whall, was looking for more fuel for the fire. In 1859, it noted the eighty-third anniversary of the British leaving Boston on March 17, 1776, and posed the rhetorical question as to why Bostonians hadn't yet celebrated Evacuation Day. Everyone knew the reason: Evacuation Day happened to fall on Saint Patrick's Day. The Pilot added:
Irish Boston: A Lively Look at Boston's Colorful Irish Past -
The expulsion of the battalion of England from Boston was not a 'Know Nothing' achievement; nor would the sentiments of those who accomplished it harmonize with the sentiments of that party.
Dorchester Heights Monument
Additional interest came later when the Dorchester Heights Monument was being built.
In the latter half of the 19th century, the hills around Dorchester Heights were getting smaller due to excavation. In 1898, the General Court of Massachusetts commissioned a monument to stand on the hill of the Heights. Designed by the architectural firm of Peabody and Stearns, the white marble Georgian revival tower commemorates the 1776 victory. Shortly after construction was completed, the City of Boston started celebrating Evacuation Day.
Becomes Law in 1941
In 1941 state representatives Thomas Coyne and Michael Cusik managed to make it a legal holiday in Suffolk County, which includes Boston, Chelsea, and Winthrop.
Every year there is some Massachusetts legislator who will file a bill to eliminate the holiday as it serves very little purpose. Opponents argue that it cost the city too much money in holiday pay. Proponents argue that it was a critical point during the Revolutionary War that should always be remembered.
Prudential Tower Time Capsule
The Prudential Tower is the second tallest building in Boston standing at 748 feet. The building was constructed over an old rail yard and the Massachusetts Turnpike during the 1960s. It cost the Prudential Company $150 million to build. (In today’s dollars it would be $1,128,171,428.57.)
The Prudential Center grand opening was held during Patriots Day weekend in 1965 (April 17-19). It was the first time that New Englanders would be able to go into the tallest building outside of New York city. According to news reports at the time, about 35,000 people came to the celebration.
As part of the grand opening celebration, a time capsule was sealed in the north lobby of the Prudential Tower at 10 a.m. on April 19, 1965, by British Consul General John N. O. Curle, O.V.S, and Prudential Senior Vice President Thomas Allsopp, with the help of construction workers Archie Langham, Charlie Ablondai, and Brian O’Rourke.
The time capsule was to be opened ten years later, on April 19, 1975, which is the 200th anniversary of the famous Paul Revere ride and the 100th anniversary of the Prudential Company. The time capsule was protected by a 350-pound bronze plaque displaying an actual piece of the Rock of Gibraltar.
The time capsule contained microfilmed pages from more than 200 New England newspapers, along with audiotapes of radio and TV forecasts and editorials relating to Boston in 1975. There were letters from authorities in government, education, the arts, and the sciences. In addition, there was a brochure for The Prudential Center as well as other items from the opening weekend.
A picture of the contents of time capsule was posted on insuringthecity.wordpress.com website.
There is no indication of what happened at the Prudential Center on April 19, 1975. There’s nothing to suggest that the time capsule was actually removed and opened. I checked various media sources and verified that there is no mention of that particular time capsule after April 1965.
What has become of the time capsule and its contents? I still don't know; I am still investigating. If I can get a copy of the audio recordings, I'll be sure to share them with my readers.
Some additional information that I found:
Patriots Day and Easter fell on the same weekend in 1965. That will happen again in 2017.
On April 19, 1965, Gordon Moore published the famous article "Cramming more components onto integrated circuits” in Electronics magazine. Moore projected that over the next ten years the number of components per chip would double every 12 months. By 1975, he turned out to be right, and the doubling became immortalized as Moore's law.
More than 60,000 people per day visit Prudential Center, making it one of the most popular places to visit in Boston.
Prudential Preferred Shopper Card
The Prudential Shopping Center features 65 world-class retailers, 21 distinctive dining options, 3 top Boston attractions and 1 Boston icon. A great place to visit and an awesome place to visit when working nearby.
If you're one of those who work near the Prudential Shopping Center, you should consider getting their Preferred Shopper Card. It's a reward card that saves you money at many stores around the Prudential Shopping Center. The card is free and available at the center court information desk. Simply sign up for the card by giving them your name and email address.
This year's card contains deals from 36 vendors around the Prudential Shopping Center. There are fewer deals than in past years because much of the mall is under construction for some new stores. I still found some good deals with the card:
- 10% off when you eat at 5 Napkin Burger
- 10% off toys at Magic Bean
- 20% off Merchandise at Boston Duck Tours
- Free gift with Purchase at Microsoft
Past years' cards featured awesome deals from Paradise Bakery, where you were able to get two cookies for the price of one. That made for a great afternoon snack! Sadly, they are no longer at the mall because the food court is now closed.
I would recommend picking up the Preferred Shopper Card and seeing all the available deals today. You never know when you'll be at the Prudential and can use the card to save some money.
How to Scan to Evernote on a Mac
If you do a search around the internet for instructions on scanning documents directly to Evernote on a Macintosh, you'll find that many are out of date. You may think it's no longer possible. Well it still is...and it's very easy to set up.
Here are my simple instructions on how to capture scanned images and put them directly into Evernote using Image Capture version 6.7 (OS X El Capitan):
- Open Image Capture
- Click on the "Show Details" button on the bottom right window
- In the pull-down menu next to the "Scan To:" text, select "Other..."
- Select the 'Applications' Folder and then find 'Evernote'
- Click 'Open'
- You'll see that Evernote now appears in the pull-down menu
Scanned items will appear as individual notes in Evernote. Immediately after the scan, you will have an opportunity to make some notes. This is a good time to comment on why you scanned the item.
Evernote recommends using Color as the "Kind:" setting and scanning at 72 dpi resolution. (See the example screenshot.)
If you have Evernote Premium, the text in the scan items will be searchable. This includes photos. This means if you have a photo with an inspirational quote, you can search for it.
Note: This is backward compatible, which means that if you upgrade to Premium today, anything that you already have in your Evernote account will be searchable. Make sure to give Evernote some time to scan and index all your documents and photos after you upgrade.
Copley Place Construction
Does your morning commute consist of walking by the 'SW Corridor Path' near Copley Place? Wondering what's the deal with all the construction site fences? You'll be happy to know that some change is coming. There are two separate projects going on.
Project One - Fix Wall Damage
A group of engineers is in the early stages of fixing a hole in the wall from a cement truck accident on March 21, 2014.
Around noon time, a cement truck rolled over on the exit 22 ramp from the Mass Turnpike inside the Prudential Tunnel. The truck crashed into the wall of the tunnel, knocking bricks out of a section of the Copley Mall.
Immediately after the crash a tarp was put over the hole and a short time later the bricks were removed. When you drove through the tunnel it looked very strange to see the light shine through the tarp.
The Massachusetts Turnpike has finally gotten around to fixing the wall. This fix will cost the cement truck's insurance company at least $20,000.
The only good thing out of that accident was the natural lighting in the dark tunnel. Looks like the construction isn't going to shed new light into the tunnel.
Project Two - Upgrading the Copley Place Entrance
The major construction change in this area is the redesign of the Copley Place Dartmouth Street entrance to make it more handicap accessible. Check out the artist's rendering compared with how it looks today:
This entrance redesign is estimated to cost Copley Place $9.2 million. The existing mall entrance will be demolished, so MBTA commuters who use this entrance will have to find alternative ways to get into the mall.
This is a popular route that many Back Bay commuters use to get to work. Those who go this way will tell you all about the constant escalator breakdowns. When this happens, the escalator is blocked and there's a long line of people grumbling their way up 45 steps to Copley Place.
This past Wednesday, the MBTA sent out this text alert to commuters:
The underpath is a quick way for commuters to get from Copley Place to the Orange Line. This is a very convenient way to get to the Back Bay station when it's raining or snowing outside.
Check back here for an additional post on the big changes going on at Copley Place.
Signs of Spring in Boston
It's been really cold this week in the Boston area. We are certainly feeling the brunt of the winter season. On Valentine's Day, the weather was so cold that it felt like minus 26 in MetroWest.
Most Bostonians have had enough of the winter and are looking for some signs of spring. One good sign is that the Red Sox equipment truck has left for Florida. Pitchers and catchers report in a few days.
If you're looking for a place to see the first flowers of Spring, I would recommend heading over to St Botolph Street sometime around the third week of March.
Last year, on the first day of Spring, I captured this photo of a blooming Crocus:
It turns out that this was one of the first flowers to spring up during last year's terrible winter. My picture even made it to several online media outlets like this one:
Here's the location of where you can check to see the first signs of Spring. It's near the intersection of Garrison Street and St Botolph Street, just a few blocks from Copley Place:
Back Bay Buildings
I started working in the Back Bay in Boston back in September 2011. The area has changed a lot since then. Here is a picture that I took from the 16th floor of the Christian Science Center administration building.
Here's the same picture today from the 14th floor:
Some of the notable differences between the two photos:
- We can no longer see the landmark Citgo sign because of the 24-story Berkley building at 168 Massachusetts Ave. Construction started in late 2011 and was completed by the fall semester in 2013. Many maps showed that a McDonald's was at this location. The property was owned by the First Church of Christ, Scientist.
- The parking lot on Belvidere Street is gone, replaced by "30 Dalton Street." This new 26-story residential tower located near the Christian Science Center Plaza in Boston will feature 218 luxury rental units, below-grade parking for up to 21 cars, and ground-floor retail space facing Belvidere Street. New residents will be able to move in this summer.
- The small park in front of the parking lot is also gone. This is the site of "One Dalton Street", the 699-foot tower that will become Boston's tallest residential building.
- According to the "One Dalton Street" construction schedule, I won't be able to see the "30 Dalton Street." building by labor day this year.
I wasn't able to take a picture from the same spot since our company expanded to other floors in the building.
St Thomas' Hospital Historical Collection
Introduction to the collection
The St Thomas’s Historical Collection comprises the pre-1901 holdings of the St. Thomas’s Hospital Medical School. These include, for the most part, the textbooks and periodicals which were used by medical students from the 18th century onwards. As many pre-20th century medical school teaching collections have been dispersed, its survival, along with those of Guy’s and King’s College Hospital, makes it very valuable.
The collection was, from the early 18th century, formed through many donations and bequests. These have enhanced its value, as a significant number of eminent medical practitioners and surgeons were among the benefactors. Only from the mid-19th century did the library begin to shape an acquisitions policy which was not determined so heavily by gifts.
Although the origins of St Thomas’s can be traced back to 1173, when it was an infirmary attached to a priory, this religious foundation was dissolved in 1540 as part of Henry VIII’s general policy toward such organisations. The hospital can trace its continuous existence from its re-foundation in 1551.
Although there are no extant records concerning the date of the formation of the library, it seems probable that a library of some description existed by the 1740s. As medical education had been put on a formal footing in the late 17th century, such provision had become essential.
From the 1840s, the library began to be professionally managed. It is no coincidence that at this time the medical schools of St Thomas’ and Guy’s formally separated, and St Thomas’s began a prolonged period of re-organisation and self-examination. A catalogue, an acquisitions policy, and regular stock checks were introduced. The first salaried librarian was employed in 1842; fines began to be enforced from 1860. However, as the library was financed by subscription, and not by direct subvention from either the medical school or the hospital, it had to continue to rely heavily on bequests and donations. This fact has determined the character of the St Thomas’s Historical Collection.
The collection today
Since 2002, the St Thomas’s Historical Collection has been housed in the Foyle Special Collections Library. The collection comprises some 4,000 monographs and 2,000 volumes of journals. A number of items from the St Thomas’s Historical Collection have appeared in exhibitions at King’s in recent years, and are available to view on the online exhibitions page of the Special Collections web pages.
The strengths of the collection lie in clinical medicine, surgery, anatomy, therapeutics and pharmacology. Psychiatry, forensic medicine and dentistry are also included, although holdings in these areas are not extensive.
A full list of records for items in the St. Thomas’s Historical Collection can be found here.
Notable provenances in the collection
This section draws attention to the most significant physicians and surgeons whose bequests have added distinction to the collection.
You can view details of the books and journals which form each of these bequests on the Library catalogue. To do this choose the Basic search option, then select Former owners, provenance in the drop-down menu and enter the name of the person in question, as shown in the screenshot below
In some cases, such as that of Richard Mead, the size of the extant bequest is apparently small. However, Mead was a distinguished practitioner, and so his contribution to the collection is deemed to be important.
The collection was enhanced in the course of the 19th century by inheriting the collections of two anatomy schools, run by Joshua Brookes and Richard Grainger (see below). Anatomy schools had played an important part in the education of aspiring surgeons from the middle of the 18th century until they were superseded by the formation of university medical schools (at King’s College London and at University College London) from the start of the 1830s. Through these additions, the institutional development of medicine in the 19th century can be traced.
A number of important practitioners continued to bequeath items to the collection throughout the 19th century. The most renowned of these are also listed below.
Unless otherwise indicated, all benefactors spent all or part of their medical careers at St. Thomas’s. Further details of each figure are available from the online Oxford Dictionary of National Biography.
List of notable persons
Richard Mead (1673-1754), shown to the right, prolific collector, antiquarian and bibliophile who introduced a method of smallpox inoculation and wrote an important treatise on controlling bubonic plague.
Joseph Letherland (1699-1764), the first medical practitioner to draw attention to diphtheria as a distinct disease.
Joshua Brookes (1761-1833), a very popular teacher of anatomy with a huge collection of specimens. His library, which contains several items with the provenance of the anatomist, expert on embalming and pioneer of ballooning John Sheldon (1752-1808) was bequeathed to King’s after his death.
Richard Grainger (1801-65), proprietor of the influential Webb Street School of Anatomy and Medicine until its closure in 1842 when he accepted a teaching post at St Thomas’. He was one of the pioneers of the use of the improved microscope, and was a prominent public health reformer and inspector. The library of the Webb Street School was transferred to that of St. Thomas’s when the school closed.
Henry Cline (1750-1827) teacher of the celebrated surgeon Sir Astley Cooper, and friend of Edward Jenner, who helped Jenner to publicise his method of smallpox vaccination.
Marshall Hall (1790-1857), neurophysiologist who made major contributions to the study of the physiology of reflex action. His influential research on phlebotomy cast doubt on its utility as a therapeutic technique.
Joseph Henry Green (1791-1863), amanuensis to the poet and essayist Samuel Taylor Coleridge; theorist of the social function of the medical profession; and mentor to Sir John Simon (see below).
John Elliotson (1791-1868), pioneering user of the stethoscope; controversial advocate of mesmerism and phrenology; personal doctor to Dickens and Thackeray.
John Flint South (1797-1882), author of the first manual on first-aid to be published; pioneering historian of British surgery.
Sir Henry Wentworth Acland (1815-1900), modernised the scientific and medical curricula of Oxford University, where he taught for many years; wrote an influential report on the outbreak of cholera in the Oxford area in 1854, the findings of which paralleled the contemporary research of the now more celebrated John Snow.
Sir William Withey Gull (1816-90), made important contributions to the study of anorexia; was a prominent advocate of vivisection.
Sir John Simon (1816-1904) shown to the right, was the first appointee to the post of Chief Medical Officer in 1855. In this and subsequent posts he influenced much public health legislation, and launched many investigations concerning urban and occupational health. Simon was a fluent speaker of German, and knew the German medical world well. His connections to German medical scientists are reflected in his bequest to the library.
Florence Nightingale (1820-1910). This pioneer of the modern nursing profession was associated with St. Thomas's for many years.
Charles Murchison (1830-79), the first medical scientist to distinguish between typhus and typhoid fever on the basis of their causation.
Sir William Mac Cormac (1836-1901), pioneering advocate of antiseptic surgery, who wrote the first textbook in English on the subject.
Noteworthy items in the collection
The collection includes a number of visually striking anatomical works, including those of Vesalius, Ruysch and Cheselden. From the aesthetic perspective, Jacques Fabien Gautier d’Agoty’s Cours complet d’anatomie is particularly noteworthy, although not very anatomically correct.
As most of the collection of journals in the St Thomas’s Historical Collection was acquired after the medical schools of Guy’s and St Thomas’s separated, it attempted to provide a self-sufficient collection of medical periodicals, which reflected the importance of the periodical as a publication genre from the late 18th century onwards. The collection includes some comparatively rare runs, including the Sussex county asylum reports and The annals of medicine and surgery. Some journal runs in the collection date from the 17th century, such as the Journal des scavans (1664-90).
One of the two incunabula in the collection is the 1491 edition of Hortus sanitatis, a compilation of medieval knowledge and belief about the natural world. It is lavishly illustrated with hand-coloured plates with depictions of real and mythical flora and fauna.
It has an intellectual importance in addition to its aesthetic value: it was the last attempt to summarise knowledge of the natural world before the European conquest of the Americas and Renaissance attempts at taxonomy transformed knowledge of the natural world.
The penny lancet
Another item which is even rarer is The penny lancet, a journal which was published for a few months in 1832 (this periodical had nothing to do with The lancet, then in the 10th year of its publication). The St. Thomas's Historical Collection holds the only recorded complete run of this journal.
Its publication took advantage of the popular disquiet which arose from the devastating outbreak of cholera in 1831 which had swept through Europe, and the extremely inadequate orthodox medical response to it. It sought a gullible audience among those who could not afford medical advice or (with good reason) were suspicious of it. It contains medical anecdotes and gossip, and anatomical information, much of which was plagiarised from medical and surgical textbooks. One piece of information which could not have been plagiarised is the advice to readers to perform a surgical operation on themselves!
The audience it sought, gullible or not, was more elusive than the publisher had assumed, and the periodical had ceased publication by the end of 1832.
After the demise of his journal after only three months, the publisher, George Berger, bound unsold volumes of this periodical into slender volumes. One of these editions was purchased a century later by the orthopaedic surgeon Walter Rowley Bristow who at his death in 1947 bequeathed it to St. Thomas’s.
Jenner and Snow
There are three extremely important provenances which deserve special mention. One is a volume of tracts on smallpox vaccination by Edward Jenner (1749-1823), pictured right.
The title page of the first of these tracts – An inquiry into the causes and effects of the variolae vaccinae (1798) is inscribed by the author and dedicated to the St Thomas’s surgeon Henry Cline, who did much to bring Jenner’s innovation to the attention of the wider medical world.
The anaesthetist and epidemiologist John Snow (1813-58), who did so much to revolutionise our understanding of the causes of cholera and, in so doing, to establish a methodology for epidemiology, is represented by his pathbreaking On the mode of communication of cholera (1855). This copy is inscribed by the author to the St. Thomas’s physician Charles Murchison, who undertook research on typhus and typhoid fever.
The second item by John Snow, which touches on his other important contribution to medicine, that of the use of chloroform for anaesthesia is On chloroform and other anaesthetics (1858) and has a poignant association. It contains a tipped-in letter from Snow’s brother, William, to Charles Murchison, thanking him for his care during Snow’s last illness.
The collection holds a number of items with Florence Nightingale’s inscription: she had a close association with St Thomas’s which arose from her having founded her nursing school there. Perhaps the most remarkable book with her provenance in the collection is A contribution to the sanitary history of the British army during the late war with Russia (1859). This copy, which bears Nightingale’s inscription, was one of only a limited number which were printed. They were published privately, and distributed to influential members of the political establishment. Nightingale was extremely well educated, and, as this book demonstrates, knew how to use and to present statistics. This book is one of the earliest publications to present statistical information in graphic form.
Although in this case her purpose was to convince her readers that soldiers were dying from preventable infections in military hospitals rather than on the battlefield, her methods had many other applications. This book is as important a contribution to epidemiology and to the development of medical research methods as Snow’s research on cholera was.
Elizabeth Blackwell and Somerset Maugham
These two items are not very representative of the collection as a whole, but are nevertheless important and interesting. Although the collection reflects the world of orthodox medical practice, which, throughout this period was dominated by male practitioners, there are a few items which do reflect another very important part of medical history during this period. Apart from the works inscribed by Nightingale, the copy of Elizabeth Blackwell’s Essays in medical sociology (1899), which was presented to the library at St Thomas’s by the author, is noteworthy.
Elizabeth Blackwell (1821-1910) was the first female to acquire medical qualifications in the United States, and the first female to be registered as a doctor in Great Britain by the General Medical Council. After organising the nursing services during the American Civil War, she became a writer and polemicist on medical and social matters, campaigning against the anti-female bias of the Contagious Diseases Act, and against certain aspects of what she saw as the materialist bias of the medical profession, such as vivisection and bacteriology. Her friend, Florence Nightingale, was her connection to St Thomas’.
Fiction does not feature much in the collection, but there is an exception which must be mentioned. The novelist and playwright William Somerset Maugham (1874-1965) was a medical student at St Thomas's, where he acquired his medical qualifications in 1897. Although he never practised as a doctor, his first novel Liza of Lambeth (1897) – in which a young, pregnant woman dies of puerperal fever after being beaten by the wife of the man with whom she had an affair – is based on the stories which Maugham probably heard when delivering babies in the slums of south London as part of his medical course.
The collection holds a copy of the 50th anniversary edition (which had a print run of 1,000), which was presented to the library at St Thomas’s by the author and inscribed by him. In his autobiographical novel Of human bondage, his experience as a medical student is drawn on directly as material for his fiction. His knowledge of disease features more generally in later novels, such as The moon and sixpence and The painted veil. Liza of Lambeth is the sole work by Maugham in the St Thomas’s Historical Collection.
Resources for the history of St Thomas's Hospital and its library
Lists of the catalogued books and journals held in the St Thomas's Hospital Historical Medical Collection can be obtained from the Library catalogue.
King's College London Archives hold a significant amount of material relating to the history of St Thomas's Hospital and its medical school as does the London Metropolitan Archives.
David T Bird. Catalogue of the printed books and manuscripts (1491-1900) in the library of St Thomas's Hospital Medical School. London: St. Thomas's Hospital Medical School, 1984. [Special Collections Reference Z921. S7 B5 ]
CL Feltoe (ed.) Memorials of John Flint South. London: John Murray, 1884. [St. Thomas's Historical Collection R489.S7 A2]
Brian Hurwitz and Ruth Richardson. 'The Penny lancet'. The lancet, 364, December 18, 2004, 2224-2228
Susan C Lawrence. Charitable knowledge: hospital pupils and practitioners in eighteenth century London. Cambridge: Cambridge University Press, 1996. [New Hunt's House / St. Thomas's WZ56 LAW]
EM McInnes. St. Thomas' Hospital London. London: St. Thomas's Hospital, 1990. [Special Collections Reference RA988.L8 S53 MCI]
FG Parsons. The history of St Thomas' s Hospital. London: Methuen & Co., 1932-1936. [Special Collections Reference RA988.L8 S53 PAR]
Ten Technologies From the 1980s and 1990s That Made Today’s Oil and Gas Industry
Contrary to popular imagination, which favors John Wayne stereotypes heroically rescuing the oil industry with wrench and hammer, the oilfield is a place of exquisite engineering, the match of anything on Earth, a marvel of innovation at the biggest and smallest scales.
The office-block sized blowout preventers on the ocean floor or the minute geopositioning electronics inside a logging while drilling (LWD) tool both are designed to operate perfectly within exacting environmental specifications. Almost every aspect of upstream exploitation is the result of exhaustively leveraging the glorious value chain of math, science, and engineering.
Along this trajectory, failure is met more often than success, as ideas and developments are tried out and eventually fine-tuned until something begins to work reliably. The journey is not for the faint-hearted. Whether it be one obsessive individual or a team with an equal desire to win, both energy and imagination must be sustained at every hurdle, to force progress and eventual success. This is as valid for the glamorous game-changing innovation as it is for a leap-of-faith improvement to existing technological practice.
Since the 1980s, our industry has experienced a technology renaissance all along this innovation spectrum; the oil price volatility in this modern era of our industry certainly focused minds on doing things more efficiently at less cost. As a celebration of these years of technical innovation, we now make so bold as to list perhaps 10 of the most significant contributions.
No doubt it is foolhardy to propose such a list because we all have an opinion on what should be on it. Nevertheless, there is surely enough common ground to guarantee some degree of objectivity. What may be objectionable is limiting the number to 10. Within that constraint, however, just the intellectual and practical bravado displayed surely merits all 10 to be included.
1981: Horizontal Wells Increase Production
The Soviet Union pioneered horizontal wells in the late 1960s only to turn its back on furthering the development of the practice in favor of vertical wells that were easier and faster to drill. But the mantle was picked up by Jacques Bosio, a drilling engineer with French oil company Elf Aquitaine, which needed horizontal drilling to intersect fractures and increase production from a karst reservoir found off the Italian coast, the Rospo Mare field.
In 1981, for twice the cost of a vertical well, horizontal drilling was sanctioned. The first well would bring in 3,000 B/D—more than 20 times its offset vertical well. By the mid-1980s, horizontal drilling was seeing wider adoption as a way to target thin oil and gas reservoirs in Texas, the Middle East, and the North Sea. Operators had known about these skinny hydrocarbon-bearing layers for years—now they had a way to contact them with enough surface area to make money. Bosio would go on to become the first SPE President from outside the US, in 1993.
1982: The Topdrive Improves Efficiency, Begins Drilling Automation
George Boyadjieff, an aerospace engineer, spent much of his career with Varco International challenging himself to find a better way to turn to the right. His early innovations led to the iron roughneck in the mid-1970s, a notable development in its own right.
Other advances came to the drill floor to accelerate the rate of penetration, such as the power swivel, but although this saved time, pipe handling and tripping remained a bottleneck.
Contracted to help with the design of two new jack-up rigs, Boyadjieff was aware of these issues and conceived a machine that would hang from the derrick’s traveling block and drill 90-ft-long stands of drill pipe vs. the 30-ft-stands used with the power swivels. In 1982, Varco released the realization of this idea as the topdrive. By decade’s end, it would be in use on most of the industry’s large rigs and is now ubiquitous.
1983: Reservoir Simulation Enables Field-Wide Reservoir Development
Efforts to simulate the production of a reservoir had been under way since the early 1950s but a push by the United Kingdom’s government helped bring it to the fore. The UK Department of Energy needed to predict the future of the country’s North Sea reserves and found a theoretical physicist in the nuclear industry, Ian Cheshire, who could help.
In 1977, Cheshire and his team would release a new simulation software that presented the reservoir in three dimensions and with multiphase flow. He would later be hired by a London-based group, Exploration Consultants Limited (ECL), to perfect his simulation software for the oil industry.
His crowning achievement was released in 1983 as ECLIPSE—short for ECL’s Implicit Program for Simulating Everything. ECLIPSE enjoyed widespread adoption because it allowed engineers to alter a reservoir model’s cell sizes to help match unique geometries; in other words, it made things more realistic. ECLIPSE was later acquired by Schlumberger, which notes that the program has been cited in more than 1,500 SPE papers and is used in 70 countries.
1983: Coiled Tubing Gives New Life to Old Wells
Service companies and operators were experimenting with flexible tubing for well interventions but found that the fatigue caused by spooling and unspooling made for a short service life. In early testing, this meant more money was being made on fishing coiled tubing out of wells than from any production improvements they delivered, and operators quickly grew cold on the idea.
Then in 1983, Quality Tubing in Japan began making longer sheets of steel, which meant fewer welds were needed to create a coiled tubing system and, therefore, fewer points of potential failure. By the 1990s, thanks to continued improvements, coiled tubing became synonymous with workover operations. The technology went on to be an enabling vehicle for a host of downhole technologies and intervention practices.
1985: 3D Seismic Becomes Everyday Tool For Reservoir Engineering
The 2D seismic interpretations that oil and gas companies had been studying for decades left them wanting more. Vertical slices of the subsurface were essential for exploration, but not cut out for reservoir development. For landing wells in reservoir sweet spots, engineers needed a 3D cube of seismic data.
Esso performed the first 3D seismic experiment in 1964 just outside of Houston. After that, a consortium of oil companies and independent efforts fine-tuned the technique. But the computers needed to organize and interpret this new data set were expensive and so large that their use had to be rationed.
Then in the mid-1980s, the decisive development arrived as the industry started using workstations that allowed engineers to study the 3D seismic data from their desks. Pivotal players on the software and acquisition side included Sun Microsystems, Landmark Graphics, GeoQuest Systems, and the Geco seismic company. As more operators adopted their technologies, major projects would never move forward without 3D seismic. By repeating 3D seismic surveys over time, a technique called 4D seismic even became possible to monitor fluid movement in the reservoir.
1985: MWD and LWD Guarantee the Future of Horizontal Wells
The commercialization of measurement-while-drilling (MWD) suffered from years of setbacks that included the limitations of electronics, telemetry, and industry trust. But by the early 1980s these issues started to break down and MWD would see wider use by directional drillers interested in knowing where their bits were headed.
The next step was to see if well logs could be run in a similar fashion. This innovation would become logging-while-drilling (LWD) and it promised engineers a way to evaluate formations without waiting for wireline to be run. In 1985, Sperry Sun led the market with its first LWD tool and the SPE paper it shared about the innovation pushed others to follow.
Wireline survived the introduction of LWD, but operators now had an ability to log in horizontal wells too tough to access with wireline, and even see ahead of and around the bit. This dramatically increased the odds of targeting not just the reservoir, but its most productive sweet spots.
1989: Third-Generation Wireline Formation Testing Gives A Taste of the Reservoir
Since the 1920s, engineers had some understanding that testing the formation prior to producing a well had value. Innovation here started with tools that measured reservoir pressure and flow. This led to the first buildup and drawdown tests developed in the 1950s that gave clues to a formation’s permeability. Building on this, a method was soon developed to establish the distance from the wellbore to sealing faults and other reservoir structures. Schlumberger jumped in by 1955 and made it possible to do some of this testing with wireline instead of drill pipe.
Wireline testing, however, could not reliably capture uncontaminated reservoir fluids. There was no way to control the influx of drilling mud and mud filtrate. A group of the service company’s Houston-based engineers led the breakthrough. They simply packaged into the downhole tool a way of monitoring production so you could be sure the sample was pure reservoir fluid. Brought to market in 1989, their modular dynamics formation tester combined many other technical features and bolstered the industry’s ability to certify reserves.
1990: Engineering Enters the Deepwater Era
The offshore sector had been inching into deeper waters from the late 1940s using drill ships and fixed-leg production facilities. But with more deposits waiting to be tapped in deeper waters, companies would need to invest in the development of floating production systems, especially ones that could handle the worst that the open seas had to offer.
Two of the most critical solutions came from a single man. Ed Horton, an engineer and founder of Deep Oil Technologies, is credited with inventing both the tension leg platform (TLP) and the spar. First conceived in the 1970s, Horton’s contributions would become widely adopted for offshore production by the 1990s as companies began drilling wells in depths that exceeded 1,500 ft.
His legacy is perhaps most felt in the Gulf of Mexico where TLPs and spars dominate the deepwater landscape. This includes Shell’s Perdido spar, which set the record in 2010 for the deepest subsea project on the planet. This record was nabbed in 2016 by Shell’s Stones development (also in the Gulf), which is using a Brazilian-born innovation, the floating production storage and offloading unit or FPSO, to produce from wells at depths of 9,500 ft.
1992: Inflow Control Devices For Horizontal Well Production
As companies drilled more horizontal wells and logged them, they realized that production was not evenly distributed along the wellbore. Engineers at Norway’s Norsk Hydro were keen to solve the issue in one of their offshore wells in which 75% of production was contributed by the section closest to the heel. To augment the production profile, they came up with a completion tool called the inflow control device (ICD).
First built in 1992, the ICD used filters and chokes distributed along the length of a horizontal well that could be tailored to optimize production. Others realized the opportunity the ICD represented, including Saudi Aramco, now the biggest user of the technology. For Aramco, the advent of the ICD meant it could economically develop tight formations with multilateral wells that greatly enhance the wellbore’s reservoir contact area.
1997: From Just A Hunch To A Revolution
As one of his gas fields dried up, oilman George Mitchell was inspired to find a way to produce from ultra-tight rocks called the Barnett Shale. On the shoulders of government-funded research and recent advances in MWD/LWD, Mitchell and his engineers seized on the idea that they might be able to do this by combining horizontal drilling with hydraulic fracturing.
The first of their trial wellbores was drilled and fractured in 1991, but many attempts failed to unlock acceptable quantities of gas. Then in 1997, Nick Steinsberger, a petroleum engineer working for Mitchell, earned his entry into the history books with an accidental discovery. Inadvertently blending gel into the fracturing fluids resulted in a more watery mix than had been previously used. This appeared to do the job.
The technique would become known as a “slickwater frac” and it enabled Mitchell Energy to double its overall gas production. The company was sold to Devon Energy in 2002 and a few years later headlines would report that the shale revolution had been born. In the next decade, contrary to historical trends, the US became the largest combined producer of oil and gas. The tandem of hydraulic fracturing and horizontal drilling is now used globally as programs of varying maturity are under way in Canada, Argentina, China, and Saudi Arabia.
|Henry Edmundson is the director of R9 Energy Consultants based in Cambridge, England. He spent 45 years with Schlumberger where he was the founding editor of the Oilfield Review and latterly global director of the company’s petroleum technologists. The entries in this list are based on excerpts taken from Edmundson’s book, Groundbreakers: The Story of Oilfield Technology and the People Who Made It Happen, which he co-authored with Mark Mau and published in 2015.|
Henry Edmundson, Author and Consultant at R9 Energy Consultants
01 March 2019
Video Games & Mobile Learning
- Exploration Introductions: Mobile Learning (What do you know? What do you want to know? What can you find right now? Let’s have a look around…)
Reading: Gershenfeld, A. (2014). Mind Games. Scientific American, 310(2), 54-59.
- Video Gaming Assessment
- Schematic visual of the organization of your ePortfolio website
- Hyperlinks | internal & external
- Inserting pics | shift + command + 3 or 4 for Mac / Snipping Tool for Windows
- Questions from the last class
- How’s your Video Gaming exploration coming along; what are you learning? Struggles?
- New additions | Some Online Games
- The MindShift Guide to Digital Games & Learning
Game-Based Learning | Get tips, techniques, and tools that apply the principles of game design to the learning process — a dynamic way to engage learners and help educators assess learning — Edutopia
Part II | Mobile Learning
- Next readings:
- Martin, F., Pastore, R., & Snider, J. (2012). Developing Mobile Based Instruction. Techtrends: Linking Research & Practice To Improve Learning, 56(5), 46-51.
- Selwyn, N., S. Nemorin, S. Bulfin & N. Johnson (2017) Left to their own devices: the everyday realities of one-to-one classrooms, Oxford Review of Education, 43:3, 289-310.
- Upcoming due dates: Monday 12 February Video Game Exploration
- How’s your exploration coming along; what are you learning? Struggles?
- Video Gaming and Learning:
- Recalling the Crisis of Presence: It’s a Crisis of Presence….No Really
- Cell Phones in the Classroom? Oh My!!!
- Did you know…?
- What do you know, what do you think, what do you want to know?
- Perspectives: Principals, Teachers, Parents, Students, You…
- Common Sense Classroom Strategies | Where is your placement? What is the policy/strategy at this school
- ED386/ED586 (Policy on cell phone use in our class)
- Poll EveryWhere
Mini-Lecture WordPress & the Design of a Mobile Learning App
Resource The Water Cycle — Just in time
In the physical sciences, subatomic particles are particles that are smaller than atoms. These may be composite particles, such as the neutron and proton; or elementary particles, which according to the standard model are not made of other particles. Particle physics and nuclear physics study these particles and how they interact. The concept of a subatomic particle was refined when experiments showed that light could behave like a stream of particles (called photons) as well as exhibiting wave-like properties. This led to the concept of wave–particle duality to reflect that quantum-scale particles behave like both particles and waves (they are sometimes described as wavicles to reflect this). Another concept, the uncertainty principle, states that some of their properties taken together, such as their simultaneous position and momentum, cannot be measured exactly. The wave–particle duality has been shown to apply not only to photons but to more massive particles as well.
Interactions of particles in the framework of quantum field theory are understood as creation and annihilation of quanta of corresponding fundamental interactions. This blends particle physics with field theory.
Subatomic particles are either "elementary", i.e. not made of multiple other particles, or "composite" and made of more than one elementary particle bound together. The elementary particles of the Standard Model are:
- Six "flavors" of quarks: up, down, strange, charm, bottom, and top;
- Six types of leptons: electron, electron neutrino, muon, muon neutrino, tau, tau neutrino;
- Twelve gauge bosons (force carriers): the photon of electromagnetism, the three W and Z bosons of the weak force, and the eight gluons of the strong force;
- The Higgs boson.
All of these have now been discovered by experiments, with the latest being the top quark (1995), tau neutrino (2000), and Higgs boson (2012).
Nearly all composite particles contain multiple quarks (antiquarks) bound together by gluons (with a few exceptions with no quarks, such as positronium and muonium). Those containing few (≤ 5) [anti]quarks are called hadrons. Due to a property known as color confinement, quarks are never found singly but always occur in hadrons containing multiple quarks. The hadrons are divided by number of quarks (including antiquarks) into the baryons containing an odd number of quarks (almost always 3), of which the proton and neutron (the two nucleons) are by far the best known; and the mesons containing an even number of quarks (almost always 2, one quark and one antiquark), of which the pions and kaons are the best known.
Except for the proton and neutron, all other hadrons are unstable and decay into other particles in microseconds or less. A proton is made of two up quarks and one down quark, while the neutron is made of two down quarks and one up quark. These commonly bind together into an atomic nucleus, e.g. a helium-4 nucleus is composed of two protons and two neutrons. Most hadrons do not live long enough to bind into nucleus-like composites; those who do (other than the proton and neutron) form exotic nuclei.
In the Standard Model, all the elementary fermions have spin 1/2, and are divided into the quarks which carry color charge and therefore feel the strong interaction, and the leptons which do not. The elementary bosons comprise the gauge bosons (photon, W and Z, gluons) with spin 1, while the Higgs boson is the only elementary particle with spin zero.
The hypothetical graviton is required theoretically to have spin 2, but is not part of the Standard Model. Some extensions such as supersymmetry predict additional elementary particles with spin 3/2, but none have been discovered as of 2019.
Due to the laws for spin of composite particles, the baryons (3 quarks) have spin either 1/2 or 3/2, and are therefore fermions; the mesons (2 quarks) have integer spin of either 0 or 1, and are therefore bosons.
In special relativity, the energy of a particle at rest equals its mass times the speed of light squared, E = mc². That is, mass can be expressed in terms of energy and vice versa. If a particle has a frame of reference in which it lies at rest, then it has a positive rest mass and is referred to as massive.
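As a quick worked illustration (added here, not part of the original text, using the standard values for the electron mass and the speed of light), the rest energy of an electron comes out as the familiar 0.511 MeV:

\[
E = m_e c^2 \approx (9.11 \times 10^{-31}\,\mathrm{kg}) \times (3.00 \times 10^{8}\,\mathrm{m/s})^2 \approx 8.2 \times 10^{-14}\,\mathrm{J} \approx 0.511\,\mathrm{MeV}
\]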
All composite particles are massive. Baryons (meaning "heavy") tend to have greater mass than mesons (meaning "intermediate"), which in turn tend to be heavier than leptons (meaning "lightweight"), but the heaviest lepton (the tau particle) is heavier than the two lightest flavours of baryons (nucleons). It is also certain that any particle with an electric charge is massive.
When originally defined in the 1950s, the terms baryons, mesons and leptons referred to masses; however, after the quark model became accepted in the 1970s, it was recognised that baryons are composites of three quarks, mesons are composites of one quark and one antiquark, while leptons are elementary and are defined as the elementary fermions with no color charge.
Most subatomic particles are not stable. All mesons, as well as all baryons except the proton, decay by either the strong or the weak force. The proton is not observed to decay, although whether it is "truly" stable is unknown. The charged leptons, the muon and the tau, decay by the weak force, as do their antiparticles. Neutrinos (and antineutrinos) do not decay, but a related phenomenon, neutrino oscillation, is thought to occur even in vacuum. The electron and its antiparticle, the positron, are theoretically stable due to charge conservation unless a lighter particle with a magnitude of electric charge ≤ e exists (which is unlikely).
Of the subatomic particles which do not carry color (and hence can be isolated), only the photon, the electron, the neutrinos (with some caveats), a few atomic nuclei (the proton included), and their antiparticles can remain in the same state indefinitely.
All observable subatomic particles have an electric charge that is an integer multiple of the elementary charge. The Standard Model's quarks have "non-integer" electric charges, namely multiples of 1⁄3 e, but quarks (and other combinations with non-integer electric charge) cannot be isolated because of color confinement. For baryons, mesons, and their antiparticles, the constituent quarks' charges sum to an integer multiple of e.
Through the work of Albert Einstein, Satyendra Nath Bose, Louis de Broglie, and many others, current scientific theory holds that all particles also have a wave nature. This has been verified not only for elementary particles but also for compound particles like atoms and even molecules. In fact, according to traditional formulations of non-relativistic quantum mechanics, wave–particle duality applies to all objects, even macroscopic ones; although the wave properties of macroscopic objects cannot be detected due to their small wavelengths.
Interactions between particles have been scrutinized for many centuries, and a few simple laws underpin how particles behave in collisions and interactions. The most fundamental of these are the laws of conservation of energy and conservation of momentum, which let us make calculations of particle interactions on scales of magnitude that range from stars to quarks. These are the prerequisite basics of Newtonian mechanics, a series of statements and equations in Philosophiae Naturalis Principia Mathematica, originally published in 1687.
Dividing an atom
The negatively charged electron has a mass of about 1⁄1837 of that of a hydrogen atom (roughly 1⁄1836 of that of a proton). The remainder of the hydrogen atom's mass comes from the positively charged proton. The atomic number of an element is the number of protons in its nucleus. Neutrons are neutral particles having a mass slightly greater than that of the proton. Different isotopes of the same element contain the same number of protons but differing numbers of neutrons. The mass number of an isotope is the total number of nucleons (neutrons and protons collectively).
Chemistry concerns itself with how electron sharing binds atoms into structures such as crystals and molecules. Nuclear physics deals with how protons and neutrons arrange themselves in nuclei. The study of subatomic particles, atoms and molecules, and their structure and interactions, requires quantum mechanics. Analyzing processes that change the numbers and types of particles requires quantum field theory. The study of subatomic particles per se is called particle physics. The term high-energy physics is nearly synonymous to "particle physics" since creation of particles requires high energies: it occurs only as a result of cosmic rays, or in particle accelerators. Particle phenomenology systematizes the knowledge about subatomic particles obtained from these experiments.
The term "subatomic particle" is largely a retronym of the 1960s, used to distinguish a large number of baryons and mesons (which comprise hadrons) from particles that are now thought to be truly elementary. Before that hadrons were usually classified as "elementary" because their composition was unknown.
A list of important discoveries follows:
| Particle | Composition | Theorized | Discovered | Comments |
| --- | --- | --- | --- | --- |
| Electron | elementary (lepton) | G. Johnstone Stoney (1874) | J. J. Thomson (1897) | Minimum unit of electrical charge, for which Stoney suggested the name in 1891. |
| Alpha particle | composite (atomic nucleus) | never | Ernest Rutherford (1899) | Proven by Rutherford and Thomas Royds in 1907 to be helium nuclei. |
| Photon | elementary (quantum) | Max Planck (1900), Albert Einstein (1905) | Ernest Rutherford (1899) as γ rays | Necessary to solve the thermodynamic problem of black-body radiation. |
| Proton | composite (baryon) | long ago | Ernest Rutherford (1919, named 1920) | The nucleus of ¹H. |
| Neutron | composite (baryon) | Ernest Rutherford (c. 1918) | James Chadwick (1932) | The second nucleon. |
| Antiparticles | | Paul Dirac (1928) | Carl D. Anderson (1932, the positron) | Revised explanation uses CPT symmetry. |
| Pion | composite (meson) | Hideki Yukawa (1935) | César Lattes, Giuseppe Occhialini (1947) and Cecil Powell | Explains the nuclear force between nucleons. The first meson (by modern definition) to be discovered. |
| Muon | elementary (lepton) | never | Carl D. Anderson (1936) | Called a "meson" at first; but today classed as a lepton. |
| Kaon | composite (meson) | never | 1947 | Discovered in cosmic rays. The first strange particle. |
| Lambda baryon | composite (baryon) | never | University of Melbourne (1950) | The first hyperon discovered. |
| Electron neutrino | elementary (lepton) | Wolfgang Pauli (1930), named by Enrico Fermi | Clyde Cowan, Frederick Reines (1956) | Solved the problem of the energy spectrum of beta decay. |
| Quark | elementary | Murray Gell-Mann, George Zweig (1964) | No particular confirmation event for the quark model. | |
| Weak gauge bosons | elementary (quantum) | Glashow, Weinberg, Salam (1968) | CERN (1983) | Properties verified through the 1990s. |
| Top quark | elementary (quark) | 1973 | 1995 | Does not hadronize, but is necessary to complete the Standard Model. |
| Higgs boson | elementary (quantum) | Peter Higgs et al. (1964) | CERN (2012) | Thought to be confirmed in 2013. More evidence found in 2014. |
| Tetraquark | composite | ? | Zc(3900), 2013, yet to be confirmed as a tetraquark | A new class of hadrons. |
| Pentaquark | composite | ? | | Yet another class of hadrons. As of 2019, several are thought to exist. |
| Graviton | elementary (quantum) | Albert Einstein (1916) | | Interpretation of a gravitational wave as particles is controversial. |
| Magnetic monopole | elementary (unclassified) | Paul Dirac (1931) | undiscovered | |
- "Subatomic particles". NTD. Retrieved 5 June 2012.
- Bolonkin, Alexander (2011). Universe, Human Immortality and Future Human Evaluation. Elsevier. p. 25. ISBN 9780124158016.
- Fritzsch, Harald (2005). Elementary Particles. World Scientific. pp. 11–20. ISBN 978-981-256-141-1.
- Heisenberg, W. (1927), "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", Zeitschrift für Physik (in German), 43 (3–4): 172–198, Bibcode:1927ZPhy...43..172H, doi:10.1007/BF01397280.
- Arndt, Markus; Nairz, Olaf; Vos-Andreae, Julian; Keller, Claudia; Van Der Zouw, Gerbrand; Zeilinger, Anton (2000). "Wave-particle duality of C60 molecules". Nature. 401 (6754): 680–682. Bibcode:1999Natur.401..680A. doi:10.1038/44348. PMID 18494170.
- Cottingham, W.N.; Greenwood, D.A. (2007). An introduction to the standard model of particle physics. Cambridge University Press. p. 1. ISBN 978-0-521-85249-4.
- If there are three sorts of neutrino having a well-defined invariant mass, then mass eigenstates are stable, but they don't correspond to flavor eigenstates.
- Walter Greiner (2001). Quantum Mechanics: An Introduction. Springer. p. 29. ISBN 978-3-540-67458-0.
Eisberg, R. & Resnick, R. (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). John Wiley & Sons. pp. 59–60. ISBN 978-0-471-87373-0.
For both large and small wavelengths, both matter and radiation have both particle and wave aspects. [...] But the wave aspects of their motion become more difficult to observe as their wavelengths become shorter. [...] For ordinary macroscopic particles the mass is so large that the momentum is always sufficiently large to make the de Broglie wavelength small enough to be beyond the range of experimental detection, and classical mechanics reigns supreme.
- Isaac Newton (1687). Newton's Laws of Motion (Philosophiae Naturalis Principia Mathematica)
- Taiebyzadeh, Payam (2017). String Theory; A unified theory and inner dimension of elementary particles (BazDahm). Riverside, Iran: Shamloo Publications Center. ISBN 978-600-116-684-6.
- Klemperer, Otto (1959). "Electron physics: The physics of the free electron". Physics Today. 13 (6): 64–66. Bibcode:1960PhT....13R..64K. doi:10.1063/1.3057011.
- Some sources such as "The Strange Quark". indicate 1947.
- "CERN experiments report new Higgs boson measurements". cern.ch. 23 June 2014.
- General readers
- Feynman, R.P. & Weinberg, S. (1987). Elementary Particles and the Laws of Physics: The 1986 Dirac Memorial Lectures. Cambridge Univ. Press.
- Brian Greene (1999). The Elegant Universe. W.W. Norton & Company. ISBN 978-0-393-05858-1.
- Oerter, Robert (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics. Plume.
- Schumm, Bruce A. (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. ISBN 0-8018-7971-X.
- Martinus Veltman (2003). Facts and Mysteries in Elementary Particle Physics. World Scientific. ISBN 978-981-238-149-1.
- Coughlan, G.D., J.E. Dodd, and B.M. Gripaios (2006). The Ideas of Particle Physics: An Introduction for Scientists, 3rd ed. Cambridge Univ. Press. An undergraduate text for those not majoring in physics.
- Griffiths, David J. (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 978-0-471-60386-3.
- Kane, Gordon L. (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 978-0-201-11749-3.
NELUG meeting 16/2/2000
- Internet connects millions of machines around the world.
- Allows machines to find/talk to each other.
- No one machine knows the whole of the network (knowledge is distributed).
- Supports the “languages” (protocols) that various applications use to talk to each other – done in layers.
applications
protocol layer – http, ftp, nfs
tcp    udp
ip    icmp
hardware layer (ethernet, token ring, ppp)
- applications
- These are the user and system programs which talk to each other using IP.
- protocol layer
- These are application/domain specific languages which the various applications understand. These hide the gory details of ip, tcp and udp from the user.
- tcp and udp
- These are basic protocols that the higher level protocols use.
- ip
- The basic unit of information transmission. The higher layers use one or more ip packets to transfer data.
- icmp
- internet control message protocol – a close friend of ip which is used to pass various control messages between different machines. This is normally only used by the operating system.
- hardware layer
- This is the actual hardware which is used to transmit network packets.
- Each machine on the internet has a (unique) IP address.
- Written as four numbers, each with a value between 0 and 255, e.g. 184.108.40.206.
- To talk to a machine you address packets of data with your address (source) and the targets address (destination).
Subnets (and netmasks)
- Subnets are used to group a number of machines which are directly connected together.
- A netmask defines the subnet by separating the network and subnet parts of the address parts form the host part.
- For example, a netmask of 255.255.255.0 specifies a subnet which has up to 254 hosts connected to it (0 and 255 are special addresses); a small worked example follows this list.
- Historically networks were classed as either class A (netmask 255.0.0.0), B (255.255.0.0) and C (255.255.255.0). These represent the way in which addresses were allocated to individual institutions. i.e a university may have a class B network allocated and it is responsible for allocating all of the addresses within that range.
- In most cases you should probably assume that you are connected to a class C network and set the netmask appropriately.
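A small worked example (an illustrative sketch added here, assuming Python 3's standard ipaddress module; the 192.168.1.x values are just private-range placeholders) shows how the netmask splits an address into its network and host parts:

import ipaddress

# A /24 network written with an explicit netmask
net = ipaddress.ip_network("192.168.1.0/255.255.255.0")
print(net.network_address)    # 192.168.1.0   - the subnet itself
print(net.broadcast_address)  # 192.168.1.255 - the special broadcast address
print(net.num_addresses - 2)  # 254 usable host addresses (0 and 255 are special)

# The same split done by hand with a bitwise AND of address and netmask
addr = int(ipaddress.ip_address("192.168.1.42"))
mask = int(ipaddress.ip_address("255.255.255.0"))
print(ipaddress.ip_address(addr & mask))  # 192.168.1.0 - the network part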
DNS – Domain Name Service
- IP addresses are not easy to remember (names are easier).
- The Domain Name Service provides a mapping from names to IP addresses.
- Makes the net more user friendly.
- Allows particular name to move between machines – e.g. to a new service provider.
- Multiple names may map to the same address (often used for web sites); a short lookup sketch follows.
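A minimal lookup sketch (an added illustration, assuming Python 3's socket module; www.example.com is just a placeholder name): the resolver turns a name into one or more IP addresses.

import socket

# Name -> a single address, the everyday DNS lookup
print(socket.gethostbyname("www.example.com"))

# getaddrinfo returns every address record the resolver finds,
# which is how multiple addresses (or address families) show up
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80):
    print(family, sockaddr)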
- Machines are not directly connected to all other machines.
- To talk to non local machines you go via a gateway (often an ISP).
- That gateway machine is connected to other gateways.
- Any machine can act as a gateway if it has two or more network interfaces. So to talk to machine z you may have to go via
me -> a -> b -> c -> z
me -> w -> x -> z
- Routing protocols allow machines to work out the best way to get to another machine.
- This allows problems to be worked around (e.g. a broken gateway machine).
- In most cases we only need to know one gateway machine (our ISP) – this is known as the default route.
- IP (internet protocol) is the core internet message format.
- This consists of the header and a message body.
- The message body carries sub protocols.
- The most widely used are:
- tcp – transmission control protocol – a reliable bidirectional stream of data.
- udp – user datagram protocol – an unreliable packet based protocol.
tcp (also known as tcp/ip)
- tcp uses IP packets to construct a reliable bidirectional data stream.
- It handles lost, corrupted and reordered IP packets presenting a stream of data to the application.
- This is a connection-oriented protocol, i.e. the user makes a connection and may then use that connection until it breaks it (or someone else does).
- http (hypertext transfer protocol), ftp (file transfer protocol), and telnet all use this protocol; a minimal client sketch follows.
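A minimal client sketch (an added illustration, assuming Python 3; the host name is a placeholder and the well-known daytime port 13 is disabled on many modern machines): the program makes one connection and then simply reads from the resulting byte stream.

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("timeserver.example.com", 13))  # establish the connection
    data = s.recv(1024)                        # then read from the stream
print(data.decode(errors="replace"))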
udp (also known as udp/ip)
- udp does not provide a connection oriented protocol.
- Instead, each packet of data has to be individually addressed and sent.
- The user is responsible for handling lost packets (corrupted packets are detected by the IP layer and discarded).
- This is useful where a machine must talk to multiple machines and where it does not want the overhead of a connection oriented protocol.
- Examples: nfs (network file system), tftp (trivial file transfer protocol). A short datagram sketch follows.
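A short datagram sketch (an added illustration, assuming Python 3; the loopback address and port 5005 are arbitrary choices): there is no connection, each packet is addressed individually, and nothing guarantees delivery.

import socket

# Receiver: bind to a port and wait for a single datagram
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 5005))

# Sender: no connection is made; the destination goes on every packet
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", ("127.0.0.1", 5005))

data, sender = rx.recvfrom(1024)  # payload plus the address it came from
print(data, sender)
rx.close()
tx.close()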
- An ip address allows a packet to be delivered to a specific machine.
- But the machine must work out which application should receive that packet.
- Ports are used to do this (both tcp and udp use these).
- A port is effectively an address within a machine. They are usually specified as an ip addr/port/protocol combination, e.g. 220.127.116.11:23 (tcp)
- Programs bind to a port to say that they wish to receive packets which are addressed to that port or that they wish to transmit packets from that port.
- A port is identified by a 16 bit integer in the range 0 to 65535.
- There are a number of well known ports:
- echo (port 7) – echoes back everything that is sent to it
- telnet (port 23) – remote terminal protocol
- smtp (port 25) – simple mail transfer protocol
Note that tcp and udp have separate port numberings.
- Most systems define well known ports in the file /etc/services; the snippet below shows one way to look these up.
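The snippet below (an added illustration, assuming Python 3, which consults the same well-known assignments that /etc/services describes) shows the name-to-number mapping in both directions:

import socket

for name in ("echo", "telnet", "smtp"):
    print(name, socket.getservbyname(name, "tcp"))   # 7, 23, 25

print(socket.getservbyport(80, "tcp"))               # the reverse lookup: "http"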
arp – address resolution protocol
- Machines on the local area network must be able to address each other directly (in terms of hardware addresses).
- arp allows machines to find others and to dynamically account for new machines which are added/removed.
- Put simply it maps ip addresses to mac (ethernet) addresses.
- Only those machines which you are currently (or have recently been) talking to are kept in the arp cache.
Diagnostic/fault finding tools
- Ping uses low level packets to talk to a machine to check if it is responding (these are not actually IP packets (they are icmp packets) but are very closely related).
- This is useful to check if things are setup correctly.
- It also helps to diagnose slow/busy links.
- Example use of ping (localhost is loopback interface which talks to your own machine)
[richm@patricia richm]$ ping localhost
PING localhost (127.0.0.1) from 127.0.0.1 : 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=0 ttl=255 time=0.2 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=255 time=0.2 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=255 time=0.1 ms
...
64 bytes from 127.0.0.1: icmp_seq=8 ttl=255 time=0.1 ms

--- localhost ping statistics ---
9 packets transmitted, 9 packets received, 0% packet loss
round-trip min/avg/max = 0.1/0.1/0.2 ms
- Note that when using ping on a dialup connection expect to see times of 100 or 200 ms.
- If a machine is very busy or there is congestion somewhere in the network some packets may get lost. This is normal but if a large percentage of packets are being lost then connection to that machine may be very difficult.
- ifconfig is used to configure network interfaces.
- It is seldom used by the user – scripts turn your configuration into appropriate ifconfig commands.
- It can be useful to look at your current network setup. e.g.
[richm@patricia richm]$ /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:00:C0:A0:CE:14
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          Interrupt:7 Base address:0x290 Memory:d0000-d2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:3924  Metric:1
          RX packets:1024 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1024 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
This shows two interfaces:
- an ethernet interface
- the loopback interface – this is present on all machines and always has address 127.0.0.1 (localhost)
- netstat shows network statistics.
- with no parameters it shows the current connections (we are only concerned with “internet connections” here; UNIX domain sockets are covered in many books on networking).
- Example (from Solaris netstat):
ws-csm2:819 $ netstat -f inet

TCP: IPv4
   Local Address        Remote Address      Swind Send-Q Rwind Recv-Q   State
-------------------- -------------------- ----- ------ ----- ------ -------
ws-csm2.658          patricia.nfsd         8760      0 24820      0 ESTABLISHED
ws-csm2.56332        patricia.32784        8760      0 24820      0 ESTABLISHED
localhost.56334      localhost.32804      32768      0 32768      0 ESTABLISHED
localhost.32804      localhost.56334      32768      0 32768      0 ESTABLISHED
localhost.56337      localhost.56331      32768      0 32768      0 ESTABLISHED
ws-csm2.56904        tux.39504             8760      0 24820      0 ESTABLISHED
ws-csm2.56906        tux.44245             8760      0 24820      0 ESTABLISHED
- This shows the current routing table (where the computer will send packets based on their destination addresses) e.g.
[richm@patricia richm]$ netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.1.1     *               255.255.255.255 UH        0 0          0 eth0
192.168.1.0     *               255.255.255.0   U         0 0          0 eth0
loopback        *               255.0.0.0       U         0 0          0 lo
default         *               0.0.0.0         U         0 0          0 eth0
- Note the “default” entry – any packet addressed to an address which does not appear in the routing table goes to the default route.
- Sometimes it is useful to use “netstat -nr” to stop addresses being converted to machine names. Try this if netstat -r appears to hang.
- Traceroute is useful for diagnosing routing problems.
- It determines the route which a packet is taking to get to a specified machine.
- There may be 10 to 20 hops on the way to a machine.
- Some gateways are setup to not respond to traceroute. In these cases you will get a * in the listing.
# traceroute webserver
traceroute to webserver (18.104.22.168): 1-30 hops, 38 byte packets
 1  gateway (22.214.171.124)     2.81 ms   1.97 ms   3.49 ms
 2  isp-relay1 (126.96.36.199)   14.0 ms   13.4 ms   14.1 ms
 3  isp-relay2 (188.8.131.52)    17.7 ms   17.0 ms   17.7 ms
 4  webserver (184.108.40.206)   24.7 ms   *         19.7 ms
tcpdump (snoop is similar on Solaris)
- tcpdump analyses network packets on your local network and prints summaries of their contents.
- It is useful when looking for a subtle network problem.
- *** Care *** this program has access to all of the traffic on your network. If used inappropriately it can decode all manner of information. Unauthorised use can get you in serious trouble.
- The arp command allows you to examine the arp cache and find out the hardware addresses of local machines.
# arp -a

Net to Media Table: IPv4
Device   IP Address           Mask            Flags   Phys Addr
------ -------------------- --------------- ----- ---------------
hme0   ws-csm2              255.255.255.255       08:00:20:34:9a:15
hme0   patricia             255.255.255.255       00:d0:58:00:d8:e1
hme0   tux                  255.255.255.255       08:00:20:89:7e:34
hme0   nelug                255.255.255.255       08:00:20:43:0f:a4
- To test that DNS lookups are working correctly nslookup can be used to perform name lookups
patricia:15 $ nslookup
Default Server:  ws-csm2.nelug.org
Address:  220.127.116.11

> phileas
Server:  ws-csm2.nelug.org
Address:  18.104.22.168

Name:    phileas.nelug.org
Address:  22.214.171.124
>
January 1972 Popular Electronics
Table of Contents
Wax nostalgic about and learn from the history of early electronics. See articles
published October 1954 - April 1985. All copyrights are hereby acknowledged.
Just as the evolution from vacuum tubes to transistors changed the test equipment (TE) landscape in the 1970s with drastic decreases in physical size, ruggedness, price, functionality, portability, bandwidth, and power consumption, today's products move ahead at an equal or even greater pace. Microprocessors make possible programmability and reconfigurability, mathematical signal processing, LCD displays, high speed interconnectability, data storage, automation, and intuitive user interfaces. Surface mount components and other advanced production methods, super high bandwidth semiconductors, advanced filter construction, and light weight, rugged packaging have resulted in an amazing selection of special-purpose and multi-purpose test equipment. This 1972 Popular Electronics article gushed over the relatively recent advent of triggered sweep oscilloscopes priced in a range accessible to hobbyists and small electronics shops. Then, as now, a lot of the lower cost TE was being supplied by newcomers - often from foreign manufacturers. Major test equipment manufacturers were starting to feel the pinch as brands like Rohde & Schwarz, LeCroy, and Yokogawa began appearing on lab benches next to Hewlett Packard and Tektronix.
Test Equipment Scene
By Stan Prentiss
Why Triggered Sweep Oscilloscopes?
Test equipment manufacturers are "coming alive" at a remarkable rate and the biggest change next to new lines of sweep-marker and color-bar generators is an entire bevy of triggered sweep oscilloscopes. Following the "biggies" lead - Tektronix and Hewlett Packard - with their very expensive but highly accurate and durable equipment, such foreign and domestic companies as Telequipment, Leader, Sencore, B & K, Lectrotech, Heath, EICO, and others are now preparing to offer, or are offering, brand new DC amplifier triggered oscilloscopes that range in price from just over $300 to $975.
Comes the Evolution. Most of us hardly saw an oscilloscope before the very late 1940's, or early '50's, and those we did see weren't exactly instruments as we know them today. Bandpasses were in the kilohertz range, as were X-axis sweeps, and hot vacuum tubes often made a blower fan necessary in the big units. Lower priced equipments were all ac-coupled and recurrent sweep types, with as few tubes as possible to keep down costs; and their linearity was often questionable. Since that time, there have certainly been refinements in these economical models, such as extended 5-MHz or 10-MHz bandwidths, recurrent sweep ranges to 500 kilohertz, perhaps dc amplifiers, and flat-faced cathode ray tubes. And these scopes are fine for general peak-to-peak voltage measurements and medium-to-low frequency waveform displays. But generally, many have not matured as have their more expensive brethren: once again, because of costs. There are still under $108-plus kit and $170-up factory-built oscilloscopes to be had, but you'll naturally have to make some allowances; because perfection forever costs money - all the way to thousands for the best.
Three Basic Uses of an Oscilloscope.
Fig. 1 - Shows measurement of peak-to-peak AC.
Fig. 2 - DC voltage levels.
Fig. 3 - Frequency measurements.
What Is a Triggered Sweep? First, let us take a close look at a recurrent (non-triggered) sweep scope. Its sweep circuit consists of a selected timing capacitor charging up from a voltage source, and discharging through a tube or transistor. Because the charging curve of a capacitor is linear for only a short period, if the applied charging voltage is not on the head, the nonlinear portion of the charging curve is also included in the sweep. This same state of affairs can occur when you try to use a single capacitor over too wide a charging time to cover too much of a sweep range.
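A small numerical sketch (added here for illustration, not from the article; the 100-volt source, 1-megohm resistor and 1-nF capacitor are made-up values) shows why only the early part of the charging curve is usable as a linear sweep:

import math

V0, R, C = 100.0, 1e6, 1e-9        # 100 V source, 1 megohm, 1 nF -> RC = 1 ms
RC = R * C
for t in (0.05e-3, 0.1e-3, 0.5e-3, 1e-3):
    actual = V0 * (1 - math.exp(-t / RC))  # real capacitor charging curve
    ideal = V0 * t / RC                    # the perfectly linear ramp
    print("t = %.2f ms  actual = %6.2f V  ideal ramp = %6.2f V" % (t * 1e3, actual, ideal))
# At t = 0.05*RC the sweep is only about 2.5% away from a straight line;
# by t = RC the error has grown to roughly 37%.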
A recurrent sweep scope can be synced to a waveform, but because of wider manufacturing tolerances in low-priced instruments and the use of unregulated power supplies, the point where the timing capacitor starts and stops its operation may wander about a bit. Also, with a varying input signal, the scope sync circuit may have a little "play" so you have to expect some jitter in the trace.
What does a triggered sweep scope have that others do not? To start with, it usually uses some form of "lockout circuit" that unhitches the sweep circuit from the sync so that after the first toggle on, nothing coming in on the vertical channel can affect the stability of that particular sweep. This approach "clumps" the effect of variable sync due to noise and signal variations. Then, a triggered sweep scope uses only one selected capacitor for a limited range, and operates this capacitor well within its linear range. Also, most triggered sweep scopes use some form of well regulated power supply. All this costs money, and that is why they are more expensive than recurrent types.
What we'll do is supply a working introduction to a dual-trace triggered sweep oscilloscope, including a recurrent sweep scope for comparison.
Three Uses For An Oscilloscope. Let's start with the three basic uses of an oscilloscope. You can use it for peak-to-peak ac voltage measurements (Fig. 1); dc voltage measurements (Fig. 2); and time base-frequency measurements (Fig. 3). And the interesting part is that you can do all these things simultaneously. For instance the waveform in Fig 1 is simply a differentiated square wave. The vertical attenuator on the oscilloscope is set for 0.5 volt per division, but there is a 10X capacitance (LC) probe attached, so we move the decimal forward one place, and that makes each division worth 5 volts. Therefore, total peak-to-peak amplitude (height) of the waveform is 22 volts.
In Fig. 2, let's say that each division is worth 10 volts, and we'll establish a de reference with the lower trace on the first (bottom) horizontal line of the graticule. So, if the voltage rises five divisions when the circuit is active, this means we're looking at a well-filtered power supply of 50 volts.
Time Base Measurements. Here, we'll use an exact function generator and different waveforms to illustrate what a triggered scope's accurately calibrated time base does. Let's set the upper channel for triangular waveforms and the lower channel for pulse waveforms (Fig. 3). Naturally, since the same time base is driving both, they're at the same repetition rate.
If you’d like to check the accuracy of both the oscilloscope and any calibrated generator you might be using, step up or down repetition rates so that just one division contains a complete cycle. In this instance (Fig. 3) we went to 0.1 millisecond per division on the scope's time base and found, to our satisfaction, that F = 1/T = 1/(0.1 × 10⁻³ s) = 10 kHz. Note that the lower trace of Fig. 3 is a symmetrical, rectangular waveform with a 50% duty cycle, and this makes it a square wave. The upper trace is a sawtooth that is not quite linear at the top tip. Of course, while we're discussing oscilloscopes we're doing some waveform analysis that will be one of our more specific subjects in the future when we talk about graphic display instruments and their use.
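A tiny helper (an added illustration, not from the article) reproduces the arithmetic used above: amplitude from divisions, volts per division and probe factor, and frequency from the calibrated time base (F = 1/T):

def peak_to_peak(divisions, volts_per_div, probe_factor=1):
    """Peak-to-peak voltage as read from the graticule."""
    return divisions * volts_per_div * probe_factor

def frequency(divs_per_cycle, seconds_per_div):
    """Repetition rate from the calibrated time base, F = 1/T."""
    return 1.0 / (divs_per_cycle * seconds_per_div)

print(peak_to_peak(4.4, 0.5, probe_factor=10))  # 22 V, as in Fig. 1
print(frequency(1, 0.1e-3))                     # 10 000 Hz, as in Fig. 3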
An AC Recurrent Sweep Scope. To give an idea of what you might come across in the lowest priced equipment, let's do some waveform comparisons following a short explanation of what to expect. The recurrent sweep scope has a calibrated vertical amplifier that's as linear (current and voltage proportional) as the manufacturer can make it, for the price. Usually there's no dc amplifier, because these have to be especially biased and compensated and are more expensive to make. Then, of course, there's no triggered sweep for time base measurements. So what you have is a peak-to-peak ac voltage indicator and a gross frequency horizontal sweep that's broadly readable from about 5 Hz to 100 or 500 kilohertz divided into X10 increments, with variable dc potentiometer tuning in between. And you must cope with a low frequency tilt.
What good are the inexpensive ac scopes? They're excellent for hobbyists, experimenters, some of the vacuum tube TV boys (where dc swings are too great to see ac signals except with capacitance coupled amplifiers), and those who have small pocketbooks and restricted needs. (Experienced TV and electronics people are now using dc-coupled amplifiers more and more, especially in semiconductor work, where most dc levels and ac waveforms are simultaneously visible.)
Waveforms Shown on a Less Expensive Scope.
Figure 4 - The top of Fig. 3.
Fig. 5 - The bottom of Fig. 3.
Fig. 6 - The same thing at lower frequency.
Waveforms of An Inexpensive AC Scope. To illustrate what we've been talking about - at possibly, the most modest of levels - let's show the kind of responses that may be forthcoming if you select a real cheapie. Naturally, the better ac and the dc scopes do a more acceptable job than we'll show here - and this just might be a suggestion to evaluate before you buy.
Fig. 4 is simply the top of Fig. 3 at the same frequency of 10 kilohertz, repeated on an uncalibrated X-axis of a recurrent sweep scope. Notice that the trace is thicker - has a much larger spot size, begins somewhat nonlinearly, and is somewhat out of calibration since the more expensive triggered scope measures 4.4 volts p-p, while this one (at 1 volt per minor division) is almost 5.4 volts - enough to make a substantial measurement difference in small-signal transistor circuits. Observe also that the trace thickens toward the right indicating poor stigmatism and focus circuits or adjustments, since they're interactive.
We'll switch now to the bottom waveform originally produced in Fig. 3 but reduce the repetition rate to 100 Hz as shown in Fig. 5. Here the rounding and tilted low frequency response becomes evident and the recurrent sweep seems to be somewhat other than linear since the duration of the initial left half cycle is less than the others. Vertical amplitude should be about 3 volts, and it indicates just over 4. Of course, this can be adjusted.
Let's go to a 1-MHz repetition rate. The recurrent scope won't hold frequency. So we'll drop down to 900 kHz and see what's cooking. Figure 6 shows the results, somewhat more plainly this time than at either 100 Hz or 10 kHz. The tilt again shows lack of low frequency response. The sloppy rise and fall times of each half cycle either indicate a poor generator - and this one isn't - or poor vertical rise and fall times in the oscilloscope's own amplifiers. The left initial trace is now quite nonlinear, although the Fig. 5 and Fig. 6 amplitudes remain the same. The tops of the 900-kHz pulses should be as flat as pancakes. This, of course, is why good pulse or square waves are usually used to check visual test equipment and all sorts of amplifiers. In the square wave, low frequency information is on the top of the waveform, and the intermediate and high frequency information is on the sides. Therefore, the more harmonics in any square wave, the steeper are its sides, and the more expensive visual instrument it takes to judge them correctly.
Final Considerations. Now, triggered or recurrent, take your pick, but look and ponder before that precious buck flies out the window. A little investigating can tell so much - especially with a good square wave generator. If your decision is ac routine, you spend (for a ready-built) from $180 to $250; for dc deluxe or triggered sweep with single trace, the tag will be $340 to $450; and if you're considering dual trace, 5 or 10 millivolts sensitivity, and low nanosecond sweep speeds, $500 to $600 is the price you'll pay plus about $25 to $30 each for good probes. (Prices for scope kits in the intermediate range where available, are, of course, much lower.) But whatever you have or buy, learn to use it well - for an oscilloscope is every electronics man's best friend.
Posted January 16, 2019
In Wilam: A Birrarung Story, we enter an Indigenous world which is presented in full page colour illustrations in acrylics, by Indigenous artist Lisa Kennedy. As the Woiwurrung language does not translate directly into English, many of the words used in this stunning book are in their original language. There is a detailed glossary with miniature illustrations at the end with all the definitions of the words used.
The significance and beauty of this publication cannot be overstated, as it is also a dedication to William Barak, Wurundjeri Ngurungaeta, 1874. It opens up the opportunity for those who are interested in learning about Australia’s traditional landowners, the history of the Yarra and the birds and animals that called it home, to research, read and learn.
There are so many aspects that make our country great. From our exotic wildlife to our amazing landscapes and landmarks, and also our inspirational national treasures that become icons all over the world. With Australia Day fast approaching, it is a wonderful opportunity to not only research the past and celebrate the present, but also for our younger generation to think about their role in shaping a great, successful future. Here are little teasers of hugely beautiful picture books to honour the joys, wonders and beauties of Australia and all this country has to offer.
Yes, our country is great. But there are certainly ways to make it even better. Beck and Robin Feiner propose this ideology to our children, empowering them to build a vision for our future with their newest picture book, If I Was Prime Minister. This inspiring tale gives readers the opportunity to hear other kids’ ideas as they introduce themselves with concepts they’re passionate about. For example, Ziggy would hold NO CAR DAYS for scooters, bikes and skateboards. Each page encourages further thought and discussion into the benefits and practicalities over the long term. Illustrations are bright and bold, simple and straightforward, and brilliantly represent the narrative’s messages of multiculturalism, compassion, empathy, care and kindness towards each other and our sustainability. Imaginative, fun, insightful and powerful, a highly recommended resource for all our Aussie students to consider.
Joanne O’Callaghan and Kori Song are a dynamic author – illustrator pair from Hong Kong inspired by the beautiful and fascinating city of Melbourne. In Found in Melbourne, two children explore well-known, and not-so-well-known, must-visit places by counting and rhyming their way through the city and beyond. From ONE giant mouth at Luna Park to TWO people singing and dancing at the Princess Theatre, THREE trams past the Shrine, and so on. They reach TWELVE fancy cakes at Hopetoun Tea Rooms in Collins Street, 100 butterflies at the zoo, 1000 triangles in Federation Square, and 1,000,000 stories in the State Library. All sights are explained in the back of the book, which is lusciously illustrated with fine detail and sublime accuracy. A wonderful resource for young Melbournites to explore their own city, as well as visitors looking for superb culture, history and beauty of this vibrant city.
Speaking of loving the place you’re in, The Gum Family Finds Home in this unique and remarkable Aussie tale by Tania McCartney and Christina Booth. The endpapers immediately draw the reader in with illustrated ‘photographs’ of proud and cheeky koalas enjoying their adventures in magnificent locations around Australia (Uluru, Karlu Karlu, The Bungle Bungle Range, just to name a few). McCartney’s language is just as magical with her lulling descriptions and whimsical phrasing, sweeping us up on the journey to find a safer, more suitable home for the Gum family – as opposed to the scarce, wind-swept tree in which they currently reside. Here is a gorgeous geological trip full to the brim with amazing facts, contemporary knick-knacks and stunningly illustrated landscapes with ancient ancestry. And all the while weaving in the characters’ conundrum, with a marvellous twist and ‘rock-solid’ ending to settle any questions regarding the perfect place to belong. Couldn’t be more exciting, interesting, informative and heartwarming than this!
Another book which is absolutely gorgeous – a piece of art – by Tania McCartney is Mamie, based on the upbringing of and celebrating the iconic May Gibbs and 100 years of Snugglepot and Cuddlepie fame. From the imaginative perspective of a little girl, Mamie lives and breathes fairies and pixies, singing, dancing and painting, until she is transported into another strange world across the sea to ‘creeks and dusty plains and the hottest of suns in high blue skies.’ But magic for Mamie is not far away and her dreams of reuniting with her beloved fairies and pixies become a reality, in the most amazing way possible. Together with the biography of May Gibbs, the gentle, inspiring tale and beautifully visual and playful illustrations, Mamie is an incredible culmination of fact and fiction and Australian native flavour. McCartney is the perfect choice to represent the supreme talent of this honoured creator and her legacy.
Following the picture books The Singing Seal and Kung-Fu Kangaroo, third in the whimsical ‘True Animal Tales’ series by Merv Lamington and Allison Langton is the tenacious Quite a Clever Quokka. Based on real-life stories with value-based messages and featuring Australian wildlife, these fun rhyming tales always expose readers to a taste of the Australian landscape and our unique native animals. This one, set on Rottnest Island in WA, circles around themes of chasing your dreams with Leonardo da Quokko, who becomes a famous artist and Archibald winner, despite missing his home and friends. Clever by nature, clever by illustration, Quite a Clever Quokka certainly impresses with its energy, and its ability to entertain, inform and capture the hearts and souls of readers of any age.
Australian literature is the best. Over Christmas I’ve indulged in reading the literature for adults I can’t easily justify reading during the year (but read anyway) when I am focusing on YA and children’s lit for work – and also pleasure.
I’ve been reading a mix of Australian and international fiction. I have to say that the Oz books are better. Highlights have included Trent Dalton’s Boy Swallows Universe (HarperCollins Australia), Markus Zusak’s Bridge of Clay (Picador, Pan Macmillan, reviewed here), Gail Jones’ The Death of Noah Glass (Text Publishing) – the best literary fiction I read last year by probably my favourite writer for adults – and I have just finished Flames by Robbie Arnott.
The cover is an abstract, reimagined Tasmanian wonderland, designed by the talented and charming W. H. Chong from a lithograph by Harry Kelly. It evokes both place and the elements of each chapter: Ash, Salt, Sky, Snow and Wood … Flames smoulder and taunt or empower the characters.
The tale begins with the return of Edith McAllister two days after her ashes were spread over Notley Fern Gorge. “Now her skin was carpeted by spongy, verdant moss and thin tendrils of common filmy fern. Six large fronds of tree fern had sprouted from her back and extended past her waist in a layered peacock tail of vegetation.” She is one of several McAllister women to appear after cremation, causing her grieving son Levi to commission a coffin for the eventual death of his twenty-three-year-old sister Charlotte in the hope that this will alleviate fear of her own reappearance after death.
As well as Levi’s viewpoint, the author shares the perspective of Charlotte, who escapes to remote southern Melaleuca; Karl, who bonds with a seal to hunt the Oneblood tuna (and witnesses the most harrowing and unforgettable scene in the novel); and his daughter Nicola who loves Charlotte. Other characters include the coffin-maker (whose derangement is largely shown through his letters to Levi); the “Esk God” water rat; the farm manager who cares for the wombats; the gin-swilling female private investigator and the enigmatic Jack McAllister. The entwined lives of these characters are skilfully explored.
The setting will be familiar – and not – to those who know the Tasmanian landscape. Place is wrought superbly. Images are unique and expressionistic. Flames are volatile.
Flames has deservedly been shortlisted for the Victorian Premier’s Literary Awards fiction prize, alongside The Death of Noah Glass. Australian literature is flourishing.
In Part 1 of the ‘preparing for school’ series, we focused our attention on themes relating to new beginnings and gentle steps towards independence and new friendships. This post includes picture books with beautifully heartwarming sentiments of embracing our own and others’ individuality, uniqueness and personal preferences, what makes us human, and advocating for equality. A value-driven start to the new year will set us all up for a peaceful, harmonious future.
Beginning with P. Crumble and Jonathan Bentley’s new release, We Are All Equal: this issue-based, prevalent topic in today’s society is a terrific resource to introduce to youngsters right from the get-go. Actress, comedian and LGBTQIA rights activist Magda Szubanski gives it “A resounding YES!” Here’s a book that truly celebrates the richness of difference and the reinforcement of equality regardless of lifestyle, origin, wealth, ability, size, shape, gender or sexuality. We Are All Equal uses its gorgeous illustrations of a range of animals to highlight our wonderful diversity without preaching didactic messages. Rather, it phrases each rhyming verse gently, opening with “We are all EQUAL…” It dispels the idea of bullying and performance-based pressures, and focuses on sharing our hopes and dreams, pride and sense of community. A must-read for children and adults globally.
Ann Stott and Bob Graham address another current topic in Want to Play Trucks?. Acceptance, compromise and negotiation are all qualities that make the friendship between Jack and Alex so special. Here are two boys with differing preferences who encourage us as readers to challenge common gender stereotypes. They are excellent role models for our young children, who may come to the playground with already-formed preconceptions of what is ‘typical’ behaviour. The narrative involves heavy dialogue between Jack, who likes noise, action and danger, and Alex, who enjoys “dolls that dance and wear tutus”. Graham further reinforces the notion of ‘getting along’ in this diverse environment with his subtle illustrative references to culture, ability and lifestyle in and around the sandpit setting. Want to Play Trucks? shows us a very raw and real look into a non-stereotypical world of imagination and pretend play. Recommended for pre-schoolers and beyond.
The pairing of Nicola Connelly and Annie White comes together again, following the gorgeous My Dad is a Bear, in this fun, light-hearted tale of diversity and inclusivity: Is It The Way You Giggle? This is a sweet rhyming story with whimsical, soft-palette and energetic illustrations that ooze with the magical essence of the joy of childhood. The narrative begs a thousand questions for the reader to ponder, beginning (and ending) with the essential premise – “What makes you special?” There are a multitude of qualities, skills and characteristics that make us all unique, and this book is a beautiful discussion starter to share with your little one upon entering the journey of new experiences – to be able to be proud of and confident in who they are, as well as recognising and welcoming the similarities and differences in others. From the colour of your eyes or skin, to the shape of your ears, the things you enjoy like singing and dancing, the way you giggle or wiggle, your interests in painting, writing, reading or swimming, or how you love your family. Big, small, common or quirky, this book allows us the freedom and celebration of being unique. Is It The Way You Giggle? is a feel-good story for preschool-aged children that will certainly bring a smile to their faces.
Filthy Fergal comes delivered in a whole league of its own when it comes to books on individuality. Sigi Cohen, of My Dead Bunny fame, together with illustrator Sona Babajanyan, unapologetically presents this disturbingly witty rhyming tale of a filthy boy thriving in the repugnant squalor of rubbish and flies. In a similar vein to the legendary classics of Paul Jennings, through grime and repulsion and gag-worthy moments, there is love and family and an all-important ‘twist’ that aims to melt your heart. The text’s dark humour matches perfectly with the illustrations’ ominous and grungy mixed-media, multi-layered techniques. Filthy Fergal may not overtly promote good hygiene practices, but it does clean up in the areas of exploring belonging, commonality and difference, and being true to yourself. Suitably unsightly for school-aged children.
Solli Raphael is a phenomenal Australian slam-poet. I was fortunate to meet him at a Penguin Random House roadshow. He is a personable, thoughtful young man with an enormous talent. He is only thirteen.
Solli is the youngest Australian to win the all-age poetry competition, the National Australian Slam Poetry Finals, held at the Sydney Opera House in 2017. This led to a TEDx solo live poetry performance at Sydney’s International Convention Centre in front of 5000 people and a solo performance at the Commonwealth Games on the Gold Coast in front of 35,000 people (with millions watching here and around the world) in 2018.
He has a vision that sees people caring for all humanity, as well as for our environment. He writes and delivers his poems with thoughtfulness and engagement. You can view some of his performances online (links below). He presents his important themes and issues with developing tone and pacing, enhanced by thoughtful, apt facial expressions and gestures.
And now he has written a book, Limelight, where he introduces slam poets as people who “use their personal experience to tell a poetic story”, often employing rhyme. Repetition, alliteration and assonance also feature in Solli’s work. Solli and his fellow slam poets aim to raise awareness of issues such as the environment and racism.
In Limelight Solli shares his experiences of some of his formative performances and gives writers’ tips. These include his creative discipline of brainstorming ideas at the same time each day and how he counteracts writers’ block. He explains some of the figurative speech he uses, such as similes, metaphors and idioms.
There are over 30 poems (in a range of forms) and slam poetry in the book. The title poem, ‘Limelight’ is a combination of slam poetry and song. ‘We Can be More’ is a paean to protect the planet: “realise that your litter is a bitter pinch to the earth”. Solli’s performance of ‘Australian Air’ has been viewed 3.5 million times online and is a highlight of the book. Its play on “air” and “heir” challenges us to act to save our country. Its refrain, “We breathe in, we breathe out” gives us space to physically breathe in and out and recognise the essential nature of air and breath: something we can’t survive without and we ignore at our peril. Other poems include ‘Media Literacy: Fake News’ and ‘Evolution’.
Solli has a list of upcoming appearances on his website. He is worth seeing as well as reading.
Slam and similar poetry are of particular appeal to young readers but Solli Raphael offers creative, intelligent, challenging ideas, all wrapped in hope, for everyone.
Markus Zusak exceeds expectations in his new novel Bridge of Clay. This is an epic Australian tale awash with masculinity: the masculinity of deep, beautiful men. It is a story full of heart, intelligence and sensitivity. Its men are mates, brothers and family, and they are men who love and cherish women. The Dunbar men are athletic, physical and even hard, yet tender and loyal. They are a “family of ramshackle tragedy”.
The structure is sophisticated. Matthew, the eldest of five Dunbar brothers, is typing the story of “one murderer, one mule and one boy”. Each chapter begins in typewritten font before settling into Goudy Old Style. The typewriter itself is part of the narrative and family heritage. The boy who Matthew writes about is the one “who took it all on his shoulder” – the fourth Dunbar boy, Clay.
Early on we know that the boys’ mother has died and their father has fled. We are forewarned about the long backstory about the mule, Achilles, only one of a number of past tales that enrich this book. These strands are elemental and seamless, and we are swept up in each.
We learn of the boys’ mother, Penelope – the Mistake Maker, the pianist, the teacher, the refugee from the Eastern Bloc. She grew up steeped in the ancient Greek classics of The Iliad and The Odyssey and shared them with Michael Dunbar and their children.
When she dies, the boys call their father “the Murderer”. After years away, he returns asking for help to build a bridge on his property. Clay, the quiet smiler, the runner, the boy who sits on the roof, the one who loves Carey and shares the book The Quarryman with her, is the son who goes.
Zusak draws the female characters with love, respect, admiration and affection, even old neighbour Mrs Chilman, a minor character. Carey is a ground-breaker, an independent, aspiring female jockey.
There is a strong sense of place: the racetrack, The Surrounds and house in Archer Street in the city; Featherton, the town where it all began; and the bridge itself, the overarching metaphor. The writing is uniquely Zusak: idiosyncratic (“cars were stubbed out rather than parked”, “The furniture all was roasted.”); humorous, enigmatic and prophetic.
Bridge of Clay is published by Picador, Pan Macmillan Australia. It is a contemporary classic.
The Brightsiders by Jen Wilde is a story about fame and misfortune, queer identities, and being true to yourself even when it’s terrifying. I really loved the author’s previous book, Queens of Geek, and how that one was an incredible geek-loving story featuring autistic and Australian characters from an Aussie author of our own! The Brightsiders definitely has a different feel, but if you like Wilde’s work, this is still very much worth picking up. (It is set firmly in America this time, although the gang from Queens of Geek make a cameo, which I thought was fun.)
The story follows Emmy King, a celebrity drummer with the rock band The Brightsiders. The trio of Alfie, Ryan and Emmy all exploded into fame virtually overnight and they’re still teens, trying to keep their heads above water and grapple with this intense fame, as well as make the music that they love. Emmy comes from a toxic partying family and unfortunately, when things get stressful, that’s where she slides back to. When the story opens, she’s been underage drinking and ends up in a minor accident, which the paparazzi and media gobble up like golden gossip: look at this teen celebrity falling apart. Emmy is determined to get her life back on track and she has an epic support network of friends…but she also has plenty of toxic people she needs to learn how to deal with. And as she starts slowly falling for one of her band mates (baaad idea) she has to ask if this is love or is she avoiding her own fears and anxieties?
Books centring around music, especially famous musicians, are always intensely interesting to me! It reminded me of Open Road Summer and I Was Born For This immediately, with teens making messy mistakes…but now in such public view that it has huge repercussions for their careers. Emmy is a very earnest character and you quickly feel for her as she feels smothered by the media, haunted by her awful parents, and just wants to please people and have them like her. People Pleasing does nooot go well for her in this one.
It also explores the difference between toxic vs healthy relationships and friendships. This is such a good topic to unpack, because every teen faces that horrible decision of not knowing whether to keep people in your life (you’re used to them, grown up with them, maybe even in love with them) or break away and take care of your own mental health. I loved Emmy’s gang, with her bandmates Ryan and Alfie being intensely supportive of her, and also her best friend Chloe, showing up to smack her back into reality. The book is totally friendship-centric. And very very queer! Almost every character identifies somewhere on the lgbtqia+ spectrum, and I couldn’t be happier. It’s really nice also reading books with a genderqueer love interest!
The Brightsiders is really a story about healing and friendship, with a forbidden and intense romance on the side. It feels more upper-YA with all the teens having finished school and now working their gigs without parents (or avoiding awful parents). And it’s so refreshing to read a book so unapologetically proud of its rainbow pride!
Magabala Books are going from strength to strength. They are perhaps most well-known at the moment for publishing Bruce Pascoe’s books for adults and children such as Dark Emu, Mrs Whitlam and Fog a Dox but Magabala has a strong backlist across age-groups and genres with great new books coming all the time.
Two new titles are standouts.
Black Cockatoo is of comparable quality to Bruce Pascoe’s writing for young people. It is written by Jaru and Kija man Carl Merrison and Hakea Hustler, illustrated by Dub Leffler (Once There Was a Boy and Sorry Day). It is a memorable story about an Aboriginal family living in the Kimberley.
Thirteen-year-old “Mia, her skin unblemished, radiated optimism and hope.” Mia loves her Country but sometimes wants more. Her grandmother tells her that she lives in both worlds. “You will be strong both ways.”
Although she is a spirited character she must show respect to her older brother. However, fifteen-year-old Jy’s anger ripples “under his scarred skin”. He disrespects the family’s past and is killing birds, including Mia’s totem, the dirrarn. Mia protects the injured bird for as long as she can.
Education is valued by the family and language, particularly used for bird names, is included.
Both Black Cockatoo and Blakwork are insightful, confronting literary works.
Blakwork by Gomeroi woman, poet Alison Whittaker (Lemons in the Chicken Wire) spans genres. It is poetry, memoir, critique, fiction and satire for adults or mature young adults.
‘a love like Dorothea’s’ is a reinterpretation of Dorothea Mackellar’s ‘I Love a Sunburnt Country’ and is positioned sidewards on the page. “I love a sunburnt country. That is mine but not for me.”
‘outskirts’ is a chilling tale about a woman who worked in an abattoir and ‘killwork’ is non-fiction set in the same place. ‘vote’ addresses refugees and intermarriage where “blakness” is “a code embedded in your bones – it didn’t bleed through you, it constituted you, so there was no letting out.”
‘tinker tailor’ is a satire about Blacktown in Western Sydney. There are different stories behind the naming of Blacktown such as, “‘They call this place Blacktown because it was given to two Aboriginal men.’ Seemed weird to me that the whole continent wasn’t Blacktown.” The National Centre for Indigenous Excellence in Redfern on Gadigal land is considered in ‘futures. excellence’: “For people so put out on the fringes, we blaks love the centre”. In ‘the last project’ a note on the Centre says “We’re coming back, daught. There’s work to do.”
‘bathe’ is set at Maroubra Baths. “This is a poem about not suffering.”
‘The History of Sexuality Volume III’ is a poem about desire: “two blak women [who] love each other”.
Language is used in ‘palimpsest’ and there is some superb writing in ‘rework’. “Pull over here, watch some spinning nightly fights reach across a highway’s ribs. At the Kamiloroi Highway’s spine two signs rise and speak and re-speak…”
Blakwork has just been shortlisted for the Victorian Premier’s Literary Awards. Both it and Black Cockatoo are strong, significant works.
Books are the gift that just keep on giving, aren’t they?! They’re worth so much more than the latest toy that lasts a whole five minutes. Here’s a small roundup of some great books for kids that make for beautiful gifts and can be shared over the festive season and well into the holidays.
This is the fourth time this superlative duo have come together, following the successes of The Underwater Fancy-Dress Parade, Captain Starfish and Under the Love Umbrella. Bell and Colpoys will be winning awards once again with this stunning picture book that is so intelligent in its own way. For all children wondering what their kind of smart is, this energetic rhyming guide reinforces a confidence that there is certainly more than one. Artistic endeavours, scientific explorations, using your imagination, skills in building, retaining important facts and showing compassion and empathy are but a few. Coordination and musical abilities, polite manners, ‘feeling scared but taking chances.’ The list is endless and these book creators have absolutely nailed it with their verve, humour, versatility and diversity. The language rolls off the tongue to perfection, whilst the neon colours draw your eye just the way an artist should. All the Ways to be Smart – adding much brightness to any child’s mind – in more ways than one.
What Do You Wish For? puts a smile on every face and a glow in every heart. It’s all the kinds of fuzzy warmth, peace and togetherness that Christmas time really represents. Godwin’s intention for this book is for readers to understand that this time of year is, and should be, one of gratitude. The combination of her inspiring, tender words and Anna Walker’s beautifully dreamy, intricate illustrations is simply divine. There is an excited buzz in the air every Christmas. Ruby and her friends always put on a special show in the park, and write a wish to hang on the tree. But Ruby’s wish is too big to write on a little piece of paper. Her wish is of spirit; it’s made of the smells of baking, candlelight amongst the dark, wonderful surprises and quality family time. But most of all, her Christmas wish is one of complete serenity, and a warm sparkle in the sky. What Do You Wish For? is the most magical treasure for any young reader and their family to cherish this Christmas.
I always love books that encourage exploration of the imagination. In this one, it’s the walls, floors and windows that get to discover what the bear child is conjuring up in his mind – much to his family’s dismay. The little bear speaks a lyrical tongue as to what his crayon and pen scribbles represent. A red Santa makes an appearance above the fireplace, a green frog on the toilet, a black witch inspired by broomsticks, a blue frothy sea and yellow splotchy bumbley bees. It’s amazing what each colour of the rainbow can be turned into, and where they happen to turn up! But somehow, this cheeky bear is able to win over the family with his colourful, magical, whimsical, wonderful charm. A beautifully alluring, absolutely sweet, vivacious and child-centred book in its words and pictures. It’s Not a Scribble to Me is ideal for children from age three as a facilitator of self-expression, creativity and boundless possibilities.
I absolutely adored this book when it was first released back in 2016. Now I (we all) get to relive the magic once again with this much anticipated 2nd edition recently re-published. Australia Illustrated is a visual festive celebration, the ultimate pictorial encyclopaedia of our beautiful land. Tania McCartney’s expansive array of detail and design, even if only a snippet, takes us on a wonderful journey around the country exploring major attractions to pockets of hidden gems we may have otherwise missed. My kids loved traveling around Australia; spotting familiarities, discovering new mysteries of the unknown, and giggling along at the cute and quirky nuances. Vivacious watercolours and a mix of media showcase the well-known to the unique. From the BIG and beautiful Queensland Mango and Big Banana in Coffs Harbour, the diverse native animals, bush tucker, sports, slang and weather, and a taste of idiosyncrasies from State to State. A gloriously scrumptious edition to pore over with the kids at home or away.
And another exquisite book from Tania McCartney that is a piece of art in itself is Mamie. Published by HarperCollins, November 2018. With her large, round gumnut eyes and angelic face, Mamie shares her story of adapting to change, fairies, pixies, elves and friendship. Celebrating the life of renowned and much-loved Australian icon – author and illustrator, May Gibbs of the Snugglepot and Cuddlepie fame, McCartney takes readers on a historical yet imaginative journey. She gently and expertly showcases the exceptional creativity, inspiration and achievements of Gibbs absolutely beautifully and with bunches of natural charm. Mamie is sure to win hearts abound, just as she has done over the past 100 years.
The attitude and tenacity of The Little Princess, mixed with a quintessentially unique dialect like Lola’s (Charlie and Lola), together bring about this charming new face in the bookish world, Princess Peony. Partner that with the perfectly scruffy tomboy/girl-looking character in grey tones with pops of hot pink and you’ve got yourself a popular new series for girls (and boys) in the junior reader market. Princess Peony, whose name the audience must be reminded of every now and then, begins her fairy tale in front of her house, erm, Castle, with her dog, no, Dragon: Totts. Her mission: to be Obeyed. But things take a wrong turn and her story is interrupted by Prince Morgan the Troll (aka her big brother). Attempts to outsmart each other lead to some pretty hilarious events and a new mission to avoid child-eating bears. The text and pictures work brilliantly together, providing plenty of visual literacy opportunities for readers to laugh about. And there is a remarkably True Princess Information and Quiz Sheet for all Princesses in Waiting to absolutely study and swear by. Just gorgeous! I will be buying The First Adventures of Princess Peony for my nearly-six-year-old and all her friends!
The Tales of Mr Walker is inspired by a real-life Labrador named Mr Walker who is a Guide Dog Ambassador and helper at the Park Hyatt Melbourne. This is an adorable book containing four enchanting stories about life working at the grandest hotel in town. Targeted at independent readers from age eight, we are delighted with the adventures this canine companion takes us on, viewed from the dog’s perspective. ‘Tracy must like parks as much as I do’. With his Guide Dog training behind him, Mr Walker is very well disciplined and loyal. But naturally, he has certain things on his mind, such as chasing balls, and food. Romp along on the fun adventures with Mr Walker. He doesn’t disappoint. Fluid and bright illustrations bounce in and around the text. The cover is appropriately high-end with its linen bound spine and gold trimmings. Royalties going to Guide Dogs Victoria is just another excuse to pick up this book as a gift for someone you love, and someone who loves dogs.
There’s deservedly a big buzz about this novel. It’s for middle readers – what age group is that?
We’re finding the novel appeals to anyone from 9 to 109! It’s found in book stores and libraries on the ’middle reader’ shelves as it is published for that age, but it is suitable for anyone in upper primary to tweens, young adults and adults alike.
How is this book different from your other works?
My other stories each feature a species of vulnerable Australian wildlife, and a young person trying to save them. Written for 7-10 year olds, they’re completely fictional adventure stories (although based on real animal issues), plot driven and very animal focussed. My new story is based very much on my own family’s experience of living with a young person with a disability and the story is more character focussed. The plot allows a unique insight into a moment in time in a family’s life, and as such probably appeals to a wider reading age and is a more emotional, heartfelt story.
Could you tell us about the major characters in Everything I’ve Never Said?
Everything I’ve Never Said is about a fictional character, Ava, an eleven year old with Rett syndrome who can’t talk or use her hands to communicate. Based on my own daughter, Charlotte, who suffers with Rett, the fictional Ava lives with her nearly fourteen year old sister, Nic, and her mum and dad, but struggles to tell them what she wants and how she’s feeling. Through Ava’s inner voice, the reader hears what she wants to say, even when her parents and sister don’t understand her. It takes the arrival of her new carer, Kieran for the family to work out a way to help her.
How do you show the authentic relationship between protagonist Ava and her older sister Nic?
My eldest daughter, Beth, helped me a lot with the relationship between the two girls. Despite having raised my daughters, and watching them grow up together, I found it hard to capture their relationship on paper. Like many siblings, it’s not all hugs and love – there’s rivalry and jealousy, but when the chips are down, true love is exposed. It was important to me to accurately show how Nic would respond to her sister in various situations. I didn’t want to show the typical eye-rolling teenager. For example, when I asked Beth what Nic would say when Mum wanted to put Ava in respite, she very quickly replied, ‘She’d say no, Ava would hate that!’ This wasn’t the reaction I expected. I thought Beth would think Nic would love time without her annoying sibling, rather than consider her sister’s feelings.
Why did you write the book as fiction rather than non-fiction?
Ava’s voice was very powerful when I began writing, but having never heard my own daughter speak, I could only imagine what was going on inside her head. Right from the start, I had to use poetic licence to interpret what was happening for Ava, which meant the book naturally became a work of fiction. Many of the raw, difficult experiences in the book are based on true events, for example, being placed on hold for hours with Centrelink, Ava having a melt down in the hospital, the embarrassment of Nic and the exhaustion of Mum, but there were some things about our life I wasn’t ready to share.
Ava starts at Rosie’s Cottage, a respite home. In your experience, how accessible and worthwhile is respite care for those with a disability?
We’ve always struggled with respite. Having a non-verbal child means they can’t tell you if everything is as it should be when they stay somewhere else overnight. Also, because our daughter is so physically fragile, the other clients were often not a good match. She’d be knocked over, or just left sitting on a couch all weekend. If it is a good service, respite can be very worthwhile as it gives the person a chance to make their own friends and have experiences they would never have with their own family. For example, we’ve never taken Charlotte to Dreamworld, but she’s been with respite. We currently don’t have a safe, enjoyable overnight respite place for our daughter, so we pay carers to care for her one-on-one in our home, so she feels safe and protected when we’re not there.
How helpful is art for young people like Ava?
Art can be incredibly soothing. In the story, Ava uses the colours in her paintings to reflect her mood, and I think my own daughter would do the same. But more than that, the art teachers and music teachers we’ve encountered with Charlotte seem to have a way of bringing out the best in their students. Perhaps it’s accessing that other side of the brain? I’ve got a feeling that would be the same for people with or without disabilities. Art and music are very therapeutic.
Many people don’t treat those with a disability well, e.g. substitute teacher Wendy. What is something you would like people to know about how to treat someone with Rett syndrome?
I often ask people to consider Stephen Hawking. Bent and twisted in his wheelchair, how would we ever know what he had to say if he couldn’t use a speech device? So, I try to tell people not to judge a book by its cover. People with Rett syndrome and any disability are just like us. They may not be able to communicate, they may look a bit different, but talking to them like any other person, smiling, and asking how their day is going will make them feel less isolated and more included as part of the community. Empathy is so important.
What parallel have you created between Ava’s life and what happens to her father?
In the story, Ava’s dad falls unexpectedly ill in Ava’s presence. This creates a situation where Ava feels her lack of communication more keenly than ever. She can’t help him, or even call for help. I see this in my own daughter when she tries so hard to say something; her eyes shine and her lips make the shape of a word, but no word comes out. It’s incredibly hard. Creating a situation where Dad can’t communicate for a while gives him a true understanding of what it’s like for Ava, and helps the family advocate more strongly to find a way to help her.
How has your family reacted to the story?
My husband was surprised at first, saying, ‘Is that what you really think is going on inside Charlotte’s head?’ He said the book has helped him understand her more and make more of an effort to try and understand her subtle ways of communicating. Both daughters, Charlotte and Beth, are very proud of the book, with Charlotte grinning all through the recent book launch, and any time I talk about it.
Your books have received recognition in many awards. Which has meant the most to you and why?
Recognition from your peers is so important. I’m incredibly proud and grateful for any award nomination as we have so many talented authors in Australia. I think, in particular, when my first book, Smooch & Rose, was voted into the Readings Top 5 and shortlisted for the Qld Literary Awards, it really helped me believe I should keep writing. More recently, winning the Environmental Award for Wombat Warriors was pretty fantastic!
What do you hope for Everything I’ve Never Said?
I hope my story will shed some light on people living without a voice. People who can’t speak up, whether they have a disability, or are shy or too scared to say what they think, need to know we do care about what they have to say. I also hoped people with Rett or other disabilities, families, siblings, carers, friends would feel less alone. We’re in this together, and while it might not be ‘Italy’, it’s a very special type of ‘Holland’ where, even with its ups and downs, we live lives full of unexpected treasures.
Thank you so much for giving us even more insight into Rett syndrome and living with disability, Samantha. It has been a privilege.
Thank you, thanks for the opportunity!
(Everything I’ve Never Said is published by University of Qld Press)
Before you race out to spend a fortune on the latest toy this Christmas, check out these crazy Christmas books. They are more fun than a box of crayons and can be enjoyed individually or with a loved one. How’s that for value. And there is enough Christmas spirit in each one to jingle your Christmas bells well into the new year! Enjoy the roundup.
Each year the good folk at Christmas Press present an entertaining seasonal anthology for kids. This year, A Miniature Christmas explores the, you guessed it, miniature worlds of all things tiny, from genies, mice, elves and fuchsia fairies to app characters! Several well-known authors and illustrators share their short stories alongside new names in the children’s literary world, each crafting tales that intrigue, entertain and make you ponder. For example:
The Funactor by Oliver Phommavanh is a clever observation of our 21st-century obsession with apps.
Goblin Christmas by Ian Irvine combines urban social issues with fantasy that has a touch of Harry Potter mystique about it.
George the Genie by Dianne Bates has all the form, plot and cheeky wisdom of a classic fairytale whilst Small Creatures by Rebecca Fung is just plain good fun.
The stories are short enough to share with your child each night on the countdown to Christmas, with special drawings to enhance the magic of each tale. This collection would make a jolly Christmas stocking addition for young primary aged readers.
For me, this is the best of the Macca instalments by far. Funny, fast paced and full of Christmas cheer coupled with a warming message about the true spirit of Christmas, this seasonal romp with Macca the Alpaca reminds us that the best Christmases need not cost anything but love, friendship and goodwill. A cheerful lesson for kids (that is not the slightest bit preachy or forced) and a timely reminder for us big kids to slow down and regain seasonal perspective. Aztec bright and brilliant!
Today we’re joined by the remarkable Teena Raffa-Mulligan, author of a number of children’s titles including picture books, junior fiction and middle grade novels, as well as romantic reads for adults. Always possessing a love of the imagination, magic, excitement and adventure, Teena has produced such engaging titles as Friends, True Blue Amigos and Mad Dad for Sale, amongst others, and her latest re-release edition of Who Dresses God? The latter is a gentle and touching story inspired by her daughter’s spiritual exploration of the practicalities of the higher being, that is, God. Years ago, as a young child, this divine little soul sought philosophical insights into how God can hear, see and speak, and how He transcends yet blends into everything, everywhere, without any physical connection. This is a tender and loving rhyming picture book that opens the gateways to enlightened discussion amongst families with preschoolers and beyond, and is particularly delightful to share around this holy time of year. And here’s Teena to share more with us…
Teena, you have had a long relationship with writing coming from a background in journalism. How did your path lead you to become a children’s author, and what do you love about the world of children’s books?
I knew from an early age that I wanted to be a writer. Books opened a door into the wonderful world of imagination for me and from the time I learnt to read my head was filled with story ideas of my own. The journalism came about by accident rather than intention. In high school when the vocational guidance officer suggested I become a journalist I dismissed the idea as I thought it would be far too boring to write news stories.
My ambition was to be a ‘real’ writer and I had dreams of living a Bohemian life in Paris and writing serious literary novels. However, a good looking surfer came onto the scene and instead I married and we bought a home and started a family. I’ve always loved books, so I read to our baby son from the time he was a few months old. That’s when I decided I wanted to write for children. I knew nothing about the publishing industry and it was long before computers and the Internet, so it was a learning journey. I received some lovely feedback about my ‘beautiful writing’ and ‘engaging characters’ but all my early manuscripts were rejected by multiple publishers.
That’s when I decided it would be easier to get freelance articles published than children’s books – and it was. Editors bought my stories, requested more and I soon found myself doing – and enjoying – the job I’d dismissed as ‘boring’ in my teens when local papers came on the scene. General reporting and feature writing evolved into sub-editing and editing and I learnt some invaluable skills that I was able to use in my creative writing.
I never lost my dream of becoming a published author, so continued to write, submit and learn everything I could about writing for children. In time the acceptances began to come in. I love the world of children’s books because imagination is unlimited and possibilities abound. It’s a world of magic, wonder, excitement and adventure and the kid in me revels in having the chance to explore it through writing and reading.
You’ve written a mix of articles, short stories, poetry, picture books, juvenile fiction and adult titles. Do you have a genre you feel most comfortable with? What do you find are the most common themes or influences in your writing?
I’m happiest writing for younger readers, and that can be a poem, short story, picture book or chapter book. I’m a bit of a butterfly so staying focused on a novel is a bit of a challenge for me. Many of my stories have themes of belonging, family and friendship, though I don’t set out with that in mind. Essentially, I look on the brighter side of life and my stories invariably have a lightness and optimism about them.
You have recently re-released your gentle and loving story, ‘Who Dresses God?’, originally published in 2012. What can you tell us about this book and what is your aim for readers sharing it with family members, particularly around this time of the year?
The book was inspired by my younger daughter, who asked me that question as a child after a conversation with my mum. We weren’t a religious family so the question came out of the blue for me. I answered it the best I could, we had an interesting discussion and I didn’t give the subject any further thought until a few days later when my writers’ brain clicked into gear. I didn’t consciously set out to write a picture book. It was one of those ‘gifts’ that turn up from time to time in a writing life; a story, poem or scene from a larger work that arrives without warning and the only effort on the author’s part is to commit the words to the page or screen.
I hope the story will start a discussion between children and their family members and encourage young people to think about the world we share and whether there is more to it than there appears to be.
What kinds of strategies, discussions or activities would you suggest for parents and educators to engage in following the reading of ‘Who Dresses God?’?
These two awareness exercises are simple for young children to do:
1. Close your eyes. What do you see? How does it feel? Cover your ears with your hands. What can you hear? How does that feel? Close your lips and cover your mouth. Try to speak. Does it work? How does it feel when you can’t use your mouth and tongue to speak?
2. Go outdoors to a nature area such as the park, bushland or seashore. Stand perfectly still and look around you. What do you see? Listen. What do you hear? Can you feel anything? Then go through the same process, only this time with closed eyes and blocked ears. How much of the world around you are you aware of when you do this? NB. This can also be done in a suburban shopping centre or city street; also while travelling in a car, bus or train.
Here’s one for older children:
Imagine you have the amazing power to create your own world and everything in it. How would it look and how would things work? Write a description or draw a picture of your world.
You and illustrator, Veronica Rooke, have not only collaborated on the development of this and several other books, but also conduct school presentations together. What has it been like working with her on these projects?
I met Veronica when I was working for a local newspaper and she was producing a weekly cartoon strip for the publication, so our friendship goes way back. Our paths used to cross from time to time and I knew she was a talented artist but our creative collaborations didn’t start until she moved into the street where I live about 12 years ago. I was looking for someone to illustrate the new edition of my stranger danger picture book and saw her jogging in the street so stopped to ask if she’d be interested. As it turned out, she’d recently made an employment change and the timing was right.
I was impressed with the way Veronica worked, because I had no idea how to brief an artist. I simply handed over the manuscript and said, “See what you come up with. I’d like it to be bright and colourful with cartoony characters.” She asked the right questions, produced wonderful illustrations, designed the book and organised it to be print ready for the printer.
I still take the same approach when I commission Veronica to create illustrations or book covers, though occasionally I will suggest a particular style or mood. I was thrilled when Serenity Press commissioned her to illustrate my picture book, Friends, and encouraged a collaborative approach, because we work so well together. I give her space to interpret my stories artistically and she is always willing to make changes if there’s something I feel isn’t right.
As for dual presentations, it’s great for a writer to have an artist in the room. We take turns to show how we work, interact with each other and the students, and while I’m talking, Veronica can add pictures to my words in the background. We’ve also put together a joint workshop presentation that gives young people the chance to make their own picture book.
Fun Question: If you could dress God, what would you choose for Him to wear?
Hmm. This one’s tricky! Because God isn’t like you and me, I’d dress Him in a rainbow, a symphony of birdsong and the gentle caress of a spring breeze.
What does Christmas time look like for you and your family? What are your favourite festive traditions?
We always have a family get together at our house in the evening for our children and their families. The meal is buffet style, with contributions from everyone: a selection of salads, sliced chicken and turkey, vegan and vegetarian options, trifle and fruit salad for dessert. Every year I make the chocolate snowballs and chocolate fudge my mother-in-law used to make, and the bean salad and nut meat pasties that my mum made at Christmas.
After the meal there’s gift giving, followed by a walk to the beach just over the hill and a cricket game in the cul-de-sac opposite our house. I love that our family can be together at this time.
For many years there was another tradition on Christmas Day, and that was a visit to the Italian family home in Fremantle. It began in my childhood and long after my grandmother died my bachelor uncle continued to hold open house there. My father’s side of the family would all turn up at various times, gather around the enormous table that filled the big kitchen and catch up on all the news. Sadly, after my uncle died eight years ago the house was sold and that tradition is no more. I miss it.
Anything else of excitement you’d like to add? News? Upcoming projects? TBR pile?
I have a new picture book in production and scheduled for release by Daisy Lane Publishing in mid-2019. When the Moon is a Smile is about the special times a small girl spends with her dad, who no longer lives with them. I’m thrilled to be working with publisher Jennifer Sharp, who spent a week exploring London with me last year after we both attended the Serenity Press writers’ retreat at Crom Castle in Ireland. I also can’t wait to see what illustrator Amy Calautti comes up with for the illustrations.
Thank you very much for your time, Teena! It’s been wonderful learning more about you! 🙂
It’s been a pleasure chatting with you. You asked some great questions and the dressing God one put me on the spot!
Visit Teena Raffa-Mulligan at her website, and on her blog tour for Who Dresses God? here.
Thanks for speaking with Boomerang Books, Michael. You have an incredible, and awarded, body of work for children and young adults.
I remember first reading The Running Man as a proof copy and knowing that this was an Australian classic; literally falling off my chair with laughter when I read Don’t Call Me Ishmael; and judging the Qld Literary Awards when Just a Dog won best children’s book.
Could you please tell us about these and some of your other books?
I often get asked at school visits which of my books is my favourite. Of course, a bit like choosing between your children, it’s probably an impossible question to answer. I’m happy to say that I love and am proud of everything I’ve written and each book has something that makes it special for me. I would never have the nerve to send them off to my publisher if that wasn’t the case.
The Running Man of course will always be special to me. It made me a published author, won the CBCA Book of the Year and changed my life in ways that I’d only ever dreamed about. It also says some things that are important to me – like how we often judge and label people and put them in a convenient box, without really knowing them or seeing the human being behind the label. I was writing it back in the early 2000s when the issue of refugees was very much in the news and they were being demonised by some. Sadly not much has changed.
Some people might think it strange, but of all the things I’ve written, I’m probably most proud of the Ishmael trilogy. I’d happily be judged as a writer just on the basis of those three books. I love the mix of comedy with more serious moments and the way the characters grow and develop and reveal different aspects of themselves as the series unfolds. I’m also pleased with how the series ends and that ultimately it’s all about the saving power of love and friendship. It was a sad day for me when I wrote the final scene and said goodbye to characters I loved. I have a special place in my heart for readers who take the time to follow the journey of Ishmael and his friends all the way from year nine through to graduation. Some of the loveliest emails I’ve received are about these books.
I loved writing Just a Dog. I enjoyed the challenge of trying to write a powerful story in the simple language of a young boy. It took quite a few drafts to get there but I was really pleased with the way it turned out. A number of Corey’s and Mr Mosely’s stories were based on childhood memories of dogs I grew up with. The response to this book has been overwhelmingly positive and beautiful but because of the serious and ‘more adult’ issues it also touches on, it’s had a bit of a polarising effect on readers. One lady said after reading the book that it was going ‘straight in the fire’! I remember when I submitted it, my publisher asked me who I thought the story was for. My answer was, ‘I’m not sure. Maybe it’s just for me.’
Like I said at the start, I could give reasons why each of my books is special to me – but don’t worry I won’t do that! However I have to mention what a joy it was to work on the Eric Vale and Secret Agent Derek ‘Danger’ Dale series with my beautiful (and genius!) son Joe. (AND you can check out Joe and wife Rita’s ARTSPEAR ENTERTAINMENT YouTube channel to see for yourself why this super-talented couple have 1 MILLION subscribers.)
Where are you based and what’s your background in children’s and YA literature?
I’m based in Brisbane. I’ve lived most of my life in the suburb of Ashgrove which was the setting for The Running Man. We now live in the bordering suburb. Look how far I’ve come!
I was a secondary school teacher of English and Economics for quite a few years and dreamed of being a writer. I had what amounted to a quadruple major in English Literature from Qld Uni but my awareness and depth of knowledge of children’s and YA literature was quite limited until I got a job at Marist Ashgrove (the school St Daniel’s is based on in the Ishmael books). The wonderful English co-ordinator there who interviewed me and who was ultimately responsible for me getting the job, said I needed to know more about what young people were reading. She handed me a stack of YA and middle grade novels to read over the Christmas holidays. It opened my eyes to a whole new world of stories.
How are you involved in this community at the moment?
I’m very fortunate living in Brisbane as we have a very vibrant, active and enthusiastic writing and illustrating community. It’s a large and supportive group and I’m often in contact with other local writers and illustrators through book launches and other literary functions and events. My involvement comes about mainly via such organisations as the Queensland Writers Centre, ASA, Book Links and the local branch of the CBCA. I’m also a member of the May Gibbs Children’s Literature Trust, where I’m on the selection committee for their Fellowships and Residencies.
Well, if you insist! The Things That Will Not Stand is a YA novel set over just nine or so hours at a University Open Day for senior school students. It is told in the first person, present tense, by a year eleven boy called Sebastian, who is attending the day with his best friend (and perhaps mentor) Tolly. Sebastian is a bit of a lost soul as well as a romantic, and when he has a brief encounter with the ‘perfect girl’ he can’t help himself hoping and dreaming that they might make a connection and his day will pan out like some feel-good, rom-com movie. Instead, he meets Frida – the ‘wrong’ girl – and his and Tolly’s day takes a very different, and much more unpredictable, turn.
It’s a story about two teenagers who are both hurting and damaged in their own way. It’s about the stories they tell, the secrets they keep and the courage and faith it takes to share their real selves. The novel is a mix of comedy and drama because as Sebastian says about life, ‘It’s never just one type of thing … It’s all over the place. One minute it’s tears. Next minute it’s laughter. Then, just when you think you’re headed for a happy ending, the monsters turn up.’ I hope readers enjoy spending the day with Frida and Sebastian and Tolly. I certainly did.
How important is an opening scene and how did you write it here?
An opening scene is crucial. First impressions count, as they say. I think a good opening scene feels like the curtains are suddenly drawn back and you find yourself as the reader in the centre of someone else’s world. A world that hopefully draws you in and hangs on. TTTWNS opens with Sebastian standing in a cinema foyer staring at a set of big sliding doors, hoping and praying that soon they will glide open and the girl of his dreams will walk through. I chose to start here because it’s a dramatic and pivotal moment that could go either way. It also a scene that reveals a lot about Sebastian’s character and personality.
How does Sebastian represent a “Very Ordinary Guy”?
This is Sebastian’s description of himself and it reflects the doubts and lack of confidence a lot of young people – both male and female – have about themselves, especially when they compare themselves to others around them and (unfairly) to larger than life celebrities. In that way he is an ordinary teenager because like most teenagers, he doesn’t see or appreciate the extraordinary and admirable qualities he actually does have. But I’d like to think that readers will see them.
Frida has a sharp wit. How did you form her dialogue?
I enjoy writing dialogue and I loved creating the exchanges between Frida and the boys. I can’t explain the process of writing the dialogue or where the ideas come from. I think knowing the character well and seeing them as real people helps. Because of Frida’s connection to Frida Kahlo I imagined her as someone who was creative, fiery, intelligent and strong-willed but also with a sense of fun and humour and compassion. I tried to channel that. Writing for me is often like picturing a scene in my head and watching it like film and then trying to capture in words what I see and hear.
I think everyone loves humour but it’s so difficult to write. It’s something you do well! How do you pull it off?
I often get asked how I come up with the jokes and humour in books like the Ishmael series and the Eric Vale series. I can never answer those questions. I sometimes do workshops on writing humour and talk about how the key to all humour is ‘surprise’ or the ‘unexpected’ and how you can apply this to creating surprising and unexpected characters, situations, storylines and language use. But I must admit that I don’t have a conscious process I go through or a formula in my mind when I’m writing comedy. I just try to think of things that I find funny. Pathetic explanation, I know! I was never extroverted or a ‘class clown’ at school, but I could always make my friends laugh. I think it helps that I’ve loved comedy and have devoured funny movies, TV shows, cartoons and books ever since I was a little kid. One of the strengths I think I have as a creator of stories is that I often see connections and links between things. Perhaps being able to see surprising and unexpected connections between words and ideas and situations helps with producing humour and witty dialogue.
What is the significance of the movie Casablanca and other movies in the novel?
Like The Big Lebowski, Casablanca is one of my favourite films. Best dialogue ever. It’s significant in the novel because as a love story it stands in contrast and challenges Sebastian’s happy ending rom-com fantasies. The final scene of Casablanca shows that love is not a selfish thing, that sometimes it involves pain and sacrifice. After watching the film together, Frida comments jokingly that Sebastian is nothing like Bogart’s character Rick in the film. I like to think that by the end of the novel she might not be so sure.
How are Sebastian and his mate Tolly actually not Ordinary Guys, but superheroes?
Aren’t most superheroes ordinary people most of the time until those crucial moments when they are called on to reveal their alter-ego? Sebastian and Tolly don’t have superhero costumes but they do have those moments when they reveal who they are through their words or actions – such as when Tolly takes on Frida’s tormentor in the lecture theatre. But they’re not your classic superheroes. If they do possess any ‘superpowers’ it’s just their essential decency and empathy.
The Things That Will Not Stand is an engaging read that, at first, conceals scars and depths in the characters’ lives. How do you unpeel these layers?
Every time you have a character in a scene they are revealing something of themselves – how they act, their appearance and mannerisms, the words and images they choose to use, how they react to other characters, other situations and ideas, their thoughts and feelings and attitudes – all of these things and more help readers build up an understanding and appreciation of a character. Even if the character is trying to hide or disguise who they are, their real nature can be shown to seep through.
Sometimes in TTTWNS hidden layers are exposed when cracks and inconsistencies appear in a character’s story. More importantly, layers are peeled back when trust grows between the characters – when they feel brave enough to place some of their secret pain and hurt in someone else’s hands. The various events of the day provide the opportunities for the trust and connection between Sebastian and Frida to grow and strengthen.
What is the significance of the title?
The title is a line from the movie The Big Lebowski – a big favourite of mine. The main character in the film, The Dude (Jeff Bridges) says at one point, ‘This aggression must not stand, man.’ In the book the statement ‘It will not stand’ is used by Sebastian and Tolly as a declaration of intent, a call to action against some perceived wrong or injustice or any unacceptable situation. A bit like how recently all those amazing school kids around Australia saw the lack of commitment by our country’s leaders in dealing with Climate Change and took to the streets. To my mind, that was a big ‘It will not stand’ moment. I could well imagine Sebastian and Frida being there, with Tolly leading the way.
What are you writing next?
There might be a sequel to Rodney Loses It. I hope so anyway. Winning the CBCA award this year as well as the Speech Pathology of Australia award and sharing that success with the amazing Chrissie Krebs has been such a great thrill. I’m pottering around with some ideas and verses at the moment, but I won’t submit anything to my publishers unless I think it’s up to the standard of the first book.
The main thing I will be writing next year is a serious YA novel (my first completely serious book since The Running Man). I was very fortunate recently to receive a Queensland Writers Fellowship to support this project. The working title of the book is Gaps and Silences. Like The Running Man, it will be set in Ashgrove, but further in the past. There might also be some slight connections between the two stories. Haven’t quite worked that out yet. There are still a lot of pieces of the jigsaw puzzle to find and assemble before I get a clear idea of the full picture.
Anything else you’d like to mention?
I have a blog/webpage at michaelgerardbauer.com and I’m on Facebook at Michael Gerard Bauer Author, Twitter @m_g_bauer and Instagram at mgbauerpics.
Thank you, Michael, for your generous and insightful responses.
Michael writes across age-groups – so seek out his works for Christmas gifts. I highly recommend The Things That Will Not Stand for teen readers.
It was a great privilege to attend the Prime Minister’s Literary Awards in Canberra yesterday. I was on the judging panel of the Children’s and Young Adult categories and we were thrilled with both our shortlisted and winning books.
It was wonderful to see the value that Prime Minister Scott Morrison placed on Australian literature in his speech, citing David Malouf’s Johnno, for instance, and the importance of children’s books.
All of our Children’s shortlisted authors and illustrators attended as well as a number of our YA authors. It was such a treat to speak with Lisa Shanahan and Binny Talib, creators of the highly engaging and layered Hark, It’s Me, Ruby Lee! (Hachette); and Sarah Brennan and the legendary Jane Tanner (Drac and the Gremlin, The Fisherman and the Theefyspray, Isabella’s Bedroom and There’s a Sea in My Bedroom) – creators of Storm Whale (Allen & Unwin); and the winners of this category – some of children’s lit loveliest and most talented people – Glenda Millard and Stephen Michael King for the stunning Pea Pod Lullaby (Allen & Unwin). This is a lyrical directive to everyone to care for refugees and anyone needing help.
Scholastic Australia was very well represented, with a table full of shortlisted authors and illustrators hosted by publisher Clare Halifax. Beautiful picture book Feathers was written by the ever-smiling Phil Cummings (Ride, Ricardo, Ride!, Bridie’s Boots, Boy, Newspaper Hats) and illustrated by Phil Lesnie (Once a Shepherd).
Rising star Tamsin Janu was again awarded for her Figgy series set in Ghana. This time for Figgy Takes the City. Her novel Blossom, about a girl who looks after an alien, was also entered and she has another original work due to be published next year.
In the YA category, Bruce Whatley’s extraordinary graphic novel, Ruben, was shortlisted. Bruce was accompanied by his exuberant wife, Rosie Smith (My Mum’s the Best).
And Scholastic published the winning YA work: the delightful Richard Yaxley’s originally constructed Holocaust novel, This is My Song.
Authors don’t know in advance if they have won so it was an emotional time for all as the winning books were announced.
I also loved catching up with some of the poets, such as eminent writer Judith Beveridge; genre-crossing Adam Aitken, shortlisted for Archipelago (Vagabond Press); and Brian Castro who won with Blindness and Rage: A Phantasmagoria (Giramondo) and appropriately read a poem-speech. His prose work, The Bath Fugues, is a personal favourite.
Gerald Murnane, winner of the fiction category for Border Districts (another winner for Giramondo) is known as a recluse. He tried hard to get to Canberra but just couldn’t manage the distance. It is great to see his work recognised further with this prestigious award.
The ceremony was a very special and memorable event. Sincere thanks to the awards committee.
The First Adventures of Princess Peony is sub-titled ‘In which she could meet a bear. But doesn’t. But she still could.’ This adds intrigue to the tale because bears don’t feature at all as the story is set up. Instead we get to know the ‘dear little girl called Peony’ but it is Peony who is telling us that she is a ‘dear little’ girl. She is actually very bossy and one of her favourite things is ‘being obeyed!’. Peony is an unreliable narrator, full of personality, who addresses the reader at times.
Lucinda Gifford’s lively black, grey, white and pink illustrations tell another side to the story as well. Princess Peony tells us that she lives in a castle with her dragon Totts but the pictures show something else. She says that princesses ‘never lose their temper when things go wrong’ but the pictures show her looking far from serene.
She has trouble with Prince Morgan the Troll who is always interrupting, pats the Dragon under its wings and is building a bear trap. This is the catalyst for Princess Peony’s possible encounter with a bear. The illustrations again add to the humour with expressive eyes and partly hidden bears peering from the hedge. A chook also has a lively cameo.
This is a book to read multiple times. It is so engaging children will want to rush through it the first time but it is also a book to savour. The plotting, characterisation and humour are superb. It is a wonderful place for young readers to share and develop imagination and revel in pretend-play and role-play alongside Princess Peony. The First Adventures of Princess Peony is fun and exciting and has a most satisfying story arc. It is a triumph.
There is no denying it – the countdown is on. We’ve got you covered for Christmas, though. Discover the fantastic array of kids’ stories between these covers. Every week until Christmas, I’ll be listing a selection of new releases and top-rate reads for children from pre-schoolers to new young adults. Here’s a swag of super entertaining picture books just right for sharing this summer.
I detest the colour pink and princesses who like to adorn themselves in it. However, I LOVED this supremely funny tale about a little girl named Peony who lives in a castle with her dragon, pink bits and all. Beautifully told from Peony’s unabashed point of view and illustrated with striking tri-coloured drawings, this is a joyful read about giant imaginations, dogs, family and princesses, of course. Highly recommended for pre-schoolers, early primary schoolers, emergent readers and those of us struggling to accept the little princess within. Watch out for following titles in this illustrated series.
Thanks for speaking with Boomerang Books Blog, Nikki.
Where are you based and how are you involved in the YA literary community?
Thanks so much for reading Liberty, Joy, and for these wonderful questions … here goes.
I am based in Terrigal, just north of Sydney. I love being a part of the YA writing community and have made some dear friends. Currently a writer friend and I are putting together a Central Coast writers’ group to offer each other support and encouragement. It’s easy to feel isolated as a writer so community is important.
I was swept away by your new novel Liberty (University of Queensland Press). What else have you written?
I’ve written two memoirs, and Liberty is the second of three books in a loose trilogy called The Systir Saga. Hexenhaus was first released in 2016 and Liberty will be followed by Saga in 2019. The books can be read in any order or as stand-alone books but work well as companions or ‘sister’ books.
How did you select the three story strands and protagonists in Liberty? Could you give an outline of each?
I chose the historical characters of Betsy Gray and Jeanne Laisne after extensive searching for girls who could fit my agenda. They were chosen because they were strong and courageous, standing up and out in times of conflict and raising their voices for themselves and the women who came after them. History has overlooked women’s stories of valour in favour of ‘hero’ tales and I wanted to lift these girls’ stories from the footnotes and shine a spotlight on them. In Liberty, Frenchwoman Jeanne Laisne leads an army of women against a hostile invading force in the late 1400s; Irish Betsy Gray rides beside her brother and sweetheart in a rebellion against the English; and Fiona McKechnie marches for peace and freedom in the anti-war movement in the late sixties in Brisbane.
Which are based on historical figures?
Betsy and Jeanne were real historical characters while Fiona is a fictional composite of some of the strong women in my own life (grandmothers/mother/aunts).
How have you used romance in the stories?
There is romance in each of the stories but I made sure that none of my girls were defined by the men in their lives. Each broke with the traditions of their time. Arranged marriage was the norm in France at that time but Jeanne wanted to marry for love. Betsy was in no hurry to settle into butter-churning and domestic servitude and Fiona wanted to ‘be’ a lawyer as opposed to her father’s hope that she might ‘marry’ a lawyer.
How do you show female powerlessness and oppression in these tales?
During each of the eras, women faced significant powerlessness and oppression. Women were largely seen as property or ‘helpmates’ to their fathers and husbands. My three characters feel suffocated by this and seek to break those bonds and assert themselves as individuals.
How do you highlight the power and agency of women in the novel?
Power and agency were not on offer to my three girls; they had to wrest it for themselves with great strength and determination. Each took the harder path and refused to let society dictate who they were and what they were capable of. They had to break some old rules to make way for new ones.
Female bloodlines are shown, even leading back to Jeanne d’Arc/Joan of Arc. Why have you included these?
The female bloodline, written in the mysterious Systir Saga book, a matrilineal family tree that spanned many centuries, is the life-force of Liberty and the other books in the trilogy. While there are actual historical female figures in this book, including Joan of Arc, it really is symbolic of the global sisterhood – a force that runs beneath the surface of movements such as #metoo. We have the liberties we do today because of women like Betsy and Jeanne and Fiona who raised their voices, which allowed those that came after to raise theirs.
All three protagonists have missing mothers and none want to disappoint or dishonour their fathers. Why these missing mothers?
My characters have no mothers in their lives. This is interesting because I think being raised by fathers shaped the girls, to a certain extent, but all felt compelled to make their late mothers proud of them and they sought to right the wrongs that had taken their mothers from them too early. In Fiona’s case, she wanted to attend university because that had not been an option for her own mother.
How does Jeanne query the predestination of fate?
Jeanne does question the concept of predestination. Her fate was to be ‘sold off’ and married to a cruel man she did not love because the Captain of her town made decisions about her life and not even her father could prevent that. Jeanne, as a poor peasant girl, felt miserable about not being able to make her own life choices and so she seized her moments when they presented themselves and managed to change her destiny. This is true for all of us. No matter how trapped we feel, we always have choices.
Betsy’s heroine was Mary Ann McCracken. Who is yours?
I love that you ask who my heroine is. I have so many. I actually do a daily visualisation and have an imaginary council of strong women that includes Michelle Obama, Mary Shelley, Emily Brontë, Queen Elizabeth the First, Madonna, Oprah Winfrey and Malala Yousafzai. I know that sounds a bit wacky but it works for me.
Your novel title is ‘Liberty’. What liberty do you hope for?
I have a great desire for true liberty for women in our world. This would mean that women felt safe to walk at night; safe in their workplaces, schools and in their homes. Equal pay would be a reality and women would sit in equal measure in boardrooms, governments and in every walk of life. Women would be valued for themselves and respected for the great human beings they are.
What are you writing next?
I have just finished writing Saga, the third book where I introduce three new heroines. Hexenhaus has three women accused of witchcraft, Liberty has three warrior women and in Saga I have three young women who change the world through words.
Thanks very much Nikki and I greatly look forward to reading Saga.
Liz Anelli and Sheryl Gwyther will be sharing their knowledge and experience of writing and illustrating with aspiring children’s book creators and other interested people in an event organised by CBCA(NSW) this week, the Aspiring Writers Mentorship Program.
Liz Anelli has been achieving recognition for her distinctive illustrations. Her picture books include Desert Lake, written in Pamela Freeman’s assured text. Published as part of the Walker Books ‘Nature Storybook’ series, it uses texture, pattern and colour to show how Kati Thanda-Lake Eyre changes when the floods arrive. It has been shortlisted for several awards, including the NSW Premier’s Literary Awards and the Educational Publishing Awards.
Ten Pound Pom, written by Carole Wilkinson (Black Dog Books, Walker Books), is another well-designed book from the illustrated ‘Our Stories’ series. It is a Britain-to-Australia immigration story. Liz Anelli has created authentic detail, even using fabrics from her family to fashion the clothing.
Maddie’s First Day, written by Penny Matthew and also published by Walker Books, looks at the evergreen subject of a child’s first day at school.
Grace and Katie, written by Suzanne Merritt (EK Books), is a wonderful vehicle for Liz’s skills as she shows the differences between these twin sisters. One is creative and messy. The other is ordered and tidy. Their map-making is a triumph.
Sheryl Gwyther is an incredible support to the Australian children’s literature world. I know Sheryl from my years living in Brisbane and she is a wonderful advocate of children’s book organisations and those who are part of them. She is an active member of SCBWI (Society of Children’s Book Writers & Illustrators), an excellent organisation for children’s book creators.
Her books include the engrossing Secrets of Eromanga (Lothian Books, Hachette) about Australian dinosaur fossils and Sweet Adversity (HarperCollins), a historical novel set in Australia during the Great Depression with Shakespearian and theatrical touches.
Both Liz Anelli and Sheryl Gwyther will be speaking tomorrow night (Thursday 8th November) at an event for aspiring writers at HarperCollins in Elizabeth St, Sydney. Full details and booking information are in the flyer below or follow the link.
HarperCollins keeps our Australian children’s book heritage alive by continuing to publish and promote the works of May Gibbs and Norman Lindsay.
They recently published Emily Rodda’s The House at Hooper’s Bend, a brilliant book, which has been shortlisted for several awards including the 2018 CBCA Children’s Book Awards and the Qld Literary Awards. It is followed by His Name Was Walter, which I look forward to reading.
My other recent favourite from HarperCollins is Jackie French’s Just a Girl. It’s one of the best books I’ve read this year.
Belinda Murrell is a much-loved author of children’s series fiction and time-slip and historical novels. Her series include ‘Pippa’s Island’, ‘Lulu Bell’, ‘The Sun Sword’ trilogy and ‘The Timeslip’ series. Her books are warm and rich with appealing characters and captivating storylines.
Thank you for speaking with Boomerang Blog, Belinda.
What is your background and where are you based?
I grew up on the North Shore of Sydney in a rambling old house full of books and animals. I studied media, writing and literature at Macquarie University and worked for many years as a travel journalist and corporate writer, before becoming a children’s author. Now I live with my family in a gorgeous old house, filled with books, overlooking the sea in Manly.
What led to your writing books for children?
About 14 years ago I started writing stories for my own three children, Nick, Emily and Lachlan. Some of these stories became The Sun Sword Trilogy, a fantasy adventure series which was published about 12 years ago. I’ve been writing for children of all ages ever since.
What else do you enjoy doing?
My favourite things to do include walking my dog along the beach, riding my horse at my brother’s farm, skiing, reading books and travelling the world having lots of adventures with my family.
What themes or issues appear across some of your books?
Finding your courage, being brave and kind, standing up for what you believe in, accepting people’s differences and the importance of family and friends are all themes which I explore in my books. Another issue which is very important to me is creating strong, inspirational female protagonists which girls can relate to. When my daughter was younger, I was disheartened by the number of children’s books which always had boys as the heroes. “You cannot be what you cannot see” and so I strive to create lots of different, interesting and aspirational female characters.
Could you tell us about your books, particularly your latest series, ‘Pippa’s Island’?
The Lulu Bell series is about Lulu and all her animal adventures, living in a vet hospital, inspired by my own childhood as the daughter of a vet. The 13 books have been hugely popular with younger readers, aged about 6 to 8.
My new series, Pippa’s Island is for readers about 8 to 10 years old, and includes five books about friendship, families and seaside adventures. Pippa and her family move halfway across the world to start a new life on gorgeous Kira Island, where Pippa’s mum has the crazy idea of buying a rundown old boatshed and turning it into a bookshop café. Pippa makes friends with Charlie, Cici and Meg, and they form a secret club, called The Sassy Sisters, which meets after school in the tower above the boatshed. Their motto is “Be Brave. Be Bold. And be full of happy spirit.”
For older readers, aged about 10 to 14, I have written a series of seven historical and time slip novels, such as The Ivory Rose, The Lost Sapphire and The Forgotten Pearl – each with a modern-day story woven together with a long-forgotten mystery or family secret from the past.
What is Pippa dealing with in the new ‘Pippa’s Island’?
Pippa’s noisy family have been living crammed into a tiny caravan in her grandparents’ back garden and money has been super tight. Pippa can’t wait to move into their new apartment right above the Beach Shack café but the builders are taking forever! Pippa’s feeling frustrated until she comes up with a genius plan to make some pocket money: Pippa’s Perfect Pooch Pampering. With a lot of help from her best friends, Pippa starts her own dog walking business. Soon she has her hands full with adorable but pesky pups. What could possibly go wrong? Puppy Pandemonium!!
Which character has most surprised you in ‘Pippa’s Island’?
I absolutely fell in love with my main characters – Pippa and her best friends Charlie, Cici and Meg, who are all so different yet so caring of each other. Yet I also enjoyed discovering how some of my other characters developed over the series. Pippa’s arch rival is Olivia, who is good at everything, whether it’s winning the class academic prize, gymnastics or dancing. Olivia is popular and a natural leader but can be very competitive. At first, Olivia and Pippa seem like they will be friends, but when Pippa tops the class in a maths quiz, Olivia feels threatened and tries to exclude her. Over the course of the series, the girls have a prickly relationship but gradually they work out their differences and learn to appreciate each other.
The ‘Pippa’s Island’ books are full of delicious cupcakes. What is your favourite flavour?
Initially my favourite was Cici’s lemon cupcakes in book one, but then at a Pippa’s Island book launch, a gorgeous librarian baked dozens of divine strawberry cream cupcakes. They were heavenly, and of course starred in book 2!!
Who are your core readers? Where do you have the opportunity to meet them, and have any of their responses been particularly memorable?
My core readers are girls aged between 6 and 14 and it’s been lovely to see readers growing up reading my books, starting with Lulu Bell, then Pippa’s Island and then the time slip books. One of the greatest joys of being a children’s author is meeting kids who love my books at schools, libraries, bookshops and festivals. They get so excited! Every year I spend about four months visiting schools and book events all over the country. This year I’ll visit schools in Adelaide, Melbourne, Brisbane, Tasmania, all over Sydney as well as many regional areas.
The most memorable and humbling experiences have been hearing from readers, particularly of my time slip novels, who feel that my books have changed their lives. A year 12 student wrote a heartfelt letter to thank me for writing her favourite book, The Ivory Rose, that “she’d held so dear for so long”, that helped her decide what she wanted to do with her life. Another 18-year-old girl wrote to say that the adult she’d become and the values she treasured were inspired by my books that she’d read over and over. These letters are so beautiful and make me cry.
What is the value of series fiction, particularly in comparison with stand-alone works?
Kids love reading a whole series of books because they have the chance to really get to know the characters and see how those characters develop and change. It is also comforting for younger readers to know what to expect in a book – that they will love the setting, the style and the writing, so it’s much easier for them to become lost in the story. A series that you love can be completely addictive, whereas a stand-alone book, even a brilliant one is over too soon!
What have you enjoyed reading recently?
For my book club, I’ve just finished reading Eleanor Oliphant is Completely Fine by Gail Honeyman, which I’ve been wanting to read for ages. It was fantastic – so funny, witty and moving. I absolutely loved The Peacock Summer by Hannah Richell, set in a crumbling old English mansion, about a modern-day character called Maggie, trying to discover the secrets of her grandmother’s mysterious past. Another historical book which I loved was The Juliet Code by Christine Wells, about Juliet Barnard, a British spy parachuted into France during World War 2, to help the French Resistance in occupied Paris and her turmoil in dealing with these experiences when the war is over.
What are you writing about now or next?
I have two completely different and new projects that I’m very excited about. The first is a middle-grade children’s fantasy novel set in a world inspired by Renaissance Italy. I’m in the early stages of writing the story and am heading to Tuscany in the New Year to explore tiny fortified hill towns, medieval towers and secret tunnels. The other project is very special – a book which I’m writing with my sister Kate Forsyth, to be published by the National Library of Australia. It is a biblio-memoir about growing up in a family of writers and the life of our great-great-great-great grandmother Charlotte Atkinson, who wrote the very first children’s book, published in Australia in 1841.
Thanks Belinda, and all the very best with your books.
Children adore funny stories so thanks to the publishers who are commissioning them and authors who are writing them.
Penguin Random House Australia has recently published the brilliant Oliver Phommavanh’s new novel Natural Born Leader Loser; Mr Bambuckle’s Remarkables Fight Back by Tim Harris, where the exploits of Mr Bambuckle and his class continue; and Total Quack Up!, an appealing anthology edited by Sally Rippin and Adrian Beck.
Pan Macmillan Australia has extended its popular comedy series with Laugh Your Head Off 4 Ever, illustrated by Andrea Innocent. Highlights here include Felice Arena’s ‘Dad Dancing’ about Hamish’s dad who dances cringeably at the end-of-year formal. Bully, Craig Dickson, films it on his phone until the music changes … Penny Tangey’s ‘Use Your Words’ is about the power of words and could also be used in schools to illustrate this in a fun way. James Roy’s ‘Evil Genius’ is a clever comeuppance featuring jelly snakes. Lisa Shanahan has an alien tale in ‘Harriet’s Spacey Friend’. And Andy Griffiths’ ‘Runaway Pram’ has been published previously but is a superb slapstick piece. The bright yellow cover with contrasting pink makes this book stand out.
Another anthology is Total Quack Up! It’s edited by Sally Rippin, much-loved writer of ‘Polly and Buster’, ‘Billie B Brown’, ‘Hey Jack!’, the award-winning picture book The Rainbirds (with David Metzenthen) and stunning middle-grade novel Angel Creek; and Adrian Beck, author of the ‘Champion Charlies’ and ‘Kick it to Nick’ series. It’s illustrated by James Foley of My Dead Bunny fame. Deborah Abela uses the hills hoist to dramatic effect in ‘How to be a Superhero’. Tristan Bancks has a funny take on a football game in ‘The Pigs’. Jacqueline Harvey will scare anyone off pet sitting in ‘Pet Sit Pandemonium: Operation Snowball’. Using a clever play-on-words Sally Rippin shows what could happen to disobedient children in ‘Do Not Open’. The hilarious R.A. Spratt has another funny Nanny Piggins story in ‘Pigerella’. And Matt Stanton has a selfie-inspired cautionary tale in ‘What Hippopotamuses and Sharks Have in Common’. The only story published previously is Paul Jennings’ ‘A Mouthful’. It’s a very funny Dad tale.
Tim Harris’s ‘Mr Bambuckle’ stories (illustrated by James Hart) are incredibly popular. In Mr Bambuckle’s Remarkables Fight Back we meet his class of 15 students again. They get the better of horrible teachers and Scarlett has an original plan to get rid of the dire Miss Frost. Mr Bambuckle inspires creative ideas, such as asking students to think of “a ridiculous use for a cake” and “an imaginative way to enter the classroom”. As a bonus, books with illustrations are championed as a way of managing the terrible behaviour of a kindergarten buddy. It’s followed by Mr Bambuckle’s Remarkables Go Wild.
Raymond in Oliver Phommavanh’s Natural Born Leader Loser is a memorable character whom children will relate to and cheer on. He is in Year 6 at apathetic Barryjong Primary. Bullies run rife. New principal Mr Humble, who looks like a retired wrestler, wants to change the culture and selects four prefects: energetic soccer star Zain; forthright, hijab-wearing Randa; artistic Ally; and Raymond, who believes he’s a nobody. He doesn’t want to be in the spotlight but he does want to make the school better. As he challenges, and dares, himself he starts to make more difference than he could have imagined. The process is agonising at times but also full of fun, wildly creative ideas, jokes and wonderful emerging and changing friendships. I would love to see all children in primary school, including quiet achievers like Raymond, read this book. It could change negative cultures and transform the timid into confident leaders without spoiling their natural personalities.
Author-poet Lorraine Marwood won the Prime Minister’s Literary Award for Children’s Fiction in 2010 for Star Jumps. Her new verse novel Leave Taking (University of Qld Press) is just as good. Both are set on a farm and are for primary-aged readers.
Leave Taking refers to both the title and Toby’s experiences as he and his parents pack up their dairy farm and the belongings of Toby’s younger sister, Leah, who recently died from cancer. Of course, such weighty themes are sobering but grief is recognised and faced through the natural rhythms of Australian rural life, Toby’s steps around the property and loving memories of Leah’s tangible and intangible footprints.
The map of the farm on the front endpaper has changed by the end of the book as Toby revisits and labels special places: the machinery shed where both children scratched their initials in the concrete; the old red truck where Leah wrote pretend bus tickets during their last game there; and Memorial Hill where they buried pets and other animals and birds.
Toby camps at significant places on the property but is always close enough to the farmhouse to help with the cows or have a quick check in with his mother. He is also comforted by the company of his dog Trigger.
Leah was a gentle girl who loved stories and taking photos, shared jobs, delighted in April Fools’ jokes and left so many drawings that some will be taken to the new farm and the rest placed in the heart of the bonfire – which would have made her happy.
The writing is often sensory and poetic, beginning with a contrast between the light of the “faint silver of dawn” and the dark shadows outside Toby’s tent. The author sketches the natural world of magpies and native trees and gumnuts with evocative strokes. She uses figurative language to describe the huge milk vat purring “like a big-stomached cat” and personifies the bonfire as a dragon.
There is a supportive, although laid-back, sense of community and hope of new life with the imminent birth of a new baby as Toby maps his goodbye to his home and much-loved sister.
The cover illustrations and line drawings are by Peter Carnavas, who has just won the Griffith University Children’s Book Award in the Queensland Literary Awards. After creating a number of thoughtful picture books, Peter illustrated his first novel, The Elephant, a brilliantly executed study of a family’s grief and path to healing. I will always remember this outstanding novel when I see jacaranda trees in flower.
I remember when I was a pre-schooler, the day our World Book Encyclopedia and Childcraft How and Why Library sets arrived. They lived in their own custom-built bookshelf and went with us whenever we moved house. I was contemplating selling them this year to free up space or failing that, surrendering them to the compost heap. Now, after spending time with Lenny and Davey, I’m not so sure. Like their Burrell’s Build-It-At-Home Encyclopedia, each lettered volume holds countless childhood memories anchored in place by facts and figures now hopelessly out of date but somehow still completely valid. How does one discard their former life – a childhood of countless special moments and first-time discoveries – so decidedly?
Moreover, how does one describe Lenny’s story? Wrenching (you will need tissues – preferably 3 ply), soaring (pack your wings), absorbing (allow for a few sleepless nights spent page turning), tragic (get another box of nose-wipes just in case).
Lenny’s Book of Everything is a story with a heart as big as Phar Lap’s that gallops along at a pace that rips you apart emotionally yet is simultaneously restorative and mindful, such is Karen Foxlee’s talent for powerful storytelling. This story describes the relationship between Lenny, her younger brother Davey, who has a rare form of gigantism, and their beleaguered mother. Theirs appears a drab ‘moon-rock’ coloured existence yet flashes of brilliance strike everywhere, every day: their mother’s pink work uniform, the pigeons on their windowsill, Mrs Gaspar’s outrageous beehive, the ubiquitous letters from Martha Brent and of course, her regular dispatch of encyclopedic issues to them. All conspire to create warmth and hope and put the reader at ease while sweeping them ever closer to the inevitable conclusion.
Children love a splash of spook, a gash of ghoul and a dash of danger, but only if it’s laced with humour and courage. If you’re looking for some creepy crawlies, menacing monsters and terrifying trolls to give you the shivers this Halloween, then check out these wild picture books… don’t worry, they’re not actually so scary.
A Monster in my House is written by the internationally acclaimed comedians The Umbilical Brothers, so you know you’re in for an amusing feast rather than a nightmarish one. Their undeniably popular wit is clear with their multi-layered twists that pleasingly surprise. The first-person narration warns of the danger associated with having a different monster in each room of the house. However, inspect the images and you’ll see that Berlin artist Johan Potma has done a brilliant job of capturing a mix of classic, old-style horror with a beautiful warmth and humour that is anything but chilling. He neatly infuses newspaper collage with pencil sketching and oil paint in subdued browns, reds and greens with the loopiest of monster characters you’ve ever seen. And take note of the little mouse in each spread… it holds some very important clues! In a charming rhyming text, the suspense is thrilling, leading us to a conclusion that is totally unexpected.
A Monster in my House is a delightfully playful romp abound with some pretty cool characters that will simply warm your soul.
With a nod to the legendary We’re Going on a Bear Hunt comes this exasperatingly satisfying Beware the Deep Dark Forest by Sue Whiting and Annie White. Sure, there are creepy bits, with carnivorous plants and venomous snakes and all. But that doesn’t stop Rosie from being the heroine in this suspenseful adventure tale. Braving it out through the sublimely detailed and juicy scenes, the young girl sets off to rescue her pup Tinky through terrifying obstacles, including a bristly wolf, a deep ravine, and an enormous hairy-bellied, muddy troll. But rather than shy away and run like the children did with a certain shiny-eyed, wet-nosed Bear in another story, Rosie stands tall and defiant, proving her saviour qualities. Then she can squelch back home through the deep, dark and muddy forest.
Beware the Deep Dark Forest captures just the right amount of creepiness with the rewarding inclusion of excitement and adventure and a strong female character determined to get her hands dirty and tackle the tough stuff. This is how you face your fears for children from age four.
Following the long-lasting success of The Wrong Book, Nick Bland has come out with this latest cracker, The Unscary Book. It follows a boy, Nicholas Ickle, suitably costumed in an alien / skeleton attire, attempting to introduce us to his ‘scary’ book. So, prepare to be frightened! However, each page turn sends readers into fits of giggles rather than a state of alarm. Poor Nicholas is more terrified at the nice-ness and bright-ness of what is revealed behind all his pre-prepared props. ‘But ice-cream isn’t scary, it’s delicious!’, he shouts. ‘I’m trying to scare people, not make them hungry!’. The brilliantly colourful and energetic (non-scary) book continues to amuse our young audience as Nicholas becomes more frustrated with things that are NOT spooky, terrifying, frightening, or horrifying. And just when you think he’s finally won, well, you’ll just have to read it to find out!
The Unscary Book has plenty of animation and visuals to pore over, as well as fantastic language and comprehension elements to explore. Comedic bliss that all went wrong in just the right way. No preschooler will un-love this one!
Not so much scary, but more like stinky! Which is actually helpful for scaring those unwanted pests away. Tohby Riddle has got this story spot-on with his knack for harnessing the powers of philosophy with humour and an understanding of human complexities – although in the form of bugs and critters. Here Comes Stinkbug! is completely captivating with its brilliantly simple plot and dry wit about the unpleasantness of a smelly Stinkbug. None of the other crawlies want to be around Stinkbug because, well, he stinks. They try to raise the matter with him, but that makes him worse. Until he tries to charm the others with a lot of effort. However, it seems Stinkbug has attracted the wrong sort… Maybe it’s best to just be yourself.
The aptly hued garden tones and textures combined with a mixture of typed narrative and handwritten speech bubbles elicit a nature that is both endearingly casual and candid. Here Comes Stinkbug! empowers readers to consider embracing who you are, playing to your strengths and being wary of those who might take advantage of you. Children from age four will find this book utterly and preposterously reeking with the sweetest kind of comedy, bugging their parents for more.
Befittingly released on the tail end of our Southern Hemisphere autumn, The Perfect Leaf is a glorious explosion of colour and joy. Smothered in hues of honey-on-warm-toast, this book oozes the golden splendour of autumn on each page, promoting friendship, imagination and creativity in a way adults often forget about but children naturally embrace.
In a world where imperfections are deemed failures rather than avenues for alternative thought and being, this book serves as an important reminder for us all to rejoice in the small things in life and look for the unique beauties within them. Plant’s multi-perspective illustrations saturate each page, providing the perfect backdrop for his syrupy prose. The Perfect Leaf is a lovely vehicle for discussion about nature, seasons, perception, acceptance and friendship. And, while more autumn hued than spring, worthy of treasuring as the days warm.
Fleur Ferris has endorsed Lili Wilkinson’s latest novel After the Lights Go Out (Allen & Unwin) with the words, “A terrifying yet hope-filled story of disaster, deceit, love, sacrifice and survival.” These words could also apply to her new book Found (Penguin Random House Australia). Both Australian YA novels have intriguing titles and are classy examples of thrillers set outside country towns in hidden bunkers. They complement, and could be read alongside, each other.
After the Lights Go Out begins with an absolutely riveting scene where homeschooled Pru and her younger twin sisters Grace and Blythe have to escape from their house on an isolated property on the edge of the desert to a hidden underground bunker. Their father, a mining engineer, built it in secret and named it the Paddock after Winston Churchill’s WWII bunker. We learn quickly that he is paranoid, anticipates secret government conspiracies and that he is a doomsday prepper. This is a training drill.
Later, when the lights go out, the girls know that this is The Big One and they execute their exhaustive training and protocols such as Eat perishables and Exchange worthless currency for supplies. Tension ratchets because Pru is anaphylactic, there has been an explosion at the zinc mine and her father is missing, and the girls aren’t sure whether they should share their supplies with the townspeople of Jubilee.
Bear, Elizabeth’s father in Found is also highly protective and intimidating. He wouldn’t be happy about her kiss with Jonah but he doesn’t witness it – he’s been taken by unknown people in a white van. When her mother realises what has happened she whisks Beth out of town and through a cross-country route along channels across the paddocks to a bunker under a dry dam on their farm. This bunker is made from shipping containers and is as well-equipped as Pru’s. Their flight is also just as original and exciting.
The reason for Beth’s family’s dangerous plight is quickly revealed and the story then steams ahead with help from Jonah (who shares the narration) and Trent, a bad boy who may be trying to reform. The stakes are raised even higher when Beth’s mother is shot.
Both Fleur and Lili describe their very Australian rural settings with authenticity and care. Lili’s diverse characters range from a British Asian church minister to warm-skinned love interest Mateo who has two mums. Found is action-packed and heartbreaking and will be relished by all high school readers who love a fast-paced, filmic read.
Other highly recommended books by these authors include:
Leaf Stone Beetle is written by Ursula Dubosarsky and illustrated by Gaye Chapman. Its publisher Dirt Lane Press is a ground-breaking new publishing company based in Orange, NSW. They believe in creating quality literature and are publishing books by some of Australia’s best, including Matt Ottley, Ursula Dubosarsky and Gaye Chapman.
Leaf Stone Beetle is a deeply-considered, poignant tale telling the interlinked stories of leaf, stone and beetle. The book’s small, almost square physical shape is ideal for small hands and, along with its understated cover and ink and woodcut style illustrations, signals that it belongs outside the usual. Thoughtful, perceptive readers of all ages will find Leaf Stone Beetle resonant.
Little leaf is the smallest and greenest leaf on the tree. When the other leaves change colour and are tussled away by the wind, it stays behind until swept by a gentle breeze to a stream. A stone lies on the bottom of the water and notices the changes in tree, weather and stars without expecting any transformation itself. When a storm moves the stone near the gnarled roots of the tree, it is terrified.
Beetle is different from the other beetles. Without haste she absorbs the minutiae of her world. “She looked at the tiny purple flowers. She looked at a slip of golden pollen that fluttered by in the wind”. The other beetles realise that a storm is coming and scurry away. Beetle then has no one to follow home.
The stories intersect when Beetle is kept safe by leaf and stone in completely natural ways. They are all accepting of their transient safety, recognising their ultimate role in nature’s cycle. With interest and without angst, readers glean that change is an inevitable part of life.
Leaf Stone Beetle is a unique construct of narrative science and story in words and illustrations. It is simple, yet philosophical and profound.
Some books, amongst many, written by Ursula Dubosarsky include Brindabella, The Blue Cat, The Golden Day, The Red Shoe, The Word Spy and The Return of the Word Spy.
Other books published by Dirt Lane Press include The Sorry Tale of Fox & Bear by Margrete Lamond, illustrated by Heather Valence. This wily, nuanced tale was shortlisted for the 2018 NSW Premier’s Literary awards. The Dream Peddler by Irena Kobald and Christopher Nielsen is published this month.
Dimity Powell, author of the evocative and beautifully written titles The Fix-It Man (my review and interview) and At the End of Holyrood Lane (my review), both illustrated by Nicky Johnston, is here to discuss the creation of the latter in an insightful interview. Dimity is a well-established presenter in Australia and overseas and a strong advocate for literacy as a workshop leader and Books in Homes Role Model. As you would be aware from her Boomerang Books reviews, Dimity has a flair for exquisite language, and her picture books are no different. I’m grateful for this opportunity to talk with you, Dim!
Congratulations, Dim, on your newest, very special picture book, once again collaborating with the gorgeous Nicky Johnston!
Thank you, Romi!
Following your successful partnership on The Fix-It Man, was this second joint venture something you always planned or just a lucky coincidence?
It is something we both secretly always wished for – we adore working together – but it was definitely more of a case of fate than design. When EK Books accepted Holyrood Lane, the first person publisher Anouska Jones and I thought of to illustrate this story was Nicky. Her style was just right for projecting the type of feeling this work required.
Your story deals with a delicate topic on domestic violence and emotional safety through the metaphorical torment of a thunderstorm. We know Nicky has the knack for capturing the deep and true essence of a story. How do you feel she portrayed your intention? Was there much collaboration throughout the process?
She portrayed every intention brilliantly! Nicky has a phenomenal intuitive grasp of the story behind my stories. It’s as if she has direct access into my head and is able to see exactly how I’d love the characters and their emotions to be displayed. This occurs with little to no consultation at all, which stuns me. I can only paint with words. Nicky’s illustrations do all the rest of the work.
What I really enjoyed about working with her on this project was when I happened to be in Melbourne last year (for the Victorian launch of The Fix-It Man) and was invited into her work studio. Oh, what a sublime experience that was. She had a query about a certain spread of Holyrood Lane and invited me to offer solutions. Together we nutted through the various ways of portraying the message. It was a turning point in the story for the main character, Flick and for me. I have never experienced such joy working so closely with such a divinely talented creator as Nicky. I know this is not everyone’s experience so I feel very blessed.
As mentioned, At the End of Holyrood Lane is an intensely moving and powerful tale that prevalently and superbly brings an awareness to its readers. What was your motivation in writing this story and what do you hope your audience gains from reading it?
I hope first and foremost readers engage with Flick’s story in a way that is meaningful for them and leave it feeling more hopeful and reflective. I was prompted to write this book after a meeting with a prominent children’s charity founder, who proclaimed more mainstream, accessible picture books addressing this subject matter were needed. I rose to the challenge. But in doing so, had to clear tall hurdles. Most mainstream publishers felt this type of story was ‘too hot to handle’. Fortunately, for me, EK Books had the foresight and determination to take it on with me.
Did the story go through many re-writes? How did you perfect the language and level of emotional impact for an audience that may be as young as three or four?
Oh, yes! After several knock backs, I set about restructuring Flick’s story into a more metaphoric one, one that would appeal to children worldwide regardless of their situation and whether or not they were victims of abuse. If it wasn’t for the initial reactions and the feedback received from those publishers, I would not have had the impetus to fight on so determinedly nor explore my story from a different perspective. Reasons to be grateful for rejections!
Each rewrite brought me closer to that sweet spot, where words and emotions sing in perfect harmony. To ensure that the words matched the emotional maturity of my audience I sought the help of my erstwhile writing critique buddy, Candice Lemon-Scott. Normally when we assess each other’s work, it only takes one or two feedback sessions to understand the strengths and weaknesses of a particular manuscript. Working on this one was like slogging it through the finals of a tennis match; there was much back and forwarding, but finally after about six rewrites and months of massaging, I knew I had a winner.
What is the significance of the title? Is there a hidden meaning behind it?
Yes and no. I love the term Holyrood, having noticed it on my travels, and always thought I’d love to incorporate it into one of my books one day. After rewriting Holyrood Lane a few times under the old working title of Holding On, I realised I needed something better, stronger and more meaningful. Holyrood has various religious connections; the name refers to an ancient Christian relic of the true cross that was the subject of veneration and pilgrimage in the Middle Ages. It is also the placename of several notable locations throughout Europe. I liked the subtle spiritual connotations and the sense of venturing away from the norm into a potentially better unknown that this title evokes.
The excitement of your book launch in Brisbane is imminent! What do you have planned for the big day?
The launch is taking place at the Brisbane Square Library, which is smack bang in the middle of Brisbane on the 23 September – a Sunday – so hopefully young and old will be able to make it. In addition to the usual cupcake consumption (they’ll look and taste gorgeous I can assure you!), there’ll be kids’ activities, special guest speakers from various domestic violence organisations, book readings, signings and a raffle with over $1,040 worth of terrific book prizes to be won. Kids’ Lit guru, Susanne Gervay is also travelling up from Sydney to launch this book with me for which I’m eternally grateful. This industry thrives on the support from people like her so I look forward to celebrating this with everyone at the launch.
You are hugely active in the literary community with workshops, festivals, school visits and the like. What other kinds of events and presentations have you been involved in recently? What value do you see for authors presenting to children?
I’ve been facilitating and conducting a few school holiday kids’ writing camps this year in addition to bookshop appearances and workshops. I really love these camps because on a personal level they consolidate what it means to write and how to do it well. They are also heaps of fun and put me in touch with tomorrow’s writers in a very real and exciting way. I’m not really teaching them to write; it feels more like a privileged position of mentoring; guiding and nurturing young raw talent is unspeakably satisfying.
One of the camps I facilitate is the Write Like An Author Camps designed by Brian Falkner. The immense value of having published active authors presenting to kids is that validation they gain from linking facts, tips, tricks and methods with real world experience. We (authors) are the living proof of what we do and say!
Anything else of excitement you’d like to add? News? Upcoming projects? TBR pile?
My TBR pile is tall enough to crush an elephant should it ever topple which it has, toppled that is, not killed any elephants, yet. My Christmas wish would be for more time to read AND write. I’m bubbling with new picture book ideas but have been writing in snatches since entering pre-publication mode for Holyrood Lane. There are a couple more publications on the horizon for 2019 and 2020 though, which makes me happier than a bear with a tub of honey ice cream.
Things are also ramping up on the SCBWI front as we prepare for the next Sydney-based Conference taking place in February 2019. Bookings for this immensely popular conference have just opened and are filling fast. I have the enviable task of coordinating a dynamic team of Roving Reporters again next year whose job is to cover every inch of the conference and share it with the world. It’s another time-gobbling occupation but a thrilling one nonetheless.
Thanks so much for chatting with me, Dimity! And congratulations again on such a special book! 🙂
Assimilating history into a palatable, meaningful tale for today’s children is no easy thing. Get it wrong and you risk children shunning not only a potentially great read, but learning about periods of our past that explain the character of our future as a people and a nation. A situation of unquestionable adversity, yet adversity has many advantages – ‘sweet are the uses of adversity’ after all. Get it right, and children will embrace history with gusto and every ounce of the here and now vigour that defines childhood.
Sheryl Gwyther’s ability to immerse young readers in worlds of yesteryear with such a clear, strong presence of today is exemplary. Her narrative slides along as alluringly as a sweet mountain brook, mesmerising readers with plenty of action and emotion. Sweet Adversity is exactly the type of book my 12-year-old self would have lapped up with unbridled zeal, especially as it acquaints children with the wondrous words of Shakespeare, some of which adult readers will connect with of course, but which provide a beautiful rich new seam of learning for tweens.
Believing in yourself when all else around you is in a state of upset and confusion is an emotion children are more than capable of recognising. Keeping the faith when adrift in turbulent seas is not only testing and difficult at times, it also determines your future perspectives on life. These next few books that touch on the importance of keeping the faith in dire times provide intense and touching lifelines to children (and adults) of all ages.
Marwood is more than adept at distilling emotions into moving verse novels. Attaching emotion and memories to physical things is something humans are adept at, as well. This story deftly portrays a young boy’s heart-felt attempt to retain and simultaneously farewell everything he holds dear in his life as he and his family prepare to sell up and leave their family farm.
Since the CBCA shortlist was announced I have been blogging about the 2018 shortlisted books and am now concluding with the Early Childhood books (in two parts). You may find some of the ideas across the posts helpful for Book Week this month.
Boy by Phil Cummings, illustrated by Shane Devries (Scholastic Australia)
Boy is a morality tale about conflict and misunderstanding; understanding & communicating. It covers issues of deforestation, fighting and living in harmony and peace.
The trees on the mountain are destroyed by a powerful dragon, which illustratively evolves from threatening to cute during the tale.
People are blaming others and fighting. Boy can’t hear the fighting but perhaps he can understand the situation better than anyone because of his hearing loss.
Might the boy be unnamed because the book is aimed at all boys or for all children?
The digital illustrations are an unusual colour palette of mauve, brown and blue tones.
The endpapers could be copied and used for the card game ‘Happy Families’.
The cover is tactile, with the word ‘BOY’ written in sand. Boy communicates by drawing pictures in sand. Children could write an important question in the sand (sandpit or sandtray) e.g. ‘Why are you fighting?’ alongside a picture.
Children could further develop awareness and affirmation of the hearing impaired. This could include learning some Auslan and also saying ‘Thank you’ ‘with dancing hands’ like Boy does.
Children could look at the endpapers to see how the children at the start become adults by the end. They could draw themselves as a child and then as an adult, imagining a possible future.
Onset and rime in the rhyming text include ‘day/stay’ ‘small/all’ ‘yet/vet’ ‘far/star’ and ‘strife/life’ (others are more difficult for very young children).
Many countries are represented in the book e.g. Syria, China, Afghanistan and Italy.
The refrain, ‘How about you?’ could be answered by readers. They could also suggest which countries are not represented; which Australian capital cities and other places are mentioned; and which Australian places are missing.
Children could show or make flags for countries represented by students in the class or school.
The story settles into a rhythmic security to precede a chilling page:
Sadly, I’m a refugee –
I’m not Australian yet.
But if your country lets me in,
I’d love to be a vet.
Australia’s refugee situation is political, and far more complex than this, but I’m Australian Too will no doubt influence children’s attitudes towards refugees.
Rodney Loses It! by Michael Gerard Bauer, illustrated by Chrissie Krebs (Omnibus Books)
The title has a double meaning and the book is humorous in words and pictures.
It’s unusual that readers are able to see the missing pen and other objects, a mark of slapstick. Rodney Loses It! is slapstick in book form.
The illustrative style is cartoon-like; lively, bright and shows active body language.
The writing shows good word choice and maintains a successful rhythm.
Children could compare the endpapers, which are different.
Rodney loves drawing but loses his favourite pen, Penny.
The illustrations show the pen and other missing items.
The message or moral is that we can love doing things but not get around to them because of distractions.
In the story, Rodney could have used other colours but he was fixated on one pen and one colour so he missed out on doing what he loved.
Children could draw pictures like Rodney’s or make Rodney using play dough and LED lights for his eyes or pen.
Being the leader of the pack is not a role everyone relishes, especially if you are that shy kid who never kicks a goal or that odd-sounding, odd-looking kid whose school lunches never quite fit the norm. However it is often the most reluctant heroes that make the biggest impact and save the day. Being at odds with yourself and your perceived persona is the theme of these books, so beautifully summarised in their paradoxical titles. What I love about these two authors is their inherent ability to convey messages of significant social weight with supreme wit and humour. It’s like feeding kids sausage rolls made of Brussels sprouts.
Raymond is stuck in a school with a reputation grubbier than a two-year-old’s left hand and choked with bullies. The best way he knows of fighting these realities is not to fight at all. Raymond is king of fading into the background especially when it comes to his friendship with best mate, Zain Afrani.
Zain is a soccer nut and self-confessed extrovert who has a deep affinity for Raymond. He likes to flash his brash approach to bullying about, much to the consternation of Raymond, who happily gives up the spotlight to Zain whenever he’s around. Constant self-deprecation just about convinces Raymond that he’ll never amount to anything of much significance, which he is sort of all right with, until their new principal blows his social-circumvention cover by appointing him as one of the new school prefects.
Raymond is as shocked as the rest of the school but reluctantly assumes the role along with a kooky cast of radically differing kids. Under Raymond’s calm, consistent leadership, this eclectic team not only manages to drag Barryjong Primary School out of its bad-rep quagmire by winning the hearts and minds of students and faculty alike, but also, while doing so, raises enough money for new air conditioners for every classroom.
I have been posting about the CBCA 2018 shortlisted books and am now concluding with the Early Childhood books (in two parts). You may find some of the ideas across the posts helpful for Book Week in August.
This picture book is imaginative and exciting. It is also humorous, for example the teacher’s funny but apt name – “Mrs Majestic-Jones”; Ruby Lee is the best at announcing “Hark, it’s me, Ruby Lee!” – an unusual gift; and tactful George Papadopoulos even suggests that Ruby Lee be quiet and still but then she even loses him.
Ruby Lee loves helping. Young readers could compare and contrast her with helpful Debra-Jo in the Little Lunch TV series and books.
The letters ‘P’ and ‘H’ could be taught or reinforced. Ruby Lee loves pockets, peaches, puddles and polka dots. (P)
She loves humming and hopping and handstands at night. (H)
Vocabulary is interesting and extending, e.g. hark, intrepid, valiant, ingenious.
The illustrations are in a cartoon manga style where the heads are large in proportion to bodies and the eyes are big and exaggerated. Children could view online how-to-draw tutorials and construct their own characters in this style. They could colour them using the colours in the book.
Children could act out some of the things Ruby Lee does; collect things she loves and invent fictitious creatures like she does.
Gilbert the penguin falls into another world (almost like falling into a rabbit hole) – the ocean. He must find where he’s comfortable, at home and can fly.
It is a fictional narrative but also an accessible information book, particularly about penguins, without being forced. It utilises many verbs and active language: waddled, flapped, waddled and flapped; slipped, tripped, stumbled; slipping, spinning, stumbling, tumbling; tumbled, bubbled and sank.
The book’s message is that everyone is different and everyone must find their own strengths.
Before reading, children could suggest what a second sky might be.
Children could make a model of Gilbert and possibly one that moves using rubber bands.
This is a clever, funny book for babies and those who read to them. It is carefully structured in 2 parts: firstly, where the animals are reported lost; and then when they reappear in the park.
The book begins with observations of baby noises, which people mistake for animal noises. There are carefully placed visual clues that prompt the baby to make an appropriate noise e.g. stripy sleep suit, on rocking horse.
Less than a week ago, notable Aussie author / illustrator and prodigious writer for children, Rebecca Lim, released her latest action-packed middle grade series, Children of the Dragon. Book One: The Relic of the Blue Dragon promises magic, mystery and martial arts, and I know it already has young primary-aged readers perched avidly on the edge of their seats.
Today we welcome Rebecca to the draft table to share a bit more about what drives her to write what she does and reveal her motivation behind Relic.
Florette written & illustrated by Anna Walker (Penguin Random House Australia)
Mae moves to a new home, an apartment. She is sick of all the packing boxes but draws on many of them, particularly drawing daisies. She misses gathering things for her treasure jar. After going to the park, she finds a forest inside a florist but it is closed. A ‘stalk of green [is] peeping through a gap … a piece of forest’. It becomes a treasure for her jar. She goes on to grow a plant for her new (shared) garden.
Themes include moving home; making new friends; the importance of greenery, trees, gardens; and natural and built environments.
Children could compare and contrast the endpapers (there are different creatures in each).
They could consider the meaning of Florette and related words such as florist and forest.
Garden: They could make a terrarium or a green wall – a vertical garden or area covered in ivy or vines, dotted with flowers including daisies, model toadstools, other foliage and small model or toy creatures e.g. rabbit, turtle, bird, ladybird.
Children could do some of what Mae does:
Decorate treasure jars and find precious items to fill them, perhaps a plant like Mae’s
Chalk drawings on asphalt or cardboard boxes
Set up a picnic
Use pebbles to make daisies
Mae’s movements could lead to making a story map – on paper, cardboard, or using an app.
Other books by Anna Walker include Today we have no Plans, Go Go & the Silver Shoes, Peggy, Starting School and Mr Huff.
Mum went to buy gumboots but she returned with a rabbit called Gumboots. His attributes are described positively at the start, but the illustrations show otherwise.
This is a cumulative tale with people joining in like in Pamela Allen’s Alexander’s Outing. There’s even a nod to the fountain of that book.
Humour: Examples include Gumboots, who doesn’t stop to chat with anyone while escaping; the mother chasing him in a towel; and the illustrations that sometimes tell a different story.
Illustration media: watercolour, pencil and oil paint.
Freya Blackwood uses her signature spotted clothes and domestic details e.g. an ironing board. Red is used as a ‘splash’ colour and there is a worm’s eye view of the underground tunnel.
Themes: community; simple outdoor pleasures; friends (even for rabbits); and how rabbits multiply.
Setting: The creek scene is a peaceful interlude, a moment in time, shown by a bird’s eye view. ‘Mrs Finkel’s forehead uncrinkles’ there. The trees are described as a simile: ‘They are like giants with their long legs stuck in the ground.’
The endpapers of this picture book are like a board game, which children could play on.
Children could look at a doll’s house where the front wall is removed. They could make a cutaway diagram (where some of surface is removed to look inside) showing the inside of the house and tunnel (as in the last double page spread). Or they could make a model inside a shoebox lying on its side.
This tale is taken from the ballad of Swan Lake, a tragic love story of a princess transformed into a swan by an evil sorcerer. The women are swans by day and humans by night. The princess plans to meet the prince at midnight at the ball. The sorcerer’s daughter is disguised as the Swan Queen and the prince chooses her as his bride.
The book is described as passion, betrayal and heartbreak in the Murray-Darling. Children may be able to identify the region from images of the area and the book.
The book is structured/played in III Acts, like the ballet. The written text is followed by pages of illustrations.
Children could listen to some of the ballet music e.g. Tchaikovsky’s Swan Theme; Saint-Saens’ The Dying Swan.
Ballet in pictures: They could view some of the ballet.
Visual literacy: The colours are mainly monochrome, with red as a splash (feature) colour.
Camera angles show some variety: from underneath – red queen; from above – fleeing girl.
There are close-ups of the swan’s face and neck, and of the black bird of prey.
Texture: Children could emulate the texture through printmaking using leaves and sticks.
This picture book is Carole Wilkinson’s memoir of immigrating from Britain to Australia as part of the Assisted Passage Migration Scheme, so it could also be regarded as an information book. Detail is shown to give verisimilitude.
Migration: Carole Wilkinson packed her 101 glass animals and even tried to pack soil to take to Australia. Imagining they are migrating, children could be asked what treasured possessions they would take.
Compare/contrast: Children could compare and contrast migration in the 1950s and 1960s with other ways of migrating to Australia in the past and present. They could use Popplet (a mind mapping tool http://popplet.com/ ) to organise their ideas.
Poem: Carole Wilkinson wrote a poem about her empty house. Children could write a similar poem, including their circumstances and their emotions if leaving home.
Illustrator: Liz Anelli says: ‘So much of her (Carole Wilkinson’s) tale rung true with my own journey and made it a delight to delve into. I loved researching details for the cruise ship they travelled on and especially enjoyed being able to ‘dress’ the characters in Anelli fabrics, sourced from my grandparents’ photo album.’
Some of her illustrations pay homage to John Brack’s paintings in style & colour and some of her other books are One Photo and Desert Lake.
Mopoke is structured using black and white alternating pages. The pages are well composed with the mopoke carefully positioned on each. The style is static, with a picture of mopoke in different poses. This style can also be seen in Sandcastle by the author/illustrator; and the Crichton shortlisted, I Just Ate My Friend by Heidi McKinnon.
Humour appears throughout Mopoke e.g. ‘This is a wombat.’
The book can also be dark e.g. ‘Nopoke’, where both pages are black.
Children could perform the text as a performance poem (see the work of Sollie Raphael, teen Oz Slam Poetry champion, who has a book, Limelight).
Safe styrofoam printing (like lino cuts) Children could select one of the mopoke pictures or design their own to make a printing tool. They could cut the rim off a styrofoam plate; etch the mopoke shape using a blunt pencil, pen or stick; etch some texture; add paint; place the paper on top and press.
Poster Making The bold, striking illustrations reflect current trends in graphic design so children could make a poster of a mopoke in this style.
There’s an interesting relationship between Grandad and (possum-like but actual cat) Iggy. Iggy doesn’t want to emulate Grandad; he seems more aware, while Grandad often seems oblivious to what they see in the bush.
The author/illustrator has a detailed eye for natural bush sights and sounds such as plants, animals and birds; silhouettes and shadows are executed in a light colour. The style is reminiscent of Roland Harvey.
The bushland setting is an integral part of A Walk in the Bush. To enable children to experience this, teachers or parents could find an area where there is some bush. It may be part of a State Forest, nearby bushland or a bushy area within a local park or the school playground.
Sensory wheel: Students look, listen and use other senses to note the sounds, sights and other features of the bush e.g. eucalyptus leaves to crush and scribbly marks on trees. They could record sights, sounds, smells, feel/touch, taste (where safe) on a sensory wheel.
Children could create literary texts by selecting one of the senses to focus on. They write a brief sensory description of the bush using language generated from their experiences in the bush.
They could write this description onto a piece of paperbark (if accessible without causing damage to trees) or onto recycled paper or wrapping or scrapbooking paper that emulates the colour, content or texture of the description. (NB paperbark is also available from some kitchen suppliers)
Soundscape: While in the bush, children could listen to and identify bush sounds.
They then create a soundscape by listing five of the sounds and recording these. The free recording tool Audacity could be downloaded to create soundscapes: http://www.audacityteam.org/download/.
You’re most welcome and thanks so much for having me.
What is your background and where are you based?
I’m based in Melbourne. I lived in Greece as a child but came to Australia as a non-English speaking migrant at around eight years old.
My work background is mostly journalism. I was a reporter at various newspapers (five years at the Herald Sun) and then a communications strategist for the union movement for five years. Since having kids I have had to take a step back from that sort of high-octane work.
How involved in the YA literary community are you?
I first became aware of the LoveOZYA community through author Nicole Hayes. I was a member of her writing group and as my manuscript progressed she spoke to me about where she thought it might fit and how wonderful and supportive people were.
I love reading YA, especially Australian YA, and following other writers’ journeys on social media. I’ve found the LoveOZYA community inspiring and vibrant. I love being a part of it. There’s a real effort to support each other and this makes the sometimes insecure life of a writer easier. We celebrate each other’s wins and commiserate with difficulties.
Stone Girl (Penguin Random House) is a searing, unforgettable story. Could you tell us about the protagonist Sophie and the symbol of a ‘stone girl’?
As I write my second novel I realize that I’m interested in the triggers and experiences in life that change us. What needs to happen to transform a person from one thing to another? As a journalist, reporting straight news, I would be stunned by the things people did and wonder how they grew from a kid into this adult. What forms their decision making and choices?
Stone Girl follows Sophie’s life from 12 to 16 years old as she becomes someone society typically judges, despises and ultimately dismisses. The persona of Stone Girl is her survival mechanism in a world where there’s no one to rely upon but herself. Sophie soon comprehends her place and makes a number of decisions about who and how she must be in response. It’s about resilience. She toughens up, she becomes Stone Girl, and this is both positive and negative.
Hardening herself, especially against adults, serves to both protect her and isolate her because stony self-preservation cuts both ways. She doesn’t trust anyone. Doesn’t ask for help even though she often desperately needs it. Her Stone Girl persona is what she uses to hide her vulnerability. When she lifts her chin against the world then she can shut out the things that have happened to her. She uses her anger to protect her. But in the end, it’s what she does with the Stone Girl facade that makes this a story of redemption.
Most of us have a mask we wear in order to fit in and protect ourselves. It just happens that Sophie has to wear hers 24 hours a day.
Could you tell us about some significant other characters?
Gwen is one of my favourites. Girlfriends have been the backbone to my life. They’ve saved me many times over, from my sister to the besties I’ve known over the years. I love the closeness and trust that grows between some women. The friendships in the homes Sophie moves through are formed as fast as they must be abandoned but in Gwen, Sophie finds a true ally. It’s a friendship that underscores everything else. It doesn’t just disappear because there’s a love interest.
I think of it like an old western when there’s a shoot-out and friends protect themselves by standing back to back. Gwen and Sophie bond in the knowledge that, despite appearances, adults actually have no idea what they’re doing.
Spiral came to me when I was at Varuna, the Writers’ House. I knew Sophie needed someone, possibly a love interest, but I couldn’t figure out who would be strong enough to break through to her.
Then, as I strolled through Katoomba, Spiral’s form became clear. I saw what he looked like, his motivations and that, like Sophie, in a world of broken promises, he too needed someone to trust.
Writing about Spiral was fun, especially at first. He’s gorgeous! A fiery and enigmatic character that I was drawn to completely – his name serving as prophecy.
I’ve always loved books with gritty honest characters that both shock and charm and I try to write this way.
How did you create such authentic experiences in the homes Sophie had to live in and her spiral into such terrible situations?
This is a fictional novel but I have borrowed heavily from my time as a teen growing up in group homes.
I tried to write the real story but felt unable to. Fiction freed me up and images and events appeared quite clearly to me; the rooms, the feelings, the flavor of being someone who lived that way. I put myself easily into Sophie’s shoes.
When I lived in the homes there were many younger kids and I’ve thought about them so often since. Sophie is how I imagined one life.
You’ve made drug-taking very appealing at times (e.g. chapter 22). How did you weigh up the risk of including this?
The truth is that before drugs destroy you, they feel good. That’s the trick. That’s why people keep taking them. If I pretended they were terrible all the way through then this would not be the realistic trajectory of addiction. It could be dismissed and then this would not be a true cautionary tale. Protectionism is not helpful for most teens, especially when you consider the type of world we live in right now.
How important is Sophie’s racial background to the story?
Her racial background and her estrangement from her Greek family contribute to her feeling of dislocation. She doesn’t belong there. She has no family here. She must let go of the past and carve her own way through the world.
Like Sophie, I grew up in Greece and left family behind. My Greek heritage and the memories of leaving my first home have significantly contributed to who I am today and I found it quite cathartic to include this in Sophie’s life.
What does she learn about family and others?
When she first goes into the homes Sophie is hopeful that she will once again find family, either with a social worker or with her Baba. However this is not to be. Sophie soon understands that in a world where the only constant is change, she can only rely on herself.
With the kids in the homes there’s a unique bond that makes them a kind of family – albeit temporary.
Could you explain what turned her situation around towards the end of the novel – and why have you chosen this form of redemption?
The fight to survive that carried Sophie through is her saving grace. I actually didn’t know how it was going to end until three or four drafts in. I just kept thinking, this is not the story of a victim. And finally I realized what had to happen.
Kids in care, people with addictions and the homeless are either viewed with pity or fear and I wanted to show how we should never underestimate anyone. People are amazing! They want to survive and many can achieve much given a chance.
You thank God in the Acknowledgements. Why have you done this?
Doing something you love, answering a calling to the self, which is what writing feels like to me, can mean many sacrifices in other areas of life. Financial, physical, mental; you turn yourself inside out. I found myself praying more. Especially after writing I feel quite close to ‘God’. This isn’t in a religious way but more a universal spiritual one.
Who would you particularly like to see read your novel?
Everyone. I need to fund my next novel.
But seriously, I guess if I was choosing readers based on getting the message across then I’d hope people from the world that deals with kids like these. Social workers, kids in care, etc.
I’ve also loved the responses I’ve received from those who are surprised about this world. I would like there to be a common understanding about the fact that hundreds, if not thousands of kids live this way right now in Australia. A public conversation about kids in care could finally bring change to this difficult, misunderstood and largely ignored section of Australian society. That, for me, would be a dream come true. I’d love to know that others wouldn’t feel the way I did when I was living in government care in the early 1990s.
Have you already had any memorable responses from readers to Stone Girl?
A Reddit post my husband put up went viral and I was shocked and amazed by the response. Social workers, lawyers, ex-homes and foster kids from around the world commented and it solidified what I had always suspected. Despite the fact we don’t often acknowledge the plight of kids without parents, the situation matters to many. It’s a private pain. Or a job they really care about. Or they don’t know how to help someone… Some of them contacted me after reading Stone Girl, sending quite heartfelt messages. As an author, this is the best feeling in the world.
Putting aside the issue of kids in care, I wrote this book because gritty subjects, love at ‘the edge of a cliff’, characters living dangerously is what I find interesting to read. I’ve been floored by the generous reviews so far, especially those where people say they couldn’t stop reading. The number one reason for writing a fictional book has to be entertainment, doesn’t it?
This was the first review I received and I remember the relief I felt. Rob at Lamont Books really got what Stone Girl was about.
Wow! This is a must read novel for older teens, but a word of caution – it is definitely a YA title aimed at teens 15 years and older.
It took me back to my school days reading Go Ask Alice, which I found totally confronting, but at the same time an educational and inspirational cautionary tale. Stone Girl is certainly that as it takes us on Sophie’s downhill journey through institutional care as a ward of the state from when she is 12 until she is 16.
It is written with a real understanding and depth of character, as it is inspired by the real life experiences of the debut author, journalist Eleni Hale. Many dark topics are covered including death, poverty, heartbreak and substance dependence. But shining through the story is identity, survival, resilience and ultimately a coming of age empowerment.
I will not give the story away but suffice to say you cannot help but be swept along by the incredible Sophie, as the world continues serving up crap to her. She often stumbles and is so very nearly broken, but we continue to hold out hope for her throughout the story.
Stone Girl will change the way you look at the homeless, and hopefully enlighten young minds as to the plight of wards of the state.
This is a brilliant debut, but as it does contain extreme language, mature themes and substance abuse, it is suited to older teens, 15 years and up.
How can we protect young people and help if we encounter someone in a situation like Sophie’s or someone at risk?
From memory and for reasons I can’t really explain, kids in care seemed to be treated differently, like no-hopers. I don’t know if it was the way we dressed or looked. Maybe we were too loud or other times we seemed too quiet and uncommunicative. I just know that people changed towards you once they knew you were a kid who lived like that. From cops, to teachers, to people on the street, I was often hyper-aware of being a ‘lesser other’.
So in terms of talking to them in an encounter, simply show respect even if you don’t understand them, hold your judgment before you really know them (perhaps after as well) and don’t assume the worst.
Also important is to support the organizations set up to help them such as the ‘Make It 21’ campaign that seeks to extend support from 18 years old to 21. This could lessen the shocking number of government kids who end up homeless, drug addicted and/or mentally ill.
It’s really hard to get through to someone like Sophie once they hardened up. They guard strictly against pity and judgment. The communication channels are nearly closed. Improving their experiences in the ‘system’ is obviously an important way to avoid their slide into the margins of society.
I don’t have all the answers for this – I don’t think anyone does – but talking about it publicly is a good start. Don’t let their lives be our society’s dirty secret any longer. Let their issues matter the same way that other kids’ problems are discussed regularly in public forums.
What are you writing now?
I’m writing the sequel to Stone Girl. What happens after you leave the home system and your support is cut off? What will Sophie do now that she is out in the world and responsible for herself in every way? She has no family and must scrape together the money she needs to live. Where will this new fight for survival lead her?
This book is structured chronologically with a focus on inventors and aviators we’ve heard of including Lawrence Hargrave, Nancy Bird, Charles Kingsford Smith, Rev John Flynn of the Flying Dr Service; and those we may not have heard of such as Dr William Bland (who appeared before Hargrave) in the 1850s.
The structure and writing styles provide variety: words in the aviators’ voices; 3 Amazing Facts about most aviators; and ‘Did You Know?’ columns. The book acknowledges difficulties for women in the past who wished to fly.
Some interesting information from the book:
George Taylor: In 1909 he flew a glider from Narrabeen, NSW. His wife Florence also flew, tucking her long skirts into her bloomers. At age ten Taylor wrote an essay, ‘The Future of Flying Machines in Australia’. He was a cartoonist and suffered from epilepsy.
Bert Hinkler: In 1921 he flew the nine hours from Sydney to Bundaberg wearing a suit and tie. His RAF flying instructor was Capt. W. E. Johns, who wrote the Biggles books.
Like Lawrence Hargrave, children could make box kites. The ‘e-how’ website could be helpful. It suggests using dowel, bendy straws and a plastic/vinyl tablecloth. https://www.ehow.com/how_4882168_make-box-kites.html Alternatively they could make gliders or paper planes.
Decorative Patterning is used for sections such as J is for Jail and N is for Nurture. Children could select an alternative description for one of the letters e.g. C is for Convicts (instead of Cook) and create decorative patterning in Bern Emmerichs’ style.
Like last year’s shortlisted book by this author, Gigantic Book of Genes, this is a glossy science publication with high quality photos. It includes seamless explanations of left and right with clear examples for children to understand.
It includes a clever idea where children hold their hands out in front and touch their thumbs. Their left hand forms an L shape (helping them remember which hand is left).
The author recognises that it is easy to mix up left and right and looks at situations where right may connote good and left signify weak or bad. For example, in Albania it has been a crime to be left-handed.
It features symmetry, spirals, clockwise and anticlockwise, and the compass.
The author includes incredible information, such as ‘Nearly all kangaroos are left-handed… Parrots use their left feet to pick up food.’ ‘Female cats tend to be right-handed, and male cats … left’. And when driving, island nations tend to drive on the left-hand side of the road.
Min is a microbe. She is small. Very small. In fact, so small that you’d need to look through a microscope to see her.
I know from comments by a young family that this tactile, interactive book about microbiology has great appeal. The title is provocative – tempting and almost urging children to lick the book. Min the microbe guides the reader through the informative content, which is well designed with bright comic style illustrations and high-quality photographs. The information is clever, irreverent and quirky. It probably reflects the creators – a team consisting of writer Idan (quiet loud thoughts), Julian (who likes comics and toast) and Linnea, the scientist.
Children could consider, ‘Where will you take Min tomorrow?’ Like the book, they could take Min on a journey using a mix of photographic backgrounds, cartoon characters and written text.
Hygiene is taught and encouraged using reverse psychology. Teachers and parents may use the book to reinforce good hygiene (without losing the text’s inherent appeal).
Koala by Claire Saxby, illustrated by Julie Vivas (Walker Books)
Koala is most appropriate for the very young. It traces the experiences of a young koala achieving independence.
The writing is both literary and factual: providing parallel texts which are particularly useful for children who prefer one style over the other and to expose readers to both forms. The illustrations are distinctive for their rounded lines and shapes.
Koala is part of Walker Books’ excellent ‘Nature Storybooks’ series. Others include Claire Saxby’s Big Red Kangaroo, Emu and Dingo; and Sue Whiting’s Platypus. This could also be a good opportunity to introduce the classic Blinky Bill by Dorothy Wall.
The Big Book of Antarctica by Charles Hope (Wild Dog Books)
This is another big, glossy production from Wild Dog Books. The photos are exceptional. There is minimal written text and key words are shown in large coloured font.
Antarctica is studied in the Australian curriculum and this book covers explorers, scientists, transport, ice, plants (moss, algae, plankton), and much about animals and birds, e.g. giant petrels who vomit on anything they think is a threat (page 37). Climate change and global warming also feature (page 60).
Ice is looked at on page 22. There are many experiments about ice in other books and online to extend this subject.
Laughter, mishaps, laughing at mishaps; these are the grist of good picture books. Throw in a few feathered birds, the odd duck and a penguin or two and you have the makings of hours of picture book fun pre-schoolers and avian lovers everywhere are sure to get in a flap about.
McKinlay’s predilection for waddling birds works a treat in this re-release paperback about an exciting new addition down at the zoo. Every animal is a-twitter and a-flutter because the penguins are coming; the only trouble is, no one is exactly certain what a penguin is. Supremely illustrated pages depict each animal’s supposition of these newcomers, each description becoming more implausible and exaggerated than the last until even our accepted idea of a penguin is altered from boring little black and white bird to Hawaiian shirt wearing, pizza gobbling, party animal. The Zookeeper tries to set the record straight, supplying his charges and readers with sensible genuine penguin facts only to be ultimately comically upstaged. Oceans of fun and colour with plenty of apt facts and enough animal imagery to fill a real life zoo.
Sometimes curiosity can land you in trouble. But it is the being brave part that will ultimately lead to triumph. These few picture books show children that exploration is a healthy thing to help overcome fear or uncertainty. And they are a ‘hole’ lot of fun, too!
Be sure to also check out Dimity’s great list of Picture Books that Celebrate Overcoming Doubts.
Squirrel starts the line-up of dangling animals overly curious about a long-drop hole that lies in the middle of the track. Teetering on the edge of total panic about the presumed formidable, black-holed monster within, Squirrel cries out for help, only to drag Ostrich and three chattering monkeys into the lightly-suspended quandary. A brave and clever field mouse makes the call, prompting a deep suspension of bated breath amongst characters and readers alike. Luckily, the ‘monster’ isn’t interested in animals for tea.
Brown’s delightful rhyming couplets come with a sensory feast of emotive and visual language to fill you with empathy, wonder, and even a few giggles. The illustrations by Lucia Masciullo are whimsical and witty in the face of perceived danger. The Hole is beautifully alluring, brilliantly enlightening and wonderfully heartwarming for children from age three.
I love the play on reality and literal meanings behind this story of rehoming a lost hole. Charlie doesn’t realise that picking up a hole and putting it in his pocket, and backpack, are the worst places to have a hole. So he boldly sets off to find it a new owner. Young readers will already be amused at the thought, ‘you can’t pick up a hole!’, and now they are left to wonder who would want it and how it could possibly be useful. Well, Charlie greets a whole lot of people who are clearly NOT interested in the hole, such as the arachnid and reptile store owner, the boat builder, the seamstress, gardener, and doughnut maker. So, who is?
Canby’s energetic, sharp and unconventional narrative, paired with her cartoonish, fluid illustrations, completes a story that allows children to open their minds to the absurd, and also assess some very real and practical concepts. The Hole Story makes for great discussion and learning opportunities, as well as a fun and wacky adventure of finding a place to belong.
Curiosity did not get the cat, in this case, because Scaredy Cat, as the name suggests, is too scared to face even the meekest of things. A little girl’s four-legged friend shies away from sight in every scene, only to reveal its white, fluffy paws and tail in a terrified, obscure stupor. Gallagher’s delectable repetitive rhyme cajoles us along chasing poor Scaredy Cat through bees, towering trees and Granny’s super-duper sneeze. Hoses, wandering noses and costumed kids, striking poses. Each verse beginning with, ‘Have you seen my Scaredy Cat? He’s afraid of this and afraid of that!’, eventually leads us to the climax where a proud, flexing little girl claims her gallantry and saves the day. Now the girl has revealed her true and brave identity, will Scaredy Cat?
With Tortop’s ever-gorgeous, enticing and infectious artwork charging with colour and energy, it would be no surprise if Scaredy Cat is chosen to play his hiding game over and over again. Preschoolers will adore this romping tale of friendship, bravery, pets and love.
|Developer(s)||The OpenSSL Project|
|Stable release||1.1.1d (September 10, 2019)|
|Preview release||none|
|Written in||C, assembly, Perl|
|License||Apache License 2.0|
OpenSSL is a software library for applications that secure communications over computer networks against eavesdropping or need to identify the party at the other end. It is widely used by Internet servers, including the majority of HTTPS websites.
OpenSSL contains an open-source implementation of the SSL and TLS protocols. The core library, written in the C programming language, implements basic cryptographic functions and provides various utility functions. Wrappers allowing the use of the OpenSSL library in a variety of computer languages are available.
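As a minimal illustration of the basic cryptographic functions the core library exposes, the sketch below hashes a short buffer with SHA-256 through the EVP ("envelope") interface. It is written against the 1.1.x API (older releases use EVP_MD_CTX_create()/EVP_MD_CTX_destroy() instead of EVP_MD_CTX_new()/EVP_MD_CTX_free()); the program is an illustrative example rather than code taken from the OpenSSL documentation, and error handling is kept to a bare minimum.

    #include <stdio.h>
    #include <openssl/evp.h>

    int main(void) {
        const unsigned char msg[] = "hello world";
        unsigned char digest[EVP_MAX_MD_SIZE];
        unsigned int digest_len = 0;

        /* Allocate a digest context and run a one-shot SHA-256 computation. */
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        if (ctx == NULL)
            return 1;
        if (EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) != 1 ||
            EVP_DigestUpdate(ctx, msg, sizeof(msg) - 1) != 1 ||
            EVP_DigestFinal_ex(ctx, digest, &digest_len) != 1) {
            EVP_MD_CTX_free(ctx);
            return 1;
        }
        EVP_MD_CTX_free(ctx);

        /* Print the digest as lowercase hexadecimal. */
        for (unsigned int i = 0; i < digest_len; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }

A program like this links against libcrypto only (for example, gcc sha256.c -lcrypto); TLS functionality would additionally require libssl.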
The OpenSSL Software Foundation (OSF) represents the OpenSSL project in most legal capacities including contributor license agreements, managing donations, and so on. OpenSSL Software Services (OSS) also represents the OpenSSL project, for Support Contracts.
Project history
The OpenSSL project was founded in 1998 to provide a free set of encryption tools for the code used on the Internet. It is based on a fork of SSLeay by Eric Andrew Young and Tim Hudson, which unofficially ended development on December 17, 1998, when Young and Hudson both went to work for RSA Security. The initial founding members were Mark Cox, Ralf Engelschall, Stephen Henson, Ben Laurie, and Paul Sutton.
As of May 2019, the OpenSSL management committee consisted of 7 people and there are 17 developers with commit access (many of whom are also part of the OpenSSL management committee). There are only two full-time employees (fellows) and the remainder are volunteers.
The project has a budget of less than one million USD per year and relies primarily on donations. Development of TLS 1.3 is sponsored by Akamai.
Major version releases
|Version|Status|Original release date|Last minor version|
|0.9.1|Old version, no longer maintained|December 23, 1998|0.9.1c (December 23, 1998)|
|0.9.2|Old version, no longer maintained|March 22, 1999|0.9.2b (April 6, 1999)|
|0.9.3|Old version, no longer maintained|May 25, 1999|0.9.3a (May 27, 1999)|
|0.9.4|Old version, no longer maintained|August 9, 1999|0.9.4 (August 9, 1999)|
|0.9.5|Old version, no longer maintained|February 28, 2000|0.9.5a (April 1, 2000)|
|0.9.6|Old version, no longer maintained|September 24, 2000|0.9.6m (March 17, 2004)|
|0.9.7|Old version, no longer maintained|December 31, 2002|0.9.7m (February 23, 2007)|
|0.9.8|Old version, no longer maintained|July 5, 2005|0.9.8zh (December 3, 2015)|
|1.0.0|Old version, no longer maintained|March 29, 2010|1.0.0t (December 3, 2015)|
|1.0.1|Old version, no longer maintained|March 14, 2012|1.0.1u (September 22, 2016)|
|1.0.2|Old version, no longer maintained|January 22, 2015|1.0.2u (December 20, 2019)|
|1.1.0|Old version, no longer maintained|August 25, 2016|1.1.0l (September 10, 2019)|
|1.1.1|Current stable version|September 11, 2018|1.1.1d (September 10, 2019)|
|3.0.0|Future release|N/A|N/A|
Algorithms
OpenSSL supports a number of different cryptographic algorithms; a brief usage sketch for one of the listed ciphers follows the list below:
- Ciphers: AES, Blowfish, Camellia, Chacha20, Poly1305, SEED, CAST-128, DES, IDEA, RC2, RC4, RC5, Triple DES, GOST 28147-89, SM4
- Cryptographic hash functions: MD5, MD4, MD2, SHA-1, SHA-2, SHA-3, RIPEMD-160, MDC-2, GOST R 34.11-94, BLAKE2, Whirlpool, SM3
- Public-key cryptography: RSA, DSA, Diffie–Hellman key exchange, Elliptic curve, X25519, Ed25519, X448, Ed448, GOST R 34.10-2001, SM2
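As a compressed sketch of how one of the ciphers listed above is typically driven, the helper below encrypts a buffer with AES-256-CBC through the EVP cipher interface. The EVP_* calls are genuine OpenSSL API; the wrapper function, its name and its minimal error handling are invented for illustration, and the caller is assumed to supply a 32-byte key, a 16-byte IV and an output buffer at least one block (16 bytes) larger than the input.

    #include <openssl/evp.h>

    /* Encrypt inlen bytes from in with AES-256-CBC.
     * Returns the ciphertext length, or -1 on error. */
    int aes256cbc_encrypt(const unsigned char *key, const unsigned char *iv,
                          const unsigned char *in, int inlen, unsigned char *out)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len = 0, outlen = 0;

        if (ctx == NULL)
            return -1;
        if (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) != 1 ||
            EVP_EncryptUpdate(ctx, out, &len, in, inlen) != 1) {
            EVP_CIPHER_CTX_free(ctx);
            return -1;
        }
        outlen = len;
        /* The final call flushes the last block and applies PKCS#7 padding. */
        if (EVP_EncryptFinal_ex(ctx, out + outlen, &len) != 1) {
            EVP_CIPHER_CTX_free(ctx);
            return -1;
        }
        outlen += len;
        EVP_CIPHER_CTX_free(ctx);
        return outlen;
    }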
FIPS 140-2 compliance
As of December 2012, OpenSSL is one of two open source programs involved in validation under the FIPS 140-2 computer security standard by the National Institute of Standards and Technology's (NIST) Cryptographic Module Validation Program (CMVP). (OpenSSL itself is not validated, but a component called the OpenSSL FIPS Object Module, based on OpenSSL, was created to provide many of the same capabilities).
A certificate was first awarded in January 2006 but revoked in July 2006 "when questions were raised about the validated module's interaction with outside software." The certification was reinstated in February 2007.
Licensing
OpenSSL is dual-licensed under the OpenSSL License and the SSLeay License, which means that the terms of both licenses apply. The OpenSSL License is Apache License 1.0 and the SSLeay License bears some similarity to a 4-clause BSD License.
As the OpenSSL License is Apache License 1.0, but not Apache License 2.0, it requires the phrase "this product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit" to appear in advertising material and any redistributions (Sections 3 and 6 of the OpenSSL License). Due to this restriction, the OpenSSL License and the Apache License 1.0 are incompatible with the GPL. Some GPL developers have added an OpenSSL exception to their licenses that specifically permits using OpenSSL with their system. GNU Wget and climm both use such exceptions. Some packages (like Deluge) explicitly modify the GPL license by adding an extra section at the beginning of the license documenting the exception. Other packages use the LGPL-licensed GnuTLS and MPL-licensed NSS, which both perform the same task.
OpenSSL announced in August 2015 that it would require most contributors to sign a Contributor License Agreement (CLA), and that OpenSSL would eventually be relicensed under the terms of Apache License 2.0. This process commenced in March 2017, and was complete in 2018.
Notable vulnerabilities
Timing attacks on RSA Keys
On March 14, 2003, a timing attack on RSA keys was discovered, indicating a vulnerability within OpenSSL versions 0.9.7a and 0.9.6. This vulnerability was assigned the identifier CAN-2003-0147 by the Common Vulnerabilities and Exposures (CVE) project. RSA blinding was not turned on by default by OpenSSL, since it is not easily possible to do so when providing SSL or TLS using OpenSSL.
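For context, blinding can also be switched on explicitly for a given RSA key through the long-standing low-level RSA interface (deprecated in favour of EVP in later releases). The RSA_blinding_on() call is real OpenSSL API; the wrapper around it is only an illustrative sketch, not the project's own mitigation code.

    #include <openssl/rsa.h>

    /* Enable blinding on an already-loaded RSA key so that private-key
     * operations are randomised against timing analysis.  Passing a NULL
     * BN_CTX lets OpenSSL allocate its own temporary context.
     * Returns 1 on success, 0 on error. */
    static int harden_rsa_key(RSA *rsa)
    {
        return RSA_blinding_on(rsa, NULL);
    }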
Denial of Service ASN.1 parsing
OpenSSL 0.9.6k had a bug, discovered on November 4, 2003, in which certain ASN.1 sequences triggered a large number of recursions on Windows machines. Windows could not handle the deep recursion correctly, so an attacker able to send arbitrarily large numbers of ASN.1 sequences could crash OpenSSL as a result.
OCSP stapling vulnerability
When creating a handshake, the client could send an incorrectly formatted ClientHello message, leading OpenSSL to parse past the end of the message. Assigned the identifier CVE-2011-0014 by the CVE project, this affected all OpenSSL versions 0.9.8h to 0.9.8q and OpenSSL 1.0.0 to 1.0.0c. Since the parsing could lead to a read from an incorrect memory address, it was possible for the attacker to cause a denial of service. It was also possible that some applications would expose the contents of parsed OCSP extensions, allowing an attacker to read the contents of memory that came after the ClientHello.
ASN.1 BIO vulnerability
When using Basic Input/Output (BIO) or FILE based functions to read untrusted DER format data, OpenSSL is vulnerable. This vulnerability was discovered on April 19, 2012, and was assigned the CVE identifier CVE-2012-2110. While it does not directly affect the SSL/TLS code of OpenSSL, any application using ASN.1 functions (particularly d2i_X509 and d2i_PKCS12) to read untrusted data was also affected.
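To make the exposure concrete, the fragment below shows the usual pattern such applications relied on: handing an untrusted DER buffer straight to d2i_X509. The d2i_X509 call and its signature are real; the surrounding wrapper is a hedged sketch for illustration only.

    #include <openssl/x509.h>

    /* Parse an X.509 certificate from a DER-encoded buffer.
     * In the affected releases, feeding attacker-controlled data to this
     * kind of call was what exposed applications to CVE-2012-2110. */
    X509 *parse_der_certificate(const unsigned char *der, long der_len)
    {
        const unsigned char *p = der;        /* d2i_* functions advance this pointer */
        return d2i_X509(NULL, &p, der_len);  /* NULL on parse failure */
    }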
SSL, TLS and DTLS Plaintext Recovery Attack
In handling CBC cipher-suites in SSL, TLS, and DTLS, OpenSSL was found vulnerable to a timing attack during the MAC processing. Nadhem Alfardan and Kenny Paterson discovered the problem, and published their findings on February 5, 2013. The vulnerability was assigned the CVE identifier CVE-2013-0169.
Predictable private keys (Debian-specific)
OpenSSL's pseudo-random number generator acquires entropy using complex programming methods. To keep the Valgrind analysis tool from issuing associated warnings, a maintainer of the Debian distribution applied a patch to the Debian's variant of the OpenSSL suite, which inadvertently broke its random number generator by limiting the overall number of private keys it could generate to 32,768. The broken version was included in the Debian release of September 17, 2006 (version 0.9.8c-1), also compromising other Debian-based distributions, for example Ubuntu. Ready-to-use exploits are easily available.
The error was reported by Debian on May 13, 2008. On the Debian 4.0 distribution (etch), these problems were fixed in version 0.9.8c-4etch3, while fixes for the Debian 5.0 distribution (lenny) were provided in version 0.9.8g-9.
Heartbleed
OpenSSL versions 1.0.1 through 1.0.1f had a severe memory handling bug in their implementation of the TLS Heartbeat Extension that could be used to reveal up to 64 KB of the application's memory with every heartbeat (CVE-2014-0160). By reading the memory of the web server, attackers could access sensitive data, including the server's private key. This could allow attackers to decode earlier eavesdropped communications if the encryption protocol used does not ensure perfect forward secrecy. Knowledge of the private key could also allow an attacker to mount a man-in-the-middle attack against any future communications. The vulnerability might also reveal unencrypted parts of other users' sensitive requests and responses, including session cookies and passwords, which might allow attackers to hijack the identity of another user of the service.
At its disclosure on April 7, 2014, around 17% or half a million of the Internet's secure web servers certified by trusted authorities were believed to have been vulnerable to the attack. However, Heartbleed can affect both the server and client.
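A first triage step at the time was simply to check which OpenSSL a system was built against and running with. The sketch below does both; the macros and functions used (OPENSSL_VERSION_TEXT, OPENSSL_VERSION_NUMBER, OpenSSL_version(), SSLeay_version()) are real OpenSSL API, while the hexadecimal bounds in the last preprocessor test are the usual MNNFFPPS encoding of 1.0.1 and 1.0.1g and should be treated as an illustrative assumption to verify against opensslv.h.

    #include <stdio.h>
    #include <openssl/opensslv.h>
    #include <openssl/crypto.h>

    int main(void) {
        /* Version string baked in at compile time, e.g. "OpenSSL 1.0.1f 6 Jan 2014". */
        printf("built against: %s\n", OPENSSL_VERSION_TEXT);

    #if OPENSSL_VERSION_NUMBER >= 0x10100000L
        /* 1.1.0 and later expose the runtime version via OpenSSL_version(). */
        printf("running with : %s\n", OpenSSL_version(OPENSSL_VERSION));
    #else
        /* 1.0.x and earlier use the older SSLeay_version() name. */
        printf("running with : %s\n", SSLeay_version(SSLEAY_VERSION));
    #endif

    #if OPENSSL_VERSION_NUMBER >= 0x1000100fL && OPENSSL_VERSION_NUMBER < 0x1000107fL
        printf("warning: headers fall inside the 1.0.1 to 1.0.1f range affected by Heartbleed\n");
    #endif
        return 0;
    }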
CCS Injection Vulnerability
CCS Injection Vulnerability (CVE-2014-0224) is a security bypass vulnerability that exists in OpenSSL. The vulnerability is due to a weakness in OpenSSL methods used for keying material.
This vulnerability can be exploited through the use of a man-in-the-middle attack, where an attacker may be able to decrypt and modify traffic in transit. A remote unauthenticated attacker could exploit this vulnerability by using a specially crafted handshake to force the use of weak keying material. Successful exploitation could lead to a security bypass condition where an attacker could gain access to potentially sensitive information. The attack can only be performed between a vulnerable client and server.
OpenSSL clients are vulnerable in all versions of OpenSSL before the versions 0.9.8za, 1.0.0m and 1.0.1h. Servers are only known to be vulnerable in OpenSSL 1.0.1 and 1.0.2-beta1. Users of OpenSSL servers earlier than 1.0.1 are advised to upgrade as a precaution.
ClientHello sigalgs DoS
This vulnerability (CVE-2015-0291) allows an attacker to take a certificate, read its contents and modify it so that it crashes a client or server. If a client connects to an OpenSSL 1.0.2 server and renegotiates with an invalid signature algorithms extension, a null-pointer dereference occurs. This can cause a DoS attack against the server.
A Stanford security researcher, David Ramos, had a private exploit and presented it to the OpenSSL team, which then patched the issue.
OpenSSL classified the bug as a high-severity issue, noting version 1.0.2 was found vulnerable.
Key Recovery Attack on Diffie Hellman small subgroups
This vulnerability (CVE-2016-0701) allows an attacker, when some particular circumstances are met, to recover the OpenSSL server's private Diffie–Hellman key. An Adobe Systems security researcher, Antonio Sanso, privately reported the vulnerability.
OpenSSL classified the bug as a high-severity issue, noting only version 1.0.2 was found vulnerable.
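The remedy OpenSSL adopted was essentially to stop reusing ephemeral Diffie–Hellman private keys across handshakes (from 1.0.2f the library enforces this itself). On builds where the behaviour was still optional, server code could request it explicitly as in the hedged sketch below; SSL_CTX_set_options() and SSL_OP_SINGLE_DH_USE are real API, TLS_server_method() is the 1.1.x name (1.0.2 uses SSLv23_server_method()), and the wrapper function is invented for illustration.

    #include <openssl/ssl.h>

    /* Create a TLS server context that generates a fresh ephemeral DH key
     * for every handshake instead of reusing one. */
    SSL_CTX *make_server_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
        if (ctx != NULL)
            SSL_CTX_set_options(ctx, SSL_OP_SINGLE_DH_USE);
        return ctx;
    }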
Forks
In 2009, after frustrations with the original OpenSSL API, Marco Peereboom, an OpenBSD developer at the time, forked the original API by creating Agglomerated SSL (assl), which reuses OpenSSL API under the hood, but provides a much simpler external interface. It has since been deprecated in light of the LibreSSL fork circa 2016.
In April 2014 in the wake of Heartbleed, members of the OpenBSD project forked OpenSSL starting with the 1.0.1g branch, to create a project named LibreSSL. In the first week of pruning the OpenSSL's codebase, more than 90,000 lines of C code had been removed from the fork.
In June 2014, Google announced its own fork of OpenSSL dubbed BoringSSL. Google plans to co-operate with OpenSSL and LibreSSL developers. Google has since developed a new library, Tink, based on BoringSSL.
See also
- Comparison of TLS implementations
- Comparison of cryptography libraries
- POSSE project
About 5,000 armored vehicles 1930-2016
1909 and the fall of the old Sultanate
By the end of the XIXth century, the Ottoman Empire was only a shadow of its former self, and a political movement led by nationalistic young officers influenced by foreign powers, the “Young Turks”, ended the Sultanate in a coup that deposed Abdul Hamid II. The irony in this is that the Sultan had ordered the first armored vehicles of the Turkish Army, four Hotchkiss model 1909, which were seized en route to Istanbul (Constantinople) by the Young Turks and proved very useful in their struggle against loyalists in the Army. However, it would be years before the Turks considered using armored cars in combat (the model 1909 was only ordered to deal with rioters in the capital).
The Balkan War (1912-13)
Another important episode in this troubled period, which was also partly responsible for the regional tensions leading to the assassination of Archduke Franz Ferdinand and the start of ww1, was the pair of conflicts formed by the Italian-Turkish war (1911-1912) and the Balkan war (1912-13). These were, before ww1, the first conflicts in which machine guns, planes and armored vehicles were all actually used in combat.
In this struggle, Greeks and Turks were opposed in Asia Minor, the Aegean sea, and Bulgaria, but also Turks and Italians in Africa. Interestingly enough, it saw the first aerial bombardment, by an Etrich Taube aircraft on November 1st, 1911, and the first engagement of armored cars, the Italian Fiat Arsenale (only 2 made). The Greeks also used some older French Charron model 1903, but the Turks themselves seem never to have used an armored car.
The Hotchkiss model 1909, first Turkish armored vehicle.
The Ottoman Army in ww1
At the start of the "war to end all wars", the Turkish Army, just like the German Army, was organized in a classic fashion with army corps comprising infantry, cavalry and artillery, including faster horse-drawn artillery units. Mobility was quite limited, as the rare cars and motorcycles were only used for liaison and dispatches. Later on an ambulance corps was established, but the use of armored cars was never seriously considered. According to a single (dubious) source, a few German Ehrhardt armored cars were sent to Palestine to support Turkish operations.
Before that, perhaps one of the numerous automobile transport units was escorted by a single (repaired) British armoured car left behind after the evacuation of British forces at Bagbag and Sidi Barrani on November 23, 1915 (Dispatches of Lieutenant-General Sir John Maxwell, June 1916 – though Maxwell himself thought the vehicle was German). The use of these vehicles in action in the Caucasus is also purely conjectural. Nothing more can be added on this topic for the 1920 war of independence.
Skoda TH7 tracked artillery tractor. In the same period, the 1st armored brigade fielded a mix of T-26 tanks and BA-3/6 armored cars.
Armoured vehicles in the Interwar
Turkey was the first country to actually purchase Soviet-built vehicles, ZiS 6×4 trucks, in 1936-37; before that, Büssing-NAG trucks from 1933 and Ford trucks from 1935 constituted the core of its motorized units. Several tracked vehicles of the Praga TH6 and TH7 models were also used as tractors for heavy artillery from 1936-38.
According to the Turkish Wikipedia and the defense magazine "Savunma ve Havacılık" (sy. 2000-05, sa. 82/83), a few Renault FT were purchased in 1928 and given to the Infantry Gunnery School in Maltepe/Istanbul for evaluation and training. By 1932, several T-26 and T-27 tanks were also purchased and eventually, the first armored battalion ("Tank Taburu") was established in 1934 in Luleburgaz, under the command of Major Tahsin Yazici.
Apparently, an order was placed for 60 Soviet T-26 model 1933 tanks (two twin-turreted T-26 mod. 1931 were also presented), which formed the 102nd and 103rd Companies, and 34 (or 60) Soviet BA-6 armored cars, which formed the 2nd Cavalry Division. This was followed by the formation of an armoured brigade in 1937, which eventually became the basis of the first armoured regiment (1. Zirhli Alayi), still at Lüleburgaz in 1940, while 100 Renault tanks (FTs or R35s depending on the source) were ordered from France, as well as 16 Vickers Mark 6B light tanks, added to the already existing Soviet tanks and armored cars.
Turkish tanks and ACs in ww2
Although Turkey remained neutral in ww2, it was courted by both of the neighboring European superpowers of the time, Nazi Germany (then with a solid foothold in Africa and a strong presence in the Balkans) and the USSR. In 1942, the 1st Armoured Regiment was relocated to Davutpasa Barracks in Istanbul. From March to May 1943 the Germans tried to incite Turkey to attack the USSR and, on Hitler's order, delivered 56 Panzer III Ausf.J/H (the model armed with a 50 mm gun) and 15 Panzer IV Ausf.G, but the Turkish authorities did not make any promises. These tanks formed the 6th Tank Regiment (Ankara).
BA-6 on parade at Ankara
In late 1943, two new brigades were founded at Nigde and at Selimiye Barracks (Istanbul), while the 1st was renamed the 3rd Armoured Brigade and equipped with Allied tanks. At that time, the Allies tried to draw Turkey onto their side, allowing it to be re-equipped with 25 M4 Shermans and 220 M3 Stuart tanks from the US, and 180 Valentines, 150 Mk VI light tanks and 60 Bren carriers from the UK. Opening an alternative lend-lease supply road to the USSR was also a strong consideration, as well as containing a possible extension of German (and later Soviet) influence in the region. Eventually, these mixed tanks formed the basis of the 3rd Armored Division in 1944, based in Istanbul (1st and 6th Regiments). This unit comprised T3 and T4 tanks (the renamed Panzer III/IVs) and was allocated a recce battalion with a company of 23 Daimler Dingos.
Turkish armor in the Cold War
Interestingly enough, although Turkey maintained an uneasy neutrality on the edge of the mighty Soviet empire, the Turkish Army also participated under the UN banner in the coalition defending South Korea. The Turkish brigade, 5,000 strong and supported by tanks, suffered 731 combat deaths there, distinguishing itself at the battle of Kunu-ri, where its action was credited with saving the encircled U.S. 2nd Infantry Division, at the battle of Gimnyangjang-ni (Operation Ripper), and at the fourth battle of Seoul (Operation Vegas). Eventually, this alignment led Turkey to officially join NATO on February 18, 1952.
The impetus for standardization led to a radical upgrade of matériel, tactics and equipment, and to reforms. From then on, Turkey would equip itself with US-built tanks and armored vehicles until recent times. This de facto alignment went as far as accepting on its territory the famous batteries of US tactical missiles which were the object of the final bargain that helped save the world from nuclear war under the Kennedy administration, during the Cuban missile crisis.
M46 Patton, 1st USMC, in support of the Turkish brigade, Korea, July 1953.
1974 invasion of Cyprus
The Greek military junta backed a coup organized by EOKA-B and led by Nikos Sampson, which ousted the elected president, Archbishop Makarios III, and also aimed at a future integration of Cyprus into Greece. Turkey invaded Cyprus in July 1974 to reestablish order and the elected president, with two landings in force. The operations comprised mostly infantry, with little armor, but massive air and naval support. A second offensive began in August, so as to add weight to the discussions just started between Greek and Turkish Cypriot authorities in Geneva.
Although the Cypriot government asked for UN backing, only peacekeeping forces were sent, and they failed to prevent a major offensive, this time massively supported by tanks and spearheaded by the 28th and 39th Infantry Divisions, which succeeded in a matter of days. Peace negotiations led to a partition between Turkish Cypriots in the north and Greek Cypriots in the south. Tensions gradually eased after the fall of the military junta in Greece in 1974, but Turkish troops are still maintained on the island today.
The same year, the operations revealed some shortcomings in the Turkish Army that were to be fixed by major reforms. The First, Second and Third Armies were placed under the NATO Headquarters Allied Land Forces Southeastern Europe (LANDSOUTHEAST), which after 1990 was briefly the Joint Command Southeast and eventually became the Allied Air Component Command Izmir in 2004. By the 1980s the Turkish armored forces reached a peak with a total of 3,570 tanks, for one armored, two mechanized and fourteen infantry divisions. By 1990, the army was deployed on the frontier with Iraq but showed some deficiencies in strategic deployment to distant areas. A wide restructuring which began at the end of the 1980s was amplified in the 1990s and continues today.
New tanks: Turning from the USA to Germany
In the early 1980s, Turkey was overwhelmingly equipped with the M60; at one point it was even the type's most prolific user outside the USA. Some were modified and modernized by local industry, and eventually the Israeli-built Sabra was purchased, the most radical upgrade of the type so far. By adopting the Leopard, Turkey added a more modern and potent asset to its armored forces, with 170 A1s and 227 A3s purchased in separate batches in the 1980s. The Leopard 1A1s were already obsolete before the 1990s and a local upgrade was ordered, performed by the local company Aselsan. The upgrade consisted of mounting the Volkan fire control system, which improved night/day/low-visibility capabilities and greatly enhanced first-hit probability. This now concerns the 170 A1s, renamed Leopard 1T Volkan, expected to stay in service until 2020.
In 2005, the Bundeswehr offered surplus Leopard 2A4s on the export market and Turkey was soon interested, ordering no fewer than 354 for its armed forces, where they now constitute the cutting edge of the armoured divisions. The A4 is equipped with the "standard" 120 mm L/44 gun and boxy composite turret armor. Its design had some influence on the development of the Otokar Altay, from 2007 (when the program was started) to 2012, with the start of production.
Tip of the sword: The Altay
Turkey is one of the select club of nations capable of designing and building a main battle tank from the drawing board. So far this club includes the USA, Russia, Ukraine, China, UK, France, Germany, Japan, India, Italy and South Korea. The program was already discussed in the 1990s as a way to gain independence from foreign manufacturers and avoid the license, copyright or export issues sometimes linked to government policies and the regional context. It was officially launched by the Turkish MoD in 2007, with the initial design phase really starting in 2008.
Supervised by the 1st Army Maintenance Center Command, it brings together the collective work of Aselsan for the sub-systems and fire control system, MKEK for the main gun, Otokar as main contractor and final assembler, Roketsan for the composite armor package, and Rotem for technical support and assistance. So far only four pre-production tanks have been operational, used for testing and evaluation since 2012. Orders concern a first batch of 250 scheduled for 2017, with an additional three batches over the next decades for a total of 1,000, to replace the M60s and Leopard 1s. In capabilities and performance, the Altay is so far comparable to any late 3rd generation MBT today.
Known only as the "Next Gen APC", one of the biggest contracts disputed between local manufacturers concerns replacing the M113s and BTR-80s in service with a modern, versatile 8×8 vehicle. Two designs are already competing for the title, the Otokar Arma and the FNSS Pars. The competition has started with extensive comparative trials and the process will end in 2016 with the announcement of the winner. Estimated production will probably be on the order of at least 2,000 vehicles.
The other big contract concerns Turkey's next-generation Infantry Fighting Vehicle (IFV), intended to complement the Altay. It was in fact developed in 2011-2013 by the same team working on the Altay and reached the prototype stage in 2013 as the Tulpar. So far two have been built and are being evaluated. A first batch of 400 has already been provisioned.
FNSS KAPLAN-MT/Harimau, the latest Turkish light tank, capable of 70 kph and armed with a CMI Cockerill 3105 turret with a 105 mm high-pressure gun. It targets the export market, a niche with few real competitors other than wheeled tank destroyers.
Developed from 30 March 2007, with prototype testing scheduled for 2016, it will enter service in 2018 (est.). Production will replace older M48s/M60s, for 1,000 units total. Its technology is partly based on the K2 Black Panther (technology agreement), and it is a joint venture between Otokar (main contractor), Aselsan (sub-systems, FCS), MKEK (main gun), Roketsan (armor), and Rotem (tech support & assistance).
Leopard 2A4T MBT
About 354 Leopard 2s in service. Current upgrade by Aselsan: same level as the Leopard 2NG, with IBD Deisenroth add-on armor packages (like the Singaporean 2SG). It also comprises upgraded optics, overhauled turret mechanics and a new FCS (the same as on the Altay MBT).
Leopard 1A3/1TU Volkan MBT
About 397 Leopard 1 in service, including 227 Leopard 1A3K and about 170 Leopard 1T Volkan modified with a modernized fire control system developed by Aselsan.
M60T Sabra MBT
Namely the latest iteration of the M60 Patton, fully modernized in Israel for export. Its only client was and remains Turkey. 170 are in service, out of 1,646 M60s of all types. See the Cold War section for the M60s and M48s still in service.
(1992) About 2,245 produced in association with DefTech Malaysia. Main Turkish tracked APC/IFV derived from the AIFV family, replacing older M113s still in service. 650 ACV-AIFVs in service, and 1,550 ACV-AAPCs.
(2010) A family of 6×6 and 8×8 wheeled APCs developed by Otokar. For the moment it is only targeted at export; so far Azerbaijan has declared interest. The Arma is competing with the Pars for the major contract to replace the M113 and BTR-80 APC fleet, a process supposed to end in 2016.
(1997) This is the main modern light recce/MRAP vehicle of the Turkish Army, produced to the extent of 1,200 vehicles so far and widely exported.
(2013) IFV. Current programme by the same team responsible for the Altay. A first batch of 400 is expected.
(1994) Light reconnaissance vehicle developed by Otokar. 370 are in service with the Turkish Army; many more have been exported so far.
Local modification and licence production by Otokar; fast attack vehicle based on the Land Rover 110.
Developed in 1998-2000 with South Korea, production started in 2001 with an ongoing order for 350 units.
Co-developed with the USA, based on the M113A3; 150 are in service so far.
Co-developed with the USA and UK, based on the Land Rover Defender; 80 are in service so far.
Cold War Turkish Armour
Here an M60A3 TTS in the 1990s. 1646 in service (104 M60A1, 753 A3 and 619 A3TTS).
Here a modernized M48 T5 in 1985. 1377 in service, 619 modified A5T1 and 758 A5T2.
The main Turkish tracked APC until the 1990s (and still today pending its replacement by the new wheeled APC programme, see above), partly replaced by the FNSS ACV-15 AAPC. 2,813 were in service of several versions by the 1980s.
Main Turkish 8×8 APC. 536 were purchased from Russia and delivered between 1993 and 1999.
Co-produced with Germany. Tactical Ballistic missile launcher, unknown production so far.
Other foreign equipment includes the LAV-150 armoured car (124), MGM-140 ATACMS, TOROS MLRS (co-developed from the Yugoslavian M-87 Orkan), M270 MLRS (12), T-122 Sakarya MLRS (unarmoured, developed from the BM-21 with German parts), the locally designed RA-7040 MRLS, M110 SPG (203 mm, 219), M107 SPG (175 mm, 36), M109 SPG (unknown number), M52 SPG (362), M44 (164), M108 SPG (26), M42A1 Duster (262, to be retired), M88A1 ARV (33), M48A5T5 Tamay (co-developed with the US, 105 in service), the AZMİM local dozer version of the M9 ACE, Leguan AVLB (36), and the SYHK amphibious bridging vehicle developed with Germany. Heavy-duty armored trucks of the US PLS and HEMTT types are also in service (unknown numbers).
The progressive decline in maximal aerobic power and work capacity with ascent to increasing altitudes is well known and documented(22,27). It is also well known that the physiologic adaptations that occur with altitude training and acclimatization will improve physical performance at altitude. Considerable controversy exists, however, as to whether performance at sea level can be enhanced by altitude training. If one uses ˙VO2max as the criterion, most studies indicate no improvement in sea level values when athletes return from a period of altitude training(1,6,8,9,28). Despite this, many coaches and athletes believe that sea level performance in endurance events will benefit from brief periods of training at moderate altitude and offer numerous anecdotal accounts attesting to its effectiveness.
Investigation of the literature on altitude acclimatization reveals that following 10 or more days of exposure to moderate altitude a significant improvement is seen in physical work capacity at altitude(4,22) and a reduction in plasma lactate occurs when subjects exercise at the same power output (22). There is, however, little or no change in ˙VO2max over this time(1,25). This disproportionately large increase in work capacity and apparent decrease in lactate production, despite minimal change in ˙VO2max, suggests that adaptations may have occurred at the muscle level rather than adaptations that may have enhanced oxygen delivery.
We have investigated muscle adaptations to 40 d of extreme hypoxia in a“live-in” hypobaric chamber (11,21) and concluded that hypoxia on its own is not a stimulus for increased mitochondria synthesis or oxidative enzyme activity, although there was a slight increase in muscle capillary density as a result of fiber atrophy. In contrast, Terrados et al. (29) demonstrated that when hypoxia is combined with exercise significantly greater increases occur in oxidative enzyme activity and myoglobin than when the same training is performed in normoxia. This has been substantiated by Bigard et al. with rats(3) and Kaijser et al. (18) with humans, who found greater increases in citrate synthase activity in muscles that were trained under conditions of local ischemia than in muscles trained under normal conditions. The purpose of this study was to determine whether training under normobaric hypoxic conditions (simulating medium level altitude) would enhance physical performance and selected muscle adaptations over and above that which would occur with normoxic training. By using a unilateral training model whereby one leg was trained under normoxic conditions and the other under hypoxic conditions (29), we were able to control for adaptations that may affect performance other than those at the muscle level.
Subjects. Ten healthy male volunteers (19-25 yr) served as subjects. Although they were physically active, none had undergone previous endurance training. All participants were fully informed of the purposes of the study and associated risks as approved by the Human Ethics Committee of McMaster University. Participants received remuneration for participating in the study and training compliance was 100%.
Design. Exercise performance characteristics and biochemical and morphometric properties of the vastus lateralis were assessed for each leg before and after 8 wk of endurance training.
Training consisted of unilateral cycle ergometry so that one leg was trained while subjects breathed an inspirate of 13.5% oxygen (balance nitrogen) and the other leg was trained while breathing normal ambient air. Performance characteristics included measurements of maximum aerobic power(˙VO2max) and maximum aerobic capacity (time to fatigue at 95%˙VO2max) for each leg. Biochemical measurements included assays for oxidative and glycolytic enzyme activity and morphometric measurements included assessments of muscle capillary density, fiber area and% fiber type, and mitochondrial and lipid volume density.
The hypoxic condition was achieved by having subjects breathe out of a 350-l Tissot gasometer which was coupled to an electrically controlled valve which diluted ambient air by bleeding nitrogen into it at a controlled rate. The configuration was such that ambient air was first bubbled through a smaller tank filled with water to be humidified before being mixed with nitrogen and drawn into the Tissot tank by vacuum pump. Subjects breathed the gas mixture through a standard Rudolph valve. The system was capable of delivering the inspirate on line up to a maximum inspired ventilation of approximately 120 l·min-1 (BTPS). The inspirate was continuously monitored by a second oxygen analyzer and a variation of less than 0.2% (FIO2) was maintained throughout all training sessions.
Training program. Subjects trained 3 times per week for 8 wk. For the first 6 wk, each training session consisted of 30 min of continuous unilateral cycling in the normoxic condition and 30 min of cycling by the opposite leg in the hypoxic condition. The order of conditions was alternated each training session and our design was such that, for five subjects, the hypoxically trained leg was randomly designated as the left leg and for the remaining five subjects it was the right leg. Initial training intensity was set at 75% of the lower of the two legs' pre-training maximum power output. This value was increased on an individual basis every 2 wk to provide a progressive overload. The magnitude of this increase depended upon each subject's tolerance ability and amounted to ≈2% every second week. The increased training load was always applied first in the hypoxic condition so that, if subjects were unable to complete the full 30 min, the load could be reduced and the training load for the normoxic condition matched to it. Thus, the same absolute power output was always maintained for the two legs on any given day.
Training for the final 2 wk consisted of a combination of interval and continuous training. In each session and for each leg, subjects performed five 3-min intervals at 100% of the pre-training maximum power output with 3-min recovery periods. This was followed by 10 min of continuous training at the power output reached in the sixth week of training. The interval training was added to simulate the training programs of middle distance athletes who often combine interval training with continuous training as part of their normal preparation for competition. All training sessions were supervised, and inspired FO2 was continuously monitored and held constant during the hypoxic training condition.
Measurements. ˙VO2max was recorded for each leg separately during unilateral cycling on an electrically braked cycle ergometer. The test began at an initial power output of 50 W and the load was increased 15 or 30 W every 2 min until fatigue. Heart rate was continuously recorded by a 3 lead ECG throughout the test, as was oxygen uptake by means of a computerized open circuit system which calculated ˙VO2 on-line every 30 s. The peak ˙VO2 attained was considered to be˙VO2max and the highest power output that was sustained for at least 1 min was considered to be maximum power output.
The maximum aerobic capacity (MAC) test consisted of unilateral cycling to fatigue on the ergometer at a power output which corresponded to that at 95% of pre-trained ˙VO2max for each leg. Fatigue was considered to occur when subjects could no longer maintain a pedaling rate of 60 rpm and test duration was recorded to the nearest 1.0 s.
Needle biopsies were extracted from the vastus lateralis of the right leg before the training period and from both legs after the training period using the Bergström technique (1962) and applying “suction” with a 50-ml syringe. Post-training biopsies were taken 2 or 3 d after the last training sessions. On each occasion two biopsy samples were taken. The first was divided for electron microscopy and histochemistry, and the second was immediately frozen in liquid nitrogen for subsequent biochemical analysis.
For histochemistry, cryostat sections (7 μm) were stained for myofibrillar ATPase activity (following pre-incubation at pH 4.3, 4.6, and 10.0) or with hematoxylin and eosin. The slides were then photographed under a light microscope at 40 × magnification and the photographs projected onto either a 144 square grid for capillary counting or a computerized digitizer for measurement of fiber area. Capillaries were counted from the hematoxylin and eosin stained tissue (for a field of ≈250 fibers) and expressed per square millimeter as well as per fiber. Cross-sectional area was determined for an average of 125 Type I fibers and 125 Type II fibers per biopsy. Percent fiber type was estimated by counting an average of ≈300 fibers per biopsy.
For electron microscopy, tissue was prepared as has been described(21). Serial ultrathin sections were made at a slightly oblique angle to the fibers, stained with uranyl acetate and lead citrate, and mounted on copper/rhodian grids. These sections were photographed at approximately 50,000 × magnification under a Philips EM 301 (Eindhoven, The Netherlands). Where possible, 50 fibers were randomly selected per biopsy, and for each a photographic field for the interior of each fiber was randomly selected and photographed. Stereological analysis was performed on each micrograph by means of a 168 point shortline test system(30) according to the method as described by Hoppeler et al. (15). For each biopsy, volume densities were calculated for myofibrils, interior mitochondria, lipid, and cytoplasm.
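As an illustration of the point-counting principle behind these volume-density estimates, the short sketch below (Python, not part of the original methods) computes each volume density as the fraction of test points hitting a component; the 168-point total follows the text, but the component names and counts shown are hypothetical.

```python
# Minimal point-counting sketch: with a test system of P_total points laid over a
# micrograph, the volume density of a component is estimated as the fraction of
# test points that fall on it (Vv = P_component / P_total), then averaged over
# all micrographs of a biopsy.
def volume_densities(point_counts, total_points=168):
    """point_counts: dict mapping component name -> test points hitting it."""
    return {name: hits / total_points for name, hits in point_counts.items()}

# Hypothetical counts for one interior field (they sum to the 168 grid points).
example = {"myofibrils": 132, "mitochondria": 9, "lipid": 2, "cytoplasm": 25}
print(volume_densities(example))
```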
The second biopsy samples were stored at -80° C until biochemical analysis. Tissue was freeze dried, dissected free of blood and connective tissue, and homogenized in 50% glycerol, 20 mM sodium phosphate buffer (pH = 7.4), 5 mM B-mercaptoethanol, 0.5 mM EDTA, and 0.02% BSA(19). The activities of citrate synthase (CS), succinate dehydrogenase (SDH) and phosphofructokinase (PFK) were determined fluorometrically (12) and expressed as either mmol·g-1 tissue or mmol·hr-1·g-1 tissue.
Data were analyzed with a two-factor (pre-post-training × training condition) analysis of variance. When a significant interaction was found, a post-hoc test (Tukey A) was used to identify significant differences among mean values. Statistical significance was accepted at P ≤ 0.05.
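For readers who want to reproduce this kind of analysis, a minimal sketch in Python/statsmodels is given below. The paper does not state which software was used, and the file name and column names here are assumptions, not part of the original study.

```python
# Sketch of the described analysis (pre/post x normoxic/hypoxic leg, within subjects),
# assuming a long-format table with columns: subject, time, condition, value.
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("cs_activity.csv")  # hypothetical data file

# Two-factor repeated-measures ANOVA (time x training condition)
res = AnovaRM(df, depvar="value", subject="subject",
              within=["time", "condition"]).fit()
print(res.anova_table)

# If the interaction is significant, compare the four cell means with Tukey's test
cells = df["time"] + "/" + df["condition"]
print(pairwise_tukeyhsd(df["value"], cells, alpha=0.05))
```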
Performance measurements. Compared to pre-training values, ˙VO2max was significantly (P < 0.05) higher for both legs following training (Fig. 1). The mean increase was ≈13% in the normoxically-trained leg and ≈11% in the hypoxically-trained leg with no difference between conditions.
Following training, time to fatigue during unilateral cycling at 95% of the pre-training ˙VO2max, markedly increased for both legs (P< 0.05). Subjects were able to maintain exercise ≈400% longer with the normoxically-trained leg and ≈510% longer with the hypoxically-trained leg, but this difference between conditions did not achieve statistical significance (Fig. 2).
Muscle enzyme activities. The activity of CS, SDH, and PFK increased significantly in the muscle of both legs following training(Fig. 3). CS activity was ≈51% higher in the normoxically-trained leg and ≈71% higher in the hypoxically-trained leg compared with pre-training values. The increase in CS activity in the hypoxically-trained leg was also significantly greater than that in the normoxically-trained leg (P < 0.05). SDH activity was ≈35% higher in the normoxically-trained leg and ≈63% higher in the hypoxically-trained leg, but the difference between conditions was not statistically significant. Similarly PFK activity was ≈23% higher in the normoxically-trained leg and 32% higher in the hypoxically-trained leg, but again the difference between training conditions was not statistically significant.
Morphometric measurements. The effects of training on mitochondrial volume density, fiber type and area, and muscle capillarization are summarized in Table 1. Although capillary/fiber ratio, capillary density, and mitochondrial volume density tended to be higher following training and especially in the hypoxically-trained leg, none of these changes achieved statistical significance. Percent fiber type and cross-sectional area of Type I fibers were unaffected by training although there was a tendency for Type II fiber areas to be greater following training.
Additional measurements. Following training there was no change in body mass, pre-exercise hemoglobin concentration increased significantly from 14.7 g% to 15.8 g%, exercise heart rate was significantly lower at the same submaximal power outputs, peak power output was significantly higher for both legs, and peak plasma lactate was significantly higher following the˙VO2max test for each leg.
Since the purpose of this study was to isolate and examine the effects of the combination of exercise training and hypoxia, the subjects breathed the hypoxic mixture only while they were training the designated leg. They were thus only exposed to the hypoxic environment for a total of 90 min·wk-1, and it should be recognized that our study was not intended to simulate a condition in which training is conducted under chronic hypoxia. Our selection of an inspirate with a fractional oxygen concentration of 13.5% was based on a pilot study of different hypoxic mixtures. In that study five healthy young subjects performed progressive and constant load exercise on separate occasions while breathing either normal ambient air or inspirates that ranged from 11.0-14.0% O2 in a randomized and blinded manner. An FIO2 of 13.5% was the lowest that these subjects could tolerate for 30 min while performing cycle exercise at 75% of their maximum normoxic power output. Peak ˙VO2 at this inspirate was ≈83% of their normoxic ˙VO2. An FIO2 of 13.5% results in a PIO2 of ≈103 mm Hg and corresponds to an altitude of ≈3,292 m.
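The altitude equivalence quoted above can be checked with a short back-of-the-envelope calculation. The sketch below is illustrative only: the ≈103 mm Hg figure corresponds to the dry product FIO2 × 760, while a value close to the quoted ≈3,292 m is recovered if one matches humidified (tracheal) PIO2 values and uses the standard-atmosphere pressure model; other atmosphere models will shift the result slightly.

```python
import math

def barometric_pressure(alt_m):
    # International Standard Atmosphere, expressed in mmHg
    return 760.0 * (1.0 - 2.25577e-5 * alt_m) ** 5.25588

def pio2_moist(fio2, pb=760.0):
    # Tracheal PO2 after humidification (47 mmHg water vapour at body temperature)
    return fio2 * (pb - 47.0)

print(round(0.135 * 760.0, 1))   # ~102.6 mmHg: the "dry" PIO2 quoted as ~103 mmHg

target = pio2_moist(0.135)       # ~96 mmHg once humidified
alt = 0.0
while pio2_moist(0.2093, barometric_pressure(alt)) > target:
    alt += 1.0
print(round(alt))                # ~3,290 m, close to the ~3,292 m in the text
```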
Our subjects trained each leg at the same absolute intensity. Since absolute ˙VO2 measurements during training were the same under both conditions, the level of oxidative phosphorylation was probably the same for both legs. Because peak ˙VO2 is reduced under hypoxic conditions our training protocol was such that the hypoxic training represented a higher relative intensity (>90% ˙VO2peak) than the normoxic training(≈78% ˙VO2max). The question thus arises as to whether any differences in adaptive response between the two legs are a result of the hypoxia per se or simply a result of the differences in relative training intensity imposed by the hypoxic condition. While one would normally expect differences in relative training intensity to affect the nature and magnitude of the adaptive response in muscle, such differences are normally accompanied by differences in absolute power output, oxygen uptake, and enzyme kinetics. In the present study, care was taken to ensure that each leg trained for the same duration and at the same absolute power output each training day so that the only difference was the presence or absence of the hypoxic condition. Thus it is valid to attribute any between-leg differences in muscle adaptation to the hypoxia per se.
The relatively large increases in activity of CS, SDH, and PFK indicate that the training program resulted in considerable adaptation at the muscle level in both legs. It is also apparent that the hypoxic condition combined with exercise training resulted in a significant increase in CS activity over and above that which occurred with the same training under normoxic conditions. SDH activity also increased approximately 28% more in the hypoxically-trained leg than in the normoxically-trained leg, but this difference was not statistically significant. Although one might normally expect changes in one oxidative enzyme to parallel changes in another, we have previously noted greater changes in CS activity than in SDH activity in a training study of similar duration and intensity (23). The changes in PFK activity were somewhat surprising and may be related to our inclusion of high intensity interval training in the final weeks of the program.
Although capillary density, capillary/fiber ratio, and mitochondrial volume density tended to be higher following training, the magnitude of these changes was not statistically significant. Again, although one normally expects increased oxidative enzyme activity to be closely coupled with an increase in mitochondrial volume density (13,14), we have previously observed significant increases in CS activity with training in the absence of significant increases in mitochondrial volume density(23). We interpret this greater sensitivity of CS as an oxidative marker as being due to the relatively lower precision of the morphometric technique for quantifying mitochondrial density.
Our finding that ˙VO2max increased to the same extent for each leg indicates that the enhanced enzymatic adaptations in the hypoxically trained leg had little or no effect on ˙VO2max. This result is consistent with the commonly held belief that ˙VO2max is primarily determined by an individual's maximum cardiac output(26), whereas MAC is affected to a greater extent by muscle respiratory capacity (13). We did not measure cardiac output in the present study, but since HR was lower at the same submaximal power outputs following training, it is probable that training-induced increases in stroke volume, coupled with the slight increase in hemoglobin concentration, resulted in increased oxygen delivery to the muscles in the post-trained state.
The more than four- and five-fold increases in MAC which were found for the normoxically- and hypoxically-trained legs, respectively, were considerably greater than expected. Since ˙VO2max increased for both legs, part of this improvement can be attributed to the fact that the same absolute power output (95% of the pre-training ˙VO2max) represented a lower relative intensity following training. In addition, our subjects probably improved their mechanical efficiency for unilateral cycling as a result of the training period since ˙VO2 during the MAC test was lower(P < 0.05) following training. The combined effects of an improvement in ˙VO2max and unilateral cycling economy probably inordinately prolonged fatigue time (in excess of 90 min in some subjects) to the extent that the factors causing fatigue may have been quite different in the two tests. Consequently, the validity of this performance test as a true measure of the improvement in aerobic work capacity is questionable. However, since ˙VO2max increased to the same extent in each leg and each leg did the same absolute amount of training, between-leg comparisons in the post-trained state are justifiable. Time to fatigue was approximately 4 min 7 s longer in the hypoxically-trained leg than in the normoxically-trained leg, but this difference was not statistically significant. We are thus left to conclude that either the changes that occurred in CS concentration had little or no effect on MAC or that the power output that was used in our MAC test was not appropriate for discriminating changes in exercise capacity.
Our data confirm those of Terrados et al. (29) indicating that training under a moderate hypobaric hypoxic condition increases CS activity to a greater extent than does the same amount of training under a normoxic condition. Since even extreme chronic hypobaric hypoxia on its own does not increase oxidative enzyme activity and mitochondrial density (11,21), the results of the present study indicate that the combination of exercise training with a moderate hypoxic environment provides an enhanced stimulus for adaptation and not the hypoxia per se. Theoretically, one might expect these enzymatic changes to also enhance exercise performance in the hypoxically-trained leg (18,29). In this regard, however, our results were inconclusive and may have been obscured by the method that was used to quantify exercise capacity.
In summary, our results indicate that training under moderate normobaric hypoxic conditions results in a greater increase in muscle oxidative enzyme activity than does the same volume of training under normoxic conditions. These changes have little or no effect on ˙VO2max but may enhance aerobic exercise capacity. A training protocol in which subjects only experience hypoxia while they are training (as in the present study) may be superior to actual training at altitude. Although a sojourn at altitude may boost oxygen carrying capacity owing to elevated Hgb, such adaptations may be offset by maladaptations such as reduced maximal Qc(20,24), muscle atrophy(11,16,17,21), or reduced buffer capacity and anaerobic power(2,5,7,11). With a protocol like the one in the present study, the hypoxic exposure time is so brief that these negative effects are probably avoided. The method used to simulate altitude is inexpensive, adaptable to a number of sports, and does not necessitate transportation of athletes to altitude for training.
1. Adams, W. C., E. M. Bernauer, D. B. Dill, and J. B. Bormar, Jr. Effects of equivalent sea level and altitude training on ˙VO2max and running performance. J. Appl. Physiol.
2. Bender, P. R., M. G. Bertron, R. G. McCullough, et al. Decreased exercise muscle lactate release after high altitude acclimatization. J. Appl. Physiol.
3. Bigard, A., A. Brunet, C. Y. Guezennec, and H. Monad. Skeletal muscle changes after endurance training at high altitude. J. Appl. Physiol.
4. Billings, C. R., R. Bason, D. Mathews, and E. Fox. Cost of submaximal and maximal work during chronic exposure at 3,800 m. J. Appl. Physiol.
5. Brooks, G. A., G. E. Butterfield, R. R. Wolfe, et al. Decreased reliance on lactate during exercise after acclimatization to 4,300 m. J. Appl. Physiol.
6. Buskirk, E. R., J. Kollias, R. E. Akers, E. K. Prokop, and E. P. Reatequi. Maximal performances at altitude and on return from altitude in conditioned runners. J. Appl. Physiol.
7. Cerretelli, P., A. Veicsteinas, and C. Marconi. Anaerobic metabolism at high altitude: the lactacid mechanism. In: High Altitude Physiology and Medicine, W. Brendel and R. A. Zink (Eds.). New York: Springer Verlag, pp. 94-102, 1983.
8. Essen, B., E. Jansson, J. Henriksson, A. W. Taylor, and B. Saltin. Metabolic characteristics of fiber types in human skeletal muscle. Acta Physiol. Scand.
9. Faulkner, J. A., J. T. Daniels, and B. Balke. Effects of training at moderate altitude on physical performance capacity. J. Appl. Physiol.
10. Green, H. J. Muscle metabolism in chronic hypoxia. In: Hypoxia, The Tolerable Limits. J. R. Sutton, C. S. Houston, and G. Coates (Eds.). Indianapolis: Benchmark Press, pp. 101-120, 1988.
11. Green, H. J., J. R. Sutton, P. M. Young, A. Cymerman, and C. S. Houston. Operation Everest II: adaptations in human skeletal muscle. J. Appl. Physiol.
12. Henriksson, J., M. M. Chi, C. S. Hintz, et al. Chronic stimulation of mammalian muscle: changes in enzymes of six metabolic pathways. Am. J. Physiol.
13. Holloszy, J. O. and E. F. Coyle. Adaptations of skeletal muscle to endurance training and their metabolic consequences. J. Appl. Physiol.
14. Hoppeler, H. Exercise induced ultrastructural changes in skeletal muscle. Int. J. Sports Med.
15. Hoppeler, H., P. Luthi, H. Claassen, E. R. Weibel, and H. Howald. The ultrastructure of the normal human skeletal muscle: a morphometric analysis on untrained men, women, and well-trained orienteers. Pflugers Arch.
16. Hoppeler, H., E. Kleinert, C. Schlegal, et al. Morphological adaptations of human skeletal muscle to chronic hypoxia. Int. J. Sports Med. 11(Suppl 1):S3-S9, 1990.
17. Howald, H., D. Pette, J. A. Simoneau, A. Uber, H. Hoppeler, and P. Cerretelli. Effects of chronic hypoxia on muscle enzyme activities. Int. J. Sports Med. 11(Suppl 1):S10-S14, 1990.
18. Kaijser, L., C. J. Sundberg, O. Eiken, et al. Muscle oxidative capacity and work performance after training under local leg ischemia. J. Appl. Physiol.
19. Lowry, O. H. and J. V. Passonneau. A Flexible System of Enzymatic Analysis, 1st Ed. New York: Academic Press, 1972.
20. MacDougall, J. D., W. G. Reddan, J. A. Dempsey, and H. Forester. Acute alterations in stroke volume during exercise at 3,100 m altitude. J. Human Erg.
21. MacDougall, J. D., H. J. Green, J. R. Sutton, et al. Operation Everest II: structural adaptations in skeletal muscle in response to extreme simulated altitude. Acta Physiol. Scand.
22. Maher, J. R., L. C. Jones, and L. H. Hartley. Effects of high altitude exposure on submaximal endurance capacity of men. J. Appl. Physiol.
23. Sale, D. G., J. D. MacDougall, I. Jacobs, and S. Garner. Interaction between concurrent strength and endurance training. J. Appl. Physiol.
24. Saltin, B. Aerobic and anaerobic work capacity at an altitude of 2,250 m. In: The International Symposium on the Effects of Altitude on Physical Performance. R. F. Goddart (Ed.). Chicago: The Athletic Institute, pp. 97-102, 1967.
25. Saltin, B., R. F. Grover, C. G. Blomqvist, L. H. Hartley, and R. L. Johnson. Maximum oxygen uptake and cardiac output after 2 weeks at 4,300 m. J. Appl. Physiol.
26. Saltin, B. and L. B. Rowell. Functional adaptations to physical activity and inactivity. Fed. Proc.
27. Stenberg, J., B. Ekblom, and R. Messin. Hemodynamic response to work at simulated altitude. J. Appl. Physiol.
28. Terrados, N., J. Melichna, C. Sylven, E. Jansson, and L. Kaijser. Effects of training at simulated altitude on performance and muscle metabolic capacity in competitive road cyclists. Eur. J. Appl. Physiol. Occup. Physiol.
29. Terrados, N., E. Jansson, C. Sylven, and L. Kaijser. Is hypoxia a stimulus for synthesis of oxidative enzymes and myoglobin? J. Appl. Physiol.
30. Weibel, E. R. Stereological Methods. Vol I: Practical Methods for Biological Morphometry. Toronto: Academic Press, 1979, Chs. 4, 6.
Non-rhinovirus enteroviruses associated with respiratory infections in Peru (2005-2010)
Virology Journal volume 11, Article number: 169 (2014)
Enteroviruses (EVs) are a common cause of respiratory tract infections and are classified into seven species (EVA-D and rhinoviruses [RHVs] A-C) with more than 200 different serotypes. Little is known about the role of non-RHV EVs in respiratory infections in South America. The aim of this study was to describe the epidemiology of non-RHV EVs detected in patients with influenza-like illness enrolled in a passive surveillance network in Peru.
Throat swabs and epidemiological data were collected from participants after obtaining verbal consent. Viral isolation was performed in cell culture and identified by immunofluorescence assay. Serotype identification of EV isolates was performed using commercial monoclonal antibodies. Identification of non-serotypeable isolations was carried out by reverse transcriptase-PCR, followed by sequencing.
Between 2005 and 2010, 24,239 samples were analyzed, and 9,973 (41.1%) possessed at least one respiratory virus. EVs were found in 175 samples (0.7%). Our results revealed a clear predominance of EVB species, 90.9% (159/175). No EVDs were isolated. The mean and median ages of EV-positive subjects were 9.1 and 4.0 years, respectively, much younger than the population sampled, 17.6 and 12.0 years. Sixteen serotypes were identified, four EVA, 11 EVB, and one EVC species. The most common serotypes were coxsackievirus B1, coxsackievirus B2, coxsackievirus B5, and coxsackievirus B3.
This study provides data about the serotypes of EVs circulating in Peru and sets the need for further studies.
Acute respiratory infections (ARIs) are a significant source of morbidity and mortality worldwide and disproportionately affect children, who have an average of two to seven ARIs each year . Enteroviruses (EVs) ―family Picornaviridae, genus Enterovirus―are small, non-enveloped, and possess a single-stranded positive (messenger)-sense RNA genome of ~7.4 kb. Historically, these viruses were characterized by physical features such as stability or lability to acid pH, insensitivity to nonionic detergents, and resistance to ether, chloroform, and alcohol. For the most part, molecular characterization and taxonomic analysis of picornavirus genomes have replaced physical characterization and has led to the current classification scheme [2–4]. EVs are classified into seven species (EVA-D and rhinoviruses [RHVs] A-C), according to their genotypic and antigenic characteristics. Throughout the remainder of this report, EV will refer to only EVA, EVB, EVC, and EVD species and not the RHV species.
EVs are transmitted mainly from person to person by fecal-oral or oral-oral routes and by contact with upper respiratory secretions, fomites, or fluid from blisters [2, 5]. Although most EV infections remain asymptomatic, they are also responsible for a wide range of clinical syndromes, such as aseptic meningitis, encephalitis, myocarditis, acute flaccid paralysis, hand-foot-and-mouth disease, and herpangina [2, 3, 6, 7].
EVs have been isolated during influenza-like illnesses (ILI) surveillance studies that our team and others have conducted [8–13]. During the 2009 pandemic of influenza A virus (pH1N1), coxsackievirus (CV) and echovirus (E) were the most common viral pathogens in pH1N1-negative samples. Although EV serotypes can co-circulate, different predominant serotypes are observed in different regions: for example, in France and Spain, E11 and E6; in Taiwan and China, CVB3 and CVA21; and in Brazil, E11. Epidemiological surveillance provides important information to understand the changing patterns of EV circulation and disease association. Accurate identification of the EV serotype may provide relevant epidemiological information such as the cause of a localized outbreak or the dominant EV circulating each year or it may be used to detect new serotypes or variants. Our current understanding of the role of EVs in respiratory infections in South America is restricted to prevalence data in some surveillance studies of ILI or ARI [8–11, 19], with only one report of specific EV serotypes associated with ARIs. The purpose of this study was to detect, classify, and analyze the epidemiologic characteristics of EVs isolated from participants with ILI in Peru between 2005 and 2010.
During the 6-year study period, throat swabs were collected from 24,239 participants aged 0-100 years (median 12.0 years, mean 17.6 years); 53.7% of the participants were less than 15 years. Among the 11 provinces included in this study, four accounted for more than 71.0% of the samples. These were Piura (23.4%), Loreto (21.0%), Lima (14.1%), and Tumbes (13.4%). Moreover, 33.0% of the specimens were collected in 2009 (year of pH1N1; Table 1).
At least one respiratory virus was detected in 9,973 (41.1%) specimens, with influenza A virus being the most prevalent pathogen (62.3%). EVs were isolated in 175 participants (0.7% of samples taken) aged 0-89 years (median 4.0 years, mean 9.1 years). Rhesus monkey kidney cells (LLCMK2) identified six of the 13 EVA isolates, 152 of the 159 EVB isolates, and all three of the EVC isolates, while African green monkey kidney cells (Vero E6) identified 12 of the 13 EVAs, 115 of the 159 EVBs, and none of the EVCs.
Children under 15 years were the group with highest proportion of EVs detected (144/13,030; 1.1%) compared with the proportion found in the group of subjects ≥15 years (31/11,139; 0.3%), X2 = 53.3, p < 0.001. The male/female ratio for EV infection was 1.3, a ratio (1.1) similar to the study population (X2 = 0.84, p > 0.05). Also, our findings revealed that provinces located on the coast had the highest proportion of EV cases (121/13,369; 0.9%), followed by the samples collected from sites located in the jungle (45/7,787; 0.6%) and the southern highlands (9/3,083; 0.3%), (X2 = 15.1, p < 0.001).
Enterovirus species in Peru
Of the 175 EVs isolated, 13 (7.4%) were EVA species, 159 (90.9%) were EVB species, and the remaining three (1.7%) were EVC species. The three EVC isolates were all related to the poliovirus (PV) Sabin vaccine strain, PV1. EVD species were not isolated in our study. EVA and EVC species were restricted entirely to children between five months and six years of age (median 1.5 years, mean 1.9 years). On the other hand, EVB species were detected from participants 0-89 years of age (median 5.0 years, mean 9.7 years). Table 1 shows the distribution of EV serotypes by age group, collection province, and year. Sixteen different EV serotypes were found: four EVA, 11 EVB, and one EVC serotype. The four main serotypes isolated were CVB1 (48.0%), CVB2 (17.7%), CVB5 (8.6%), and CVB3 (6.9%). EVs such as EV71, CVA4, CVA6, E4, CVB4, CVA9, E30, and PV1 were isolated exclusively in children less than five years. Fifteen of the 17 serotypes identified in this study were found in Piura, four of which were isolated only there. Moreover, Loreto and Tumbes had six and five serotypes, respectively.
Other respiratory viruses were co-detected with EV in 42 (24.0%) samples, 40 samples with one virus (influenza A virus [n = 16], herpes simplex virus, [n = 11], adenovirus [n = 9], parainfluenza virus 1 [n = 2], influenza B virus [n = 1], or respiratory syncytial virus [n = 1]) and two samples with two viruses (influenza A virus/herpes simplex virus [n = 1] and influenza B virus/human metapneumovirus [n = 1]). At the time of study enrollment, the EV cases had a median duration of illness of two days. Besides the inclusion criteria symptoms ― fever (96.0%), cough (83.1%), and sore throat (68.8%) ― participants presented with other common symptoms such as malaise (81.5%), rhinorrhea (72.7%), and headache (57.8%). No difference was found between EVA and EVB cases with respect to duration of illness, median axillary temperature, and symptoms.
Phylogenetic analyses of EV in Peru
Samples that were assigned the same serotype by BLAST analysis clustered with their homologous prototype strain, confirming serotype designation and supported by high bootstrap values. In addition, to investigate the genetic relationship among CVB1 strains (the predominant serotype in our study), we performed phylogenetic analysis, including sequences from the GenBank database of different geographical regions and year of isolation. Our CVB1 isolates clustered together (nucleotide identities 99.6 – 99.9%) and were phylogenetically close to isolates circulating in Spain during 2008 and less related to Asian strains (Figure 1).
This study represents a retrospective analysis of the different EV serotypes detected in ILI cases in Peru over a 6-year period and complements the sparse existing data for the country [8–10, 13]. Our findings indicate that EVs were isolated more frequently in children younger than 5 years, similar to other studies that also examined EVs in older children or adults, including investigations from Spain, France, and Peru [7, 13, 15, 16]. Furthermore, the predominance of EVB in our study--representing 90% of all EV isolates--is in concordance with other studies that have shown EVB species as the main EV associated with ARIs [15, 17, 19].
Three PV1 isolates were also observed in this study, all related to the Sabin vaccine strain. Although wild-type PV infection has been eliminated from South America and much of the world, vaccine-derived polioviruses still may have public health implications. These viruses may cause polio outbreaks in areas with low vaccine coverage, can replicate for years in persons who are immunodeficient, and may rarely cause paralysis in those with no known immunodeficiency.
Our regional results revealed that the coastal region, in particular Piura, had the highest EV isolation rate. Also, this province provided most of the isolations (Table 1), with the majority occurring from November through March. These findings may have been influenced by climatic conditions, which are notable for a rainy season during the summer (December – March). Others have noted a similar correlation with either the summer or rainy season [2, 4].
Our CVB1 isolates were closely related to one another, suggesting that a single strain was responsible for the cases of CVB1 that occurred throughout Peru from 2005 – 2010. Phylogenetically, our CVB1 strains possessed similarity to an isolate from Australia detected in 2005 (found in GenBank with no reference cited) and an even closer relation to isolates circulating in Spain that were isolated in a hand-foot-and-mouth disease study during 2008. Asian strains, in particular from South Korea and China, seemed to be less related to our isolates.
A limitation of our study was the use of cell culture for initial EV identification. No cell line is capable of supporting the growth of all EVs, and the use of different cells is recommended for EV isolation [2, 21, 22]; however, there is no consensus about which one(s) should be used. Some investigators use at least four cell lines, and we employed Madin-Darby canine kidney cells (MDCK), LLCMK2, and Vero cells for routine isolation of respiratory viruses in our laboratory. However, detection and identification of EVs only occurred in LLCMK2 and Vero cells, most likely due to their established better sensitivity for EVs compared with MDCK [2, 21]. Although direct comparisons between LLCMK2 and Vero cells for EV isolation are uncommon, a few studies indicate that Vero cells support the growth of EVA well and LLCMK2 cells support the growth of EVB and EVC well, similar to our study [21, 23, 24]. Utilization of other cell lines—such as Buffalo green monkey kidney cells and continuous human diploid fibroblasts—may have further increased our sensitivity in detecting EVs, in particular those that may not have grown well on the cell lines used in our study.
Initial use of PCR directly on the specimens would have probably increased the detection of EVs, as shown in an aforementioned study in Peru that detected EVs in 3% of respiratory samples. It would have also allowed a better comparison between the different cell culture types. When used in the healthcare setting, PCR may better elucidate the role and frequency of EVs in central nervous system infections and hand-foot-and-mouth disease.
Another limitation was the passive surveillance nature of the study. Although this allowed us to collect a large number of samples, we were not able to capture mild (i.e., not sick enough to go to a medical clinic) or severe (i.e., admitted directly to the hospital) cases. Also, we evaluated our subjects at just one point in time, which made it impossible to assess the duration or severity of the disease after a subject left the clinic. In summary, our data reveal that EVs are commonly recovered from Peruvian children with ILI. Moreover, our findings show the concomitant circulation of distinct EV species in four provinces (Figure 2). We hope that these results will stimulate further research of EV and ILI. Such studies will provide justification for development of diagnostic and treatment options for a virus that may account for a large fraction of ARIs, both within Peru and throughout the world.
The study protocol (NMRCD.2002.0019) was classified as less than minimal risk and was approved by the institutional review board (IRB) of the U.S. Naval Medical Research Center in Silver Spring, Maryland. Local government approval was obtained to conduct the study. This study was part of a respiratory virus surveillance network conducted by the Peruvian MoH in collaboration with NAMRU-6. Verbal consent was obtained from all participants using an information sheet approved and stamped by the NMRC IRB.
Specimen and data collection
ILI was defined as an axillary temperature of ≥37.5°C and cough and/or sore throat. Subjects of any age were allowed to participate. Samples collected from January 2005 through December 2010 were included in this study. Throat swab specimens were obtained by trained personnel using flocked swabs and placed immediately into a 3 ml viral transport media tube (UTM Diagnostic Hybrids; USA). Each specimen was stored at -70°C on site until delivery on dry ice to NAMRU-6 for laboratory analysis. Samples analyzed in this study were collected in health facilities located in 11 provinces divided into three regions: coast, southern highlands, and jungle (Figure 2, Table 1).
Cell culture isolation and virus identification
All samples were inoculated onto three cell lines obtained from the American Type Culture Collection (ATCC®): Madin-Darby canine kidney cells (MDCK; CCL-34TM), African green monkey kidney cells (Vero E6; CRL-1586TM), and Rhesus monkey kidney cells (LLCMK2; CCL-7TM). Either after 10 days of inoculation or once cytopathic effect was observed, the presence of respiratory viral antigens was tested in all samples by immunofluorescence using commercial monoclonal antibodies (Diagnostic Hybrids; USA). Our immunofluorescence assay (IFA) used a blend of monoclonal antibodies for EV (Diagnostics Hybrids; USA) and specific monoclonal antibodies for 18 serotypes (Millipore; USA): PV1-3; CVA9, CVA16, and CVA24; CVB1-6; E4, E6, E9, E11, E30, and EV71. All assays were performed following the manufacturers’ established protocols. Besides EVs, viral antigens for adenovirus, influenza A virus, influenza B virus, parainfluenza viruses 1-3, respiratory syncytial virus, herpes simplex virus, and human metapneumovirus were tested by IFA (Diagnostics Hybrids; USA).
RNA extraction, PCR, and sequencing
EVs detected with the blend of monoclonal antibodies and with negative results for specific group and/or serotype testing were selected for classification by molecular methods. RNA was extracted from cell culture supernatant using the Viral RNA Mini Kit (QIAamp, Qiagen; USA), according to the manufacturer’s recommendations. For EV typing, a semi-nested PCR that amplified part of the viral protein 1 (VP1) gene was carried out using a previously described method. For direct sequencing, gene fragments were amplified and sequenced using the Big Dye terminator cycle sequencing kit version 3.1 (Applied Biosystems; USA) on a 3130XL DNA Sequencer (Applied Biosystems; USA). Nucleotide sequences of PCR products were analyzed using Sequencher 4.8 software (Applied Biosystems; USA) and BioEdit software version 7.0.0 (Isis Pharmaceuticals Inc.; USA). PVs isolated in this study were further analyzed at the Centers for Disease Control and Prevention (Atlanta, Georgia) by intratypic differentiation real-time PCR and vaccine derived poliovirus real-time PCR.
To determine the serotype, the VP1 sequences obtained were compared pairwise with sequences reported in the GenBank database through the BLAST search system. According to the results of this comparison, the EV detected in a sample was assigned to a serotype if it shared ≥75% nucleotide or ≥88% amino acid sequence identity. Moreover, to illustrate the relationship between sequences from our most common EV serotype, CVB1, and CVB1 isolates from other locations in the world, we performed phylogenetic analysis on a partial 234-nucleotide segment of the VP1 gene. Multiple sequence alignments were performed with the Clustal program in the MacVector software package (MacVector Inc.; USA). Genetic distances were calculated using the Kimura 2-parameter model of nucleotide substitution, and the reliability of the phylogenies was estimated by bootstrap analysis with 1,000 replicates. Phylogenetic trees were reconstructed by the neighbor-joining algorithm, using the MEGA 5.05 software.
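For illustration, the identity-threshold rule above can be expressed as a short decision function. This Python sketch is not part of the study's pipeline, and the input values in the example are hypothetical.

```python
def assign_serotype(nt_identity: float, aa_identity: float, best_match_serotype: str):
    """Assign the serotype of the closest GenBank match if the VP1 sequence
    shares >= 75% nucleotide or >= 88% amino acid identity with it."""
    if nt_identity >= 0.75 or aa_identity >= 0.88:
        return best_match_serotype
    return None  # untypeable by this criterion

# Example: 81% nucleotide / 93% amino acid identity to a CVB1 reference.
print(assign_serotype(0.81, 0.93, "CVB1"))  # CVB1
```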
Eleven partial VP1 genome sequences of CVB1 obtained in this study have been deposited in the GenBank database under accession numbers: KF962544 – KF962554.
All the data from forms and laboratory results were entered using Microsoft Office Access 2003. Proportions were compared using the Pearson chi-square test. A two-tailed critical value of alpha = 0.05 was used for statistical analysis using the SPSS Statistics software version 17.0 (SPSS Inc; USA). Although there was an overlap of four months between this and a prior study from our group, no sample was used in both studies.
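As a hedged illustration of this kind of comparison, the sketch below runs a chi-square test on a made-up 2x2 table of detection counts; the numbers are invented for the example and are not data from this study.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: EV-positive vs EV-negative samples in two regions.
table = [[40, 360],   # region A
         [25, 375]]   # region B

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
# With a two-tailed alpha of 0.05, p < 0.05 would be considered significant.
```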
Monto AS: Epidemiology of viral respiratory infections. Am J Med 2002,112(Suppl 6A):4S-12S.
Romero JR: Enteroviruses and Parechoviruses. In Manual of Clinical Microbiology. Edited by: Murray PR, Baron EJ, Jorgensen MA, Pfaller MA, Landry ML. Washington D.C: ASM press; 2007:1392-1404.
Minor PD, Muir P: Enteroviruses. In Principles and Practice of Clinical Virology. 6th edition. Edited by: Zuckerman AJ, Banatvala JE, Schoub BD, Griffiths PD, Mortimer P. Oxford: Wiley-Blackwell; 2009:601-624.
Modlin J: Introduction to the Enteroviruses and Parechoviruses. In Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases. 7th edition. Edited by: Mandell G, Bennet J, Dolin R. Philadelphia: Elsevier; 2010:2337.
Wikswo ME, Khetsuriani N, Fowlkes AL, Zheng X, Penaranda S, Verma N, Shulman ST, Sircar K, Robinson CC, Schmidt T, Schnurr D, Oberste MS: Increased activity of Coxsackievirus B1 strains associated with severe disease among young infants in the United States, 2007-2008. Clin Infect Dis 2009, 49: e44-e51. 10.1086/605090
Park K, Lee B, Baek K, Cheon D, Yeo S, Park J, Soh J, Cheon H, Yoon K, Choi Y: Enteroviruses isolated from herpangina and hand-foot-and-mouth disease in Korean children. Virol J 2012, 9: 205. 10.1186/1743-422X-9-205
Bracho MA, Gonzalez-Candelas F, Valero A, Cordoba J, Salazar A: Enterovirus co-infections and onychomadesis after hand, foot, and mouth disease, Spain, 2008. Emerg Infect Dis 2011, 17: 2223-2231. 10.3201/eid1712.110395
Laguna-Torres VA, Gomez J, Ocana V, Aguilar P, Saldarriaga T, Chavez E, Perez J, Zamalloa H, Forshey B, Paz I, Gomez E, Ore R, Chauca G, Ortiz E, Villaran M, Vilcarromero S, Rocha C, Chincha O, Jimenez G, Villanueva M, Pozo E, Aspajo J, Kochel T: Influenza-like illness sentinel surveillance in Peru. PLoS One 2009, 4: e6118. 10.1371/journal.pone.0006118
Laguna-Torres VA, Sanchez-Largaespada JF, Lorenzana I, Forshey B, Aguilar P, Jimenez M, Parrales E, Rodriguez F, Garcia J, Jimenez I, Rivera M, Perez J, Sovero M, Rios J, Gamero ME, Halsey ES, Kochel TJ: Influenza and other respiratory viruses in three Central American countries. Influenza Other Respir Viruses 2011, 5: 123-134. 10.1111/j.1750-2659.2010.00182.x
Laguna-Torres VA, Gomez J, Aguilar PV, Ampuero JS, Munayco C, Ocana V, Perez J, Gamero ME, Arrasco JC, Paz I, Chavez E, Cruz R, Chavez J, Mendocilla S, Gomez E, Antigoni J, Gonzalez S, Tejada C, Chowell G, Kochel TJ: Changes in the viral distribution pattern after the appearance of the novel influenza A H1N1 (pH1N1) virus in influenza-like illness patients in Peru. PLoS One 2010, 5: e11719. 10.1371/journal.pone.0011719
Douce RW, Aleman W, Chicaiza-Ayala W, Madrid C, Sovero M, Delgado F, Rodas M, Ampuero J, Chauca G, Perez J, Garcia J, Kochel T, Halsey ES, Laguna-Torres VA: Sentinel surveillance of influenza-like-illness in two cities of the tropical country of Ecuador: 2006-2010. PLoS One 2011, 6: e22206. 10.1371/journal.pone.0022206
Tokarz R, Kapoor V, Wu W, Lurio J, Jain K, Mostashari F, Briese T, Lipkin WI: Longitudinal molecular microbial analysis of influenza-like illness in New York City, May 2009 through May 2010. Virol J 2011, 8: 288. 10.1186/1743-422X-8-288
Garcia J, Espejo V, Nelson M, Sovero M, Villaran MV, Gomez J, Barrantes M, Sanchez F, Comach G, Arango AE, Aguayo N, de Rivera IL, Chicaiza W, Jimenez M, Aleman W, Rodriguez F, Gonzales MS, Kochel TJ, Halsey ES: Human rhinoviruses and enteroviruses in influenza-like illness in Latin America. Virol J 2013, 10: 305. 10.1186/1743-422X-10-305
Koon K, Sanders CM, Green J, Malone L, White H, Zayas D, Miller R, Lu S, Han J: Co-detection of pandemic (H1N1) 2009 virus and other respiratory pathogens. Emerg Infect Dis 2010, 16: 1976-1978. 10.3201/eid1612.091697
Jacques J, Moret H, Minette D, Leveque N, Jovenin N, Deslee G, Lebargy F, Motte J, Andreoletti L: Epidemiological, molecular, and clinical features of enterovirus respiratory infections in French children between 1999 and 2005. J Clin Microbiol 2008, 46: 206-213. 10.1128/JCM.01414-07
Trallero G, Avellon A, Otero A, De Miguel T, Perez C, Rabella N, Rubio G, Echevarria JE, Cabrerizo M: Enteroviruses in Spain over the decade 1998-2007: virological and epidemiological studies. J Clin Virol 2010, 47: 170-176. 10.1016/j.jcv.2009.11.013
Lo CW, Wu KG, Lin MC, Chen CJ, Ho DM, Tang RB, Chan YJ: Application of a molecular method for the classification of human enteroviruses and its correlation with clinical manifestations. J Microbiol Immunol Infect 2010, 43: 354-359. 10.1016/S1684-1182(10)60056-4
Xiang Z, Gonzalez R, Wang Z, Ren L, Xiao Y, Li J, Li Y, Vernet G, Paranhos-Baccala G, Jin Q, Wang J: Coxsackievirus A21, enterovirus 68, and acute respiratory tract infection, China. Emerg Infect Dis 2012, 18: 821-824. 10.3201/eid1805.111376
Portes SA, Da Silva EE, Siqueira MM, De Filippis AM, Krawczuk MM, Nascimento JP: Enteroviruses isolated from patients with acute respiratory infections during seven years in Rio de Janeiro (1985-1991). Rev Inst Med Trop Sao Paulo 1998, 40: 337-342.
Centers for Disease Control and Prevention (CDC): Update on vaccine-derived polioviruses--worldwide, April 2011-June 2012. MMWR Morb Mortal Wkly Rep 2012, 61: 741-746.
She RC, Crist G, Billetdeaux E, Langer J, Petti CA: Comparison of multiple shell vial cell lines for isolation of enteroviruses: a national perspective. J Clin Virol 2006, 37: 151-155. 10.1016/j.jcv.2006.06.009
Terletskaia-Ladwig E, Meier S, Hahn R, Leinmuller M, Schneider F, Enders M: A convenient rapid culture assay for the detection of enteroviruses in clinical samples: comparison with conventional cell culture and RT-PCR. J Med Microbiol 2008, 57: 1000-1006. 10.1099/jmm.0.47799-0
Mizuta K, Abiko C, Goto H, Murata T, Murayama S: Enterovirus isolation from children with acute respiratory infections and presumptive identification by a modified microplate method. Int J Infect Dis 2003, 7: 138-142. 10.1016/S1201-9712(03)90010-5
Mizuta K, Abiko C, Aoki Y, Suto A, Hoshina H, Itagaki T, Katsushima N, Matsuzaki Y, Hongo S, Noda M, Kimura H, Ootani K: Analysis of monthly isolation of respiratory viruses from children by cell culture using a microplate method: a two-year study from 2004 to 2005 in yamagata, Japan. Jpn J Infect Dis 2008, 61: 196-201.
Chonmaitree T, Ford C, Sanders C, Lucia HL: Comparison of cell cultures for rapid isolation of enteroviruses. J Clin Microbiol 1988, 26: 2576-2580.
Iturriza-Gomara M, Megson B, Gray J: Molecular detection and characterization of human enteroviruses directly from clinical samples using RT-PCR and DNA sequencing. J Med Virol 2006, 78: 243-253. 10.1002/jmv.20533
Oberste MS, Maher K, Kilpatrick DR, Flemister MR, Brown BA, Pallansch MA: Typing of human enteroviruses by partial sequencing of VP1. J Clin Microbiol 1999, 37: 1288-1293.
We would like to express our gratitude to the physicians at the sentinel centers supporting this study: Silvia Macedo (Tumbes), Monica Cadenas (Puerto Maldonado, Madre de Dios), Stalin Vilcarromero (Iquitos, Loreto), and Julio Custodio (Cusco). In addition, we thank Jane Rios, Maria Esther Gamero, and Patricia Galvan from the NAMRU-6 Virology Department for laboratory technical support. We would also like to thank Ms. Milagros Cifuentes for translation and editorial assistance of the manuscript and Vidal Felices for his assistance reviewing the manuscript. Finally, we thank W. Allan Nix and Carla Burns from the Centers for Disease Control and Prevention Polio and Picornavirus Laboratory Branch for their laboratory assistance and overall insight.
Eric S. Halsey is a military service member and Jose L. Huaman, Gladys Carrion, Julia S. Ampuero, and V. Alberto Laguna-Torres are employees of the U.S. Government. This work was prepared as part of their official duties. Title 17 U.S.C. §105 provides that ‘Copyright protection under this title is not available for any work of the United States Government.’ Title 17 U.S.C. §101 defines a U.S. Government work as a work prepared by a military service member or employee of the U.S. Government as part of that person’s official duties.
The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the Department of the Navy, Department of Defense, U.S. Government, nor the Ministry of Health of Peru.
This work was supported by the Armed Forces Health Surveillance Center’s Global Emerging Infections Systems Research Program, work unit number 847705.82000.25GB.B0016.
None of the authors has a financial or personal conflict of interest related to this study. The corresponding author had full access to all data in the study and final responsibility for the decision to submit this publication.
JLH, JSA, and ESH contributed in the design and conception of the study. JLH, GC, and JSA performed the analysis of the data. JLH, JSA, VALT, and ESH performed the interpretation of data. JLH drafted and submitted the manuscript. JLH, GC, JG, VO, IP, EG, EC, FS, and EP contributed with the acquisition of data. All authors contributed with the critical revision and final approval of manuscript.
About this article
Cite this article
Huaman, J.L., Carrion, G., Ampuero, J.S. et al. Non-rhinovirus enteroviruses associated with respiratory infections in Peru (2005-2010). Virol J 11, 169 (2014) doi:10.1186/1743-422X-11-169
For hundreds, if not thousands of years, Ireland was a country ruled by tribes. Historical records mention ancient tribes that stretch so far back it’s uncertain whether they were mythological or real (or a combination of both). When the Celts reigned they had the country unofficially divided into various kingdoms that were ruled by various alliances of tribes. These changed regularly with the many wars and battles that the Celts undertook, until the introduction of Christianity dispersed this somewhat. Things were further disrupted when Norse, Scottish and English settlers arrived to claim Irish territory as their own.
Eventually, the country morphed into the four provinces of Ulster, Munster, Leinster, and Connacht that we still have today. These were ruled by the British monarchy until the struggle for independence in the early 20th century.
From the 13th to the 19th centuries, a new band of tribes came to the fore, although not like before. They were the 14 tribes (or powerful merchant families, to be more specific) of Galway, who dominated political, social and commercial affairs in the city and much of the surrounding region during that time.
A Brief History of the Galway Tribes
The 14 families of the Galway tribes came from varying backgrounds including Irish, Norse, French, English, Welsh, and various combinations of some or all of the above. After the English conquest of Ireland, the families gained power and influence through extensive trading with continental Europe (particularly Spain). Essentially, they became the de facto rulers of the city.
During medieval times Galway was a thriving trading port. It was considered almost as important as Dublin and equally as significant as the other cities in the country at the time. Although they distanced themselves from the natives living in the land surrounding the city, both groups banded together during one of the first of many rebellions against British rule in the Irish Confederate Wars from 1641 – 1653. However, it wasn’t to last.
In 1649 the notorious British military leader Oliver Cromwell arrived in Dublin. He made his way across the country suppressing all hints of rebellion (and a lot more besides) as he went. His forces laid siege to Galway for almost a year, starting in 1651. When the city surrendered in 1652, Cromwell confiscated all property belonging to the Tribes. In 1654, their influence further waned when English parliamentarians took over the Galway Corporation. Cromwell was the man who first called the families the ‘Tribes of Galway’ as a derogatory name. The families countered this by proudly adopting it for themselves.
After the Cromwell era, the Tribes briefly enjoyed a return to power of sorts under the reign of King Charles II and his successor, James II. The city suffered another defeat in the War of the Two Kings in 1691. This time they never truly bounced back. Their power was gradually transferred to the Protestant population of the city. By the 19th century, the once-great tribes were all but gone from the city.
The 14 families
The 14 families, or ‘Tribes’, were diverse in many ways, not just their backgrounds. Here is a brief summary of each, how they achieved their success and what they did with it.
Athy: The Athy family was of Anglo-Norman descent, rising to prominence under Gerard de Athee, a Norman knight who fought for Richard the Lionhearted, King of England. His descendants migrated to Ireland in the early 1300s, already very wealthy.
The name changed from Athee to Athy. This family was credited with erecting Galway’s first stone building. They went on to build several castles and great houses and survived in Galway until the mid-20th century. The surname is no longer very common in Ireland, and ‘Athy’ is better known as a town in County Kildare (which funnily enough has no connection to the family!).
Blake: The Blake family of Galway descended from Richard Caddell, who was of British extraction and involved in the Norman invasion of Ireland. He gave his successors the title of Blake, meaning ‘dark-haired’, and made a name for himself as sheriff of Connaught.
His successors went on to hold many important seats in the region, building their primary seat at Menlo near Galway. They were considered to be one of the most powerful of the Galway Tribes, and the Blake name is still very common in the city and surrounding areas.
Bodkin: Not immediately a name considered to be ‘Irish’, the Bodkins’ ancestor was in fact Maurice Fitzgerald, Lord of Windsor and one of the first invaders of Ireland under Strongbow. His son and subsequent generations rose to power in Munster through land ownership, and eventually spread their influence to Connaught.
The fourth generation of this line earned the name Bodkin due to his prowess in battle with a short spear called a Baudekin. The Bodkins then allied with the Athys through marriage, further cementing their status as one of the 14 Tribes.
Browne: The original Browne of Galway was yet another member of Strongbow’s invasion in the 1100s. He was appointed Governor of Wexford and laid siege to Limerick with an army of 60 men. He had three sons, one of whom settled in Galway and started off the Browne line there.
Another version of the ancestry states that a branch of the family settled in Brownstown, near Loughrea, and subsequently expanded to Athenry and Galway. The Brownes were influential across Mayo and Galway, and the name is still prominent today.
D’Arcy: The D’Arcy family was thought to have descended from a powerful French family of Charlemagne’s time, who named themselves after their seat 30 miles from Paris, Castle D’Arcie. A member of this family, Richard, travelled to England with William the Conqueror and was appointed to powerful positions in Ireland in the 14th century.
However, recent DNA evidence has shown that the D’Arcys are in fact ancient Irish.
Deane: The origins of the Deane Tribe are somewhat ambiguous. Some sources say they are descendants of William Allen, who came to Ireland from Bristol during the reign of Henry VI. Allen was later elected Provost.
There are also records of the Deanes having Gaelic origins, specifically the Mac an Deaganaigh or O Deaghain names, both of which mean ‘son of the deacon’. Either way, the Deanes gained high status by their involvement in politics and had a long history of holding official positions such as mayors and chief magistrates of Galway city.
Ffont: The Ffont family is one of the lesser-known Galway Tribes, and since the last surviving Ffont died in 1814 (at the ripe old age of 105), their history seems to have been blurred or lost.
It is known that they settled in Galway at the beginning of the 15th century and that they originated from an ancient English family in Leicestershire. The first significant branch of the family settled in Athenry and eventually made their way to Galway. They most likely became powerful largely through their connections to the other Tribes.
Ffrench: The Ffrench family is another tribe with Norman origins. The first known Ffrench was Maximilian, whose descendants went to England to serve William the Conqueror.
When they arrived in Ireland they initially settled in county Wexford, and gradually spread out across the country. Walter Ffrench was the first of the family to settle in Galway around the year 1425. Although somewhat of a rarity nowadays, there are still small clusters of Ffrench families in the area, including some of the original line who still hold their seat at Castle Ffrench near Ballinasloe.
Joyce: Joyce is still one of the most common names in the west of Ireland, and the original Joyce tribe once owned so much land in the region that it was known as ‘Joyce country’. The origins of the family are Welsh and British, starting with Thomas Joyes who sailed to Ireland under the reign of King Edward I.
Arriving in Munster, he affirmed his power to the natives by marrying Onorah O’Brien, daughter of the King of Munster. Next, he sailed to Connaught, claiming territory as he went. The family later became known in the church, with some of them becoming archbishops and cardinals.
Kirwan: The Kirwan tribe is the oldest of all the 14, and the only proven 100% Irish Gaelic tribe too. They have successfully traced their ancestors all the way back to one of the original Gaels to inhabit Ireland, Milesius.
They appear to have first settled in Galway during Henry VI’s time, although it’s very possible that they were already there long before just under a different variation of the name. They were one of the most respected of the Tribes given their long lineage and consistent success in all areas.
Lynch: The Lynch family was by far the most powerful of the Galway Tribes. Over the course of 169 years, a staggering 84 Lynches held the office of Mayor of Galway. They effectively had a monopoly on the politics of the city and were highly regarded amongst everyone, including the rest of the tribes.
The original Lynch ancestor was John de Lynch, whose grandfather William le Petit was an associate of the well known and powerful Sir Hugh de Lacy. There are still a small number of Lynch noblemen today, including a branch who have settled and become involved in the politics of Bordeaux.
Martin: The Martin family is another whose origins are somewhat vague. Oliver Martin is said to have been the first of the name to settle in Ireland, arriving with Strongbow.
The name was derived from ‘Martius’, meaning ‘warlike’. Other theories claim that Martins were descendants of the ancient Firbolg tribe, one of the very first human arrivals on the island. Either way, the Martins proved to be very lucrative traders and were soon one of the most prosperous of the Tribes.
Morris: The Morris family were not particularly noteworthy when compared with the other Tribes of Galway, but were nonetheless extremely successful and prosperous. They first settled in Galway in 1485 with the name Mares, which later transformed into Morech and finally Morris.
They were heavily involved in running the city’s affairs, regularly winning the titles of mayor and sheriff of the city. They held influence both in Galway city and in the nearby settlement of Spiddal, where their rural seat was located.
Skerritt: The Skerritt name has been closely associated with Galway more or less since historical records began. Their name was originally Huscared, a name of English origin, and the family was granted lands in Connaught by Richard de Burgo in the 13th century.
By the time their name had morphed into Skerritt, they had built up a reputation for themselves as distinguished provosts. The Skerritt name is still common around Galway, particularly in the rural areas.
Did you find this article interesting, or do you have anything you’d like to add?
Let us know in the comments below
We’d love to hear from you
See more of our blog posts on Irish History
Nurse Practitioner Salary, Programs and Jobs
Your complete guide on how to become a nurse practitioner, different types and salary info. Learn how to find nurse practitioner schools and jobs near you.
Are you a registered nurse (RN) or a student interested in advancing your nursing career? The nurse practitioner (NP) profession is a good top-tier job for anyone interested in helping people overcome their health problems.
Nurse practitioners (NPs) are advanced-practice registered nurses (APRNs) who have completed advanced training and clinical education that is beyond the training for generalist registered nurses (RNs). In fact, a nurse practitioner is a registered nurse who has achieved the decision-making skills, clinical competencies and knowledge level that is beyond that of an RN.
NPs are qualified to order treatments, diagnose medical problems, prescribe medications and make referrals for a huge range of acute and chronic illnesses. Depending on the states in which they work, nurse practitioners may either work under physician supervision or independently. For instance, in internal medicine or primary care fields, the NPs work independently unless they need to consult physicians or make referrals.
The nurse practitioner is a critical component of a multi-disciplinary team of medical-surgical specialists, pharmacists, occupational therapists, physical therapists, mental health workers, physicians, social workers and dietitians whose work is to boost access to primary care and the quality of life for patients. The NPs practice within the full scope of practice outlined by the national and state-level legislations.
Being the core providers of primary care, nurse practitioners strive to offer individualized and holistic care to patients, and place immense emphasis on disease prevention, patient counseling, education and health promotion. The NP profession is state-regulated and the degree of care provided by nurse practitioners depends on their credentials and education. While some states allow the NPs to work independently, a huge majority still require them to have collaborative agreements with physicians. The roles, responsibilities, pharmacologic recommendations and duties of the NPs in those collaborative agreements depend on the state licensure and certification regulations.
As a nurse practitioner, you will undertake the following roles and responsibilities:
- Perform patient health evaluations and maintenance activities (such as physical exam, history taking, wellness exams, breast exams, pap tests, immunizations and prenatal care and education).
- Monitoring ongoing therapies on patients with chronic illnesses by offering counseling and pharmacological interventions.
- Screening for the presence and extent of chronic illnesses.
- Performing examinations using sex/age specific lists of recommended risk assessment and preventive interventions.
- Diagnosing and treating episodic/acute minor illnesses (such as ear and throat infections, respiratory illnesses, genital and urinary tract infections, dermatology and gastrointestinal infections.
- Offering on-phone consultations or triage for acute/episodic illnesses.
- Consistently recording and documenting patient information in the electronic medical record (EMR).
- Formulating and communicating results of medical diagnoses and potential therapies for medical disorders.
- Determining the need for ordering and interpreting ECGs, X-Rays and diagnostic ultrasound tests
- Prescribing drugs according to approved lists.
- Providing family planning and prenatal care services, and recommending occupational therapy, physical therapy and other rehabilitation treatments.
- Performing minor surgeries (such as suturing, casting and dermatological biopsies) and assisting during major procedures.
- Conducting research, training, patient advocacy and policy development at state, regional and national level
As part of the healthcare team, you will also engage in referral, consultation and collaboration with other healthcare personnel. Therefore, you will:
- Conduct regular consultations with physicians in accordance with the guidelines for nurse-physician consultations.
- Liaise with other medical care personnel as frequently as necessary.
- Make arrangements for prompt external specialist consultations.
- Ensure smooth transition in medical care by communicating orally and in writing with hospital staffs, community staffs and other members of the inter-professional team who are engaged in caring for the same patients
Moreover, as a nurse practitioner, you will be engaged in administrative duties such as:
- Developing, initiating and maintaining preventive health monitoring programs, such as cholesterol monitoring tests for women and men above 45 years; blood pressure monitoring tests for women and men above 50 years; and pap tests for sexually-active women between the ages of 20 years and 70 years.
- Arranging follow-ups and appointments with patients as necessary.
- Preparing LTC/MOH Service Reports.
- Performing other administrative duties that are assigned in collaboration with other healthcare personnel.
The skills and knowledge requirements for success in the nurse practitioner career include:
- Critical thinking, leadership, communication, and organizational skills.
- Ability to work as a member of a team.
- Strong clinical and health assessment skills.
- Expertise in creating and maintaining excellent working relationships with other healthcare personnel, partner organizations, communities and other stakeholders.
- Ability to maintain impartiality and confidentiality.
- Self-motivation under stressing work schedules.
- Impressive attention to detail and high degree of accuracy.
- Ability to adapt speedily to fast-paced and dynamic working environments.
- Dependability, consistency and punctuality at work.
- Capacity to prioritize and manage time efficiently, and to be flexible enough to adjust to the extremely active working environment.
In addition to the general skills, you should have a good mastery of English. Computer skills such as familiarity with MAC environment, computer systems and applications, and electronic documentation records are also a necessary qualification.
To become a nurse practitioner, you must begin by earning a Bachelor of Science in Nursing (BSN) degree or a relevant undergraduate degree. After the degree, you will need to take a licensing exam to become a registered nurse and to join the generalist RN role for at least 1 year. As an experienced RN, you can now join a graduate program in nursing, such as master (MSN) and doctoral (DNP) programs, in order to specialize and qualify to be a nurse practitioner. The typical courses completed by aspiring nurse practitioners include health promotion, epidemiology, advanced pathophysiology, diagnostic reasoning and physical assessment, laboratory and radiographic diagnosis, advanced pharmacology, research and statistics, leadership and role development, health policy, and management of acute and chronic diseases in children and adults. Aspiring nurse practitioners must also undertake clinical rotations, with primary focus on their specialization areas.
When they opt for the Doctor of Nursing Practice (DNP) programs, aspiring NPs usually take advanced coursework in research methods, biostatistics, caring for special populations, informatics, health policy and economics, organizational management, and clinical outcomes measures. The specialization areas for aspiring NPs include primary care, acute care, pediatrics, neonatal care, family practice, adult-gerontology, cardiology, general surgery, anesthesiology, emergency medicine, psychiatric-mental health and women’s health. After completing these programs, the NPs must pass the national board’s certifying exam and receive additional credentials (such as DEA registration number, prescriptive authority and APRN license) at the state and federal levels before beginning full practice as NPs. To maintain their licensure and certification, the NPs must also achieve specific hours of clinical practice on continued medical education (CME).
There are over 104 nurse practitioner specialties. The most popular ones include:
- Emergency Department Nurse Practitioners: They work in the fast-paced setting of the emergency department, treating patients of all ages.
- Neonatal Nurse Practitioners: Offer medical care to pre-term and full-term infants and newborns, especially those that are critically ill.
- Retail Health Nurse Practitioners: Work in retail clinics where they treat injuries, minor illnesses and manage chronic diseases.
- Hospital-Based Nurse Practitioners: Are employed in hospital settings where they diagnose and manage various disorders while creating treatment plans for patients admitted in the hospitals.
- Gerontology Nurse Practitioners: They evaluate, manage and treat chronic and acute medical conditions in older adults.
- House Call Nurse Practitioners: They are NPs that are hired by hospitals and healthcare facilities to make house calls and treat homebound patients and individuals who have recently been released from hospitals.
- Mental Health/ Psychiatric Nurse Practitioners: They provide medical services to individuals and families that are affected by mental illnesses.
- Surgical Nurse Practitioners: They perform minor surgeries (such as suturing of wounds) but often assist physicians during surgical procedures.
- Oncology Nurse Practitioners: Are NPs who help with the treatment and management of cancers in partnership with physicians. The nurse practitioners also address wellness and survivorship issues that relate to the cancers.
- Cardiology Nurse Practitioners: They help with the diagnosis, management and treatment of heart disorders such as arrhythmias and CHF.
- Certified Nurse Anesthetists: They administer anesthesia to patients prior to surgical procedures.
- Orthopedic Nurse Practitioners: They specialize in the management of musculoskeletal conditions such as arthritis, joint disorders, diabetes and many other conditions.
- Pediatric Endocrinology Nurse Practitioners: They offer medical care to children suffering from endocrine system disorders.
Nurse Practitioner Salary
The salary of a nurse practitioner depends on the area of specialization, location, years of experience, level of education and company size. Currently, the median annual salary for a nurse practitioner in the U.S. is $96,255, meaning that 50 percent of NPs earn less than that amount. The lowest paid NPs earn around $66,960 annually while the highest paid earn around $126,250. Moreover, according to BLS data, the top-paying industries for NPs are:
- Personal Care Centers: $117,300 average pay per year
- Specialty hospitals (such as substance abuse and psychiatric care centers): $109,850 average pay per year.
- Grant-Making Services: $107,350 average pay per year.
The top-paying states for nurse practitioners are:
- Alaska: Average annual pay of $112,090
- Hawaii: Average annual pay of $104,690
- Oregon: Average annual pay of $103,280
- Massachusetts: Average annual pay of $102,340
- New Jersey: Average annual pay of $101,030
The top nine states with the largest concentration of nurse practitioner jobs are:
- Massachusetts: Average annual pay of $102,340
- California: Average annual pay of $98,970
- New York: Average annual pay of $97,730
- Texas: Average annual pay of $97,710
- Florida: Average annual pay of $86,840
- Mississippi: Average annual pay of $91,940
- Tennessee: Average annual pay of $88,720
- Maine: Average annual pay of $87,060
- Utah: Average annual pay of $83,880
Nurse practitioner salary also varies according to the specialty of the NP. Here are the average annual earnings of the most popular NPs:
- Emergency Department Nurse Practitioners: $103, 722
- Neonatal Nurse Practitioners: $99,810
- Retail Health Nurse Practitioners: $96,800
- Hospital-Based Nurse Practitioners: $96,124
- Gerontology Nurse Practitioners: $94,485
- House Call Nurse Practitioners: $93,785
- Mental Health/ Psychiatric Nurse Practitioner: $92,396
- Surgical Nurse Practitioners: $91,023
- Oncology Nurse Practitioners: $90,862
- Cardiology Nurse Practitioners: $90,370
- Certified Nurse Anesthetists: $100,000
- Orthopedic Nurse Practitioners: $86,127
- Pediatric Endocrinology Nurse Practitioners: $97,452
Nursing Practitioner Schools and Programs
After you serve for one year as a registered nurse, you qualify to join a broad range of nurse practitioner programs. However, you must choose your NP school and program carefully so that your career path is not hampered by sub-standard instruction, tutoring, facilities and practical hours. The degree programs for NPs are typically:
- Master’s Degree
- DNP Degree
You should check with the American Association of Colleges of Nursing (AACN) whether the school or program you intend to join is accredited by the Commission on Collegiate Nursing Education (CCNE). You may also find a list of accredited programs from the National League for Nursing Accrediting Commission (NLNAC) web site. Most states accept masters or doctoral degree before they can license NPs. As you plan to join a program, make sure to note its characteristics and check if it matches with your personal, career and licensure requirements.
When looking for information regarding nurse practitioner schools and programs, you can get crucial information from websites. Many schools and universities post information regarding their programs, fees, faculties and courses online. You can use the websites to compare different programs and schools, and to assess whether the course descriptions meet the national education standards for National Organization of Nurse Practitioner Faculties (NONPF) Core and Population Competencies, and AACN Essentials of Doctoral Education. Apart from the websites, you should call or visit the colleges to find out in-depth information regarding the programs.
The most critical NP program information to look for includes:
- School and program ranking
- Program costs
- Curriculum delivery
- Length of the program
- Faculty (whether it is in a clinical practice and can maintain expertise, licensure and certification requirements).
- Number of students: crowded classes may hinder a good learning experience.
- Clinical site placement (does the school arrange for it?)
- State, regional and national level accreditation for the program.
Nurse practitioner programs are offered on-campus or online. If you are intending to join an online program, you will need to do meticulous research to get a program that would not inconvenience you during clinical placement. In fact, it is prudent to talk to the school’s Nursing Program Advisor and Admissions Advisor before you join a program. Their valuable advice can easily help you to plan for your NP studies.
Some of the colleges and universities you can join for your nursing practitioner studies include:
- University of Phoenix: Nationwide NP programs (masters and doctoral)
- Chamberlain College of Nursing: DNP and MSN programs
- South University: Family Nurse Practitioner and MSN programs
- Capella University: DNP and MSN programs
- Kaplan University: RN-to-MSN in Nursing and DNP programs, especially to veterans and military students.
- Walden University: DNP and MSN programs for RNs
- American Sentinel University Online: RN-to-MS in Nursing
- Simmons School of Nursing and Health Sciences
- Georgetown University School of Nursing and Health Studies: a variety of NP specialty programs
Other Popular Colleges include
- Grand Canyon University
- DeVry University
- University of Pennsylvania
- Vanderbilt University
- Michigan State University
- University of Iowa
- The University of Tennessee
- University of Vermont
- University of Wyoming
- Baylor University
- DePaul University
- University of North Dakota
Requirements for Nurse Practitioner Career
NPs are advanced practice nurses whose expertise, knowledge and skills should be vast and impeccable. Therefore, to become a nurse practitioner, you must first complete a bachelor’s degree program or equivalent and pass a licensing exam for a registered nurse. After achieving at least 1 year experience as a registered nurse, you can join a master’s degree, post-graduate or doctoral degree program in nursing to become a nurse practitioner. All NPs must be licensed and certified by accredited bodies. The job requires basic computer skills such as data entry, using word processors and operating automated medical records programs.
Prerequisites to becoming a nurse practitioner include:
- Bachelor of science in nursing (BSN) is a requirement for enrolling in masters of science in nursing (MSN) program
- An applicant for NP program must be a licensed registered nurse
- To become a nurse practitioner, you must graduate with a Master of Science in Nursing or a Doctoral of Nursing Practice (DNP). The degree programs should enable you to specialize in your field of choice, such as pediatrics, women’s health, mental health, public health, oncology, and geriatric care. Once you earn the MSN or DNP degree, you must take a nurse practitioner exam to get the requisite certification.
- The steps to becoming a nurse practitioner can be summarized as:
- Completing a bachelor’s degree in nursing: This step ensure that the aspiring NPs meet the requirements for becoming registered nurses.
- Obtaining state licensure as a registered nurse: Aspiring NPs must pass the National Council Licensure Examination for Registered Nurses (NCLEX-RN) and meet the state board licensing requirements.
- Experience: NP programs require at least 1 year experience as a registered nurse in the preferred specialty.
- Earning a nurse practitioner master’s degree: The master’s degree programs take 1-3 years and expose aspiring NPs to dialectic learning and clinical experiences.
- Certification: After graduating from accredited nurse practitioner programs, the graduates take certification exams that are administered by the boards dealing with their specialties.
To be a competent NP, you also need to be:
- A critical thinker capable of making evidence-based decisions on when, where and how to deal with health care needs.
- A good oral and written communicator
- An independent observer with acute eye to detail
- Able to cope well with stressing conditions, especially because of the immense human suffering, emergencies and pressures involved in this profession.
As an aspiring nurse practitioner, you should also think about getting a Doctor of Nursing Practice (D.N.P.) degree (known as practice doctorate) because there is a growing movement in favor of requiring all NPs to have DNPs by 2015.
The NP programs cost a lot of money. Most NP programs require 30-34 credits to complete, and the cost per credit is $225 to $665 for in-state programs. Therefore, in-state school tuition for NP programs comes to around $18,000. In out-of-state schools, the cost per credit is $570 to $1,300, resulting in a tuition fee of up to $31,500. For those who take online programs, the cost per credit is $400 to $800, averaging around $22,500 in tuition fees for the program. Similarly, students who attend private universities can expect to pay around $45,000 or more. Usually, the NP programs take 1-2 years to complete on a full-time basis and 3-4 years on a part-time basis, and anything that forces the student to take longer to complete a program will increase the costs.
To reduce the cost of NP education, you should:
- Conduct a thorough program search in order to find a low-cost program.
- Attend a University close to where you reside
- Consider long-distance and online programs if they can offer you sufficient flexibility to minimize living expenses and continue with your work.
- Apply for the Financial Aid for Nurse Practitioner Programs such as scholarships and loans
- Get employer assistance with part of or all of the program’s cost
Job Outlook for the Nurse Practitioner Career
Nurse practitioner jobs are expected to increase massively in the next 10 years because of advancing technology that improves the quality of care and increases the number of solutions to health problems. Similarly, as life expectancy improves, many patients survive for many years and require nursing care in their old age. According to the Bureau of Labor Statistics, nurse practitioner jobs will increase by more than 19 percent between 2014 and 2022. By 2025, the number of NPs is expected to double because of their growing role in primary care centers. And between 2008 and 2025, the number of NP jobs is expected to increase from 86,000 to 198,000 per year. This growth is attributed to the continuous passage of state-level laws that give NPs more independence from physicians. Moreover, the fastest growth is expected in residential care and rehabilitation centers because many more people are admitted in those centers with trauma, psychological problems and dementia. Faster growth is expected in outpatient centers because new technologies have allowed complex procedures to be availed in physician offices and outpatient centers.
Finding NP jobs is often less cumbersome because most of the practitioners are registered nurses who have previously worked in different organizations. Nonetheless, the new position may come with a few challenges and the NP must conduct thorough job search in order to get a job quickly. Job-searching efforts should begin during training. While in school, the aspiring NP should keep a good rapport with classmates, instructors, tutors, clinical internship supervisors and other influential people in the job market. It is these people that the nurse practitioner should depend on, later on, for information about possible opportunities in their companies.
Job-seeking NPs must also keep their eyes out on newspapers and other print media for any NP job adverts. When a job is advertised, the NP should tailor a resume and cover letter to address the critical and specific requirements of the job advert. Besides, nurse practitioners can use online job boards and social media (such as LinkedIn) to find and apply for jobs.
Finally, the most crucial step in getting your cherished nurse practitioner job is performing convincingly during the interview. You should prepare well for your interview by finding out what the company deals in, its services and products, and the operations of the department where you are likely to be posted if you succeed in the interview. Similarly, make sure to cruise through basic nurse practitioner content so that you are better prepared to convince your prospective employer that you are knowledgeable and competent.
Non-communicable diseases (NCD) are the main causes of death in developed countries and are largely associated with aging. The worrying trend is that they are becoming more common in younger age groups, and the increase in life expectancy seen in people born in the 1920s and 1930s will likely be reversed in the generations born in the 1960s, 70s and 80s that are caught up in the obesity epidemic (1). Obesity is a risk factor for NCD but does not explain the cause, as NCD are the major killers of normal weight people as well. Research into the differences between long-living mammals and their shorter-lived cousins has identified superior cell detoxification and repair processes (2) as the key to healthy aging, and deficiencies in these processes may be a major cause of the age-related diseases in man.
Constant Turnover of Protoplasm
The material make-up of living things is gradually broken down and rebuilt on a daily cycle resulting in profound changes over time during the different phases of life from growth and development to the long involutionary period of aging in later adulthood (3). Cleaning up residual protein fragments generated as a by-product of metabolism is thought to be the underlying function of sleep (4) and explains the molecular basis of neurological impairment associated with sleep deprivation. Efficient removal of protein residues is an essential life function and the regulation of these processes is inextricably tied up with nutritional status of cells (5) (6). Protein recycling is accomplished by a process called autophagy or “self-eating” and the material targeted for recycling is highly selected cell refuse. Autophagy is regulated by recognition of the fasting (7) and feeding cycles and this in turn is dependent primarily on glucose abundance in foods. Here we present the signalling pathways where glucose abundance in the diet can be linked to underlying causes of non-communicable diseases (8).
Health Benefits of Calorie Restriction:
It is generally agreed that over-nutrition is responsible for obesity, diabetes and many age-related diseases, and it is well known that calorie restriction, even intermittent fasting, benefits all of these conditions; the main mechanism of these benefits is thought to be up-regulation of autophagy (9), which recycles redundant, potentially toxic material (10). Sensing mechanisms of nutritional status trigger cell signaling that regulates basal autophagy, since survival requires mobilization of body stores during fasting. Fat mobilization (lipophagy) takes place by a process akin to autophagy (11), and mobilization of glucose from glycogen is also accomplished via autophagy (12). It may be an opportunistic adaptation of evolution that autophagy is employed for mobilization of substrate stores during fasting as well as for protein recycling and intracellular "refuse disposal" processes. Feeding inhibits and fasting stimulates autophagy through the actions of insulin and glucagon respectively, and the primary determinant of insulin and glucagon secretion is blood glucose. Insulin and glucagon control autophagy via their opposing effects on mTOR (mammalian target of rapamycin), whereby mTOR activity suppresses autophagy (5). Glucagon, via the glucagon receptor, activates adenylate cyclase, which increases cAMP and activates PKA (cAMP-activated protein kinase), which inhibits mTOR, thereby stimulating autophagy in situations of fasting. On the other hand, glucose-stimulated insulin secretion suppresses autophagy via Akt (PKB) activation of mTOR. A ketogenic diet was found to inhibit the mTOR pathway via decreased Akt signaling as well as increased AMPK signaling in the liver of rats (13). Through similar pathways, insulin and glucagon also have a role in regulating mitochondrial biogenesis. Mitochondrial biogenesis is regulated by the master controllers, the PGC1 nuclear receptor coactivators. Fernandez and Auwerx (14) discovered how the pancreatic hormones insulin and glucagon play opposing roles in PGC1a transcription: insulin secretion, by activating Akt (PKB), depresses mitochondrial biogenesis by inhibiting PGC1a transcription, while PGC1a transcription is increased via the glucagon receptor-PKA pathway.
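Purely as a qualitative illustration of the opposing regulation described above, the toy Python sketch below maps the fed and fasted states to autophagy activity. It is a simplification rather than a quantitative model, and the function name is invented for the example.

```python
def basal_autophagy_state(fed: bool) -> str:
    """Toy model of the pathway described above:
    feeding -> insulin -> Akt -> mTOR active -> autophagy suppressed;
    fasting -> glucagon -> cAMP/PKA -> mTOR inhibited -> autophagy stimulated."""
    insulin_dominant = fed          # glucose-stimulated insulin secretion
    glucagon_dominant = not fed     # fasting raises glucagon

    mtor_active = insulin_dominant and not glucagon_dominant
    return "suppressed" if mtor_active else "stimulated"

print(basal_autophagy_state(fed=True))   # suppressed
print(basal_autophagy_state(fed=False))  # stimulated
```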
NRF2: Recent research by Rochelle Buffenstein’s group into the differences between the long-lived naked mole rat, with a 40-year maximum life expectancy, and its short-lived relatives, which live about 4 years, focuses on its superior detoxification and repair abilities, largely mediated by the master controller NRF2 (NFE2L2), which enhances transcription of multiple proteins involved in cell protection and detoxification as well as chaperones involved in autophagy and protein stability (2). The two master-controller transcription factors NRF2 and PGC1a appear to function in tandem, as they are increased by the same environmental stimuli and cell signaling pathways, regulating multiple genes involved in autophagy (15) and mitochondrial regeneration (14) respectively. Both NRF2 and PGC1a are increased by a ketogenic diet (16) (17) (18).
AMPK: Another sensor of cellular energy levels is AMP-activated protein kinase (AMPK). AMPK responds to the increased AMP/ATP ratio that occurs with exercise. A recent review highlights a central role for AMPK in disease resistance and longevity (19), promoting transcription of FOXO-dependent proteins such as PGC1a and NRF2 while promoting autophagy by inhibiting mTOR. Of particular relevance to the mechanism of ketogenic diets is that insulin signaling powerfully suppresses AMPK activation via Akt/PKB (20), while glucagon activates AMPK by activating CaMKIV (21).
Autophagy is induced with ketogenic diet
Because a ketogenic diet profoundly suppresses insulin secretion even in the presence of adequate calorie intake (22) it follows that ketogenic diets enhance basal autophagy (13).
Widespread appreciation of the emerging importance of autophagy in life and disease is likely to focus attention on ways to optimize these processes, and macronutrients and phytonutrients have a profound impact, as seen from the lessons of epidemiological and basic science studies on restriction of glucose abundance through low-glycemic and ketogenic diets (23).
1. A Potential Decline in Life Expectancy in the United States in the 21st Century. S. Jay Olshansky, Ph.D., Douglas J. Passaro, M.D., Ronald C. Hershow, M.D.,Jennifer Layden, M.P.H., Bruce A. Carnes, Ph.D., Jacob Brody, M.D., Leonard Hayflick, Ph.D.,Robert N. Butler, M.D., David B. Allison, Ph.D., and David S. Ludwig, M.D., Ph.D. s.l. : n engl j med, 352;11, March 17, 2005.
2. Viviana I. Perez, Rochelle Buffenstein, Venkata Masamsetti, Shanique Leonard, Adam B. Salmon, James Mele, Blazej Andziak, Ting Yang, Yael Edrey, Bertrand Friguet, Walter Ward, Arlan Richardson, and Asish Chaudhuri. Protein stability and resistance to oxidative stress are determinants of longevity in the longest-living rodent, the naked mole rat. s.l. : PNAS March 3, 2009 vol. 106 no. 9 3059–3064.
3. Li-qiang HE, Jia-hong LU, Zhen-yu YUE. Autophagy in ageing and ageing-associated diseases. . s.l. : Acta Pharmacologica Sinica (2013) 34: 605–611; ; published online 18 Feb 2013. doi: 10.1038/aps.2012.188.
4. Varshavsky., Alexander. Augmented generation of protein fragments during wakefulness as the molecular cause of sleep: a hypothesis. . s.l. : PROTEIN SCIENCE 2012 VOL 21:1634—1661 Published by Wiley-Blackwell. VC 2012 The Protein Society.
5. Rajat Singh, Ana Maria Cuervo. Autophagy in the Cellular Energetic Balance. . s.l. : Cell Metab. 2011 May 4; 13(5): 495–504. doi:10.1016/j.cmet.2011.04.004.
6. Carles Canto, Johan Auwerx. PGC-1a, SIRT1 and AMPK, an energy sensing network that controls energy expenditure. s.l. : Current Opinion in Lipidology 2009, 20:98–105.
7. Mehrdad Alirezaei, Christopher C. Kemball, Claudia T. Flynn, Malcolm R. Wood, J. Lindsay Whitton, William B. Kiosses. Short-term fasting induces profound neuronal autophagy. s.l. : Autophagy 6:6, 702-710; August 16, 2010; © 2010 Landes Bioscience.
8. Livesey G, Taylor R, Livesey H, Liu S. Is there a dose-response relation of dietary glycemic load to risk of type 2 diabetes? Meta-analysis of prospective cohort studies. . s.l. : Am J Clin Nutr. 2013 Mar;97(3):584-96. doi: 10.3945/ajcn.112.041467. Epub 2013 Jan.
9. Autophagy and Aging. David C. Rubinsztein, Guillermo Marin, Guido Kroemer. s.l. : Cell 146, September 2, 2011 Elsevier Inc. DOI 10.1016/j.cell.2011.07.030.
10. Selective degradation of mitochondria by mitophagy . Insil Kim, Sara Rodriguez-Enriquez, John J. Lemasters. s.l. : Archives of Biochemistry and Biophysics , 2007, Vols. 462 (2007) 245–253.
11. H. Knævelsrud, A. Simonsen,. Lipids in autophagy: Constituents, signaling molecules and cargo with relevance to disease,. s.l. : Biochim. Biophys. Acta (2012), . doi:10.1016/j.bbalip.2012.01.001.
12. O.B. Kotoulas, S.A. Kalamidas, D.J. Kondomerkos. Glycogen autophagy in glucose homeostasis. s.l. : Pathology – Research and Practice 202 (2006) 631–638.
13. Sharon S. McDaniel, Nicholas R. Rensing, Liu Lin Thio, Kelvin A. Yamada, and Michael Wong. The ketogenic diet inhibits the mammalian target of rapamycin (mTOR) pathway. . s.l. : Epilepsia. 2011 March ; 52(3): e7–e11. doi:10.1111/j.1528-1167.2011.02981.x.
14. Pablo J Fernandez-Marcos, and Johan Auwerx. Regulation of PGC-1a, a nodal regulator of mitochondrial biogenesis. s.l. : Am J Clin Nutr 2011;93(suppl):884S–90S.
15. Kaitlyn N. Lewis, James Mele, John D. Hayes and Rochelle Buffenstein. Nrf2, a Guardian of Healthspan and Gatekeeper of Species Longevity. s.l. : Integrative and Comparative Biology, volume 50, number 5, pp. 829–843.
16. Julie B. Milder, Li-Ping Liang and Manisha Patel. Acute oxidative stress and systemic Nrf2 activation by the ketogenic diet. s.l. : Neurobiology of Disease 2010: Volume 40, Issue 1, 238-244.
17. Bough, Kristopher. Energy metabolism as part of the anticonvulsant mechanism of the ketogenic diet. s.l. : Epilepsia 2008, 49: 91-93.
18. Douglas Wallace, Weiwei Fan, Vincent Procaccio. Mitochondrial Energetics and Therapeutics. s.l. : Annual Review of Pathology: Mechanisms of Disease 2010 5:297-348, 2010.
19. Antero Salminen, Kai Kaarniranta. AMP-activated protein kinase (AMPK) controls the aging process via an integrated signaling network. s.l. : Ageing Research Reviews 11 (2012) 230– 241.
20. Suzanne Kovacic, Carrie-Lynn M. Soltys, Amy J. Barr, Ichiro Shiojima, Kenneth Walsh and Jason R. B. Dyck. Akt Activity Negatively Regulates Phosphorylation of AMP-activated Protein Kinase in the Heart. s.l. : The Journal of Biological Chemistry, 2003: 278, 39422-39427.
21. I-Chen Peng, Zhen Chen, Pang-Hung Hsu, Mei-I Su, Ming-Daw Tsai and John Y-J. Shyy. Glucagon Activates the AMP-Activated Protein Kinase/Acetyl-CoA Carboxylase Pathway in Adipocytes. s.l. : FASEB J.April 201024 (Meeting Abstract 995.4).
22. Adam R. Kennedy, Pavlos Pissios, Hasan Otu, Bingzhong Xue, Kenji Asakura, Noburu Furukawa, Frank E. Marino, Fen-Fen Liu, Barbara B. Kahn, Towia A. Libermann, Eleftheria Maratos-Flier. A high-fat, ketogenic diet induces a unique metabolic state in mice. s.l. : Am J Physiol Endocrinol Metab 292:E1724-E1739, 2007. First published 13 February 2007;.
23. Marwan A Maalouf, Jong M Rho, Mark Mattson. The neuroprotective properties of calorie restriction, the ketogenic diet and ketone bodies. s.l. : Brain Res Rev 2009: March 59: 293-315.
What virtual reality was like in the 1990s
Virtual reality helmets let you step into the fantasy worlds of game developers, help treat mental illnesses, and train surgeons and astronauts. But for most people, virtual reality is above all a game with full immersion, which, nevertheless, is still far from perfect.
How far have we come since the projects of the 1990s? Let's see what virtual reality looked like a quarter of a century ago.
Shot from the movie "Johnny Mnemonic"
Brief history of virtual reality helmets
The history of virtual reality helmets began with the Sensorama system in 1956. Strictly speaking, it was not a helmet but rather an apparatus or virtual reality booth — in effect, the first 5D cinema. The device did more than show short 3D films: its seat vibrated and the system generated smells to add realism.
Interview with Morton Heilig, who patented Sensorama, and footage of the device from different angles.
Sensorama and the process of creating videos for the device
For full immersion, the Sensorama "helmet" lacked tracking of the user's head movements. In 1961 the American military studied this possibility in a project called Headsight. The system consisted of magnetic sensors to track the user's head position, a video helmet with a display, and broadcast cameras. The goal was to create a device for remotely exploring places a person cannot visit — Mars, for example.
Still from the movie "Exposing". Little information about the Philco Headsight project has been preserved.
In 1985 NASA worked on the Virtual Environment Display System, a helmet that looks much like modern ones. It had an LCD display and LEDs, and it tracked the position of the user's head.
First commercially available systems
Until 1984, all such projects remained at the development stage, out of reach of ordinary users. The first system to go on sale was the RB2. Its price doomed it to failure: the basic version cost $50,000 — roughly $118,000 in today's money — and the standard version cost twice as much.
Virtual reality system RB2
Sega presented its own gaming helmet for the Genesis, and Nintendo released the Virtual Boy as a separate 3D game system. While Sega's helmet never went beyond the concept stage, the Nintendo Virtual Boy was sold for a while — but because it caused neck pain, it never became popular with players.
Virtual Boy Gameplay - Mario Clash Game
The 1994 Nintendo Virtual Boy displayed a black-and-red image using two monochrome displays
At first, such systems were mainly used as attractions in gaming arcades. But already in the 1990s there were other uses — for example, holding conferences and inspecting buildings that had not yet been constructed.
Virtual Reality of the 1990s
Superscape Virtual Realities
In 1991, the developers at Superscape Virtual Realities released an interactive demonstration of their capabilities. An emulation is available at the link.
Five years later, the company introduced a tool for VR application developers. The video description notes that in those years it was one of the most popular tools of its kind.
In the early 1990s, Rich Gossweiler and Randy Pausch worked on virtual reality.
In 1997, Gossweiler, who was then at Xerox PARC, introduced the CosmoWorld system. Gossweiler also worked on similar projects at Hewlett-Packard, the IBM Almaden Research Center and NASA.
In the early 1990s, the Virtuality Group developed a virtual reality system that let users play in virtual reality with minimal delay — up to 50 milliseconds — using stereoscopic glasses, joysticks and stylized seats such as car seats. The Virtuality Group also developed gloves to control movement in the game. The system supported networked matches between several players; the games included robot fights and aerial battles.
There were two types of devices: those in which the player stood, and seated versions. Both used "Visette" virtual reality helmets, equipped with two LCD screens with a resolution of 276x372 pixels, four speakers, a microphone and a magnetic head-tracking system.
A page from the magazine advertising the arcades of Virtuality. Wikipedia
Standing players used a joystick for control — it also had position tracking so the system could render a virtual hand. Seated devices had a car steering wheel or an aircraft yoke, depending on the game.
The Virtuality 1000CS system ran on the Commodore Amiga 3000 computer. Subsequent models used Intel 486 PCs and the Motorola 88110 processor.
A very interesting example is the game Dactyl Nightmare, in which you could blow an opponent to pieces with a pistol shot — unless they had already been snatched up by the terrifying pterodactyl that flies above all the players.
The video below shows gameplay from Legend Quest.
The 1991 game Grid Busters was a gladiatorial battle between robots.
This 1994 video discusses the creation of virtual reality arcade machines based on the SU 2000 system from Virtuality, the developers of the 1000CS. It shows the process of creating characters, as well as gameplay.
Thanks to Virtuality, virtual reality systems found their way not only into arcades but also into consumers' homes. After the company was sold off in parts, its founder launched the Scuba device in 1998 together with Philips Electronics, priced at $299. It sold more than 55,000 units, mostly in Japan.
Areas of application
Applications of virtual reality include entertainment, education, treatment of mental illness and phobias, and telepresence. As early as 1993, researchers were working on applications that allowed conferences to be held in a virtual environment. One such system was the Distributed Interactive Virtual Environment (DIVE).
In 1995, Virtuality developed the Elysium system for IBM, aimed at architects, builders and their clients. Virtual reality helped them see what the finished work would look like.
And, of course, such a system can be used instead of anesthesia in the dentist's office. Why I have never seen an Oculus Rift there is unclear.
How far has virtual reality come? First, the cost of the devices themselves has dropped noticeably: thanks to smartphone-based solutions, you can cut a headset out of cardboard yourself, and even such a system can track the movement of the user's head. Second, today's graphics are much better than in the 1990s. But the controls remain the same — joysticks and head tracking.
Have you already bought an Oculus Rift?
Standardised alcohol screening in primary health care services targeting Aboriginal and Torres Strait Islander peoples in Australia
Addiction Science & Clinical Practice volume 13, Article number: 5 (2018)
Introduction and aims
Aboriginal and Torres Strait Islander Community Controlled Health Services (ACCHSs) around Australia have been asked to standardise screening for unhealthy drinking. Accordingly, screening with the 3-item AUDIT-C (Alcohol Use Disorders Identification Test—Consumption) tool has become a national key performance indicator. Here we provide an overview of suitability of AUDIT-C and other brief alcohol screening tools for use in ACCHSs.
All peer-reviewed literature providing original data on validity, acceptability or feasibility of alcohol screening tools among Indigenous Australians was reviewed. Narrative synthesis was used to identify themes and integrate results.
Three screening tools—full AUDIT, AUDIT-3 (third question of AUDIT) and CAGE (Cut-down, Annoyed, Guilty and Eye-opener) have been validated against other consumption measures, and found to correspond well. Short forms of AUDIT have also been found to compare well with full AUDIT, and were preferred by primary care staff. Help was often required with converting consumption into standard drinks. Researchers commented that AUDIT and its short forms prompted reflection on drinking. Another tool, the Indigenous Risk Impact Screen (IRIS), jointly screens for alcohol, drug and mental health risk, but is relatively long (13 items). IRIS has been validated against dependence scales. AUDIT, IRIS and CAGE have a greater focus on dependence than on hazardous or harmful consumption.
Discussion and conclusions
Detection of unhealthy drinking before harms occur is a goal of screening, so AUDIT-C offers advantages over tools like IRIS or CAGE which focus on dependence. AUDIT-C’s brevity suits integration with general health screening. Further research is needed on facilitating implementation of systematic alcohol screening into Indigenous primary healthcare.
Although Aboriginal and Torres Strait Islander (Indigenous) Australians are more likely to abstain from drinking alcohol than other Australians, a greater proportion of those who do consume alcohol engage in risky drinking . These patterns of drinking have historical roots and often reflect ongoing experience of dispossession, marginalisation, disadvantage, racism, grief, trauma and loss. As a result, Indigenous Australians are up to eight times more likely to be hospitalised and five times more likely to die from an alcohol-related condition than their non-Indigenous counterparts .
Screening for unhealthy alcohol use (drinking over recommended limits or alcohol use disorders) allows identification of people who are at risk of developing a health or social problem due to alcohol even if they have not experienced such a problem. Health problems linked to alcohol include common conditions such as raised blood sugar or blood pressure, poor sleep, anxiety or depression or alcohol dependence. The screening process itself can give the individual a chance to reflect on their consumption and may result in reduced consumption [2, 3]. In addition, a brief structured conversation on drinking (brief intervention) has been found to result in reductions of drinking for a broad range of unhealthy alcohol use, at least in the short term . A brief discussion about drinking after a ‘positive’ screen, is a cost-effective way to help individuals in primary health care settings whose drinking poses a risk to their health or wellbeing . Those with alcohol dependence can also be referred to specialised drug and alcohol services if needed.
Around the world, drinkers with an alcohol use disorder (harmful use or dependence) tend to seek help late when significant harms have already occurred. There are many barriers to Indigenous Australians accessing alcohol treatment, including lack of culturally appropriate services and resources, lack of transport or childcare, and actual or perceived racism [5, 6]. These barriers may further delay help-seeking [7, 8]. Because of this, active screening and discussion of drinking is particularly important.
Aboriginal and Torres Strait Islander Community Controlled Health Services (ACCHSs) provide access to culturally appropriate and accessible services. However, in these busy primary health care services, clients often present with complex health and social needs . So, it can be difficult to find time to conduct alcohol screening alongside responding to the reason for a person’s visit. Alcohol can also be a sensitive topic, because of experience of racially-based assumptions about drinking, or because of shame about alcohol-related social problems.
Alcohol screening has been included for many years in the annual Aboriginal or Torres Strait Islander health check, and reporting on clients’ drinking status has been part of national key performance indicators for ACCHSs . However, the criteria used to classify an individual as a ‘safe’ or ‘unsafe’ drinker were not defined. Different health staff could have different perceptions of what drinking patterns are safe. Recently the federal government asked ACCHSs, which receive federal funding, to standardise their alcohol screening. As a result, from June 2017 all ACCHSs were asked to report results of screening using the 3-question Alcohol Use Disorders Identification Test—Consumption (AUDIT-C) .
AUDIT-C asks about frequency and quantity of drinking, and the frequency of drinking six or more ‘standard’ drinks (where a standard drink is 10 g ethanol in Australia). AUDIT-C has been widely validated internationally as a tool for detecting unhealthy drinking in a primary care setting. It is one of many brief screening tools that have been used globally. AUDIT-C and other alcohol screening questionnaires vary in specificity, sensitivity, cut-off score, length and ease of use. Their performance can also vary with different population subgroups . Some of these screening tools, including AUDIT, AUDIT-C, CAGE (Cut-down, Annoyed, Guilty and Eye-opener) and CRAFFT (Car, Relax, Alone, Forget, Friends, Trouble) have been used among Indigenous populations in other parts of the world [13,14,15,16,17,18,19,20,21,22,23,24]. However, only a small number of studies examine their validity and acceptability in that setting [15,16,17].
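To make the arithmetic of the tool concrete, the sketch below shows how an AUDIT-C total might be computed in practice software. It is an illustration only, not the official instrument: the item wording is paraphrased, and the assumption that each item is scored 0–4 (giving a total of 0–12) reflects conventional AUDIT scoring rather than anything specified in this paper.

```python
# Minimal illustrative sketch of AUDIT-C totalling (not the official tool).
# Assumes the conventional AUDIT item scoring of 0-4 per item (total 0-12).

AUDIT_C_ITEMS = [
    "How often do you have a drink containing alcohol?",
    "How many standard drinks do you have on a typical day when you are drinking?",
    "How often do you have six or more standard drinks on one occasion?",
]

def audit_c_total(item_scores):
    """Sum the three AUDIT-C item scores (each 0-4) into a total of 0-12."""
    if len(item_scores) != 3 or any(s not in range(5) for s in item_scores):
        raise ValueError("AUDIT-C expects three item scores between 0 and 4")
    return sum(item_scores)

# Example: item scores of 2, 2 and 1 give a total of 5.
print(audit_c_total([2, 2, 1]))  # 5
```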
In this paper we examine evidence for the suitability and acceptability of AUDIT-C and of alternative validated brief alcohol screening tools for routine use in primary health care services targeting Indigenous Australians.
A review was conducted of all original data on validity, acceptability or feasibility of alcohol screening among Indigenous Australians published up to April 2017. A range of search terms were used in Web of Science, PubMed and MEDLINE to identify potential peer-reviewed articles (Fig. 1). Grey literature was also searched (e.g., reports, monographs and clinical guidelines) for original data on alcohol screening among Indigenous Australians using the Australian Indigenous HealthInfoNet, the Indigenous Australian Alcohol and Other Drugs Bibliographic Database and the Google Scholar search engine. Finally, hand searching of reference lists was undertaken. The literature search was conducted by the first and second author (MMI, HO), and the search approach and retrieved articles were checked by an expert librarian.
Peer-reviewed articles that provided original data on validity, acceptability or feasibility of alcohol screening tools and/or brief interventions among Aboriginal or Torres Strait Islander peoples in Australia were included. Duplicate studies were excluded. Data was extracted independently by the first author (MMI) utilising a template in line with the aims of this review. A narrative synthesis of the retrieved literature was conducted by the first (MMI) and the senior author (KC). A narrative synthesis is an approach to synthesise and summarise findings from multiple studies that relies primarily on the use of words and text; it uses a textual approach to describe the key findings extracted from the reviewed article [25, 26]. This method is suited where there is considerable diversity in the methods used in the retrieved literature, including in design and/or data collection techniques .
A total of 170 articles were found from searches of mainstream academic databases and an additional 10 references from other sources (Fig. 1). After applying the inclusion/exclusion criteria, 15 articles were considered and 13 were finally selected for data extraction and analysis.
The literature revealed an awareness of the need to use culturally appropriate but standardised measures for screening and assessment of alcohol use among Indigenous Australians [5, 9]. For instance, Gray et al. mentions that interventions to reduce alcohol-related harm cannot simply be transferred from non-Aboriginal to Aboriginal settings. However, there were few investigations about the acceptability and validity of alcohol screening tools in ACCHSs (Table 1). A summary of the literature, which includes data on the validity, acceptability or feasibility of AUDIT and its short forms (e.g. AUDIT-C, AUDIT-3), and on CAGE, SMAST (Short Michigan Alcoholism Screening Test), IRIS and KAT (Khavari Alcohol Test) questionnaires is presented below.
The Alcohol Use Disorders Identification Test (AUDIT) and its short-forms
AUDIT is a 10-item screening tool that was developed and internationally validated under the auspices of the World Health Organization. It has three questions which ask about consumption (also known as AUDIT-C), three about dependence, and four about effects of drinking. AUDIT and its short-forms predominate in the sparse literature available on alcohol screening in ACCHSs.
AUDIT has been found to have good internal consistency (alpha coefficient of 0.94) and good correlation (r = 0.69) with a 12-item measure of alcohol consumption (KAT) in remote northern Queensland. However, challenges in quantifying alcohol consumption were noted, particularly given the common practice of sharing alcohol. In an urban New South Wales (NSW) setting, AUDIT was found to be acceptable and was observed to prompt reflection and provide a springboard for a conversation about drinking.
Despite AUDIT’s acceptability in a community setting, several mixed methods, qualitative and quantitative studies reported barriers to using AUDIT in ACCHSs. In a study in an urban ACCHS, Aboriginal health workers said that they found the full AUDIT long. Some clients were reported to be displeased when presenting to the ACCHS for one health concern and then being asked 10 seemingly unrelated questions about alcohol . Staff in that ACCHS and another service expressed a strong preference for only 2–3 consumption questions instead of the full AUDIT [20, 29] (see below).
In the urban study above, Aboriginal health workers also found questions in the full AUDIT were “intrusive”, “getting too close”, and “prying into their [clients’] private life”. They said: “You need someone out[side] of the extended family [to do this screening], someone out of it all”. After switching to screening for consumption only, and after 12 months of implementation, staff reported that screening for alcohol consumption was getting easier.
Several studies pointed out the difficulty of quantifying consumption, in particular the difficulty of asking individuals to convert their drinking to ‘standard drinks’ when using AUDIT with its original wording. Several approaches were used to help with this. One was visual aids, either printed or on a computer, to show the clinician or client what the equivalent measure of a standard drink is [3, 29,30,31]. Three studies in urban and regional NSW used a modified version of AUDIT, which allowed respondents to record their consumption as ‘drinks’ rather than as ‘standard drinks’ [5, 30, 32]. The authors acknowledged that this approach may not be perfect, but that having a tool that was understandable and easy to administer outweighed any potential loss in accuracy. The authors were not able to examine the impact of this modification on sensitivity. In another study in an ACCHS in regional NSW, a touchscreen computer showed an image of a drinking threshold (e.g. four standard drinks was shown as an image of 1.5 × 750 ml bottles of beer) when asking a modified version of AUDIT-Q3 (frequency of drinking 2+ or 4+ drinks per day). The computer was found to be an acceptable way to conduct screening in the clinic waiting room. Another challenge with quantifying drinking is that sharing is a cultural norm, and drinkers may sometimes report on the consumption of the entire group rather than on their own consumption [5, 28, 33].
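The arithmetic behind such aids is simple, and a rough sketch is shown below. It uses the Australian definition of a standard drink (10 g of ethanol, as noted earlier) and the approximate density of ethanol (0.789 g/mL); the container sizes and strengths in the example are illustrative assumptions, not data from the studies reviewed here.

```python
ETHANOL_DENSITY_G_PER_ML = 0.789   # approximate density of ethanol
STANDARD_DRINK_GRAMS = 10.0        # Australian standard drink = 10 g of ethanol

def standard_drinks(volume_ml, abv_percent):
    """Estimate the number of standard drinks in a container of given size and strength."""
    grams_of_ethanol = volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML
    return grams_of_ethanol / STANDARD_DRINK_GRAMS

# Illustrative containers (sizes and strengths are assumptions, not from the paper):
print(round(standard_drinks(375, 4.8), 1))   # ~1.4 - a 375 ml can of full-strength beer
print(round(standard_drinks(750, 13.0), 1))  # ~7.7 - a 750 ml bottle of wine
```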
Some researchers reported that AUDIT Question 4 (“How often during the last year have you found that you were not able to stop drinking once you had started?”) can cause confusion, as some individuals regularly stop drinking when they run out of alcohol or money . So, continuation of drinking is more reliant on supply than on presence of alcohol dependence.
The phrasing of several questions of AUDIT was adapted to local English in consultation with local Aboriginal people. For example, the local English translation of Question 7 (on guilt or regret about drinking) was different in a remote and in an urban Australian location [24, 30].
Shorter forms of AUDIT have been found acceptable in several ACCHSs. In some NSW ACCHSs the preferred short screen was AUDIT-C (the first three questions of AUDIT) . In one urban ACCHS the preferred screen was a variant of only AUDIT Questions 1 and 2 (i.e. asking about number of days drinking in a week, and quantity and type of drinking) . In another regional NSW study, a modification of AUDIT-3 alone was used and found to be acceptable .
In urban and regional NSW, recommended cut-off scores for AUDIT-C and AUDIT-3 were determined in comparison with the full AUDIT . The cut-off scores selected were: for at-risk drinkers, AUDIT-C ≥ 5, AUDIT-3 ≥ 1; for high-risk drinkers, AUDIT-C ≥ 6, AUDIT-3 ≥ 2; and for likely dependent drinkers, AUDIT-C ≥ 9, AUDIT-3 ≥ 3. Adequate sensitivity and specificity were achieved for these cut-off scores for both AUDIT-C and AUDIT-3, relative to the 10-item AUDIT. The authors concluded that AUDIT-C provided nearly as good an estimate of alcohol misuse as the full AUDIT. However, no external criteria (e.g. clinical assessment) were available to assess performance of the full AUDIT.
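As a minimal sketch of how these cut-off scores could be applied, for example in practice software, the functions below map an AUDIT-C or AUDIT-3 score to the categories above. The thresholds are those quoted in this paragraph; the code itself is illustrative and is not taken from the cited study.

```python
def audit_c_risk_category(score):
    """Classify risk from an AUDIT-C score using the Aboriginal-specific
    cut-offs reported above (>= 5 at-risk, >= 6 high-risk, >= 9 likely dependent)."""
    if score >= 9:
        return "likely dependent"
    if score >= 6:
        return "high-risk"
    if score >= 5:
        return "at-risk"
    return "below at-risk cut-off"

def audit_3_risk_category(score):
    """Equivalent classification for AUDIT-3 (>= 1, >= 2, >= 3)."""
    if score >= 3:
        return "likely dependent"
    if score >= 2:
        return "high-risk"
    if score >= 1:
        return "at-risk"
    return "below at-risk cut-off"

print(audit_c_risk_category(7))  # "high-risk"
print(audit_3_risk_category(1))  # "at-risk"
```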
In regional NSW, the modified version of AUDIT-3 (AUDIT-3m; Table 1) agreed well with a 1-week retrospective drinking diary . However, the AUDIT-3m identified more current drinkers than the diary. The authors comment that this was because a 1-week diary did not adequately capture episodic drinking patterns.
The 4-item CAGE has been used among Indigenous Australians in Western Australia, sometimes with modified wording [24, 36]. CAGE was found to have reasonable validity in a remote setting, where individuals with a high score were found to have consumed significantly more alcohol on the day before interview . Similarly, in a later study in very remote Western Australia, CAGE scores were associated with frequency of drinking . However, in the latter study it was noted that over half of ex-drinkers scored two or more on the CAGE items . In a pilot study for the above work in remote Western Australia, the SMAST was administered to 12 Aboriginal participants, but was not used further as participants had difficulty understanding its 12 questions .
As noted above, in a remote Queensland Aboriginal community the KAT (a 12-item scale to assess consumption) was compared with AUDIT. There was good correlation between the two measures, however AUDIT was found easier to administer and had greater face validity .
The Indigenous Risk Impact Screen (IRIS)
IRIS is a 13-item tool which screens jointly for risk of alcohol use, other drug use, and mental health. It was developed by Indigenous and non-Indigenous investigators. At the time of its development, IRIS was reported to be acceptable and culturally appropriate, and was found valid in relation to recognised international questionnaires for assessing substance use dependence and mental health. IRIS asks about alcohol and other drugs simultaneously (e.g. “In the last 6 months have you needed to drink or use more drugs to get the effects you want?”). Its seven substance use questions focus only on aspects of dependence; there is no question on amount or frequency of consumption. In men it had high sensitivity for detecting 11+ standard drinks per occasion, but in women it had imperfect sensitivity for detecting 7+ drinks. In a subsequent study of Indigenous prison inmates in Queensland, a version of IRIS modified to ask about the pre-prison period was found to have high sensitivity (94%) but low specificity (33%) in detecting substance use disorders. The final six questions of IRIS screen for mental health risk and past psychological trauma. IRIS is reported to be used, and found acceptable, by a range of services for Indigenous Australians; however, it is not clear whether this is in the primary care sections of these services or in other sections (e.g., mental health and wellbeing).
Screening and early discussion of drinking is important in improving health, given the role of alcohol as a risk factor for a wide range of common conditions, such as diabetes, hypertension, cardiac arrhythmias and cancers [39, 40]. However, only a small number of screening tools have been validated for use with Indigenous Australian peoples. AUDIT and its short forms, IRIS and CAGE were all found to have validity compared to other screening tools or questions on alcohol consumption. Responses to the 12-item KAT correlated with those of AUDIT, but KAT was found less easy to use in Indigenous settings. AUDIT and its short forms were the only instruments for which data was available on feasibility of routine implementation in ACCHS primary care. Services found the full 10-item AUDIT too lengthy for busy primary care settings, and strongly preferred only 2–3 of AUDIT’s consumption questions.
Acceptability and feasibility for screening in an ACCHS setting
ACCHSs offer a unique opportunity for screening, given their accessibility and appropriateness for Indigenous Australian peoples. However, services are dealing with many other complex health and social needs. A screening tool for use in ACCHSs needs to be acceptable, easily understood by the clients and staff, and quick to use and score [29, 38]. Anecdotally many ACCHSs have adopted AUDIT or (more often) its shorter versions and found it useful, even in remote settings. Others, particularly in remote regions, have reported challenges with quantifying consumption, which may be of a ‘stop-start’ rather than a regular pattern. Meanwhile, other services have chosen IRIS as their preferred screening tool. However, there is no publicly available data on the extent of use of either IRIS or AUDIT-C in ACCHSs, and on whether these are being used more in primary care sections of the service, or by drug and alcohol, mental health or social and emotional wellbeing units.
AUDIT-C’s brevity (at 3 items) is a major strength for the primary care context . There are several reports on use of AUDIT’s short forms (1–3 items) in ACCHSs [29, 32, 35, 38]. These brief screening tools can more readily be embedded into a general clinical interview or routine health check than a 10–13 item instrument, such as the full AUDIT or IRIS. Because of AUDIT-C or AUDIT-3’s focus on consumption, these tools have good potential to detect drinking that is over recommended limits, and not necessarily causing current harms or symptoms of dependence.
Another advantage of AUDIT-C (or AUDIT) over other alcohol screening tools is that these start with a mild question (“How often do you have a drink containing alcohol?”). The response options include “never”. Given that the majority of Indigenous Australians are likely to be current non-drinkers , this may be more acceptable than an initial question that focuses on heavy drinking or dependence , which is the case with CAGE or IRIS. Only one study examined AUDIT-3 (in modified form) as a single question, and in this study, electronic delivery mode was used to visually demonstrate the drinking thresholds (e.g. How often did you drink more than this?).
IRIS was developed in clinical and non-clinical settings by and for Indigenous Australian peoples. IRIS’s approach of integrated screening for alcohol, other drug use disorders and mental health risk is compatible with the holistic view of health among Indigenous Australians. Its final item, “Do past events still affect your wellbeing today?”, recognises the frequency of trauma, including that inflicted by government child removal policies. However, ending on a question about past psychological trauma may require de-briefing. In addition, all of IRIS’s substance use questions focus on dependence. This means that, like CAGE, it is less well suited to detecting drinking which may be above recommended limits (and so pose a risk to health) but is not currently resulting in health problems or dependence. There is no published data available on the routine implementation of IRIS as a tool for universal screening in primary health care, but with 13 items, its length may pose challenges.
National and international comparability
AUDIT-C has been used in many other countries, cultures, and racial and ethnic groups [18, 19] such as African-American and Hispanic patients , Maori peoples [21, 22], and First Nations Canadians . Because of this, AUDIT and its short forms allow comparability of screening results with other services, and with international research.
Reported challenges of screening
Quantification of drinking was reported to be challenging in several studies . This challenge affects any screening tool, such as AUDIT or its short forms which record consumption. People in ‘dry’ regions (where alcohol is prohibited) may have only episodic access to alcohol. Also, in any setting, relatively few people (Indigenous or other) have a clear understanding of the size of a ‘standard drink’, and individuals may not know the volume of a drink that they pour themselves. Non-standard containers may be used, for example wine poured into empty soft drink bottles . Furthermore, sharing of drinks, educational disadvantage , or differing traditional approaches to numbering can add to the challenge of quantifying the amount of alcohol consumed in terms of standard drinks . Hand-held iPad or interactive touch-screen computers have been used to assist participants to estimate consumption [31, 45]. These devices may also potentially reduce the time required to assess consumption .
Several authors pointed to challenges in understanding questionnaires if they were not translated into local use of English or local language in consultation with local Aboriginal or Torres Strait Islander people [24, 30]. Formal translation and back translation may be indicated if significant changes are required . If re-wording goes beyond simple ‘translation’, then the new scale may need cross-validation or checking against external criteria . Even with translation, some questions may function differently in different settings. For example, Question 7 of AUDIT asks if a person feels guilty about their drinking, but the response may reflect local community attitudes to drinking (acceptance of drinking) as much as individual regret .
Areas for further research
The AUDIT-C cut-off score and false positives
Given the overall high prevalence of risky drinking among those who currently drink any alcohol among Indigenous Australians , and the challenges in accurately reporting drink size, a relatively low cut-off score (AUDIT-C ≥ 3 for women and ≥ 4 for men) is suggested. This is to ensure good sensitivity for detecting unhealthy alcohol use. These scores are lower than the nationally recommended cut-off scores for screening in the current Australian alcohol treatment guidelines (≥ 5 for both women and for men). No screening test is perfect, and with these cut-offs some clients with low risk drinking can potentially screen as a ‘false’ positive. However the recommended ‘treatment’ response for a positive result is further assessment or empathic discussion of drinking . This can include clarification of recommended limits . So it could be argued that such discussion may contribute to prevention and greater community-wide health literacy, regardless of the individual’s current level of risk. However, further research could assess the overall impact of false positive assessment on staff workload and attitudes to screening. Also, training and evaluation of this is needed to ensure that discussions are conducted sensitively.
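The practical effect of the lower, sex-specific cut-offs can be seen in a small sketch like the one below, which simply compares a score against either the thresholds suggested here (≥ 3 for women, ≥ 4 for men) or the national guideline cut-off (≥ 5 for both). It is illustrative code only, not an endorsed screening tool.

```python
def screens_positive(audit_c_score, sex, use_national_cutoff=False):
    """Return True if an AUDIT-C score meets the screening threshold.

    Defaults to the cut-offs suggested above (>= 3 for women, >= 4 for men);
    set use_national_cutoff=True for the guideline cut-off of >= 5 for all.
    """
    if use_national_cutoff:
        threshold = 5
    else:
        threshold = 3 if sex.lower() == "female" else 4
    return audit_c_score >= threshold

# A woman scoring 3 screens positive under the suggested cut-off,
# but not under the national guideline cut-off of 5.
print(screens_positive(3, "female"))                            # True
print(screens_positive(3, "female", use_national_cutoff=True))  # False
```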
Clinical assessment after a positive screen result typically involves checking the drinking history, including drink sizes. Where drinking is above recommended limits, questions can be asked about harms from drinking or evidence of dependence, such as ‘grog shakes’ or loss of control over drinking [51, 52]. Some clinicians with limited experience working with alcohol may prefer to use the remaining seven AUDIT or IRIS questions as a second stage screen for alcohol use disorders.
Refining the gold standard
Alcohol screening tools have typically been validated against internationally published screening or assessment instruments. However it is not clear how valid those ‘gold standards’ themselves are in an Indigenous context [33, 34, 37]. Further research is needed to refine or develop reference standards. As AUDIT-C is now recommended for routine implementation in ACCHS, it is timely to assess this tool against an acceptable and appropriate gold standard in an Indigenous context.
Research or evaluation of implementation
Any screening or assessment approach could benefit from pilot testing across a range of settings [33, 34], as Indigenous Australians comprise many diverse peoples, including those living with more traditional lifestyles and speaking languages other than English.
Likely challenges in implementation and need for training
Clinicians need to be trained in how to estimate alcohol consumption, including standard drink quantities, drink sizes and sharing. There may also be cultural barriers to Indigenous health professionals asking about alcohol use when the client may be a close friend, family member or community member. Approaches such as embedding the alcohol questions into a general health check and explaining that all clients are asked them are likely to reduce the sensitivity of the topic [9, 44]. Universal rather than targeted screening should also reduce this sensitivity over time.
Clinicians are likely to benefit with the provision of an aid for converting drinking into standard drink sizes. A touchscreen computer or computer ‘app’ may eventually help overcome difficulties in assessing consumption, and may also increase privacy and lessen social desirability bias [31, 33, 45, 53].
There is a limited evidence base of literature on alcohol screening that is specific to Indigenous Australians. Much of the screening research in Indigenous settings has included AUDIT or its short forms, so more data were available on this than on other tools. Moreover, while the findings favoured the short forms of AUDIT over other tools, estimating standard drinks in order to calculate an AUDIT-C score accurately is cumbersome in an Indigenous context. Furthermore, the synthesis of evidence in this report relied on the authors’ clinical and public health experience, so subjective judgements were needed. Thus, findings should be interpreted with caution. This review examines only validity and acceptability of brief alcohol screening tools. There remains minimal published evidence on the effectiveness of subsequent brief intervention, treatment or referral for unhealthy alcohol use in an Indigenous Australian setting [54,55,56]. This is an important area for further research.
Research on appropriate alcohol screening tools for Indigenous Australians is sparse. However, the short forms of AUDIT, including AUDIT-C, appear to be suitable and valid for ACCHS primary care settings when delivered in locally appropriate language. Training may be needed to facilitate implementation, including accurate assessment of consumption level and appropriate responses to a positive screening result. Embedding the screening questions into practice software will also support implementation of screening. Clients (and clinicians) should be supported to quantify drinking by an interpreter and/or by use of visual aids and/or computer technology. Positive screening should be followed either by clinical assessment or by a second-stage screen (e.g. IRIS or the remaining AUDIT questions). IRIS may be valuable as an additional tool in the drug and alcohol or social and emotional wellbeing sections of ACCHSs, where there may be less time pressure, and to put alcohol use in its broader context of other substance use and mental health. Given the high prevalence of alcohol-related harms, routine and regular screening in ACCHSs needs to proceed, even while consultation, research and evaluation continue to optimise screening approaches.
Australian Institute of Health and Welfare. Substance use among Aboriginal and Torres Strait Islander people. Canberra: Australian Institute of Health and Welfare; 2011.
Jenkins RJ, McAlaney J, McCambridge J. Change over time in alcohol consumption in control groups in brief intervention studies: systematic review and meta-regression study. Drug Alcohol Depend. 2009;100(1–2):107–14.
Australian Government Department of Health and Ageing. Alcohol treatment guidelines for Indigenous Australians. Canberra: Australian Government Department of Health and Ageing; 2007.
Kaner E, Dickinson H, Beyer F, Pienaar E, Campbell F, Schlesinger C, Heather N, Saunders J, Burnand B. Effectiveness of brief alcohol interventions in primary care populations review. Cochrane Database Syst Rev. 2007;18:CD004148.
Conigrave K, Freeman B, Caroll T, Simpson L, Lee K, Wade V, Kiel K, Ella S, Becker K, Freeburn B. The Alcohol Awareness project: community education and brief intervention in an urban Aboriginal setting. Health Promot J Austr. 2012;23(3):219–25.
National Indigenous Drug and Alcohol Committee. Alcohol and other drug treatment for Aboriginal and Torres Strait Islander peoples. Canberra: Australian National Council on Drugs; 2014.
ABS. National Aboriginal and Torres Strait Islander social survey. Canberra: ABS; 2002.
Teasdale KE, Conigrave KM, Kiel KA, Freeburn B, Long G, Becker K. Improving services for prevention and treatment of substance misuse for Aboriginal communities in a Sydney Area Health Service. Drug Alcohol Rev. 2008;27(2):152–9.
Gray D, Wilson M, Allsop S, Saggers S, Wilkes E, Ober C. Barriers and enablers to the provision of alcohol treatment among Aboriginal Australians: a thematic review of five research projects. Drug Alcohol Rev. 2014;33:482–90.
Australian Institute of Health and Welfare. National Key Performance Indicators for Aboriginal and Torres Strait Islander primary health care: first national results June 2012 to June 2013. In: National key performance indicators for Aboriginal and Torres Strait Islander primary health care series. Canberra: AIHW; 2014.
Bush K, Kivlahan DR, McDonell MB, Fihn SD, Bradley KA, for the Ambulatory Care Quality Improvement Project (ACQUIP). The AUDIT alcohol consumption questions (AUDIT-C): an effective brief screening test for problem drinking. Arch Intern Med. 1998;158(16):1789–95.
Leeflang MM, Deeks JJ, Gatsonis C, Bossuyt PM, Cochrane Diagnostic Test Accuracy Working G. Systematic reviews of diagnostic test accuracy. Ann Intern Med. 2008;149(12):889–97.
Kypri K, McCambridge J, Cunningham JA, Vater T, Bowe S, De Graaf B, Saunders JB, Dean J. Web-based alcohol screening and brief intervention for Māori and non-Māori: the New Zealand e-SBINZ trials. BMC Public Health. 2010;10(1):781.
Kypri K, Mccambridge J, Vater T, Bowe S, Saunders J, Cunningham J, Horton N. Web-based intervention for Maori university students with hazardous drinking: double-blind, multi-site randomised controlled trial. Injury prevention. 2012;18(Suppl 1):A46.
Robin RW, Saremi A, Albaugh B, Hanson RL, Williams D, Goldman D. Validity of the SMAST in two American Indian tribal populations. Subst Use Misuse. 2004;39(4):601–24.
Saremi A, Hanson RL, Williams DE, Roumain J, Robin RW, Long JC, Goldman D, Knowler WC. Validity of the CAGE questionnaire in an American Indian population. J Stud Alcohol. 2001;62(3):294–300.
Cummins LH, Chan KK, Burns KM, Blume AW, Larimer M, Marlatt GA. Validity of the CRAFFT in American-Indian and Alaska-Native adolescents: screening for drug and alcohol risk. J Stud Alcohol. 2003;64(5):727–32.
Meneses-Gaya Cd, Zuardi AW, Loureiro SR, Crippa JAS. Alcohol Use Disorders Identification Test (AUDIT): An updated systematic review of psychometric properties. Psychol Neurosci. 2009;2(1):83–97.
Saunders JB, Aasland OG, Babor TF, de la Fuente JR, Grant M. Development of the Alcohol Use Disorders Identification Test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption—II. Addiction. 1993;88:791–804.
Clifford A, Shakeshaft A. Evidence-based alcohol screening and brief intervention in Aboriginal Community Controlled Health Services: experiences of health-care providers. Drug Alcohol Rev. 2011;30(1):55–62.
Herbert S, Stephens C. Alcohol Use and Older Maori in Aotearoa. J Ethn Subst Abuse. 2015;14(3):251–69.
Kypri K, McCambridge J, Vater T, Bowe SJ, Saunders JB, Cunningham JA, Horton NJ. Web-based alcohol intervention for Maori university students: double-blind, multi-site randomized controlled trial. Addiction. 2013;108(2):331–8.
Ober C, Dingle K, Clavarino A, Najman JM, Alati R, Heffernan EB. Validating a screening tool for mental health and substance use risk in an Indigenous prison population. Drug Alcohol Rev. 2013;32(6):611–7.
Hunter E, Hall W, Spargo R. The distribution and correlates of alcohol consumption in a remote aboriginal population. NDARC Monograph 12. Sydney: National Drug and Alcohol Research Centre; 1991.
Arai L, Britten N, Popay J, Roberts H, Petticrew M, Rodgers M, Sowden A. Testing methodological developments in the conduct of narrative synthesis: a demonstration review of research on the implementation of smoke alarm interventions. Evid Policy. 2007;3:361–83.
Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, Britten N, Roen K, Duffy S. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC Methods Programme. UK: Lancaster University; 2006. https://doi.org/10.13140/2.1.1018.4643.
Lucas PJ, Baird J, Arai L, Law C, Roberts HM. Worked examples of alternative methods for the synthesis of qualitative and quantitative research in systematic reviews. BMC Med Res Methodol. 2007;7:4.
Kowalyszyn M, Kelly AB. Family functioning, alcohol expectancies and alcohol-related problems in a remote aboriginal Australian community: a preliminary psychometric validation study. Drug Alcohol Rev. 2003;22(1):53–9.
Brady M, Sibthorpe B, Bailie R, Ball S, Sumnerdodd P. The feasibility and acceptability of introducing brief intervention for alcohol misuse in an urban Aboriginal medical service. Drug Alcohol Rev. 2002;21(4):375–80.
Lee KSK, Dawson A, Conigrave KM. The role of an Aboriginal women’s group in meeting the high needs of clients attending outpatient alcohol and other drug treatment. Drug and Alcohol Review. 2013;32:618–26.
Noble NE, Paul CL, Carey ML, Sanson-Fisher RW, Blunden SV, Stewart JM, Conigrave KM. A cross-sectional survey assessing the acceptability and feasibility of self-report electronic data collection about health risks from patients attending an Aboriginal Community Controlled Health Service. BMC Med Inform Decis Mak. 2014;14(1):34.
Calabria B, Clifford A, Shakeshaft A, Conigrave K, Simpson L, Bliss D, Allan J. Identifying Aboriginal-specific AUDIT-C and AUDIT-3 cut-off scores for at-risk, high-risk and likely dependent drinkers using measures of agreement with the 10-item AUDIT. Addict Sci Clin Pract. 2014;9:17.
Lee KSK, Chikritzhs T, Wilson S, Wilkes E, Gray D, Room R, Conigrave KM. Better methods to collect self-reported alcohol and other drug use data from Aboriginal and Torres Strait Islander Australians. Drug Alcohol Rev. 2014;33:466–72.
Dawe S, Loxton NJ, Hides L, Kavanagh DJ, Mattick RP. Review of diagnostic screening instruments for alcohol and other drug use and other psychiatric disorders. 2nd ed. Canberra: Commonwealth Department of Health and Ageing; 2002.
Noble N, Paul C, Conigrave K, Lee K, Blunden S, Turon H, Carey M, McElduff P. Does a retrospective seven-day alcohol diary reflect usual alcohol intake for a predominantly disadvantaged Australian Aboriginal population? Subst Use Misuse. 2015;50(3):308–19.
Skowron S, Smith DI. Survey of homelessness, alcohol consumption and related problems amongst Aboriginals in the Hedland area. Perth: Western Australian Alcohol and Drug Authority; 1986.
Schlesinger C, Ober C, McCarthy M, Watson J, Seinen A. The development and validation of the Indigenous Risk Impact Screen (IRIS): a 13-item screening instrument for alcohol and drug and mental health risk. Drug Alcohol Rev. 2007;26:109–17.
Clifford A, Shakeshaft A, Deans C. How and when health-care practitioners in Aboriginal Community Controlled Health Services deliver alcohol screening and brief intervention, and why they don’t: a qualitative study. Drug Alcohol Rev. 2012;31(1):13–9.
Australian Chronic Disease Prevention Alliance. Alcohol and chronic disease prevention position statement. Sydney: Australian Chronic Disease Prevention Alliance; 2013.
Burke V, Lee AH, Hunter E, Spargo R, Smith R, Beilin LJ, Puddey IB. Alcohol intake and incidence of coronary disease in Australian aborigines. Alcohol Alcohol. 2007;42(2):119–24.
Nordqvist C, Johansson K, Bendtsen P. Routine screening for risky alcohol consumption at an emergency department using the AUDIT-C questionnaire. Drug Alcohol Depend. 2004;74(1):71–5.
Frank D, DeBenedetti AF, Volk RJ, Williams EC, Kivlahan DR, Bradley KA. Effectiveness of the AUDIT-C as a screening test for alcohol misuse in three race/ethnic groups. J Gen Intern Med. 2008;23(6):781–7.
Currie CL, Wild TC, Schopflocher DP, Laing L, Veugelers PJ, Parlee B, McKennitt DW. Enculturation and alcohol use problems among aboriginal university students. Can J Psychiatry. 2011;56(12):735–42.
Fahy P, Croton G, Voogt S. Embedding routine alcohol screening and brief interventions in a rural general hospital. Drug Alcohol Rev. 2011;30(1):47–54.
Lee KSK, Wilson S, Perry J, Room R, Callinan S, Assan R, Hayman N, Chikritzhs T, Gray D, Wilkes E, et al. Developing a tablet computer based application (‘App’) to measure self-reported alcohol consumption in Indigenous Australians. BMC Med Inform Decis Mak. 2018;18(1):8.
World Health Organization (WHO). Process of translation and adaptation of instruments. In: Management of substance abuse. Geneva: World Health Organization, Department of Mental Health and Substance Abuse; 2014.
Reinert DF, Allen JP. The Alcohol Use Disorders Identification Test: an update of research findings. Alcohol Clin Exp Res. 2007;31(2):185–99.
Commonwealth Department of Human Services and Health. National Drug Strategy Household Survey Urban Aboriginal and Torres Strait Islander Supplement. Canberra: Commonwealth Department of Human Services and Health; 1994. p. 107.
National Institute on Alcohol Abuse and Alcoholism. Social work education for the prevention and treatment of alcohol use disorders. Module 6: motivation and treatment intervention. National Institutes of Health (NIH), USA; 2005.
Bradley KA, Kivlahan DR, Zhou XH, Sporleder JL, Epler AJ, McCormick KA, Merrill JO, McDonell MB, Fihn SD. Using alcohol screening results and treatment history to assess the severity of at-risk drinking in Veterans Affairs primary care patients. Alcohol Clin Exp Res. 2004;28(3):448–55.
World Health Organization. The ICD-10 classification of mental and behavioural disorders. Clinical descriptions and diagnostic guidelines. Geneva: World Health Organization; 1992.
Haber P, Lintzeris N, Proude E, Lopatko O. Quick reference guide to the treatment of alcohol problems: companion document to the guidelines for the treatment of alcohol problems. Canberra: Australian Government Commonwealth Department of Health and Ageing; 2009. p. 9.
Islam MM, Topp L, Conigrave KM, van Beek I, Maher L, White A, Rodgers C, Day CA. The reliability of sensitive information provided by injecting drug users in a clinical setting: clinician-administered versus audio computer-assisted self-interviewing (ACASI). AIDS Care. 2012;24(12):1496–503.
Sibthorpe BM, Bailie RS, Brady MA, Ball SA, Sumner-Dodd P, Hall WD. The demise of a planned randomised controlled trial in an urban Aboriginal medical service. MJA. 2002;176:273–6.
d’Abbs P, Togni S, Rosewarne C, Boffa J. The Grog Mob: lessons from an evaluation of a multi-disciplinary alcohol intervention for Aboriginal clients. Aust N Z J Public Health. 2013;37(5):450–6.
Schlesinger C, Ober C. An evaluation of a brief intervention for drug and mental health risk in an Australian Indigenous population [abstract]. In: Australian Winter School, Alcohol and Drug Foundation; 2007.
MMI and KMC conceived the review and wrote the first draft. All other authors reviewed and edited the subsequent drafts of the manuscript. All authors read and approved the final manuscript.
This study was supported by funding contributions from the Australian Institute of Health and Welfare and by the National Health and Medical Research Council (Grants Number ID #1105339, #1087192 and #1117198). The senior author was supported by an NHMRC Practitioner Fellowship (#1117582). We acknowledge the support of Mira Branezac, Library Manager of NSW Health’s Drug and Alcohol Health Services Library.
All authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Islam, M.M., Oni, H.T., Lee, K.S.K. et al. Standardised alcohol screening in primary health care services targeting Aboriginal and Torres Strait Islander peoples in Australia. Addict Sci Clin Pract 13, 5 (2018). https://doi.org/10.1186/s13722-018-0108-2
- Alcohol screening
- Aboriginal and/or Torres Strait Islander
CPT codes describe the physical procedures (including injections, lab tests, exams, etc.) that healthcare providers perform when patients come in for an office visit.
Understanding these codes is an essential part of doing your job as a medical coder. Without CPT codes, you cannot bill anything to an insurance company.
There are two basic parts to any medical claim, and together they are the most important part of describing what happened at the patient visit: the ICD-9 (diagnosis) codes and the CPT, or Current Procedural Terminology, codes.
Put together, these two codes explain why the patient came in to the office and how they were treated by the doctor.
The diagnosis, or ICD-9 code, describes the reason why the patient came into the office.
For example, the patient could be suffering from a sore throat, and so he or she would come into the office for pharyngitis (sore throat). The ICD-9 code, then, would be 462.
The next part of the claim would include how the patients were treated in the office. These are the procedures, or CPT medical billing codes.
In other words, the procedures describe what the doctors or nurses did at the office visit to treat the sore throat or to test for any diseases or infections.
In this example, the procedure codes would include an evaluation and management service (99211-99215) and a strep screen to make sure the patient does not have strep throat (87880).
Each one of the procedure codes would be included with the same diagnosis (sore throat).
Besides being an essential part of coding any type of doctor visit, CPT procedure codes are the codes that you charge for. When you enter a claim, you will list the procedure code, along with the appropriate diagnosis, in addition to the charges for each specific procedure.
This means that these are the codes that are paid by insurance companies.
An insurance company won't pay just because you tell them that the patient had a sore throat. You have to include CPT codes for each procedure performed, so that the doctor can get paid for each component of the office visit.
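To make this concrete, here is a minimal sketch of how the sore-throat visit described above might be represented as claim data. The diagnosis code (462) and procedure codes (99213 from the 99211-99215 range, and 87880) are the ones mentioned in this article; the charge amounts are made-up examples, not real fees.

```python
# Each claim line pairs a procedure (CPT) code with the diagnosis (ICD-9)
# code that justifies it, plus the charge for that procedure.
claim = {
    "diagnosis_codes": {"462": "Acute pharyngitis (sore throat)"},
    "lines": [
        {"cpt": "99213", "description": "Office visit, established patient (E/M)",
         "diagnosis": "462", "charge": 95.00},   # charge is a made-up example
        {"cpt": "87880", "description": "Rapid strep screen",
         "diagnosis": "462", "charge": 25.00},   # charge is a made-up example
    ],
}

total_billed = sum(line["charge"] for line in claim["lines"])
print(f"Total billed for this visit: ${total_billed:.2f}")  # $120.00
```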
There are many categories of CPT medical billing codes. Each category is specific to the type of service.
Most of the major categories correspond to the main systems of the body according to the principles of the anatomy of the human body. They are the following:
Sometimes it's necessary to include a modifier with a procedure code. What this does is change the meaning of the procedure code.
This helps the insurance company understand the service that was provided at the office visit by including additional information.
Modifiers are also sometimes necessary to make sure your claims are paid in full.
Click for more information on medical coding modifiers and how they affect claim payment.
The CPT manual is a two-pound monster, complete with hundreds of pages and thousands of codes. But don't worry, you normally only need to work a small amount of these codes on a regular basis.
Furthermore, once you become accustomed to reading the code descriptions and finding the codes that you need, using this manual will become a normal part of your daily routine.
Another important thing to note is that most of your procedure codes will be included in your doctor's encounter form, which is a list of all commonly used procedure and diagnosis codes. This form is what you will use to enter the codes for a medical claim.
For more on encounter forms, see our article on encounter forms.
You may have noticed on the previous list that there's a small set of codes within another set. These are evaluation and management codes. These describe normal patient office visits and services, and are included in almost every outpatient doctor visit.
Now the question remains, how do you actually use the CPT manual to find the right codes? This is probably the most difficult part of being a medical coder. Sometimes it is hard to find exactly the right code, as they are very complex and the CPT manual contains thousands of procedural descriptions.
There is no way we could go into any type of detail in this short amount of space, as this is usually a major part of learning how to be a medical coder. In your medical coding education, you will spend months learning exactly how to find the right code in the CPT manual.
That being said, we can at least learn the basic steps.
In your actual medical coding education, you will spend many hours studying the CPT coding manual and practicing looking up the correct codes.
There are 2 additional categories of CPT codes: Category II and Category III.
Category II codes are a specific set of codes used to track performance.
They're included in the CPT manual to help decrease the need for record requests and chart reviews.
These codes make it easier for healthcare professionals, office personnel, healthcare practice administrators, hospitals, and other organizations in the medical industry to track performance.
Because Category II codes are optional, they're not a mandatory component of healthcare claims. They're simply additional information which can be used to measure the progress and performance of certain medical personnel.
Furthermore, because they're not necessary components of the coding process, they're not reimbursed by insurance companies. This means Category II codes are not paid components of healthcare claims.
Category III codes are made up of emerging technology, services, and procedures. In other words, they're not federally regulated, and they're new to the healthcare industry. Even though they're emerging codes, you have to use them if they replace an older technology.
Using Category III codes is an important part of keeping the medical community up to date, and supporting advancements in the medical community and healthcare technology.
Knowing and understanding the many types and uses of CPT medical billing codes is a fundamental part of being a successful medical coder.
Keeping up to date with advancements and changes in the medical coding industry, including changes in CPT codes, will help you be the best medical coder you can be.
Last updated: October 31, 2014
Synonyms: SpA; seronegative SpA; spondyloarthropathy, HLA-B27-related SpA. NB. The preferred terminology has shifted from "spondyloarthropathy" to "spondyloarthritis".
ICD-9 Codes: Spondyloarthropathy/Spondylitis, 720.9; Codes for individual disorders: AS, 720.0; reactive arthritis/Reiter’s syndrome, 099.3; psoriatic arthritis, 696.0.
ICD-10 codes: Spondyloarthropathy/spondyloarthritis M45-M49; Ankylosing spondylitis M45; reactive arthritis/Reiter’s syndrome M02.3; juvenile ankylosing spondylitis M08.1
Definition: SpA is a generic term applied to a group of disorders characterized by a constellation of shared clinical, radiographic, and immunogenetic features. These disorders include ankylosing spondylitis (AS), reactive arthritis (formerly known as Reiter’s syndrome), psoriatic arthritis, and arthritis associated with inflammatory bowel disease (Crohn’s disease, ulcerative colitis; also known as enteropathic arthritis). Some patients manifest complete or classic findings of one of these individual disorders, while others demonstrate incomplete or overlapping features. In past years, patients with SpA features who did not fit a specific condition were sometimes designated as having ‘incomplete’ or ‘undifferentiated’ SpA. Various criteria for the diagnosis and/or classification of SpA have been developed over the years (e.g., Table 44). More recently, it has been suggested that the terms “axial spondyloarthritis” and “peripheral spondyloarthritis” be used to encompass this group of disorders.
Etiology: Because of a common association with HLA-B27, immunopathophysiologic similarities, and overlapping clinical and radiographic features, the disorders are presumed to share a similar etiopathogenesis. Proposed theories include infectious inciting events occurring in a genetically susceptible host, leading to either molecular mimicry or a chronic, antigen-driven reactive condition.
Demographics: Overall, SpA may have a prevalence of 0.5 – 2% of the population. SpAs are more likely to be seen in men. Most patients present between the ages of 20 and 50 years.
Pathology: These disorders share a common pathologic profile, which includes a propensity for axial and peripheral inflammatory arthritis and inflammation involving the eye (conjunctivitis, uveitis), skin (psoriasis, nail changes), mucosal surfaces (oral and genital), and tendinous attachments to bone (enthesitis). Synovial membranes show histologic inflammation that is similar in some ways to, but clearly different from, that seen in RA. Unlike RA, SpA shows a greater propensity for fibrous ankylosis, osseous resorption, and heterotopic bone formation.
Cardinal Findings: The SpA share a constellation of characteristic clinical, radiographic, and immunogenetic manifestations that suggest a common or related etiopathogenesis (Table 44). Distinctive features include a propensity for axial arthritis (sacroiliitis and spondylitis); peripheral arthritis (often asymmetric and oligoarticular); inflammation at tendinous, ligamentous, or fascial insertions (enthesitis); and a familial pattern of inheritance based on the presence of the class I major histocompatibility complex antigen HLA-B27. These disorders can manifest extraarticular features that suggest a particular SpA. Extraarticular manifestations may involve periarticular structures (enthesitis), the eyes (uveitis), the GI tract (oral ulcerations, asymptomatic gut inflammation), the genitourinary tract (urethritis), the heart (aortitis, heart block), or the skin (psoriasis, keratoderma blennorrhagica).
Diagnostic Criteria: Criteria for the diagnosis of an SpA were developed by the European Spondyloarthropathy Study Group (Table 44). The criteria of Amor et al. perform equally well in population studies. These were devised because other disease-specific criteria (e.g., Rome criteria for AS) exclude many patients with SpA. Diagnostic criteria for “axial spondyloarthritis” and “peripheral spondyloarthritis” have been developed by the ASAS group (www.asas-group.org/). Broader definitions used in criteria allow for earlier diagnosis, greater inclusion in clinical trials, and earlier therapy.
Imaging: Radiographic abnormalities are similar to those seen in AS and reactive arthritis. There is a propensity for sacroiliitis, spondylitis, peripheral arthritis with soft tissue swelling, juxtaarticular osteopenia, joint space narrowing, or ill-defined erosions. Areas of periostitis, reactive new bone formation, or osteitis are not uncommon.
Dougados M, van der Linden SM, Juhlin R, et al. The European Spondyloarthropathy Study Group preliminary criteria for the classification of spondyloarthropathy. Arthritis Rheum 1991;34:1218–1227. PMID: 1930310
Khan MA. Update on spondyloarthropathies. Ann Intern Med 2002;136:896–907. PMID: 12069564
Khan MA, van der Linden SM. A wider spectrum of spondyloarthropathies. Semin Arthritis Rheum 1990;20:107–113. PMID: 2251505
Miceli-Richard C, van der Heijde D, Dougados M. Spondyloarthropathy for practicing rheumatologists: diagnosis, indication for disease-controlling antirheumatic therapy, and evaluation of the response. Rheum Dis Clin North Am 2003;29:449–462. PMID: 12951861
What is lymphoma?
Lymphoma is cancer of the lymphocytes (the white blood cells that help to fight infection). Lymphocytes are found in a liquid called lymph, which travels throughout our body in the lymphatic system (a series of tubes, nodes and organs such as the spleen and thymus that are part of our immune system). Lymphocytes often gather in the lymph nodes (most commonly in the armpit, neck or groin) to fight infection, but can also be found in almost any part of the body. Lymphoma occurs when abnormal lymphocytes grow out of control and collect in the lymph nodes or other parts of the body.
There are over 80 different forms of lymphoma, known as subtypes. The symptoms, diagnosis, prognosis and treatment of lymphoma vary according to the subtype that is diagnosed.
Lymphomas are either:
- Low grade (also referred to as indolent or chronic), because the abnormal cells are slow-growing; or
- High grade (also known as aggressive or acute).
Lymphomas have generally been categorised as either Hodgkin lymphoma or non-Hodgkin lymphoma. This categorisation is named after Dr Thomas Hodgkin, who first described what was then labelled Hodgkin’s Disease in the early nineteenth century.
Lymphoma Coalition discourages the use of the term ‘non-Hodgkin lymphoma’ as the category does not give the patient any important information about their cancer. Given the variety and complexity of the different subtypes, it is important for people diagnosed with lymphoma to know and understand their specific subtype, including whether it is low grade or high grade. Knowing the specific subtype helps people better understand their disease (including its diagnosis, prognosis and treatment plan), and access tailored information and support.
Lymphomas, chronic lymphocytic leukaemia (CLL) and blood cancers
As a cancer of the white blood cells, lymphoma is often grouped with other blood cancers (or haematological malignancies) such as myeloma and leukaemia. In terms of the number of new cases each year, lymphoma is the most common blood cancer in Europe, as well as being the fifth most common cancer overall (after breast, lung, bowel and prostate cancers).
Despite its name, chronic lymphocytic leukaemia (CLL) is clinically a lymphoma and so this page also includes information and data on CLL, where it is available.
Lymphoma incidence, mortality and survival data for Europe
About 111,100 new cases of lymphoma were diagnosed in Europe in 2012 (representing about 3.5% of the total cancer cases in the region). Of these, 17,600 were Hodgkin lymphoma (0.5% of total cases) and 93,500 were other B-cell and T-cell lymphomas (3%). Europe accounts for approximately one-quarter of all new lymphoma cases worldwide, based on the available data from 2012.
The European Cancer Information System estimates these figures will have risen to 134,311 new lymphoma cases in 2018 (with 19,193 being Hodgkin lymphoma and 115,000 other lymphomas).
In Europe in 2012, Croatia had the highest age-standardised incidence rate for Hodgkin lymphoma for both men and women, while the lowest rates were in Iceland for men and Albania for women. The comparable data for other B-cell and T-cell lymphomas showed the highest incidence rates were in Italy for men and the Netherlands for women, while the lowest were in Albania for both men and women.
Reliable incidence data for CLL in Europe is not readily available.
In 2012, there were about 42,500 deaths from lymphoma in Europe (accounting for nearly 2.5% of all deaths from cancer in the region). Of these, 4,600 were from Hodgkin (0.3% of all European cancer mortality) and 37,900 from other B-cell and T-cell lymphomas (2%). Europe accounts for nearly one-fifth (18 to 19%) of all lymphoma mortality worldwide, based on the available data from 2012.
The European Cancer Information System estimates these figures will have risen to 52,403 lymphoma deaths in 2018 (with 4,307 from Hodgkin lymphoma and 48,096 from other B-cell and T-cell lymphomas).
Reliable mortality data for CLL in Europe is not readily available.
The European average for five-year relative survival for men with Hodgkin lymphoma is 80%. For countries where data is available, the rates range from 57% in Bulgaria to 87% in Norway.
The European average for five-year relative survival for women with Hodgkin lymphoma is 82%. For countries where data is available, the rates range from 65% in Bulgaria to 89% in Slovenia.
Graph: Hodgkin lymphoma (C81), Age-Standardised Five-Year Relative Survival, Adults (Aged 15+), European Countries, 2000-2007
With permission from Cancer Research UK - https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/hodgkin-lymphoma/survival#heading-Three – accessed 13 August 2018.
The European average for five-year relative survival for men with other B-cell and T-cell lymphomas is 57%. For countries where data is available, the rates range from 33% in Bulgaria to 69% in Iceland.
The European average for five-year relative survival for women with other B-cell and T-cell lymphomas is 62%. For countries where data is available, the rates range from 44% in Bulgaria to 79% in Iceland.
Given the high number of subtypes within the generalised category of lymphoma it is worth bearing in mind that survival rates will vary greatly from subtype to subtype, particularly between indolent (low-grade or chronic) and aggressive (high-grade or acute) lymphomas.
Graph: Other B-cell and T-cell Lymphomas (C82-C85), Age-Standardised Five-Year Relative Survival, Adults (Aged 15+), European Countries, 2000-2007
With permission from Cancer Research UK - https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/non-hodgkin-lymphoma/survival#heading-Five – accessed 13 August 2018.
The European average for five-year relative survival for men with CLL is 68%. For countries where data is available, the rates range from 42% in Bulgaria to 80% in Switzerland.
The European average for five-year relative survival for women with CLL is 74%. For countries where data is available, the rates range from 50% in Bulgaria to 82% in France.
Graph: Chronic Lymphocytic Leukaemia (C91.1), Age-Standardised Five-Year Relative Survival, Adults (Aged 15+), European Countries, 2000-2007
With permission from Cancer Research UK - https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/leukaemia-cll/survival#heading-Zero – accessed 13 August 2018.
Sources and notes
- Cancer Research UK, https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/hodgkin-lymphoma; https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/non-hodgkin-lymphoma; https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/leukaemia-cll; accessed August 2018.
Incidence and mortality information and data
- Ferlay J, Steliarova-Foucher E, Lortet-Tieulent J, et al. Cancer incidence and mortality patterns in Europe: Estimates for 40 countries in 2012. European Journal of Cancer (2013) 49, 1374-1403.
- Ferlay J, Soerjomataram I, Ervik M, et al. GLOBOCAN 2012 v1.0, Cancer Incidence and Mortality Worldwide: IARC CancerBase No. 11 [Internet]. Lyon, France: International Agency for Research on Cancer; 2013. Available from: http://globocan.iarc.fr, accessed December 2013.
- European Cancer Information System (ECIS) - https://ecis.jrc.ec.europa.eu/index.php.
- The incidence and mortality data is for Europe and worldwide, 2012, ICD-10 C81; the 2018 incidence estimates from ECIS are for 40 European countries, based on data from the cancer registries participating in IARC’s “CI5: Cancer Incidence in Five Continents series” - http://ci5.iarc.fr/Default.aspx. ECIS’s mortality estimates for 2018 are extracted from the WHO mortality database - http://www.who.int/healthinfo/statistics/mortality_rawdata/en/.
- It is worth noting that variations in data between individual countries may be the result of a range of factors, including the different prevalence of risk factors, different diagnostic methods and/or varying data quality.
Survival information and data
- De Angelis R, Sant M, Coleman MP, et al. Cancer survival in Europe 1999-2007 by country and age: results of EUROCARE-5 - a population-based study. Lancet Oncol 2014;15:23-34.
- The survival data is for: 29 European countries, patients diagnosed in 2000-2007 and followed up to 2008, non-Hodgkin lymphoma (C82-C85).
- The survival data consists of both observed and predicted 5-year relative survival. Where sufficient follow-up was not available for recently diagnosed patients the period approach was used to predict 5-year cohort survival.
- Variations in international survival data may be the result of a number of factors such as differences in cancer biology, the use of different diagnostic tests and screening, stage at diagnosis, differences in access to high-quality care, and data collection practices.
History of artificial intelligence
The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.
Eventually, it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to criticism from James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter". Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.
Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to new methods, the application of powerful computer hardware, and the collection of immense data sets.
AI in myth, fiction and speculation
Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea. In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem. By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and speculation, such as Samuel Butler's "Darwin among the Machines." AI has continued to be an important element of science fiction into the present.
Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi, Hero of Alexandria, Al-Jazari, Pierre Jaquet-Droz, and Wolfgang von Kempelen. The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."
Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical—or "formal"—reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.
Spanish philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means; Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical meanings, in such ways as to produce all the possible knowledge. Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.
In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in Leviathan: "reason is nothing but reckoning". Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate." These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.
In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell's success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: "can all of mathematical reasoning be formalized?" His question was answered by Gödel's incompleteness proof, Turing's machine and Church's Lambda calculus.
Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine—a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.
Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine "might compose elaborate and scientific pieces of music of any degree of complexity or extent". (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)
The first modern computers were the massive machines of the Second World War, such as the Z3, ENIAC, and the code-breaking Colossus. The latter two of these machines were based on the theoretical foundation laid by Alan Turing and developed by John von Neumann.
The birth of artificial intelligence 1952–1956
In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.
Cybernetics and early neural networks
The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.
Examples of work in this vein include robots such as W. Grey Walter's turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.
Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network. One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC. Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.
In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think. He noted that "thinking" is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was "thinking". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible and the paper answered all the most common objections to the proposition. The Turing Test was the first serious proposal in the philosophy of artificial intelligence.
In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur. Game AI would continue to be used as a measure of progress in AI throughout its history.
Symbolic reasoning and the Logic Theorist
When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.
In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the "Logic Theorist" (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some. Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind." (This was an early statement of the philosophical position John Searle would later call "Strong AI": that machines can contain minds just as human bodies do.)
Dartmouth Conference 1956: the birth of AI
The Dartmouth Conference of 1956 was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathaniel Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it". The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research. At the conference Newell and Simon debuted the "Logic Theorist" and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field. The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI. The term "Artificial Intelligence" was chosen by McCarthy to avoid associations with cybernetics and connections with the influential cyberneticist Norbert Wiener.
The golden years 1956–1974
The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply "astonishing": computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all. Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years. Government agencies like DARPA poured money into the new field.
There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:
Reasoning as search
Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search".
The principal difficulty was that, for many problems, the number of possible paths through the "maze" was simply astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics or "rules of thumb" that would eliminate those paths that were unlikely to lead to a solution.
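As a rough illustration only (the function names and state representation below are invented for this sketch, not taken from any historical program), the "reasoning as search" paradigm can be written in a few lines of Python. The optional heuristic stands in for the "rules of thumb" that early programs used to prune unpromising branches.

```python
def solve(state, goal_test, successors, heuristic=None, visited=None):
    """Depth-first 'reasoning as search': try moves one at a time,
    backtracking whenever a dead end is reached. States must be hashable."""
    if visited is None:
        visited = set()
    if goal_test(state):
        return [state]                      # reached the goal
    visited.add(state)
    moves = successors(state)
    if heuristic:                           # rule of thumb: try promising moves first
        moves = sorted(moves, key=heuristic)
    for nxt in moves:
        if nxt in visited:
            continue
        path = solve(nxt, goal_test, successors, heuristic, visited)
        if path:                            # this move worked; keep it on the path
            return [state] + path
    return None                             # dead end: backtrack
```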
Newell and Simon tried to capture a general version of this algorithm in a program called the "General Problem Solver". Other "searching" programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and SAINT, written by Minsky's student James Slagle (1961). Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.
An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems.
A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian, and the most successful (and controversial) version was Roger Schank's Conceptual dependency theory.
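A semantic net can be sketched as a small set of labelled triples; the concepts and relations below are illustrative only.

```python
# A toy semantic net: (concept, relation, concept) triples.
triples = {
    ("house", "has-a", "door"),
    ("house", "has-a", "roof"),
    ("door", "is-a", "entrance"),
}

def related(concept, relation):
    """Return every node linked to `concept` by `relation`."""
    return {obj for subj, rel, obj in triples if subj == concept and rel == relation}

print(related("house", "has-a"))   # {'door', 'roof'}
```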
Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.
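A minimal sketch of ELIZA's technique, pattern matching plus pronoun reflection, is shown below; the patterns are invented for illustration and are not Weizenbaum's original DOCTOR script.

```python
import re

# Reflect first/second person so the input can be echoed back.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
]

def reflect(text):
    return " ".join(REFLECT.get(word, word) for word in text.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."          # canned fallback response

print(respond("I am feeling sad"))          # How long have you been feeling sad?
```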
In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface.
This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.
The first generation of AI researchers made these predictions about their work:
- 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."
- 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."
- 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
- 1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."
In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s. DARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963). Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965. These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.
The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them. This created a freewheeling atmosphere at MIT that gave birth to the hacker culture, but this "hands off" approach would not last.
In Japan, Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the world's first full-scale intelligent humanoid robot, or android. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth.
The first AI winter 1974–1980
In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared. At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky's devastating criticism of perceptrons. Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.
In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys". AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.
- Limited computer power: There was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in memory. Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy. With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10^9 operations per second (1,000 MIPS). As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.
- Intractability and the combinatorial explosion. In 1972 Richard Karp (building on Stephen Cook's 1971 theorem) showed there are many problems that can probably only be solved in exponential time (in the size of the inputs). Finding optimal solutions to these problems requires unimaginable amounts of computer time except when the problems are trivial. This almost certainly meant that many of the "toy" solutions used by AI would probably never scale up into useful systems.
- Commonsense knowledge and reasoning. Many important artificial intelligence applications like vision or natural language require simply enormous amounts of information about the world: the program needs to have some idea of what it might be looking at or what it is talking about. This requires that the program know most of the same things about the world that a child does. Researchers soon discovered that this was a truly vast amount of information. No one in 1970 could build a database so large and no one knew how a program might learn so much information.
- Moravec's paradox: Proving theorems and solving geometry problems is comparatively easy for computers, but a supposedly simple task like recognizing a face or crossing a room without bumping into anything is extremely difficult. This helps explain why research into vision and robotics had made so little progress by the middle 1970s.
- The frame and qualification problems. AI researchers (like John McCarthy) who used logic discovered that they could not represent ordinary deductions that involved planning or default reasoning without making changes to the structure of logic itself. They developed new logics (like non-monotonic logics and modal logics) to try to solve the problems.
The end of funding
The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support. In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country. (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.) DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars. By 1974, funding for AI projects was hard to find.
Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. "Many researchers were caught up in a web of increasing exaggeration." However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research". Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.
Critiques from across campus
Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could. Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how". John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as "thinking".
These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know how" or "intentionality" made to an actual computer program. Minsky said of Dreyfus and Searle "they misunderstand, and should be ignored." Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me." Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way to treat a human being."
Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote a "computer program which can conduct psychotherapeutic dialogue" based on ELIZA. Weizenbaum was disturbed that Colby saw a mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.
Perceptrons and the attack on connectionism
A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that "perceptron may eventually be able to learn, make decisions, and translate languages." An active research program into the paradigm was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.
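In modern terms, a perceptron is a thresholded weighted sum trained with a simple error-correction rule. The sketch below is an illustration, not Rosenblatt's Mark I hardware: it learns logical AND, while, as Minsky and Papert pointed out, no single perceptron of this kind can learn XOR, because XOR is not linearly separable.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            w1 += lr * err * x1            # error-correction learning rule
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND)
print([(1 if w1*x1 + w2*x2 + b > 0 else 0) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```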
Utilizing logic and symbolic reasoning
Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal. In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems. A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog. Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.
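The unification step at the heart of resolution (and of Prolog) can be sketched compactly. The term representation below is an assumption of this example, and the occurs check is omitted for brevity.

```python
def is_var(term):
    # Convention for this sketch: variables are capitalised strings ('X', 'Who').
    return isinstance(term, str) and term[:1].isupper()

def unify(a, b, subst=None):
    """Return a substitution (dict) making terms a and b identical, or None.
    Terms are lowercase constants, variables, or tuples such as ('parent', 'X', 'bob')."""
    if subst is None:
        subst = {}
    a = subst.get(a, a) if is_var(a) else a     # dereference already-bound variables
    b = subst.get(b, b) if is_var(b) else b
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(('parent', 'X', 'bob'), ('parent', 'alice', 'Y')))
# {'X': 'alice', 'Y': 'bob'}
```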
Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof. McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems—not machines that think as people do.
Another approach: frames and scripts
Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise." Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.
In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English. Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
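The connection to object-oriented inheritance is easy to see in a toy example; the class names and defaults below are invented for illustration.

```python
class Bird:
    # Frame-style defaults: assumed true unless a more specific frame overrides them.
    can_fly = True
    eats = "worms"

class Penguin(Bird):
    can_fly = False        # an exception to the default, handled by inheritance
    eats = "fish"

tweety, opus = Bird(), Penguin()
print(tweety.can_fly, opus.can_fly)   # True False
```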
In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.
The rise of expert systems
An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.
Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.
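Stripped to its essentials, an expert system is a set of if-then rules applied repeatedly to a working memory of facts. The sketch below is a generic forward-chaining loop with invented rules, not MYCIN's actual knowledge base.

```python
# Each rule: (set of required facts, fact to conclude).
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
]

def forward_chain(facts, rules):
    """Fire rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck"}, RULES))
# {'fever', 'stiff_neck', 'suspect_meningitis', 'order_lumbar_puncture'}
```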
In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it going to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.
The knowledge revolution
The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect—reluctantly, for it violated the scientific canon of parsimony—that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways," writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay". Knowledge based systems and knowledge engineering became a major focus of AI research in the 1980s.
The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut ― the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.
The money returns: the Fifth Generation project
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.
Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large scale projects in AI and information technology. DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.
The revival of connectionism
In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information in a completely new way. Around the same time, Geoffrey Hinton and David Rumelhart popularized a method for training neural networks called "backpropagation", also known as the reverse mode of automatic differentiation published by Seppo Linnainmaa (1970) and applied to neural networks by Paul Werbos. These two discoveries helped to revive the field of connectionism.
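A minimal backpropagation sketch, assuming NumPy and arbitrary hyperparameters, shows a small two-layer network learning the XOR function that a single perceptron cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))         # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))         # output layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                    # backward pass: chain rule at the output
    d_h = (d_out @ W2.T) * h * (1 - h)                     # ...propagated back to the hidden layer
    W2 -= 0.5 * (h.T @ d_out);  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())                                # typically close to [0, 1, 1, 0]
```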
The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986—a two volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.
Bust: the second AI winter 1987–1993
The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors – the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.
A new and different AI winter
The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow. Their fears were well founded: in the late 1980s and early 1990s, AI suffered a series of financial setbacks.
The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.
Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.
In the late 1980s, the Strategic Computing Initiative cut funding to AI "deeply and brutally." New leadership at DARPA had decided that AI was not "the next wave" and directed funds towards projects that seemed more likely to produce immediate results.
By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation" had not been met by 2010. As with other AI projects, expectations had run much higher than what was actually possible.
Over 300 AI companies had shut down, gone bankrupt, or been acquired by the end of 1993, effectively ending the first commercial wave of AI.
The importance of having a body: nouvelle AI and embodied reason
In the late 1980s, several researchers advocated a completely new approach to artificial intelligence, based on robotics. They believed that, to show real intelligence, a machine needs to have a body — it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up."
The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 1970s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.)
In a 1990 paper, "Elephants Don't Play Chess," robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough." In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.
The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence". AI was both more cautious and more successful than it had ever been.
Milestones and Moore's law
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.
In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
These successes were due not to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous increase in the speed and capacity of computers by the 90s. In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951. This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of "raw computer power" was slowly being overcome.
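A back-of-the-envelope check, using the 1951 and 1997 dates from the text and the commonly quoted two-year doubling period, shows that the claimed speed-up is roughly what Moore's law would predict.

```python
years = 1997 - 1951        # Ferranti Mark 1 chess program (1951) to Deep Blue (1997)
doublings = years / 2      # one doubling roughly every two years
print(2 ** doublings)      # about 8.4 million, the same order of magnitude as "10 million times faster"
```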
A new paradigm called "intelligent agents" became widely accepted during the 1990s. Although earlier researchers had proposed modular "divide and conquer" approaches to AI, the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell, Leslie P. Kaelbling, and others brought concepts from decision theory and economics into the study of AI. When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.
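The definition translates directly into a minimal program skeleton; the thermostat example below is an invented illustration of the idea, not a reference implementation.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Maps each percept to an action chosen to maximise some measure of success."""
    @abstractmethod
    def act(self, percept): ...

class ThermostatAgent(Agent):
    """A trivially simple reflex agent: the percept is the room temperature."""
    def __init__(self, target=20.0):
        self.target = target
    def act(self, percept):
        return "heat_on" if percept < self.target else "heat_off"

agent = ThermostatAgent()
print([agent.act(t) for t in (17.5, 21.0)])   # ['heat_on', 'heat_off']
```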
The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.
Implementation of rigor
AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past. There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous "scientific" discipline. Russell & Norvig (2003) describe this as nothing less than a "revolution" and "the victory of the neats".
Judea Pearl's influential 1988 book brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for "computational intelligence" paradigms like neural networks and evolutionary algorithms.
AI behind the scenes
Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems, and these solutions proved to be useful throughout the technology industry, in areas such as data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google's search engine.
The field of AI received little or no credit for these successes in the 1990s and early 2000s. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science. Nick Bostrom explains "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."
Many AI researchers in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."
In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.
In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?" Minsky believed that the answer was that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem. For Ray Kurzweil, the issue was computer power and, using Moore's law, he predicted that machines with human-level intelligence would appear by 2029. Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems. There were many other explanations, and for each there was a corresponding research program underway.
Deep learning, big data and artificial general intelligence: 2011–present
In the first decades of the 21st century, access to large amounts of data (known as "big data"), cheaper and faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. In fact, McKinsey Global Institute estimated in their famous paper "Big data: The next frontier for innovation, competition, and productivity" that "by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data".
By 2016, the market for AI-related products, hardware, and software reached more than 8 billion dollars, and the New York Times reported that interest in AI had reached a "frenzy". The applications of big data began to reach into other fields as well, such as training models in ecology and for various applications in economics. Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.
Deep learning is a branch of machine learning that models high-level abstractions in data by using a deep graph with many processing layers. According to the universal approximation theorem, depth is not strictly necessary for a neural network to be able to approximate arbitrary continuous functions. Even so, shallow networks suffer from a number of problems (such as overfitting) that deep architectures help to avoid, and deep neural networks are able to realistically represent much more complex models than their shallow counterparts.
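For concreteness, the following is a minimal sketch, using NumPy, of what "many processing layers" means in practice: a deep feed-forward network whose forward pass stacks several linear maps, each followed by a nonlinearity. The layer sizes are arbitrary and the weights are random and untrained, so the output is meaningless; the point is only the layered structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A "deep" network: input of size 8, four hidden layers of width 16, one output.
layer_sizes = [8, 16, 16, 16, 16, 1]
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each layer applies a linear map followed by ReLU; the last layer stays linear.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]

example_input = rng.normal(size=(1, 8))
print(forward(example_input))   # a single (meaningless) prediction in a 1x1 array
```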
However, deep learning has problems of its own. A common problem for recurrent neural networks is the vanishing gradient problem, in which the gradients passed backwards between layers gradually shrink and effectively disappear as they are rounded off to zero. Many methods have been developed to address this problem, such as long short-term memory (LSTM) units.
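A rough numerical illustration of the vanishing gradient effect (the repeated identical layer and the step count are assumptions made purely for the example) is to multiply the derivative of a sigmoid activation across many layers or time steps; because each factor is at most 0.25, the backpropagated gradient shrinks geometrically:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x: float) -> float:
    s = sigmoid(x)
    return s * (1.0 - s)   # maximum value is 0.25, reached at x = 0

# Backpropagated gradient magnitude through 50 identical steps,
# ignoring weights for simplicity (illustrative values only).
grad = 1.0
for step in range(50):
    grad *= sigmoid_derivative(0.0)   # multiply by at most 0.25 per step

print(f"gradient factor after 50 steps: {grad:.3e}")  # about 7.9e-31
```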
State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on benchmarks such as the MNIST handwritten digit database and traffic sign recognition.
Language processing engines powered by smart search engines, such as IBM's Watson, can beat humans at answering general trivia questions, and recent developments in deep learning have produced astounding results in competitions with humans in games like Go and Doom (which, being a first-person shooter, has sparked some controversy).
Big data refers to collections of data that cannot be captured, managed, and processed by conventional software tools within an acceptable time frame. Such data demand new processing models in order to deliver stronger decision-making power, insight discovery, and process optimization. In The Big Data Era, Viktor Mayer-Schönberger and Kenneth Cukier argue that big data means that instead of random sampling (sample surveys), all of the data are used for analysis. The 5V characteristics of big data (proposed by IBM) are Volume, Velocity, Variety, Value, and Veracity. The strategic significance of big data technology is not to master huge amounts of data, but to specialize in the meaningful portions of those data. In other words, if big data is likened to an industry, the key to realizing profitability in this industry is to increase the "process capability" of the data and to realize the "value added" of the data through "processing".
Artificial general intelligence
Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that responds in a manner similar to human intelligence. Research in this area includes robotics, speech recognition, image recognition, natural language processing and expert systems. Since the birth of artificial intelligence, the theory and technology have become increasingly mature, and the application fields have kept expanding. It is conceivable that the technological products brought by artificial intelligence in the future will be "containers" of human wisdom. Artificial intelligence can simulate the information processes of human consciousness and thinking. Artificial intelligence is not human intelligence, but it can think like humans and may eventually exceed human intelligence. Artificial general intelligence is also referred to as "strong AI", "full AI" or as the ability of a machine to perform "general intelligent action". Academic sources reserve "strong AI" to refer to machines capable of experiencing consciousness.
- Outline of artificial intelligence
- Timeline of machine learning
- Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004.
- McCorduck 2004, p. 5; Russell & Norvig 2003, p. 939
- McCorduck 2004, pp. 15–16; Buchanan 2005, p. 50 (Judah Loew's Golem); McCorduck 2004, pp. 13–14 (Paracelsus); O'Connor 1994 (Geber's Takwin)
- McCorduck 2004, pp. 17–25.
- Butler 1863.
- Needham 1986, p. 53
- McCorduck 2004, p. 6
- Nick 2005.
- McCorduck 2004, p. 17 and see also Levitt 2000
- Quoted in McCorduck 2004, p. 8. Crevier 1993, p. 1 and McCorduck 2004, pp. 6–9 discusses sacred statues.
- Other important automatons were built by Haroun al-Rashid (McCorduck 2004, p. 10), Jacques de Vaucanson (McCorduck 2004, p. 16) and Leonardo Torres y Quevedo (McCorduck 2004, pp. 59–62)
- Berlinski 2000
- Cfr. Carreras Artau, Tomás y Joaquín. Historia de la filosofía española. Filosofía cristiana de los siglos XIII al XV. Madrid, 1939, Volume I
- Bonner, Anthonny, The Art and Logic of Ramón Llull: A User's Guide, Brill, 2007.
- Anthony Bonner (ed.), Doctor Illuminatus. A Ramon Llull Reader (Princeton University 1985). Vid. "Llull's Influence: The History of Lullism" at 57–71
- 17th century mechanism and AI:
- Hobbes and AI:
- Leibniz and AI:
- The Lambda calculus was especially important to AI, since it was an inspiration for Lisp (the most important programming language used in AI). (Crevier 1993, pp. 190, 196, 61)
- The original photo can be seen in the article: Rose, Allen (April 1946). "Lightning Strikes Mathematics". Popular Science: 83–86. Retrieved 15 April 2012.
- The Turing machine: McCorduck 2004, pp. 63–64, Crevier 1993, pp. 22–24, Russell & Norvig 2003, p. 8 and see Turing 1936
- Menabrea 1843
- McCorduck 2004, pp. 61–62, 64–66, Russell & Norvig 2003, pp. 14–15
- McCorduck (2004, pp. 76–80)
- McCorduck 2004, pp. 51–57, 80–107, Crevier 1993, pp. 27–32, Russell & Norvig 2003, pp. 15, 940, Moravec 1988, p. 3, Cordeschi, 2002 & Chap. 5.
- McCorduck 2004, p. 98, Crevier 1993, pp. 27–28, Russell & Norvig 2003, pp. 15, 940, Moravec 1988, p. 3, Cordeschi, 2002 & Chap. 5.
- McCorduck 2004, pp. 51–57, 88–94, Crevier 1993, p. 30, Russell & Norvig 2003, pp. 15–16, Cordeschi, 2002 & Chap. 5 and see also Pitts & McCulloch 1943
- McCorduck 2004, p. 102, Crevier 1993, pp. 34–35 and Russell & Norvig 2003, p. 17
- McCorduck 2004, pp. 70–72, Crevier 1993, p. 22−25, Russell & Norvig 2003, pp. 2–3 and 948, Haugeland 1985, pp. 6–9, Cordeschi 2002, pp. 170–176. See also Turing 1950
- Norvig & Russell (2003, p. 948) claim that Turing answered all the major objections to AI that have been offered in the years since the paper appeared.
- See "A Brief History of Computing" at AlanTuring.net.
- Schaeffer, Jonathan. One Jump Ahead: Challenging Human Supremacy in Checkers, 1997, 2009, Springer, ISBN 978-0-387-76575-4. Chapter 6.
- McCorduck 2004, pp. 137–170, Crevier, pp. 44–47
- McCorduck 2004, pp. 123–125, Crevier 1993, pp. 44–46 and Russell & Norvig 2003, p. 17
- Quoted in Crevier 1993, p. 46 and Russell & Norvig 2003, p. 17
- Russell & Norvig 2003, pp. 947, 952
- McCorduck 2004, pp. 111–136, Crevier 1993, pp. 49–51 and Russell & Norvig 2003, p. 17 Newquist 1994, pp. 91–112
- See McCarthy et al. 1955. Also see Crevier 1993, p. 48 where Crevier states "[the proposal] later became known as the 'physical symbol systems hypothesis'". The physical symbol system hypothesis was articulated and named by Newell and Simon in their paper on GPS. (Newell & Simon 1963) It includes a more specific definition of a "machine" as an agent that manipulates symbols. See the philosophy of artificial intelligence.
- McCorduck (2004, pp. 129–130) discusses how the Dartmouth conference alumni dominated the first two decades of AI research, calling them the "invisible college".
- "I won't swear and I hadn't seen it before," McCarthy told Pamela McCorduck in 1979. (McCorduck 2004, p. 114) However, McCarthy also stated unequivocally "I came up with the term" in a CNET interview. (Skillings 2006)
- Crevier (1993, p. 49) writes "the conference is generally recognized as the official birthdate of the new science."
- McCarthy, John (1988). "Review of The Question of Artificial Intelligence". Annals of the History of Computing. 10 (3): 224–229., collected in McCarthy, John (1996). "10. Review of The Question of Artificial Intelligence". Defending AI Research: A Collection of Essays and Reviews. CSLI., p. 73 "[O]ne of the reasons for inventing the term "artificial intelligence" was to escape association with "cybernetics". Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him."
- Russell and Norvig write "it was astonishing whenever a computer did anything remotely clever." Russell & Norvig 2003, p. 18
- Crevier 1993, pp. 52–107, Moravec 1988, p. 9 and Russell & Norvig 2003, pp. 18–21
- McCorduck 2004, p. 218, Newquist 1994, pp. 91–112, Crevier 1993, pp. 108–109 and Russell & Norvig 2003, p. 21
- Crevier 1993, pp. 52–107, Moravec 1988, p. 9
- Means-ends analysis, reasoning as search: McCorduck 2004, pp. 247–248. Russell & Norvig 2003, pp. 59–61
- Heuristic: McCorduck 2004, p. 246, Russell & Norvig 2003, pp. 21–22
- GPS: McCorduck 2004, pp. 245–250, Crevier 1993, Russell & Norvig 2003
- Crevier 1993, pp. 51–58,65–66 and Russell & Norvig 2003, pp. 18–19
- McCorduck 2004, pp. 268–271, Crevier 1993, pp. 95–96, Newquist 1994, pp. 148–156, Moravec 1988, pp. 14–15
- McCorduck 2004, p. 286, Crevier 1993, pp. 76–79, Russell & Norvig 2003, p. 19
- Crevier 1993, pp. 79–83
- Crevier 1993, pp. 164–172
- McCorduck 2004, pp. 291–296, Crevier 1993, pp. 134–139
- McCorduck 2004, pp. 299–305, Crevier 1993, pp. 83–102, Russell & Norvig 2003, p. 19 and Copeland 2000
- McCorduck 2004, pp. 300–305, Crevier 1993, pp. 84–102, Russell & Norvig 2003, p. 19
- Simon & Newell 1958, pp. 7–8 quoted in Crevier 1993, p. 108. See also Russell & Norvig 2003, p. 21
- Simon 1965, p. 96 quoted in Crevier 1993, p. 109
- Minsky 1967, p. 2 quoted in Crevier 1993, p. 109
- Minsky strongly believes he was misquoted. See McCorduck 2004, pp. 272–274, Crevier 1993, p. 96 and Darrach 1970.
- Crevier 1993, pp. 64–65
- Crevier 1993, p. 94
- Howe 1994
- McCorduck 2004, p. 131, Crevier 1993, p. 51. McCorduck also notes that funding was mostly under the direction of alumni of the Dartmouth conference of 1956.
- Crevier 1993, p. 65
- Crevier 1993, pp. 68–71 and Turkle 1984
- "Humanoid History -WABOT-".
- Robotics and Mechatronics: Proceedings of the 4th IFToMM International Symposium on Robotics and Mechatronics, page 66
- "Historical Android Projects". androidworld.com.
- Robots: From Science Fiction to Technological Revolution, page 130
- Handbook of Digital Human Modeling: Research for Applied Ergonomics and Human Factors Engineering, Chapter 3, pages 1-2
- Crevier 1993, pp. 100–144 and Russell & Norvig 2003, pp. 21–22
- McCorduck 2004, pp. 104–107, Crevier 1993, pp. 102–105, Russell & Norvig 2003, p. 22
- Crevier 1993, pp. 163–196
- Crevier 1993, p. 146
- Russell & Norvig 2003, pp. 20–21
- Crevier 1993, pp. 146–148, see also Buchanan 2005, p. 56: "Early programs were necessarily limited in scope by the size and speed of memory"
- Moravec 1976. McCarthy has always disagreed with Moravec, back to their early days together at SAIL. He states "I would say that 50 years ago, the machine capability was much too small, but by 30 years ago, machine capability wasn't the real problem." in a CNET interview. (Skillings 2006)
- Hans Moravec, ROBOT: Mere Machine to Transcendent Mind
- Russell & Norvig 2003, pp. 9,21–22 and Lighthill 1973
- McCorduck 2004, pp. 300 & 421; Crevier 1993, pp. 113–114; Moravec 1988, p. 13; Lenat & Guha 1989, (Introduction); Russell & Norvig 2003, p. 21
- McCorduck 2004, p. 456, Moravec 1988, pp. 15–16
- McCarthy & Hayes 1969, Crevier 1993, pp. 117–119
- McCorduck 2004, pp. 280–281, Crevier 1993, p. 110, Russell & Norvig 2003, p. 21 and NRC 1999 under "Success in Speech Recognition".
- Crevier 1993, p. 117, Russell & Norvig 2003, p. 22, Howe 1994 and see also Lighthill 1973.
- Russell & Norvig 2003, p. 22, Lighthill 1973, John McCarthy wrote in response that "the combinatorial explosion problem has been recognized in AI from the beginning" in Review of Lighthill report
- Crevier 1993, pp. 115–116 (on whom this account is based). Other views include McCorduck 2004, pp. 306–313 and NRC 1999 under "Success in Speech Recognition".
- Crevier 1993, p. 115. Moravec explains, "Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."
- NRC 1999 under "Shift to Applied Research Increases Investment." While the autonomous tank was a failure, the battle management system (called "DART") proved to be enormously successful, saving billions in the first Gulf War, repaying the investment and justifying the DARPA's pragmatic policy, at least as far as DARPA was concerned.
- Lucas and Penrose' critique of AI: Crevier 1993, p. 22, Russell & Norvig 2003, pp. 949–950, Hofstadter 1980, pp. 471–477 and see Lucas 1961
- "Know-how" is Dreyfus' term. (Dreyfus makes a distinction between "knowing how" and "knowing that", a modern version of Heidegger's distinction of ready-to-hand and present-at-hand.) (Dreyfus & Dreyfus 1986)
- Dreyfus' critique of artificial intelligence: McCorduck 2004, pp. 211–239, Crevier 1993, pp. 120–132, Russell & Norvig 2003, pp. 950–952 and see Dreyfus 1965, Dreyfus 1972, Dreyfus & Dreyfus 1986
- Searle's critique of AI: McCorduck 2004, pp. 443–445, Crevier 1993, pp. 269–271, Russell & Norvig 2004, pp. 958–960 and see Searle 1980
- Quoted in Crevier 1993, p. 143
- Quoted in Crevier 1993, p. 122
- "I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being." Joseph Weizenbaum, quoted in Crevier 1993, p. 123.
- Colby, Watt & Gilbert 1966, p. 148. Weizenbaum referred to this text in Weizenbaum 1976, pp. 5, 6. Colby and his colleagues later also developed chatterbot-like "computer simulations of paranoid processes (PARRY)" to "make intelligible paranoid processes in explicit symbol processing terms." (Colby 1974, p. 6)
- Weizenbaum's critique of AI: McCorduck 2004, pp. 356–373, Crevier 1993, pp. 132–144, Russell & Norvig 2003, p. 961 and see Weizenbaum 1976
- McCorduck 2004, p. 51, Russell & Norvig 2003, pp. 19, 23
- McCorduck 2004, p. 51, Crevier 1993, pp. 190–192
- Crevier 1993, pp. 193–196
- Crevier 1993, pp. 145–149, 258–263
- Wason (1966) showed that people do poorly on completely abstract problems, but if the problem is restated to allow the use of intuitive social intelligence, performance dramatically improves. (See Wason selection task) Tversky, Slovic & Kahnemann (1982) have shown that people are terrible at elementary problems that involve uncertain reasoning. (See list of cognitive biases for several examples). Eleanor Rosch's work is described in Lakoff 1987
- An early example of McCarthy's position was in the journal Science where he said "This is AI, so we don't care if it's psychologically real" (Kolata 1982), and he recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).
- Crevier 1993, pp. 175
- Neat vs. scruffy: McCorduck 2004, pp. 421–424 (who picks up the state of the debate in 1984). Crevier 1993, pp. 168 (who documents Schank's original use of the term). Another aspect of the conflict was called "the procedural/declarative distinction" but did not prove to be influential in later AI research.
- McCorduck 2004, pp. 305–306, Crevier 1993, pp. 170–173, 246 and Russell & Norvig 2003, p. 24. Minsky's frame paper: Minsky 1974.
- Newquist 1994, pp. 189–192
- McCorduck 2004, pp. 327–335 (Dendral), Crevier 1993, pp. 148–159, Newquist 1994, p. 271, Russell & Norvig 2003, pp. 22–23
- Crevier 1993, pp. 158–159 and Russell & Norvig 2003, pp. 23–24
- Crevier 1993, p. 198
- McCorduck 2004, pp. 434–435, Crevier 1993, pp. 161–162,197–203 and Russell & Norvig 2003, p. 24
- McCorduck 2004, p. 299
- McCorduck 2004, pp. 421
- Knowledge revolution: McCorduck 2004, pp. 266–276, 298–300, 314, 421, Newquist 1994, pp. 255–267, Russell & Norvig, pp. 22–23
- Cyc: McCorduck 2004, p. 489, Crevier 1993, pp. 239–243, Newquist 1994, pp. 431–455, Russell & Norvig 2003, pp. 363–365 and Lenat & Guha 1989
- "Chess: Checkmate" (PDF). Retrieved 1 September 2007.
- McCorduck 2004, pp. 436–441, Newquist 1994, pp. 231–240, Crevier 1993, pp. 211, Russell & Norvig 2003, p. 24 and see also Feigenbaum & McCorduck 1983
- Crevier 1993, p. 195
- Crevier 1993, p. 240.
- Russell & Norvig 2003, p. 25
- McCorduck 2004, pp. 426–432, NRC 1999 under "Shift to Applied Research Increases Investment"
- Crevier 1993, pp. 214–215.
- Crevier 1993, pp. 215–216.
- Crevier 1993, pp. 203. AI winter was first used as the title of a seminar on the subject for the Association for the Advancement of Artificial Intelligence.
- Newquist 1994, pp. 359–379, McCorduck 2004, p. 435, Crevier 1993, pp. 209–210
- McCorduck 2004, p. 435 (who cites institutional reasons for their ultimate failure), Newquist 1994, pp. 258–283 (who cites limited deployment within corporations), Crevier 1993, pp. 204–208 (who cites the difficulty of truth maintenance, i.e., learning and updating), Lenat & Guha 1989, Introduction (who emphasizes the brittleness and the inability to handle excessive qualification.)
- McCorduck 2004, pp. 430–431
- McCorduck 2004, p. 441, Crevier 1993, p. 212. McCorduck writes "Two and a half decades later, we can see that the Japanese didn't quite meet all of those ambitious goals."
- Newquist, HP (1994). The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think. New York: Macmillan/SAMS. ISBN 978-0-672-30412-5.
- McCorduck 2004, pp. 454–462
- Moravec (1988, p. 20) writes: "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."
- Crevier 1993, pp. 183–190.
- Brooks 1990, p. 3
- See, for example, Lakoff & Turner 1999
- McCorduck (2004, p. 424) discusses the fragmentation and the abandonment of AI's original goals.
- McCorduck 2004, pp. 480–483
- "Deep Blue". IBM Research. Retrieved 10 September 2010.
- DARPA Grand Challenge – home page Archived 31 October 2007 at the Wayback Machine
- "Archived copy". Archived from the original on 5 March 2014. Retrieved 25 October 2011.CS1 maint: archived copy as title (link)
- Markoff, John (16 February 2011). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times.
- Kurzweil 2005, p. 274 writes that the improvement in computer chess, "according to common wisdom, is governed only by the brute force expansion of computer hardware."
- Cycle time of Ferranti Mark 1 was 1.2 milliseconds, which is arguably equivalent to about 833 flops. Deep Blue ran at 11.38 gigaflops (and this does not even take into account Deep Blue's special-purpose hardware for chess). Very approximately, these differ by a factor of 10^7.
- McCorduck 2004, pp. 471–478, Russell & Norvig 2003, p. 55, where they write: "The whole-agent view is now widely accepted in the field". The intelligent agent paradigm is discussed in major AI textbooks, such as: Russell & Norvig 2003, pp. 32–58, 968–972, Poole, Mackworth & Goebel 1998, pp. 7–21, Luger & Stubblefield 2004, pp. 235–240
- Carl Hewitt's Actor model anticipated the modern definition of intelligent agents. (Hewitt, Bishop & Steiger 1973) Both John Doyle (Doyle 1983) and Marvin Minsky's popular classic The Society of Mind (Minsky 1986) used the word "agent". Other "modular" proposals included Rodney Brook's subsumption architecture, object-oriented programming and others.
- Russell & Norvig 2003, pp. 27, 55
- This is how the most widely accepted textbooks of the 21st century define artificial intelligence. See Russell & Norvig 2003, p. 32 and Poole, Mackworth & Goebel 1998, p. 1
- McCorduck 2004, p. 478
- McCorduck 2004, pp. 486–487, Russell & Norvig 2003, pp. 25–26
- Russell & Norvig 2003, pp. 25–26
- McCorduck (2004, p. 487): "As I write, AI enjoys a Neat hegemony."
- Pearl 1988
- See Applications of artificial intelligence § Computer science
- NRC 1999 under "Artificial Intelligence in the 90s", and Kurzweil 2005, p. 264
- Russell & Norvig 2003, p. 28
- For the new state of the art in AI based speech recognition, see The Economist (2007)
- "AI-inspired systems were already integral to many everyday technologies such as internet search engines, bank software for processing transactions and in medical diagnosis." Nick Bostrom, quoted in CNN 2006
- Olsen (2004),Olsen (2006)
- McCorduck 2004, p. 423, Kurzweil 2005, p. 265, Hofstadter 1979, p. 601
- CNN 2006
- Markoff 2005
- The Economist 2007
- Tascarella 2006
- Crevier 1993, pp. 108–109
- He goes on to say: "The answer is, I believe we could have ... I once went to an international conference on neural net[s]. There were 40 thousand registrants ... but ... if you had an international conference, for example, on using multiple representations for common sense reasoning, I've only been able to find 6 or 7 people in the whole world." Minsky 2001
- Maker 2006
- Kurzweil 2005
- Hawkins & Blakeslee 2004
- Steve Lohr (17 October 2016), "IBM Is Counting on Its Bet on Watson, and Paying Big Money for It", New York Times
- Hampton, Stephanie E; Strasser, Carly A; Tewksbury, Joshua J; Gram, Wendy K; Budden, Amber E; Batcheller, Archer L; Duke, Clifford S; Porter, John H (1 April 2013). "Big data and the future of ecology". Frontiers in Ecology and the Environment. 11 (3): 156–162. doi:10.1890/120103. ISSN 1540-9309.
- "How Big Data is Changing Economies | Becker Friedman Institute". bfi.uchicago.edu. Retrieved 9 June 2017.
- LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (2015). "Deep learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442.
- Baral, Chitta; Fuentes, Olac; Kreinovich, Vladik (June 2015). "Why Deep Neural Networks: A Possible Theoretical Explanation". Departmental Technical Reports (Cs). Retrieved 9 June 2017.
- Ciregan, D.; Meier, U.; Schmidhuber, J. (June 2012). Multi-column deep neural networks for image classification. 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3642–3649. arXiv:1202.2745. Bibcode:2012arXiv1202.2745C. CiteSeerX 10.1.1.300.3283. doi:10.1109/cvpr.2012.6248110. ISBN 978-1-4673-1228-8.
- Markoff, John (16 February 2011). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times. ISSN 0362-4331. Retrieved 10 June 2017.
- "AlphaGo: Mastering the ancient game of Go with Machine Learning". Research Blog. Retrieved 10 June 2017.
- "Innovations of AlphaGo | DeepMind". DeepMind. Retrieved 10 June 2017.
- University, Carnegie Mellon. "Computer Out-Plays Humans in "Doom"-CMU News - Carnegie Mellon University". www.cmu.edu. Retrieved 10 June 2017.
- Laney, Doug (2001). "3D data management: Controlling data volume, velocity and variety". META Group Research Note. 6 (70).
- Marr, Bernard (6 March 2014). "Big Data: The 5 Vs Everyone Must Know".
- Goes, Paulo B. (2014). "Design science research in top information systems journals". MIS Quarterly: Management Information Systems. 38 (1).
- (Kurzweil 2005, p. 260) or see Advanced Human Intelligence where he defines strong AI as "machine intelligence with the full range of human intelligence."
- The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013
- Berlinski, David (2000), The Advent of the Algorithm, Harcourt Books, ISBN 978-0-15-601391-8, OCLC 46890682.
- Buchanan, Bruce G. (Winter 2005), "A (Very) Brief History of Artificial Intelligence" (PDF), AI Magazine, pp. 53–60, archived from the original (PDF) on 26 December 2007, retrieved 30 August 2007.
- Brooks, Rodney (1990), "Elephants Don't Play Chess" (PDF), Robotics and Autonomous Systems, 6 (1–2): 3–15, CiteSeerX 10.1.1.588.7539, doi:10.1016/S0921-8890(05)80025-9, retrieved 30 August 2007.
- Butler, Samuel (13 June 1863), "Darwin Among the Machines", The Press, Christchurch, New Zealand, retrieved 10 October 2008.
- Colby, Kenneth M.; Watt, James B.; Gilbert, John P. (1966), "A Computer Method of Psychotherapy: Preliminary Communication", The Journal of Nervous and Mental Disease, vol. 142 no. 2, pp. 148–152, doi:10.1097/00005053-196602000-00005, retrieved 17 June 2018.
- Colby, Kenneth M. (September 1974), Ten Criticisms of Parry (PDF), Stanford Artificial Intelligence Laboratory, REPORT NO. STAN-CS-74-457, retrieved 17 June 2018.
- AI set to exceed human brain power, CNN.com, 26 July 2006, retrieved 16 October 2007.
- Copeland, Jack (2000), Micro-World AI, retrieved 8 October 2008.
- Cordeschi, Roberto (2002), The Discovery of the Artificial, Dordrecht: Kluwer..
- Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3
- Darrach, Brad (20 November 1970), "Meet Shaky, the First Electronic Person", Life Magazine, pp. 58–68.
- Doyle, J. (1983), "What is rational psychology? Toward a modern mental philosophy", AI Magazine, vol. 4 no. 3, pp. 50–53.
- Dreyfus, Hubert (1965), Alchemy and AI, RAND Corporation Memo.
- Dreyfus, Hubert (1972), What Computers Can't Do, New York: MIT Press, ISBN 978-0-06-090613-9, OCLC 5056816.
- The Economist (7 June 2007), "Are You Talking to Me?", The Economist, retrieved 16 October 2008.
- Feigenbaum, Edward A.; McCorduck, Pamela (1983), The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, Michael Joseph, ISBN 978-0-7181-2401-4.
- Hawkins, Jeff; Blakeslee, Sandra (2004), On Intelligence, New York, NY: Owl Books, ISBN 978-0-8050-7853-4, OCLC 61273290.
- Hebb, D.O. (1949), The Organization of Behavior, New York: Wiley, ISBN 978-0-8058-4300-2, OCLC 48871099.
- Hewitt, Carl; Bishop, Peter; Steiger, Richard (1973), A Universal Modular Actor Formalism for Artificial Intelligence (PDF), IJCAI, archived from the original (PDF) on 29 December 2009
- Hobbes, Thomas (1651), Leviathan.
- Hofstadter, Douglas (1999), Gödel, Escher, Bach: an Eternal Golden Braid, Basic Books, ISBN 978-0-465-02656-2, OCLC 225590743.
- Howe, J. (November 1994), Artificial Intelligence at Edinburgh University: a Perspective, retrieved 30 August 2007.
- Kaplan, Andreas; Haenlein, Michael (2018), "Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence", Business Horizons, 62: 15–25, doi:10.1016/j.bushor.2018.08.004.
- Kolata, G. (1982), "How can computers get common sense?", Science, 217 (4566): 1237–1238, Bibcode:1982Sci...217.1237K, doi:10.1126/science.217.4566.1237, PMID 17837639.
- Kurzweil, Ray (2005), The Singularity is Near, Viking Press, ISBN 978-0-14-303788-0, OCLC 71826177.
- Lakoff, George (1987), Women, Fire, and Dangerous Things: What Categories Reveal About the Mind, University of Chicago Press., ISBN 978-0-226-46804-4.
- Lenat, Douglas; Guha, R. V. (1989), Building Large Knowledge-Based Systems, Addison-Wesley, ISBN 978-0-201-51752-1, OCLC 19981533.
- Levitt, Gerald M. (2000), The Turk, Chess Automaton, Jefferson, N.C.: McFarland, ISBN 978-0-7864-0778-1.
- Lighthill, Professor Sir James (1973), "Artificial Intelligence: A General Survey", Artificial Intelligence: a paper symposium, Science Research Council
- Lucas, John (1961), "Minds, Machines and Gödel", Philosophy, 36 (XXXVI): 112–127, doi:10.1017/S0031819100057983, retrieved 15 October 2008
- Maker, Meg Houston (2006), AI@50: AI Past, Present, Future, Dartmouth College, archived from the original on 8 October 2008, retrieved 16 October 2008
- Markoff, John (14 October 2005), "Behind Artificial Intelligence, a Squadron of Bright Real People", The New York Times, retrieved 16 October 2008
- McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (31 August 1955), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, archived from the original on 30 September 2008, retrieved 16 October 2008
- McCarthy, John; Hayes, P. J. (1969), "Some philosophical problems from the standpoint of artificial intelligence", in Meltzer, B. J.; Mitchie, Donald (eds.), Machine Intelligence 4, Edinburgh University Press, pp. 463–502, retrieved 16 October 2008
- McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 978-1-56881-205-2, OCLC 52197627.
- McCulloch, W. S.; Pitts, W. (1943), "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics, 5 (4): 115–127, doi:10.1007/BF02478259
- Menabrea, Luigi Federico; Lovelace, Ada (1843), "Sketch of the Analytical Engine Invented by Charles Babbage", Scientific Memoirs, 3, retrieved 29 August 2008 With notes upon the Memoir by the Translator
- Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall
- Minsky, Marvin; Papert, Seymour (1969), Perceptrons: An Introduction to Computational Geometry, The MIT Press, ISBN 978-0-262-63111-2, OCLC 16924756
- Minsky, Marvin (1974), A Framework for Representing Knowledge, retrieved 16 October 2008
- Minsky, Marvin (1986), The Society of Mind, Simon and Schuster, ISBN 978-0-671-65713-0, OCLC 223353010
- Minsky, Marvin (2001), It's 2001. Where Is HAL?, Dr. Dobb's Technetcast, retrieved 8 August 2009
- Moravec, Hans (1976), The Role of Raw Power in Intelligence, archived from the original on 3 March 2016, retrieved 16 October 2008
- Moravec, Hans (1988), Mind Children, Harvard University Press, ISBN 978-0-674-57618-6, OCLC 245755104
- NRC (1999), "Developments in Artificial Intelligence", Funding a Revolution: Government Support for Computing Research, National Academy Press, ISBN 978-0-309-06278-7, OCLC 246584055
- Newell, Allen; Simon, H. A. (1963), "GPS: A Program that Simulates Human Thought", in Feigenbaum, E.A.; Feldman, J. (eds.), Computers and Thought, New York: McGraw-Hill, ISBN 978-0-262-56092-4, OCLC 246968117
- Newquist, HP (1994), The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think, New York: Macmillan/SAMS, ISBN 978-0-9885937-1-8
- Nick, Martin (2005), Al Jazari: The Ingenious 13th Century Muslim Mechanic, Al Shindagah, retrieved 16 October 2008.
- O'Connor, Kathleen Malone (1994), The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam, University of Pennsylvania, pp. 1–435, retrieved 10 January 2007
- Olsen, Stefanie (10 May 2004), Newsmaker: Google's man behind the curtain, CNET, retrieved 17 October 2008.
- Olsen, Stefanie (18 August 2006), Spying an intelligent search engine, CNET, retrieved 17 October 2008.
- Pearl, J. (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, California: Morgan Kaufmann, ISBN 978-1-55860-479-7, OCLC 249625842.
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
- Poole, David; Mackworth, Alan; Goebel, Randy (1998), Computational Intelligence: A Logical Approach, Oxford University Press., ISBN 978-0-19-510270-3.
- Samuel, Arthur L. (July 1959), "Some studies in machine learning using the game of checkers", IBM Journal of Research and Development, 3 (3): 210–219, CiteSeerX 10.1.1.368.2254, doi:10.1147/rd.33.0210, retrieved 20 August 2007.
- Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences, 3 (3): 417–457, doi:10.1017/S0140525X00005756, retrieved 13 May 2009.
- Simon, H. A.; Newell, Allen (1958), "Heuristic Problem Solving: The Next Advance in Operations Research", Operations Research, 6: 1, doi:10.1287/opre.6.1.1.
- Simon, H. A. (1965), The Shape of Automation for Men and Management, New York: Harper & Row.
- Skillings, Jonathan (2006), Newsmaker: Getting machines to think like us, CNET, retrieved 8 October 2008.
- Tascarella, Patty (14 August 2006), "Robotics firms find fundraising struggle, with venture capital shy", Pittsburgh Business Times, retrieved 15 March 2016.
- Turing, Alan (1936–37), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2 (42): 230–265, doi:10.1112/plms/s2-42.1.230, retrieved 8 October 2008.
- Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423.
- Weizenbaum, Joseph (1976), Computer Power and Human Reason, W.H. Freeman & Company, ISBN 978-0-14-022535-8, OCLC 10952283.
Marek Majdan and colleagues use detailed data from the statistical office of the European Union to estimate the burden of deaths due to traumatic brain injury in 16 European countries.
Traumatic brain injuries (TBIs) are a major public health, medical, and societal challenge globally [1–4]. In the European Union alone, an estimated 57,000 deaths and 1.5 million hospital admissions annually have been attributed to TBI. This translates to a pooled population mortality rate of 11.7 (95% CI: 9.9 to 13.6) and hospital admission rate of 287.2 (95% CI: 232.9 to 341.5) per 100,000 persons per year. The overall mortality in persons following TBI has been shown to be substantially higher than mortality in the general population—a pooled standardized mortality ratio of 2.18 (95% CI: 1.88–2.52). Life expectancy after TBI has been estimated to range from less than 40% to over 85% of that of the general population—depending on the severity of the injury and the level of impairment. Due to the long-term character of the disabilities after TBI and their unstable nature (e.g., deterioration of previously achieved levels of outcome occurs in about 1 of 3 patients within 10 years post-injury), TBI has been considered a chronic condition. Thus, the general burden of TBI to victims, their families, and the society as a whole is substantial and has been well documented.
Recent epidemiological research suggests that the patterns of TBI are dynamic, that they are changing over time, and that they are dependent on the demographic structure of the population and the level of economic development. In emerging economies, intensive motorization that is not accompanied by adequate and enforced preventive measures has led to a substantial increase in TBIs related to traffic crashes [2,9]. As life expectancy has increased in high-income countries, TBI from falls has become more prevalent [9–11]. In order to cope with such variation, and to achieve any improvements in the levels of occurrence and outcomes of TBI, standardized, reproducible, regularly updated, and comparable epidemiological data are needed [5,12,13]. Most published epidemiological studies on TBI have focused on using case fatality rates, population mortality, or incidence to describe the epidemiology of TBI [14–16].
Although these indicators provide insight into the occurrence and outcome of TBIs in various populations, they fail to quantify the full extent of their public health and societal impact. Summary measures of population health used in the Global Burden of Disease Study have been designed to capture mortality and morbidity impact, and to allow subsequent comparison of disease impact on public health across a range of illnesses and populations. Among these measures are years of life lost (YLLs), which quantifies the number of years of life lost because the person dies prematurely due to a disease or injury; years lived with disability (YLDs), which quantifies the healthy time lost by a person living with a disability caused by a disease or injury; and disability-adjusted life years (DALYs), a summary measure that is the sum of YLLs and YLDs. These indicators have recently been used to estimate the global burden of diseases and the overall burden of injuries; however—owing especially to the nonavailability of data—studies using them to describe TBI are scarce [12,18].
The aim of this study was to provide an in-depth analysis of the burden of deaths due to TBI by calculating TBI-induced YLLs in 16 European countries in 2013, analyzing their main causes and demographic patterns, using data extracted from death certificates under unified guidelines and collected in a standardized manner.
Study design and setting
A population-wide, cross-sectional epidemiological study was conducted in 16 European countries (Austria, Bulgaria, Croatia, Cyprus, Denmark, Estonia, Hungary, Ireland, Italy, Lithuania, Luxembourg, Romania, Serbia, Slovakia, Slovenia, and United Kingdom) in order to estimate TBI YLLs for the year 2013. The selection of countries was based on the availability of data. The year 2013 was chosen because it was the most recent year for which data were available. The availability of the data in other EU countries for this year was limited because, at the time of this study, not all European countries were submitting data on causes of injury-related deaths in the necessary format (e.g., giving both the external cause and nature of injury) and detail (e.g., giving data in sufficiently small age groups). Thus, the choice was made to use the 16 countries for which data were available, and to extrapolate the findings to the 28 member states of the European Union (EU-28), and this seemed justified under the circumstances.
The data used for all analyses in this study were acquired from the statistical office of the European Union (Eurostat). Eurostat routinely collects data from death certificates from the 28 EU member states, the former Yugoslav Republic of Macedonia, Albania, Iceland, Norway, Liechtenstein, and Switzerland and regularly publishes annual overviews of causes of death. For our study, a specifically tailored dataset of micro-level data was provided that, in detail of information, went beyond the regularly published reports. This dataset contained a record for each injury-related death that occurred in the included countries in 2013, where the external cause of death (International Classification of Diseases–10th Revision [ICD-10] codes V01–Y98), the specific nature of injury (ICD-10 codes S00–T98, only 1 diagnosis provided for each record), the age at death, and sex were given. All data used in our analyses were collected at the country level and then—following specific and unified guidelines—submitted to Eurostat, which in turn provided them to us. The study used administratively collected secondary data; no ethics committee approval was required, nor was ethics approval required in order to obtain the data from Eurostat.
For the purpose of this study, a TBI-related death was defined as a death where the cause of death was a TBI or a TBI sequela, i.e., from the provided database, records in which the nature of injury was coded as ICD-10 S00–S09 (injuries to the head) or T90 (sequelae of injuries to the head).
The European Union life table published by Eurostat was used to determine the life expectancy at death for each recorded death. The number of YLLs for each death was calculated by subtracting the age at death from the life expectancy at the age of death, and summarized using this formula:
YLL = Σ_l (d_l × e_l),

where d_l is the number of fatal cases due to health outcome l in a certain period and e_l is the expected individual life span at the age of death due to health outcome l.
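As a minimal illustration of this calculation (the life-table values and the list of deaths below are hypothetical placeholders; the actual analysis used the full Eurostat life table and the micro-level death records described above), the YLL total is simply the sum of the remaining life expectancy at each recorded age at death:

```python
# Illustrative YLL calculation with made-up numbers.
# life_expectancy maps an age at death to the remaining life expectancy
# (in years) taken from a life table; values here are placeholders.
life_expectancy = {20: 61.5, 45: 37.2, 70: 15.8, 85: 6.4}

# One entry per TBI-related death: the age at death (hypothetical records).
tbi_deaths = [20, 45, 45, 70, 85, 85, 85]

# YLL = sum over deaths of the expected remaining years at the age of death.
total_yll = sum(life_expectancy[age] for age in tbi_deaths)
mean_yll_per_death = total_yll / len(tbi_deaths)

print(f"Total YLLs: {total_yll:.1f}")                    # 170.9 with these numbers
print(f"Mean YLLs per death: {mean_yll_per_death:.1f}")  # about 24.4
```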
YLLs were summarized into 6 age groups (0–4, 5–14, 15–34, 35–64, 65–84, and ≥85 years) by summing the YLLs in persons in each age group. Crude rates of YLLs were calculated per 100,000 people per year using mid-year populations for each country in total and for the 6 age groups. For comparison purposes (in response to the suggestions of a peer reviewer), number of YLLs and crude TBI YLL rates per 100,000 persons per year are also presented broken down into 5-year age groups (S1 Appendix). In addition, age-standardized rates with 95% CIs were calculated in order to adjust for differences in the age structures of the compared populations (i.e., differences between the analyzed countries). To calculate age-standardized rates, the European standard population was used, which is a theoretical population with its age distribution based on actual age distributions in the populations of the European countries. A pooled estimate was calculated based on age-standardized rates. Further, average YLLs due to TBI per case were calculated with 95% CIs for each country and overall (mean YLLs per case calculated by dividing the sum of YLLs by the number of cases in the respective group or subgroup).
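The sketch below shows the general form of the direct age standardization described here; the age-group rates and standard-population weights are placeholder numbers chosen for the example, not the actual values of the European standard population:

```python
# Direct age standardization (illustrative numbers only).
# Each tuple: (age group, crude YLL rate per 100,000, standard population weight).
age_groups = [
    ("0-4",   30.0,   5000),
    ("5-14",  40.0,  10500),
    ("15-34", 310.0, 24000),
    ("35-64", 290.0, 40500),
    ("65-84", 260.0, 17500),
    ("85+",   240.0,  2500),
]

# Weighted average of the age-specific rates, using the standard population
# weights, gives the age-standardized rate.
total_standard_pop = sum(weight for _, _, weight in age_groups)
standardized_rate = (sum(rate * weight for _, rate, weight in age_groups)
                     / total_standard_pop)
print(f"Age-standardized YLL rate: {standardized_rate:.1f} per 100,000")
# about 249 per 100,000 with these placeholder numbers
```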
In addition, both numbers of YLLs and YLL rates were stratified by sex and external cause of injury. Differences between rates (by age group and sex) are presented as rate ratios with 95% CIs.
In order to evaluate the relative importance of TBI in the context of all injuries, proportions of TBI YLLs out of overall injury YLLs were calculated. For this calculation, cases with unspecific codes (e.g., “other and unspecified effects of external causes”); deaths caused by exposure to heat, frost, or intoxication (T15–T65); and cases with other generalized causes (T66–T78; T80–T88) were excluded. Deaths with these causes are either not directly comparable to deaths due to TBI—because the circumstances or mechanisms leading to death are substantially different (e.g., traffic injury versus frostbite)—or the true cause of death is actually unknown and inclusion could bias the calculated proportions by increasing the denominator. Thus, only the following codes were used in the denominator: injuries to the head (S00–S09); injuries involving multiple body regions (T00–T07); injuries to unspecified trunk, limb, or body region (T08–T14); certain early complications of trauma (T79); and sequelae of injuries, of poisoning, and of other consequences of external causes (T90–T98). In order to allow for comparisons, S1–S3 Tables present these analyses with all injury-related deaths (no exclusions) used as the denominator.
The total number of TBI YLLs in the EU was estimated by extrapolating the pooled crude rate of YLLs in the 16 analyzed countries to the EU-28 population count published by Eurostat.
Pooled analyses were performed in order to estimate summary age-standardized rates of YLLs. In order to model possible heterogeneity of rates in the different countries, the random effects model was applied by the DerSimonian and Laird method, in line with previous studies [5,24]. To assess the heterogeneity of the pooled estimates, I² values with 95% confidence intervals were calculated. To make our findings as comparable as possible, in S4 Table we provide pooled rates calculated by applying the fixed effects model (in response to the suggestions of a peer reviewer). The original analysis plan for the study is reported in S1 Text.
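The sketch below outlines the DerSimonian and Laird random-effects pooling and the accompanying I² calculation in the general form used here; the country rates and standard errors are invented for illustration, and a real analysis would normally rely on an established meta-analysis package rather than hand-rolled code:

```python
# DerSimonian-Laird random-effects pooling of age-standardized rates
# (illustrative inputs; not the actual country estimates).
rates = [259.0, 310.5, 150.2, 518.2, 220.7]   # per 100,000 per year
ses   = [6.2,   8.1,   5.0,   12.3,  7.4]     # standard errors

w_fixed = [1 / se**2 for se in ses]           # inverse-variance (fixed) weights
fixed_mean = sum(w * r for w, r in zip(w_fixed, rates)) / sum(w_fixed)

# Cochran's Q and the between-country variance tau^2
q = sum(w * (r - fixed_mean) ** 2 for w, r in zip(w_fixed, rates))
df = len(rates) - 1
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, and I^2
w_random = [1 / (se**2 + tau2) for se in ses]
pooled = sum(w * r for w, r in zip(w_random, rates)) / sum(w_random)
se_pooled = (1 / sum(w_random)) ** 0.5
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled rate: {pooled:.1f} (95% CI: {pooled - 1.96*se_pooled:.1f} "
      f"to {pooled + 1.96*se_pooled:.1f}), I^2 = {i2:.0f}%")
```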
Based on the data we obtained and the case ascertainment defined in this study, 17,049 TBI deaths were identified in the 16 analyzed countries in 2013, which translates to an age-standardized pooled rate of 11.3 (95% CI: 9.5–13.1) TBI deaths per 100,000 persons per year (Fig 1). Of these deaths, 11,944 (70%) were males. In males, the majority of deaths (8,595, 72%) occurred in persons 35–84 years old, whereas in females most deaths occurred in persons 65 years old and older (3,703, 73%). Fig 2 shows a map with levels of TBI death rates in the analyzed countries. The highest TBI death rates overall and for males were observed in Lithuania, and for females in Austria (consult S5 and S6 Tables for details). The largest sex differences in age-standardized TBI death rates were in Estonia (male to female ratio of 5.8) and Bulgaria (ratio of 4.5), whereas the smallest such differences were in the United Kingdom (ratio of 1.7) and Italy (ratio of 2.1)—consult S7 Table for details.
In order to present the overall magnitude of the problem in the analyzed countries, in Table 1 we present the overall numbers of TBI YLLs by country, age, and sex, along with the size of the population in each country. A total of 374,636 YLLs were attributed to TBI in the 16 analyzed countries in 2013. Of these, 282,870 (76%) were YLLs in the male population. The highest number of TBI YLLs for both sexes was observed in Italy (67,809 in males and 24,481 in females). For males, the lowest number of YLLs was in Luxembourg (1,031), and for females in Cyprus (212). Deaths occurring in the age group 15–64 years old were the largest contributors to TBI YLLs—they caused 73% (274,409) of all TBI YLLs (this pattern was present in both sexes). The sums of the TBI YLLs in the countries are proportional to the population and are higher in more populated countries, where populations at risk are larger. In order to put the numbers of YLLs in context with the population and to compare them more validly, we present crude and age-standardized rates in Table 2.
Crude rates of TBI YLLs are presented for each age group, along with the overall crude rate and the age-standardized rate per 100,000 persons per year—by country and sex. The highest age-standardized rates overall and in males were found for Estonia (518.2 [95% CI: 506.1 to 530.6] and 917.0 [95% CI: 893.1 to 941.7], respectively), and in females for Lithuania (181.3 [95% CI: 174.8 to 188.0]). The geographical distribution of TBI YLLs is presented in Fig 3. In S8 Table, age differences in TBI YLLs are presented as rate ratios with 95% CIs. In both sexes, and overall, compared to the reference category (35–64 years), the highest rates are in the age group 15–34 years (rate ratios of 1.18 overall, 1.13 in males and 1.29 in females). While in males the rates in the age groups 65–84 and ≥85 years are similar to that of the reference category (rate ratios of 1.0 and 0.93), in females the rates in the older groups are substantially higher (rate ratio of 1.92 for the age group 65–84 years and 2.34 for the age group ≥85 years), confirming the shift of TBI to higher ages in the female population.
In the same manner, sex differences are presented as rate ratios by country in S9 Table. Overall, the male to female rate ratio was 3.24 (95% CI: 3.22 to 3.27), ranging from 2.29 (95% CI: 2.26 to 2.33) in the United Kingdom to 9.01 (95% CI: 7.83 to 10.41) in Cyprus. The pooled age-standardized TBI YLL rates for the 16 countries were 259.1 (95% CI: 205.8 to 312.3) per 100,000 persons per year overall, 427.5 (95% CI: 290.0 to 564.9) in males, and 105.4 (95% CI: 89.1 to 121.6) in females.
Mean YLLs per TBI death was calculated in order to describe the burden presented by each death. Table 3 presents the findings for this measure in detail by country, age, and sex. On average, 1 TBI-related death translated into 24.3 (95% CI: 22.0 to 26.6) YLLs overall, 25.6 (95% CI: 23.4 to 27.8) YLLs in males, and 20.9 (95% CI: 17.9 to 24.0) YLLs in females. In general, per-case YLLs decreased with increasing age: from 79.3 YLLs/case in the age group 0–4 years to 3.4 in the age group 85 years and older, with corresponding values of 76.3 and 3.3 in males and 82.3 and 3.8 in females, respectively.
Falls and traffic injuries were the most common causes of TBI YLLs across the 16 countries—as presented in Fig 4—followed by suicide, violence, and other causes. After excluding deaths caused by natural forces, intoxication, and other generalized or unknown causes (see Methods), a total of 991,420 injury YLLs were identified overall, of which 714,757 (72%) were in males (S10 Table). These translated into pooled age-standardized rates of 627.9 (95% CI: 522.9 to 733.0) YLLs per 100,000 persons per year overall, 956.6 (95% CI: 782.6 to 1,130.6) in males and 318.9 (95% CI: 271.1 to 366.6) in females (S11 Table).
TBI YLLs as a proportion of overall injury YLLs are presented in Fig 5 in order to indicate their relative importance (see S1 Fig for sex-specific data). After excluding deaths caused by natural forces, intoxication, and other generalized or unknown causes, TBIs contributed on average 41% (44% in males and 34% in females) of all injury YLLs—with the highest contributions of TBI YLLs in both sexes in the age group 0–4 years (56% in males and 69% in females) (see S12 Table). For comparison purposes, TBI-related YLLs as a proportion of all injury YLLs are also presented without excluding deaths caused by natural forces, intoxication, and other generalized or unknown causes (see S13–S15 Tables).
In order to provide an estimation of TBI-related YLLs for the EU, the pooled crude rates from our study were extrapolated to the population of the 28 EU member states. These findings are presented in Table 4: based on our pooled rates, 1,319,496 (95% CI: 1,043,675 to 1,595,317) YLLs were attributable to TBI in the EU-28 in 2013 overall, with 1,058,962 (95% CI: 698,748 to 1,419,177) in males and 271,203 (95% CI: 227,211 to 315,196) in the female population.
We conducted a large-scale, cross-sectional, population-based analysis of YLLs due to TBI in 16 European countries for the year 2013. We found that in the selected countries a total of 17,049 TBI-related deaths occurred in 2013. These translated into a total of 374,636 YLLs. The pooled age-standardized rates of YLLs per 100,000 were 259.1 (95% CI: 205.8 to 312.3) overall, 427.5 (95% CI: 290.0 to 564.9) in males, and 105.4 (95% CI: 89.1 to 121.6) in females. Males contributed more substantially to the overall numbers of YLLs than females (282,870 YLLs, 76% of all TBI YLLs), which translated into a rate ratio of 3.24 (95% CI: 3.22 to 3.27). Each TBI death was on average associated with 24.3 (95% CI: 22.0 to 26.6) YLLs overall, 25.6 (95% CI: 23.4 to 27.8) in males and 20.9 (17.9 to 24.0) in females. Falls and traffic injuries were the most common external causes of TBI. TBI contributed on average 41% (44% in males and 34% in females) to overall injury YLLs in the 16 countries. Extrapolating our findings, about 1.3 million YLLs were attributable to TBI in the EU-28 in 2013 overall, 1.1 million in males, and 270,000 in females. To our knowledge, this is the largest and most comprehensive analysis of TBI YLLs in Europe to date.
Interpretation and generalizability
For all our analyses, microdata on causes of death obtained from Eurostat were used. Eurostat collects data on causes of death from countries, which extract them from death certificates in accordance with EU Commission Regulation No 328/2011 on community statistics on public health and health and safety at work. This regulation defines the scope, provides definitions of variables and characteristics of the data, and aims to achieve the highest possible degree of harmonization and comparability of the information obtained from various countries. Thus, to the best of our knowledge, for our study we have used the most valid and comparable data that were available—as such, we believe that the information and comparisons presented in this paper are valid. However, the relatively large between-country differences in YLLs suggest that there still may be factors beyond true country variability affecting the size of this variation. In general, countries follow ICD-10 standards, making the data collection procedures on causes of death relatively homogenous; however, factors such as differences in interpretation and use of ICD-10 rules at the national level, nonapplication of WHO updates, and differences in reporting of deaths of residents abroad and deaths of nonresidents in the reporting country may hinder the general comparability of the data on causes of death, and thus the generalizability of our findings.
Besides these systematic factors, country characteristics, such as age distribution and general economic level, may influence the numbers of reported TBI deaths. A recent study that evaluated TBI-related mortality in 25 European countries found that countries with higher gross domestic product tended to have higher TBI death rates. Furthermore, this study reports that the substantial between-country differences in TBI death rates could be driven by varying degrees of attributing death to multiple injuries—countries that reported relatively low numbers of TBI-related deaths at the same time reported relatively high numbers of deaths due to multiple injuries. Thus, a substantial number of TBI deaths may in some countries be "hidden" under deaths attributed to multiple injuries. These factors may also have influenced the findings of this study, as both studies used data from the same source. In a similar manner, variations in the use of "garbage codes" for cause of death (e.g., general causes such as "unknown" or "other") may have an influence on the between-country variation observed in this study.
Despite all these issues, by using data routinely collected by an official European authority, based on a specific EU regulation, for the same year, and using the same coding system for case ascertainment, our study overcame many limitations of other types of investigations (such as the heterogeneity of time, case ascertainment, and geographical coverage of studies included in systematic reviews of TBI epidemiology [14–16])—which supports the validity and generalizability of our findings.
Comparison to other studies
To our knowledge, only 2 previous studies have specifically analyzed and reported TBI-related YLLs. A study from the Netherlands reported 118,207 TBI-related YLLs annually for the period of 2010–2012. As data for the Netherlands was not made available by Eurostat for our study, we are not able to directly compare these findings to ours. However, by dividing the number of YLLs reported in the Dutch study by the mean total population of the Netherlands for 2010–2012 according to Eurostat (16,653,712), we were able to obtain a crude YLL rate of 710 per 100,000 persons per year. This rate is higher than that of the highest ranking country in our comparison (Estonia, with a crude TBI-related YLL rate of 525.6). However, it is important to note that the number of deaths in the Dutch study was estimated using average case fatality rates, which further limits the comparability with our findings.
Another study analyzed TBI-related YLLs in New Zealand and reported a total of 14,386 TBI-related YLLs in 2010. Using this value and New Zealand’s 2010 population estimate of 4,353,000 yields a crude rate of TBI-related YLLs of 330.5 per 100,000 persons per year. Such a rate is within the range of crude rates reported for the analyzed countries in our study and is similar to rates we found for Slovakia (319.4), Serbia (319.7), and Croatia (293.3). However, we note that this study used different methods for YLL estimation (e.g., different life table) and a different definition for TBI death, which hinders the general comparability of its findings to ours.
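The crude-rate arithmetic used in these comparisons is simple enough to show explicitly. The following minimal Python sketch (ours, not taken from either study) just reproduces the two back-of-envelope calculations quoted above, using the figures given in the text:

# Crude YLL rate per 100,000 person-years: total YLLs divided by population, times 100,000.
def crude_rate_per_100k(ylls, population):
    return ylls / population * 100_000

# Netherlands study: 118,207 annual YLLs vs. mean 2010-2012 population of 16,653,712.
print(round(crude_rate_per_100k(118_207, 16_653_712)))     # -> 710
# New Zealand study: 14,386 YLLs in 2010 vs. 2010 population estimate of 4,353,000.
print(round(crude_rate_per_100k(14_386, 4_353_000), 1))    # -> 330.5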
An earlier study assessed the overall burden of injuries in 6 European countries. In both this study and ours, rates of overall injury-related YLLs are reported for Austria, Denmark, and Ireland, and thus can be compared. For Austria the previous study reported 1,710 YLLs per 100,000 persons per year; in our study, the crude rate was 1,078. For Denmark the rates were 1,550 versus 766, and for Ireland the rates were 1,530 versus 1,035. Thus, the rates found in our study are consistently lower, which may be caused by the relatively large time lag between the 2 studies (1999 versus 2013).
The pooled age-standardized all injury YLL rate in our study is lower than the global age-standardized YLL rate reported by the Global Burden of Disease Study 2013: 1,355.5 (95% CI: 1,083.8 to 1,627.1) in our study and 2,945 (95% uncertainty interval: 2,796 to 3,129) in the Global Burden of Disease Study 2013 injury study. This might be explained by the fact that the latter study reports a global estimate, and the observed difference may well reflect the patterns of injury deaths globally. On the other hand, the reported global rate falls within the range of age-standardized all injury YLL rates reported in the 16 analyzed countries in our study and is comparable to the rates found in Lithuania (3,554.2 [95% CI: 3,533.0 to 3,575.5]) and Estonia (2,339.1 [95% CI: 2,313.2 to 2,365.1]).
Implications for policy-making and research
In this study we performed, to our knowledge, the largest and most comprehensive analysis of TBI-related YLLs in Europe to date. Previous studies relied on analyzing and presenting death rates or case fatality rates to describe the magnitude of the burden of fatal TBI on national populations [14–16]. Although important, such analyses do not place TBI deaths in the broader social and economic context of the respective countries. Our findings emphasize that each death entails a large number of years of life lost in the economically active age groups, which underlines the significant burden TBI imposes on national economies and its serious impact on families. We quantified the average number of YLLs due to TBI deaths per 100,000 persons and the average number of YLLs per TBI death in 16 countries, and provided these findings stratified by sex, age, and external cause of injury. We believe this information could help policy-makers tailor preventive action so that it is targeted at high-risk populations. Communicating the implications of TBI deaths using YLLs as a measure (rather than numbers of deaths) may help the general public to better grasp the magnitude of the problem, and could help raise awareness of TBI as a major public health problem in general.
Although YLLs provide a more comprehensive measure of the burden of TBI deaths on a population, they do not capture the burden imposed by nonfatal TBI. Our findings can serve as a basis for analysis of the overall burden of TBI using metrics such as DALYs.
This study has some limitations that we would like to acknowledge. The calculated YLLs and all related variables inherit any biases and errors present in the raw data provided to us by Eurostat; we were not able to control for or mitigate these, and all interpretations of our findings should be made with this in mind. The extrapolations to the population of the EU-28 are based on 16 countries and may be biased; they should be considered estimates only. We analyzed data from 16 countries and extrapolated the results to the whole EU-28, rather than using data from all countries, because data for the remaining EU countries were not available in the necessary format or with sufficient detail (e.g., countries did not provide the ICD-10 codes for nature of injury along with the ICD-10 codes for external causes, or they provided data grouped into larger age groups). Although this limits the validity of our extrapolations, the approach seemed justified under the circumstances. To analyze the full population burden of TBI, nonfatal cases must also be taken into consideration using metrics such as YLDs or DALYs. We were not able to estimate these here, due to the nonavailability of data; future research should be oriented towards such analyses.
Our study showed that TBI-related deaths have a substantial impact at the individual and population level in Europe and present an important societal and economic burden that must not be overlooked. We provide information valuable for policy-makers, enabling them to evaluate and plan preventive activities and resource allocation, and to formulate and implement strategic decisions. In addition, our results can serve as a basis for analyzing the overall burden of TBI in the population.
1. Hyder AA, Wunderlich CA, Puvanachandra P, Gururaj G, Kobusingye OC. The impact of traumatic brain injuries: a global perspective. NeuroRehabilitation. 2007;22(5):341–53. 18162698
2. Maas AI, Stocchetti N, Bullock R. Moderate and severe traumatic brain injury in adults. Lancet Neurol. 2008;7(8):728–41. doi: 10.1016/S1474-4422(08)70164-9 18635021
3. Reilly P. The impact of neurotrauma on society: an international perspective. Prog Brain Res. 2007;161:3–9. doi: 10.1016/S0079-6123(06)61001-7 17618966
4. Rubiano AM, Carney N, Chesnut R, Puyana JC. Global neurotrauma research challenges and opportunities. Nature. 2015;527(7578):S193–7. doi: 10.1038/nature16035 26580327
5. Majdan M, Plancikova D, Brazinova A, Rusnak M, Nieboer D, Feigin V, et al. Epidemiology of traumatic brain injuries in Europe: a cross-sectional analysis. Lancet Public Health. 2016;1(2):e76–83. doi: 10.1016/S2468-2667(16)30017-2
6. Haagsma JA, Graetz N, Bolliger I, Naghavi M, Higashi H, Mullany EC, et al. The global burden of injury: incidence, mortality, disability-adjusted life years and time trends from the Global Burden of Disease Study 2013. Inj Prev. 2016;22(1):3–18. doi: 10.1136/injuryprev-2015-041616 26635210
7. Brooks JC, Shavelle RM, Strauss DJ, Hammond FM, Harrison-Felix CL. Long-term survival after traumatic brain injury Part II: life expectancy. Arch Phys Med Rehabil. 2015;96(6):1000–5. doi: 10.1016/j.apmr.2015.02.002 26043195
8. Corrigan JD, Hammond FM. Traumatic brain injury as a chronic health condition. Arch Phys Med Rehabil. 2013;94(6):1199–201. doi: 10.1016/j.apmr.2013.01.023 23402722
9. Roozenbeek B, Maas AI, Menon DK. Changing patterns in the epidemiology of traumatic brain injury. Nat Rev Neurol. 2013;9(4):231–6. doi: 10.1038/nrneurol.2013.22 23443846
10. Majdan M, Mauritz W. Unintentional fall-related mortality in the elderly: comparing patterns in two countries with different demographic structure. BMJ Open. 2015;5(8):e008672. doi: 10.1136/bmjopen-2015-008672 26270950
11. Mauritz W, Brazinova A, Majdan M, Leitgeb J. Epidemiology of traumatic brain injury in Austria. Wien Klin Wochenschr. 2014;126(1–2):42–52. doi: 10.1007/s00508-013-0456-6 24249325
12. Te Ao B, Tobias M, Ameratunga S, McPherson K, Theadom A, Dowell A, et al. Burden of traumatic brain injury in New Zealand: incidence, prevalence and disability-adjusted life years. Neuroepidemiology. 2015;44(4):255–61. doi: 10.1159/000431043 26088707
13. Theadom A, Barker-Collo S, Feigin VL, Starkey NJ, Jones K, Jones A, et al. The spectrum captured: a methodological approach to studying incidence and outcomes of traumatic brain injury on a population level. Neuroepidemiology. 2012;38(1):18–29. doi: 10.1159/000334746 22179412
14. Brazinova A, Rehorcikova V, Taylor MS, Buckova V, Majdan M, Psota M, et al. Epidemiology of traumatic brain injury in Europe: a living systematic review. J Neurotrauma. 2016 Aug 25. doi: 10.1089/neu.2015.4126 26537996
15. Peeters W, van den Brande R, Polinder S, Brazinova A, Steyerberg EW, Lingsma HF, et al. Epidemiology of traumatic brain injury in Europe. Acta Neurochir (Wien). 2015;157(10):1683–96. doi: 10.1007/s00701-015-2512-7 26269030
16. Tagliaferri F, Compagnone C, Korsic M, Servadei F, Kraus J. A systematic review of brain injury epidemiology in Europe. Acta Neurochir (Wien). 2006;148(3):255–68. doi: 10.1007/s00701-005-0651-y 16311842
17. GBD 2015 DALYs and HALE Collaborators. Global, regional, and national disability-adjusted life-years (DALYs) for 315 diseases and injuries and healthy life expectancy (HALE), 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015. Lancet. 2016;388(10053):1603–58. doi: 10.1016/S0140-6736(16)31460-X 27733283
18. Scholten AC, Haagsma JA, Panneman MJ, van Beeck EF, Polinder S. Traumatic brain injury in the Netherlands: incidence, costs and disability-adjusted life years. PLoS ONE. 2014;9(10):e110905. doi: 10.1371/journal.pone.0110905 25343447
19. Eurostat. Causes of death. Luxembourg: Eurostat. Available from: http://ec.europa.eu/eurostat/web/health/causes-death. Accessed 2016 Dec 16.
20. Eurostat. Life table. Luxembourg: Eurostat. Available from: http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=demo_mlifetable&lang=en. Accessed 2016 Dec 15.
21. Eurostat. Revision of the European standard population—report of Eurostat’s task force. Luxembourg: Publications Office of the European Union; 2013. Available from: http://ec.europa.eu/eurostat/documents/3859598/5926869/KS-RA-13-028-EN.PDF/e713fa79-1add-44e8-b23d-5e8fa09b3f8f. Accessed 2017 May 31.
22. Eurostat. Population. Luxembourg: Eurostat. Available from: http://ec.europa.eu/eurostat/web/population-demography-migration-projections/population-data. Accessed 2016 Dec 15.
23. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. New York: John Wiley & Sons; 2009. 452 p.
24. Feigin VL, Theadom A, Barker-Collo S, Starkey NJ, McPherson K, Kahan M, et al. Incidence of traumatic brain injury in New Zealand: a population-based study. Lancet Neurol. 2013;12(1):53–64. doi: 10.1016/S1474-4422(12)70262-4 23177532
25. Commission Regulation (EU) No 328/2011 of 5 April 2011. Implementing Regulation (EC) No 1338/2008 of the European Parliament and of the Council on Community statistics on public health and health and safety at work, as regards statistics on causes of death.
26. Eurostat. Causes of death (hlth_cdeath)—reference metadata in Euro SDMX metadata structure (ESMS). Luxembourg: Eurostat. Available from: http://ec.europa.eu/eurostat/cache/metadata/en/hlth_cdeath_esms.htm. Accessed 2016 Dec 15.
27. Statistics New Zealand. Historical population estimates tables. Auckland: Statistics New Zealand. Available from: http://www.stats.govt.nz/browse_for_stats/population/estimates_and_projections/historical-population-tables.aspx. Accessed 2017 Jan 10.
28. Polinder S, Meerding WJ, Mulder S, Petridou E, van Beeck E, Group ER. Assessing the burden of injury in six European countries. Bull World Health Organ. 2007;85(1):27–34. doi: 10.2471/BLT.06.030973 17242755
Before the PC: IBM invents virtualisation
A brief history of virtualisation
Virtualisation is not a novelty. It's actually one of the last pieces of the design of 1960s computers to trickle down to the PC – and only by understanding where it came from and how it was and is used can you begin to see the shape of its future in its PC incarnation.
As described in our first article in this series, current PC virtualisation means either hardware-assisted (Hyper-V, Xen etc) or all-software (VMware) full-system virtualisation.
Full-system virtualisation means a full-fat server OS running multiple virtual machines, each a complete emulated PC with emulated chipset and emulated disk drives, running complete full-fat server or client OSs. What the mainstream – that is, Windows-using – world seems to have forgotten, if it ever knew at all, is that there are other ways to crack the virtualisation nut, with their own unique benefits.
Virtualisation got really big, really quickly on the PC in three stages. Firstly, VMware showed that it could be done, in defiance of the Popek and Goldberg virtualisation requirements.
Secondly, this caught on to the extent that Intel and AMD added hardware virtualisation to their processors. Thirdly, the rise of multi-core 64-bit machines, with many CPU cores and threads and umpteen gigs of RAM: resources that existing 32-bit OSs and apps can't use effectively, but which virtualisation devours with relish.
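(As an aside, on a modern Linux box you can check whether the CPU exposes these hardware virtualisation extensions by looking for the "vmx" flag for Intel VT-x or "svm" for AMD-V in /proc/cpuinfo. The snippet below is our own minimal, Linux-only sketch of that check, not something from the original article.)

# Minimal check for hardware virtualisation support on Linux: look for the
# "vmx" (Intel VT-x) or "svm" (AMD-V) CPU flag in /proc/cpuinfo.
def has_hw_virtualisation(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("hardware virtualisation supported:", has_hw_virtualisation())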
PC virtualisation is not ready for the big time just yet
Currently, however, the PC's full-system virtualisation is just about the simplest, most primitive and inefficient kind. When you look at the fancy tools that VMware and Microsoft are creating to provision and manage VMs – and the large-scale rollouts that are starting to occur – it's easy to forget that this is not a mature technology. In fact, PC virtualisation is still in its youth, and the fact that it is starting to show a few hairs on its chin doesn't mean that it is ready for the big time just yet.
Before you can understand how far it has yet to go, though, you need to know a bit of the background. And there's more of it than you might expect.
Before the PC: IBM invents virtualisation
Of course, there is nothing new under the Sun. (Or should that be under the Oracle, these days?) The arrival of ubiquitous virtualisation on the PC could be seen as delivering one of the last pieces of the feature set introduced by IBM's System/360 computers of the 1960s.
IBM System/360: Hot new tech from the 1960s
Launched in 1964, the S/360 was intended from the start to be a whole range of compatible computers, stretching from relatively small, inexpensive machines to large, high-capacity ones. The S/360 took a radical new approach: all would run the same software, so that programs could be moved from one machine to another without modification – a bold innovation at the time.
Some of the exotic new features of the S/360 might sound familiar: memory addressed in units of fixed-length bytes; a byte always being eight bits; words being 32 bits long. What’s more, the S/360 was the first successful platform to achieve compatibility across different processors using microcode, which again is now a standard feature of most computers.
One of the things that the S/360 didn’t do at first, though, was the then-new feature of time-sharing. IBM systems had traditionally taken a batch-oriented approach: operators submitted "jobs" which the machine scheduled itself to run, without user interaction, whenever enough free resources were available.
In the mid-1960s, though, interactive computing was becoming popular: people were sitting at terminals, typing commands and getting the response immediately, as opposed to a pile of printouts the next day. But back then, a single computer was too expensive to be dedicated to just one person, so DARPA sponsored "Project MAC," one focus of which was building operating systems that would allow multiple people to use a single machine at once, via dumb terminals.
IBM wanted in on what might be a lucrative new market, so it set up the Cambridge Scientific Centre (CSC) to create a time-sharing version of the S/360. IBM designed a special dual-processor host for the job, the S/360-67, and CSC built a time-sharing OS for it, imaginatively named TSS. The snag is, it never worked satisfactorily.
One of the chief problems was that the S/360 didn't include some of the key features necessary for time-sharing, such as support for virtual memory and what was much later called a memory-management unit (MMU). For the PC, this has been no big deal since the Intel '386 appeared in 1985 – a good two decades later.
Mind you, it took until 1993 for Windows NT 3.1 to appear, the first edition of Microsoft's OS properly equipped to exploit these features. Users of SCO Xenix, among other Unices, had been happily multitasking with 386s for about five years by then. Soon after, so had intrepid users of Windows/386 2.1 and later Windows 3 in Enhanced Mode – if they were lucky and it didn't bluescreen on them, anyway.
Project MAC went its own way, choosing General Electric hardware over IBM's and building its own time-sharing OS, Multics. You might well never have heard of Multics – the last machine running it was shut down in 2000 – but you will have heard of the OS it inspired: Unix.
Unix was conceived as a sort of anti-Multics – "Uni" versus "multi", geddit? Unix was meant to be small and simple, as opposed to the large, complicated Multics. Consider the labyrinthine complexity of modern Unix and ponder what Multics must have been like.
Another famous offspring of Project MAC was the MIT AI Lab, from which sprang Richard Stallman, Emacs, the GNU Project and the Free Software movement. It all worked out in the end, but you might like to reflect for a moment on the rarity of 36-bit hardware or Multics systems today. Project MAC's legacy was not products or technology, but rather a pervasive influence over the future of computing.
When Project MAC went off in its own, non-IBM direction, it left IBM's CSC division with nothing to do. In the hope of survival, CSC decided to press on with a different approach.
It took some lessons from an earlier IBM virtualisation project, the M44/44X, based on the pre-S/360 IBM 7000 series mainframe. The M44/44X was an attempt to implement partial virtualisation.
This was conceptually comparable to the modern open-source Xen hypervisor. On x86 CPUs without hardware virtualisation support, Xen can't trap (ie, catch and safely run) every instruction in the set, so it requires guest OSs to be modified so that they don't use the instructions Xen can't handle.
Today, this is called paravirtualisation: guests can only use a subset of the features of the host. Back in the early 1960s, IBM's M44 did much the same: it implemented what its developers called a "virtual machine," the 44X, which was just that critical bit simpler than the host.
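To make the contrast concrete, here is a deliberately toy Python sketch (it has nothing to do with real Xen or IBM internals, and every name in it is made up): under full virtualisation the hypervisor traps and emulates the guest's privileged operations behind its back, whereas a paravirtualised guest has been modified to request those services explicitly via hypercalls.

# Toy illustration only: contrasts trap-and-emulate full virtualisation with
# paravirtualisation, where the guest is modified to use explicit hypercalls.
class Hypervisor:
    def emulate(self, op):
        # Full virtualisation: a trapped privileged operation is emulated.
        return f"hypervisor emulated '{op}'"

    def hypercall(self, op):
        # Paravirtualisation: the modified guest asks the hypervisor directly.
        return f"hypervisor serviced hypercall '{op}'"

class UnmodifiedGuest:
    """Thinks it owns the hardware; every privileged op must be trapped."""
    def run(self, hv):
        return [hv.emulate(op) for op in ("set page table", "disable interrupts")]

class ParavirtGuest:
    """Modified OS: never issues the instructions the hypervisor cannot trap."""
    def run(self, hv):
        return [hv.hypercall(op) for op in ("set page table", "disable interrupts")]

if __name__ == "__main__":
    hv = Hypervisor()
    print(UnmodifiedGuest().run(hv))
    print(ParavirtGuest().run(hv))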
Performance task assessment as an educational tool is receiving fresh interest in the context of unhappiness with multiple-choice assessments [1]. For David Foster, Founder and Executive Director of the Silicon Valley Mathematics Initiative (SVMI), performance task assessment is much more than another passing educational fad. Since forming the Mathematics Assessment Collaborative (MAC) in the 1990s, he has been a champion of using mathematics performance task assessment to improve both student learning and teacher instructional practices.
The work of SVMI/MAC was used as an extended example in the 2013 publication, “Teacher Learning Through Assessment: How Student-Performance Assessments Can Support Teacher Learning,” by Linda Darling-Hammond and Beverly Falk [2]. The article describes how teachers in MAC member districts use research-based design principles to write performance task assessments aligned to the Common Core State Standards (pages 11–15). Students across all districts take common assessments, and teachers engage in the rubric-scoring process as professional development. Assessment reports include a reproduction of the tasks, scoring rubrics, and examples of real student work, all of which can inform classroom instruction.
To learn more about SVMI’s performance task assessment, Educational Data Systems staff took an opportunity to discuss the topic with David Foster and two other SVMI leaders, Cecilio Dimas, Partner and Director of Innovation & Strategy, and Tracy Sola, Assistant Director. Unless indented to indicate a direct quote, the text for this blog post is an edited but closely paraphrased version of our discussion.
EDS: Would you tell us about the history of why and how SVMI/MAC has supported performance task assessment for math education?
David Foster (DF): The story begins with the English-language arts (ELA) response to a 1975 Newsweek magazine article, “Why Johnny Can’t Write.” Our English teacher colleagues argued that the assessments at the time stressed grammar, vocabulary, and spelling. Writing was not being assessed, so it wasn’t being taught.
That was the birth of the Bay Area Writing Project to assess writing using a prompt, with humans using a rubric to do the scoring. It dramatically changed the way we teach writing in this country and also dramatically changed the way we did professional development. Professional development started to be about long-term ideas for developing good techniques for teaching writing and using writing assessments to review and inform teaching.
Teaching mathematics is about teaching problem solving. The parallel to ELA is to give kids good problems to solve, be able to look at their work, be able to score collectively, and be able to use it formatively to improve instruction.
In the mid-1990s, California was involved in “math wars,” and the governor was impressed with what was happening in Texas (see below for context information). The political battle ended in 1997 with CA adopting new standards in English and mathematics, and the state needed new tests to assess the new standards. Developing a new statewide assessment [from scratch] takes five years, so the state chose an existing off-the shelf multiple-choice assessment.
I was concerned that a multiple-choice test would not provide robust information about students. We had started the Mathematics Assessment Collaborative (MAC) in 1996 and had partnered with the University of Nottingham Mathematics Assessment Resource Services (MARS) to promote research and design a performance assessment instrument. The idea was to use performance task assessment to be able to look at students’ thinking, see how they approached the problem, and see how they were communicating their understanding. Essentially the math performance tasks would parallel a good writing task.
Cecilio Dimas (CD): The original call to action for the 23 MAC member school districts was to discuss other assessment options above and beyond the multiple-choice test. Santa Clara and San Mateo counties [CA] were original MAC members, and although membership has grown in geographic scope and fluctuated through time, the goal remains the same.
DF and CD: SVMI/MAC members administered the first MARS-developed performance-based tests for grades 3, 5, 7, and 9 in the spring of 1998 and expanded to grades 3 through 10 the following year. In 2004, grade 2 performance tasks were added. Teachers in the SVMI/MAC member districts took over writing the tasks in 2012 and continued to use MARS task design principles, aligning the new tasks to Common Core State Standards. SVMI/MAC continues to expand; for example, we have added “Integrated” mathematics options for high school students.
Tracy Sola (TS): Teachers ask for K–1 performance tasks, but we know that it is not developmentally appropriate for those young students to sit down and take a long written test. We have developed K–1 tasks, but they are used differently in classrooms—for example, as whole-group lessons or in individual interviews.
EDS: Will you expand on why performance assessment is vitally important to teachers, students, and parents?
DF: We’d all be better at teaching and learning if we focused a whole lot more on student thinking and student work. Far too often we focus on just what we’re supposed to cover or teach.
Performance task assessment gives us detailed information about what students know and how to build on that to meet the learning goals. [This] understanding helps us be far more effective in addressing the learning needs of those students.
CD: Performance tasks create a space for a lot of student thinking to surface. Even if students are struggling, that’s also usable data.
DF: Typical reports from multiple-choice tests give you a score that tells you only that your kids aren’t very good at fractions (for example). Of course, we want correct answers, but that is a byproduct of what we want students to know and be able to do. What we really want to know is the process and the thinking that goes along with it. What is helpful is to understand what they do know about fractions and where there are misconceptions. Performance assessment provides that information.
For more information about the Silicon Valley Mathematics Initiative, please visit https://svmimac.org/.
1. There are better ways to assess students than with high-stakes standardized tests
What Happens When States Un-Standardize Tests?
Assessment Flexibility for States under ESSA: Highlights from New Hampshire’s Innovative Assessment Application
2. Teacher Learning Through Assessment
David Foster is the executive director of the Silicon Valley Mathematics Initiative (SVMI) comprised of over 160 member districts in the greater San Francisco Bay Area. Besides the intensive work in California, SVMI consults across the country including New York, Illinois, Massachusetts, Ohio, Tennessee and Georgia. SVMI is affiliated with programs at University of California, Berkeley, Stanford University and San Jose State University. David established SVMI in 1996 working as Mathematics Director for the Robert N. Noyce Foundation. SVMI developed most of the content (videos, POMs, MAC Toolkits, Coaching Materials, etc.) that is available on www.insidemathematics.org. Foster is the primary author of Interactive Mathematics: Activities and Investigations, published by Glencoe/McGraw-Hill, 1994. David was a Regional Director for the Middle Grade Mathematics Renaissance, of the California State Systemic Initiative. David taught mathematics and computer science at middle school, high school, and community college for eighteen years.
Before joining SVMI as a Partner and Director of Innovation & Strategy in 2016, Cecilio was the Director of the STEAM (Science, Technology, Engineering, Arts, & Mathematics) Initiative at the Santa Clara County Office of Education (SCCOE). Prior to leading the STEAM Initiative, he served as a mathematics coordinator at SCCOE where he supported districts in their efforts to implement the Common Core State Standards-Mathematics. He has taught at both elementary and secondary levels, including 2nd grade, 3rd grade, 5th grade, 7th grade math, and Algebra I. SVMI has played a vital role in his development as a learner, teacher, coach, and facilitator. He enjoys working with students and teachers and believes that fostering the development of critical-thinking, collaboration and communication skills are essential for all students to have to thrive in our local, national, and global communities.
Tracy Sola has been the Assistant Director of the Silicon Valley Mathematics Initiative since 2015. Tracy has worked with SVMI to deliver professional development to teachers, coaches, and administrators since 2008, facilitating professional development experiences across the San Francisco Bay Area, Southern California, New York, Georgia, and Illinois. Tracy also directed the SVMI Lesson Study Project for many years. On behalf of SVMI, Tracy collaborates with Arizona State University, The University of California, Berkeley, and The Shell Center, University of Nottingham, UK, in the development of electronic versions of mathematics curriculum. Tracy coached K-8 mathematics and taught grades K-8 for 16 years.
Solar power is a relatively new development for humans but, of course, many living things have been exploiting the power of the sun for millions of years, through the process of photosynthesis. This ability is usually limited to plants, algae and bacteria, but one unique animal can do it too - the emerald green sea slug Elysia chlorotica. This remarkable creature steals the genes and photosynthetic factories of a type of algae that it eats (Vaucheria littorea), so that it can independently draw energy from the sun. Through genetic thievery, it has become a solar-powered animal and a beautifully green one at that.
The cells of algae, like those of plants, contain small compartments called chloroplasts that are its engines of photosynthesis. As the Elysia munches on algae, it takes their chloroplasts into the cells of its own digestive system, where they provide it with energy and sugars. It's a nifty trick that provides the sea slug with an extra energy source, but the problem is that it shouldn't work.
Chloroplasts are not independent modules that can be easily separated from their host cell and implanted into another. They are the remnants of once-independent bacteria that formed such a strong alliance with the cells of ancient plants and algae, that they eventually lost their autonomy and became an integral part of their partner. In doing so, they transferred the majority of their own genes to their host so that today, chloroplasts only have a tiny and depleted genome of their own, containing just 10% of the genes it needs for a free-living existence.
So, shoving a chloroplast from an algal cell into an animal one should be about as effective as installing a piece of specialised Mac software on a PC. The two simply shouldn't be compatible, and yet Elysia and its chloroplasts clearly are. Mary Rumpho from the University of Maine discovered the key to the partnership - the sea slug has also stolen vital genes from the algae that allow it to use the borrowed chloroplasts. It has found a way to patch its own genome to make it photosynthesis-compatible.
The pilfered gene is called psbO and it codes for a protein called MSP (manganese-stabilizing protein, in full). MSP is so important to the chemical reactions of photosynthesis that it is found in all species that have this ability, with very few differences between the various versions.
The psbO gene has never before been spotted in an animal genome, but Rumpho found it amidst the sea slug's genes. To make sure that she wasn't just detecting undigested algae that had been left over in the slug's body, Rumpho also searched for psbO in the DNA of sea slug eggs that had never been previously exposed to algae. She found it there too - clearly, the gene had been fully assimilated into the slug's own genome.
There is no doubt as to its origin - it must have come from the algae, for both the slug's psbO gene and the MSP protein it codes for were exact matches for the versions carried by V. littorea. Rumpho also found that other genes involved in photosynthesis have been transferred from the algae to the slug. This is a vital point, for without these genes, the stolen chloroplasts wouldn't work. The slug itself is providing the proteins that make photosynthesis possible.
Elysia's antics are that much more intriguing because gene-swapping of this sort is very rare outside the bacterial world. Bacteria trade genes like humans swap gossip, and on rare occasions, they will transfer genes (or even their entire genomes) into the more complex cells of animals and plants. And there are even fewer examples of genetic trade between these 'higher' kingdoms, which makes the tale of the slug and the algae all the more extraordinary.
We can only speculate about the route that led to Elysia wielding algal genes, but Rumpho has an idea. She suggests that once upon a time, swallowed algae ruptured in the gut of a sea slug, setting free its DNA, which was then taken up by the cells of the animal's digestive system. Somehow, this absorbed DNA was incorporated into Elysia's own genome.
In an email exchange with Carl Zimmer, Rumpho notes that Elysia's digestive system branches very closely to its sexual organs, giving the absorbed DNA a route into the animal's sex cells, and from there, into the next generation. She also notes that the sea slugs are almost always infected with a virus, which could act as a vehicle that shuttles DNA from destroyed algae into slug cells.
The story of Elysia and its genetic kleptomania is yet another example of animals undergoing the sort of horizontal gene transfer that is so commonplace in bacteria (see my previous posts on Space Invaders and rotifers). With similar reports growing in number, we would be foolish to underestimate the importance of such transfers in animal evolution.
Reference: M. E. Rumpho, J. M. Worful, J. Lee, K. Kannan, M. S. Tyler, D. Bhattacharya, A. Moustafa, J. R. Manhart (2008). From the Cover: Horizontal gene transfer of the algal nuclear gene psbO to the photosynthetic sea slug Elysia chlorotica Proceedings of the National Academy of Sciences, 105 (46), 17867-17871 DOI: 10.1073/pnas.0804968105
Architecture: Intel x86, Motorola 68000, SPARC, PA-RISC
Based on: UNIX
The last version | Released: 4.2 Pre-release 2 | September 1997
NeXTSTEP – an object-oriented, multitasking operating system created by NeXT Computer, Inc., a company founded in 1985 by Apple Computer co-founder Steve Jobs.
The system was built on the Mach microkernel and BSD Unix code and was designed from the start for a graphical environment. It had a polished, intuitive user interface with an object-oriented architecture, quite different from both the then-dominant Microsoft Windows 3.1 and Mac OS. The display engine was based on PostScript, which on the one hand made it very demanding in terms of hardware (notably memory) and on the other hand made it an ideal platform for engineering and design workstations.
NeXTSTEP 1.0 was released on 18 September 1989, after a series of preview releases in the late 1980s; the last version, Release 3.3, appeared in early 1995. It originally ran only on the Motorola 68000 CPU family (notably the original black NeXT hardware) and was later ported to generic IBM-compatible x86/Intel, Sun SPARC, and HP PA-RISC machines. Around the time of the 3.2 release, NeXT teamed up with Sun Microsystems to develop OpenStep, a cross-platform implementation of the OpenStep standard (for Sun Solaris, Microsoft Windows, and NeXT's own Mach-based system) based on NeXTSTEP 3.2.
In February 1997, after the purchase of NeXT by Apple, it became the source of the popular operating systems macOS, iOS, watchOS, and tvOS.
The NeXTSTEP screenshot’s author: Gürkan Sengün; source: Wikipedia; License: GNU GPL.
No download is available.
Wnt signaling pathway
The Wnt signaling pathways are a group of signal transduction pathways which begin with proteins that pass signals into a cell through cell surface receptors. The name Wnt is a portmanteau created from the name Wingless and the name Int-1. Wnt signaling pathways use either nearby cell-cell communication (paracrine) or same-cell communication (autocrine). They are highly evolutionarily conserved in animals, which means they are similar across animal species from fruit flies to humans.
Three Wnt signaling pathways have been characterized: the canonical Wnt pathway, the noncanonical planar cell polarity pathway, and the noncanonical Wnt/calcium pathway. All three pathways are activated by the binding of a Wnt-protein ligand to a Frizzled family receptor, which passes the biological signal to the Dishevelled protein inside the cell. The canonical Wnt pathway leads to regulation of gene transcription, and is thought to be negatively regulated in part by the SPATS1 gene. The noncanonical planar cell polarity pathway regulates the cytoskeleton that is responsible for the shape of the cell. The noncanonical Wnt/calcium pathway regulates calcium inside the cell.
Wnt signaling was first identified for its role in carcinogenesis, then for its function in embryonic development. The embryonic processes it controls include body axis patterning, cell fate specification, cell proliferation and cell migration. These processes are necessary for proper formation of important tissues including bone, heart and muscle. Its role in embryonic development was discovered when genetic mutations in Wnt pathway proteins produced abnormal fruit fly embryos. Wnt signaling also controls tissue regeneration in adult bone marrow, skin and intestine. Later research found that the genes responsible for these abnormalities also influenced breast cancer development in mice.
This pathway's clinical importance was demonstrated by mutations that lead to various diseases, including breast and prostate cancer, glioblastoma, type II diabetes and others. Encouragingly, in recent years researchers have reported the first successful use of Wnt pathway inhibitors in mouse models of disease.
History and etymology
The discovery of Wnt signaling was influenced by research on oncogenic (cancer-causing) retroviruses. In 1982, Roel Nusse and Harold Varmus infected mice with mouse mammary tumor virus in order to mutate mouse genes to see which mutated genes could cause breast tumors. They identified a new mouse proto-oncogene that they named int1 (integration 1).
Int1 is highly conserved across multiple species, including humans and Drosophila. Its presence in D. melanogaster led researchers to discover in 1987 that the int1 gene in Drosophila was actually the already known and characterized Drosophila gene known as Wingless (Wg). Since previous research by Christiane Nüsslein-Volhard and Eric Wieschaus (which won them the Nobel Prize in Physiology or Medicine in 1995) had already established the function of Wg as a segment polarity gene involved in the formation of the body axis during embryonic development, researchers determined that the mammalian int1 discovered in mice is also involved in embryonic development.
Continued research led to the discovery of further int1-related genes; however, because those genes were not identified in the same manner as int1, the int gene nomenclature was inadequate. Thus, the int/Wingless family became the Wnt family and int1 became Wnt1. The name Wnt is a portmanteau of int and Wg and stands for "Wingless-related integration site".
Wnt comprises a diverse family of secreted lipid-modified signaling glycoproteins that are 350–400 amino acids in length. The lipid modification of all Wnts is palmitoleoylation of a single totally conserved serine residue. Palmitoleoylation is necessary because it is required for Wnt to bind to its carrier protein Wntless (WLS), so it can be transported to the plasma membrane for secretion, and it allows the Wnt protein to bind its receptor Frizzled. Wnt proteins also undergo glycosylation, which attaches a carbohydrate to ensure proper secretion. In Wnt signaling, these proteins act as ligands to activate the different Wnt pathways via paracrine and autocrine routes.
Wnt proteins identified in different species include the following (organism: Wnt proteins):
Homo sapiens: WNT1, WNT2, WNT2B, WNT3, WNT3A, WNT4, WNT5A, WNT5B, WNT6, WNT7A, WNT7B, WNT8A, WNT8B, WNT9A, WNT9B, WNT10A, WNT10B, WNT11, WNT16
Mus musculus (identical proteins as in H. sapiens): Wnt1, Wnt2, Wnt2B, Wnt3, Wnt3A, Wnt4, Wnt5A, Wnt5B, Wnt6, Wnt7A, Wnt7B, Wnt8A, Wnt8B, Wnt9A, Wnt9B, Wnt10A, Wnt10B, Wnt11, Wnt16
Xenopus: Wnt1, Wnt2, Wnt2B, Wnt3, Wnt3A, Wnt4, Wnt5A, Wnt5B, Wnt7A, Wnt7B, Wnt8A, Wnt8B, Wnt10A, Wnt10B, Wnt11, Wnt11R
Danio rerio: Wnt1, Wnt2, Wnt2B, Wnt3, Wnt3A, Wnt4, Wnt5A, Wnt5B, Wnt6, Wnt7A, Wnt7B, Wnt8A, Wnt8B, Wnt10A, Wnt10B, Wnt11, Wnt16
Drosophila: Wg, DWnt2, DWnt3/5, DWnt4, DWnt6, WntD/DWnt8, DWnt10
Hydra: hywnt1, hywnt5a, hywnt8, hywnt7, hywnt9/10a, hywnt9/10b, hywnt9/10c, hywnt11, hywnt16
C. elegans: mom-2, lin-44, egl-20, cwn-1, cwn-2
Wnt signaling begins when a Wnt protein binds to the N-terminal extra-cellular cysteine-rich domain of a Frizzled (Fz) family receptor. These receptors span the plasma membrane seven times and constitute a distinct family of G-protein coupled receptors (GPCRs). However, to facilitate Wnt signaling, co-receptors may be required alongside the interaction between the Wnt protein and Fz receptor. Examples include lipoprotein receptor-related protein (LRP)-5/6, receptor tyrosine kinase (RTK), and ROR2. Upon activation of the receptor, a signal is sent to the phosphoprotein Dishevelled (Dsh), which is located in the cytoplasm. This signal is transmitted via a direct interaction between Fz and Dsh. Dsh proteins are present in all organisms and they all share the following highly conserved protein domains: an amino-terminal DIX domain, a central PDZ domain, and a carboxy-terminal DEP domain. These different domains are important because after Dsh, the Wnt signal can branch off into multiple pathways and each pathway interacts with a different combination of the three domains.
Canonical and noncanonical pathways
The three best characterized Wnt signaling pathways are the canonical Wnt pathway, the noncanonical planar cell polarity pathway, and the noncanonical Wnt/calcium pathway. As their names suggest, these pathways belong to one of two categories: canonical or noncanonical. The difference between the categories is that a canonical pathway involves the protein β-catenin while a noncanonical pathway operates independently of it.
The canonical Wnt pathway (or Wnt/β-catenin pathway) is the Wnt pathway that causes an accumulation of β-catenin in the cytoplasm and its eventual translocation into the nucleus to act as a transcriptional coactivator of transcription factors that belong to the TCF/LEF family. Without Wnt, β-catenin would not accumulate in the cytoplasm since a destruction complex would normally degrade it. This destruction complex includes the following proteins: Axin, adenomatosis polyposis coli (APC), protein phosphatase 2A (PP2A), glycogen synthase kinase 3 (GSK3) and casein kinase 1α (CK1α). It degrades β-catenin by targeting it for ubiquitination, which subsequently sends it to the proteasome to be digested. However, as soon as Wnt binds Fz and LRP5/6, the destruction complex function becomes disrupted. This is due to Wnt causing the translocation of the negative Wnt regulator, Axin, and the destruction complex to the plasma membrane. Phosphorylation by other proteins in the destruction complex subsequently binds Axin to the cytoplasmic tail of LRP5/6. Axin becomes de-phosphorylated and its stability and levels decrease. Dsh then becomes activated via phosphorylation and its DIX and PDZ domains inhibit the GSK3 activity of the destruction complex. This allows β-catenin to accumulate and localize to the nucleus and subsequently induce a cellular response via gene transduction alongside the TCF/LEF (T-cell factor/lymphoid enhancing factor) transcription factors. β-catenin recruits other transcriptional coactivators, such as BCL9, Pygopus and Parafibromin/Hyrax. The complexity of the transcriptional complex assembled by β-catenin is beginning to emerge thanks to new high-throughput proteomics studies. The extensivity of the β-catenin interacting proteins complicates our understanding: β-catenin may be directly phosphorylated at Ser552 by Akt, which causes its disassociation from cell-cell contacts and accumulation in cytosol, thereafter 14-3-3ζ interacts with β-catenin (pSer552) and enhances its nuclear translocation. BCL9 and Pygopus have been reported, in fact, to possess several β-catenin-independent functions (therefore, likely, Wnt signaling-independent).
The noncanonical planar cell polarity (PCP) pathway does not involve β-catenin. It does not use LRP-5/6 as its co-receptor and is thought to use NRH1, Ryk, PTK7 or ROR2. The PCP pathway is activated via the binding of Wnt to Fz and its co-receptor. The receptor then recruits Dsh, which uses its PDZ and DIX domains to form a complex with Dishevelled-associated activator of morphogenesis 1 (DAAM1). Daam1 then activates the small G-protein Rho through a guanine exchange factor. Rho activates Rho-associated kinase (ROCK), which is one of the major regulators of the cytoskeleton. Dsh also forms a complex with rac1 and mediates profilin binding to actin. Rac1 activates JNK and can also lead to actin polymerization. Profilin binding to actin can result in restructuring of the cytoskeleton and gastrulation.
The noncanonical Wnt/calcium pathway also does not involve β-catenin. Its role is to help regulate calcium release from the endoplasmic reticulum (ER) in order to control intracellular calcium levels. Like other Wnt pathways, upon ligand binding, the activated Fz receptor directly interacts with Dsh and activates specific Dsh-protein domains. The domains involved in Wnt/calcium signaling are the PDZ and DEP domains. However, unlike other Wnt pathways, the Fz receptor directly interfaces with a trimeric G-protein. This co-stimulation of Dsh and the G-protein can lead to the activation of either PLC or cGMP-specific PDE. If PLC is activated, the plasma membrane component PIP2 is cleaved into DAG and IP3. When IP3 binds its receptor on the ER, calcium is released. Increased concentrations of calcium and DAG can activate Cdc42 through PKC. Cdc42 is an important regulator of ventral patterning. Increased calcium also activates calcineurin and CaMKII. CaMKII induces activation of the transcription factor NFAT, which regulates cell adhesion, migration and tissue separation. Calcineurin activates TAK1 and NLK kinase, which can interfere with TCF/ß-Catenin signaling in the canonical Wnt pathway. However, if PDE is activated, calcium release from the ER is inhibited. PDE mediates this through the inhibition of PKG, which subsequently causes the inhibition of calcium release.
Integrated Wnt pathway
The binary distinction of canonical and non-canonical Wnt signaling pathways has come under scrutiny and an integrated, convergent Wnt pathway has been proposed. Some evidence for this was found for one Wnt ligand (Wnt5A). Evidence for a convergent Wnt signaling pathway that shows integrated activation of Wnt/Ca2+ and Wnt/ß-catenin signaling, for multiple Wnt ligands, was described in mammalian cell lines.
Wnt signaling also regulates a number of other signaling pathways that have not been as extensively elucidated. One such pathway includes the interaction between Wnt and GSK3. During cell growth, Wnt can inhibit GSK3 in order to activate mTOR in the absence of β-catenin. However, Wnt can also serve as a negative regulator of mTOR via activation of the tumor suppressor TSC2, which is upregulated via Dsh and GSK3 interaction. During myogenesis, Wnt uses PA and CREB to activate MyoD and Myf5 genes. Wnt also acts in conjunction with Ryk and Src to allow for regulation of neuron repulsion during axonal guidance. Wnt regulates gastrulation when CK1 serves as an inhibitor of Rap1-ATPase in order to modulate the cytoskeleton during gastrulation. Further regulation of gastrulation is achieved when Wnt uses ROR2 along with the CDC42 and JNK pathway to regulate the expression of PAPC. Dsh can also interact with aPKC, Pa3, Par6 and LGl in order to control cell polarity and microtubule cytoskeleton development. While these pathways overlap with components associated with PCP and Wnt/Calcium signaling, they are considered distinct pathways because they produce different responses.
In order to ensure proper functioning, Wnt signaling is constantly regulated at several points along its signaling pathways. For example, Wnt proteins are palmitoylated. The protein porcupine mediates this process, which means that it helps regulate when the Wnt ligand is secreted by determining when it is fully formed. Secretion is further controlled with proteins such as GPR177 (wntless) and evenness interrupted and complexes such as the retromer complex. Upon secretion, the ligand can be prevented from reaching its receptor through the binding of proteins such as the stabilizers Dally and glypican 3 (GPC3), which inhibit diffusion. In cancer cells, both the heparan sulfate chains and the core protein of GPC3 are involved in regulating Wnt binding and activation for cell proliferation. Wnt recognizes a heparan sulfate structure on GPC3, which contains IdoA2S and GlcNS6S, and the 3-O-sulfation in GlcNS6S3S enhances the binding of Wnt to the heparan sulfate glypican. A cysteine-rich domain at the N-lobe of GPC3 has been identified to form a Wnt-binding hydrophobic groove including phenylalanine-41 that interacts with Wnt. Blocking the Wnt binding domain using a nanobody called HN3 can inhibit Wnt activation. At the Fz receptor, the binding of proteins other than Wnt can antagonize signaling. Specific antagonists include Dickkopf (Dkk), Wnt inhibitory factor 1 (WIF-1), secreted Frizzled-related proteins (SFRP), Cerberus, Frzb, Wise, SOST, and Naked cuticle. These constitute inhibitors of Wnt signaling. However, other molecules also act as activators. Norrin and R-Spondin2 activate Wnt signaling in the absence of Wnt ligand. Interactions between Wnt signaling pathways also regulate Wnt signaling. As previously mentioned, the Wnt/calcium pathway can inhibit TCF/β-catenin, preventing canonical Wnt pathway signaling. Prostaglandin E2 is an essential activator of the canonical Wnt signaling pathway. Interaction of PGE2 with its receptors E2/E4 stabilizes β-catenin through cAMP/PKA mediated phosphorylation. The synthesis of PGE2 is necessary for Wnt signaling mediated processes such as tissue regeneration and control of stem cell population in zebrafish and mouse. Intriguingly, the unstructured regions of several oversized Intrinsically disordered proteins play crucial roles in regulating Wnt signaling.
Induced cell responses
Wnt signaling plays a critical role in embryonic development. It operates in both vertebrates and invertebrates, including humans, frogs, zebrafish, C. elegans, Drosophila and others. It was first found in the segment polarity of Drosophila, where it helps to establish anterior and posterior polarities. It is implicated in other developmental processes. As its function in Drosophila suggests, it plays a key role in body axis formation, particularly the formation of the anteroposterior and dorsoventral axes. It is involved in the induction of cell differentiation to prompt formation of important organs such as lungs and ovaries. Wnt further ensures the development of these tissues through proper regulation of cell proliferation and migration. Wnt signaling functions can be divided into axis patterning, cell fate specification, cell proliferation and cell migration.
In early embryo development, the formation of the primary body axes is a crucial step in establishing the organism's overall body plan. The axes include the anteroposterior axis, dorsoventral axis, and right-left axis. Wnt signaling is implicated in the formation of the anteroposterior and dorsoventral (DV) axes. Wnt signaling activity in anterior-posterior development can be seen in mammals, fish and frogs. In mammals, the primitive streak and other surrounding tissues produce the morphogenic compounds Wnts, BMPs, FGFs, Nodal and retinoic acid to establish the posterior region during late gastrula. These proteins form concentration gradients. Areas of highest concentration establish the posterior region while areas of lowest concentration indicate the anterior region. In fish and frogs, β-catenin produced by canonical Wnt signaling causes the formation of organizing centers, which, alongside BMPs, elicit posterior formation. Wnt involvement in DV axis formation can be seen in the activity of the formation of the Spemann organizer, which establishes the dorsal region. Canonical Wnt signaling β-catenin production induces the formation of this organizer via the activation of the genes twin and siamois. Similarly, in avian gastrulation, cells of the Koller's sickle express different mesodermal marker genes that allow for the differential movement of cells during the formation of the primitive streak. Wnt signaling activated by FGFs is responsible for this movement.
Wnt signaling is also involved in the axis formation of specific body parts and organ systems later in development. In vertebrates, sonic hedgehog (Shh) and Wnt morphogenetic signaling gradients establish the dorsoventral axis of the central nervous system during neural tube axial patterning. High Wnt signaling establishes the dorsal region while high Shh signaling indicates the ventral region. Wnt is involved in the DV formation of the central nervous system through its involvement in axon guidance. Wnt proteins guide the axons of the spinal cord in an anterior-posterior direction. Wnt is also involved in the formation of the limb DV axis. Specifically, Wnt7a helps produce the dorsal patterning of the developing limb.
In the embryonic differentiation waves model of development, Wnt plays a critical role as part of a signaling complex in competent cells ready to differentiate. Wnt reacts to the activity of the cytoskeleton, stabilizing the initial change created by a passing wave of contraction or expansion, and simultaneously signals the nucleus, through its different signaling pathways, as to which wave the individual cell has participated in. Wnt activity thereby amplifies the mechanical signaling that occurs during development.
Cell fate specification
Cell fate specification or cell differentiation is a process where undifferentiated cells can become a more specialized cell type. Wnt signaling induces differentiation of pluripotent stem cells into mesoderm and endoderm progenitor cells. These progenitor cells further differentiate into cell types such as endothelial, cardiac and vascular smooth muscle lineages. Wnt signaling induces blood formation from stem cells. Specifically, Wnt3 leads to mesoderm committed cells with hematopoietic potential. Wnt1 antagonizes neural differentiation and is a major factor in self-renewal of neural stem cells. This allows for regeneration of nervous system cells, which is further evidence of a role in promoting neural stem cell proliferation. Wnt signaling is involved in germ cell determination, gut tissue specification, hair follicle development, lung tissue development, trunk neural crest cell differentiation, nephron development, ovary development and sex determination. Wnt signaling also antagonizes heart formation, and Wnt inhibition was shown to be a critical inducer of heart tissue during development, and small molecule Wnt inhibitors are routinely used to produce cardiomyocytes from pluripotent stem cells.
In order to have the mass differentiation of cells needed to form the specified cell tissues of different organisms, proliferation and growth of embryonic stem cells must take place. This process is mediated through canonical Wnt signaling, which increases nuclear and cytoplasmic β-catenin. Increased β-catenin can initiate transcriptional activation of proteins such as cyclin D1 and c-myc, which control the G1 to S phase transition in the cell cycle. Entry into the S phase causes DNA replication and ultimately mitosis, which are responsible for cell proliferation. This proliferation increase is directly paired with cell differentiation because as the stem cells proliferate, they also differentiate. This allows for overall growth and development of specific tissue systems during embryonic development. This is apparent in systems such as the circulatory system where Wnt3a leads to proliferation and expansion of hematopoietic stem cells needed for red blood cell formation.
The biochemistry of cancer stem cells is subtly different from that of other tumor cells. These so-called Wnt-addicted cells hijack and depend on constant stimulation of the Wnt pathway to promote their uncontrolled growth, survival and migration. In cancer, Wnt signaling can become independent of regular stimuli, through mutations in downstream oncogenes and tumor suppressor genes that become permanently activated even though the normal receptor has not received a signal. β-catenin binds to transcription factors such as the protein TCF4, and in combination the molecules activate the necessary genes. LF3 strongly inhibited this binding in vitro and in cell lines, and reduced tumor growth in mouse models. It prevented the cells from replicating and reduced their ability to migrate, all without affecting healthy cells. No cancer stem cells remained after treatment. The discovery was the product of "rational drug design", involving AlphaScreens and ELISA technologies.
Cell migration during embryonic development allows for the establishment of body axes, tissue formation, limb induction and several other processes. Wnt signaling helps mediate this process, particularly during convergent extension. Signaling from both the Wnt PCP pathway and canonical Wnt pathway is required for proper convergent extension during gastrulation. Convergent extension is further regulated by the Wnt/calcium pathway, which blocks convergent extension when activated. Wnt signaling also induces cell migration in later stages of development through the control of the migration behavior of neuroblasts, neural crest cells, myocytes, and tracheal cells.
Wnt signaling is involved in another key migration process known as the epithelial-mesenchymal transition (EMT). This process allows epithelial cells to transform into mesenchymal cells so that they are no longer held in place at the laminin. It involves cadherin down-regulation so that cells can detach from laminin and migrate. Wnt signaling is an inducer of EMT, particularly in mammary development.
Insulin is a peptide hormone involved in glucose homeostasis within certain organisms. Specifically, it leads to upregulation of glucose transporters in the cell membrane in order to increase glucose uptake from the bloodstream. This process is partially mediated by activation of Wnt/β-catenin signaling, which can increase a cell's insulin sensitivity. In particular, Wnt10b is a Wnt protein that increases this sensitivity in skeletal muscle cells.
Since its initial discovery, Wnt signaling has had an association with cancer. When Wnt1 was discovered, it was first identified as a proto-oncogene in a mouse model for breast cancer. The fact that Wnt1 is a homolog of Wg shows that it is involved in embryonic development, which often calls for rapid cell division and migration. Misregulation of these processes can lead to tumor development via excess cell proliferation.
Canonical Wnt pathway activity is involved in the development of benign and malignant breast tumors. Its presence is revealed by elevated levels of β-catenin in the nucleus and/or cytoplasm, which can be detected with immunohistochemical staining and Western blotting. Increased β-catenin expression is correlated with poor prognosis in breast cancer patients. This accumulation may be due to factors such as mutations in β-catenin, deficiencies in the β-catenin destruction complex (most frequently through mutations in structurally disordered regions of APC), overexpression of Wnt ligands, loss of inhibitors, and/or decreased activity of regulatory pathways (such as the Wnt/calcium pathway). Breast tumors can metastasize due to Wnt involvement in EMT. Research looking at metastasis of basal-like breast cancer to the lungs showed that repression of Wnt/β-catenin signaling can prevent EMT, which can inhibit metastasis.
Wnt signaling has been implicated in the development of other cancers. Changes in CTNNB1 expression, which is the gene that encodes β-catenin, can be measured in breast, colorectal, melanoma, prostate, lung, and other cancers. Increased expression of Wnt ligand-proteins such as Wnt1, Wnt2 and Wnt7A were observed in the development of glioblastoma, oesophageal cancer and ovarian cancer respectively. Other proteins that cause multiple cancer types in the absence of proper functioning include ROR1, ROR2, SFRP4, Wnt5A, WIF1 and those of the TCF/LEF family.
Type II diabetes
Diabetes mellitus type 2 is a common disease that causes reduced insulin secretion and increased insulin resistance in the periphery. It results in increased blood glucose levels, or hyperglycemia, which can be fatal if untreated. Since Wnt signaling is involved in insulin sensitivity, malfunctioning of its pathway could be involved. Overexpression of Wnt5b, for instance, may increase susceptibility due to its role in adipogenesis, since obesity and type II diabetes have high comorbidity. Wnt signaling is a strong activator of mitochondrial biogenesis. This leads to increased production of reactive oxygen species (ROS) known to cause DNA and cellular damage. This ROS-induced damage is significant because it can cause acute hepatic insulin resistance, or injury-induced insulin resistance. Mutations in Wnt signaling-associated transcription factors, such as TCF7L2, are linked to increased susceptibility.
ICD codes can be found in hospital records, physician records, patient records, Electronic Remittance Advice (ERA) documents and Explanations of Benefits (EOBs). What do these codes mean, and why are they so important for medical billing?
ICD codes are important for a variety of reasons:
- When a doctor submits a bill to insurance for reimbursement, each service described by a Current Procedural Terminology (CPT) code must be matched to an ICD code. If those two codes don't align correctly with each other, payment may be rejected. In other words, if the service isn't one that would typically be provided for someone with that diagnosis, the doctor won't get paid. For example, the doctor typically could not submit a bill for an x-ray if the patient's complaint was a rash.
- If a patient has a chronic disease, the ICD code assigned to it may affect the treatment the patient receives when a provider looks at the code. This sometimes happens in a hospital with a doctor who is not the one who usually treats the patient, or with a doctor who reviews the patient's records before seeing them. That ICD determination can be a good thing or a bad thing. It may mean the patient won't receive a certain medication because the disease code means it is contraindicated. Or it may mean the patient does receive a treatment that isn't necessarily useful, but that the hospital will be able to bill for.
What is ICD-9 Code?
Most ICD-9 codes consist of three characters to the left of a decimal point and one or two digits to the right of the decimal point. Examples:
- 250.0 means diabetes with no complications
- 530.81 means gastro reflux disease (GERD)
- 079.99 means a virus
Some ICD-9 codes have V or E in front of them. A V code designates a patient who is accessing the healthcare system for some reason that won't require a diagnosis, usually a preventive reason. Examples:
- V70.0, the code for a general health check up
- V58.66 specifies that a patient is a long term aspirin user
- V76.12 is coded for a healthy person who gets a mammogram
- V04.81 is the most common code for a flu shot
An ICD-9 code with an E specifies that the health problem is the result of an environmental factor such as an injury, accident, a poisoning or others. A car accident code will be preceded by an E, as will a code for a victim of a plane crash or a snake bite or any other health problem caused by outside force. Medical errors are reported using some of these ICD E codes.
What Does an ICD-10 Code Look Like?
ICD-10 codes are approached differently and are structured quite differently from their ICD-9 counterparts. These codes are broken down into chapters and subchapters. They are made up of a letter plus two digits to the left of the decimal point, then one digit to the right. The letters group diseases: all codes preceded by a C indicate a malignancy (cancer), codes preceded by a K indicate gastrointestinal problems, and so forth. Examples are below, followed by a short sketch of how the two formats can be told apart programmatically:
- A02.0 indicates a salmonella infection
- I21.X refers to myocardial infarction
- M16.1 is used for arthritis in the hip
- Q codes represent genetic abnormalities
- U codes are for new problems that develop over time. Any of the antibiotic resistant "superbugs" that develop over time will fall into the U category.
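If you are writing software that has to handle both code sets during the transition, even a crude format check can help route records correctly. Below is a minimal sketch in Python; the patterns are simplified illustrations only (real validation needs the official code tables), and note that ICD-9 V and E codes overlap in shape with ICD-10 codes, so a format check alone cannot always decide.

    import re

    # Simplified, illustrative patterns -- not a substitute for the official tables.
    ICD9_PATTERN = re.compile(r"^(V\d{2}|E\d{3}|\d{3})(\.\d{1,2})?$")
    ICD10_PATTERN = re.compile(r"^[A-Z]\d{2}(\.(\d{1,4}|X))?$")

    def guess_icd_version(code):
        """Guess whether a code is shaped like ICD-9 or ICD-10."""
        code = code.strip().upper()
        if ICD9_PATTERN.match(code):
            return "ICD-9"          # note: a code like V70.0 also fits the ICD-10 shape
        if ICD10_PATTERN.match(code):
            return "ICD-10"
        return "unknown"

    for example in ("250.0", "V70.0", "079.99", "A02.0", "I21.X", "M16.1"):
        print(example, "->", guess_icd_version(example))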
It's important for medical billing service providers to start learning ICD-10 now to avoid future problems for their respective providers. The good news is that practice management software and Electronic Medical Record (EMR) vendors are using technological advancements such as Natural Language Processing (NLP) and Computer Assisted Coding (CAC) to derive ICD-10 codes from documentation.
VMware Inc. ("VM" for Virtual Machine) is a subsidiary of EMC Corporation that provides most of the virtualization software available for x86-compatible computers. This software includes VMware Workstation, and the free VMware Server and VMware Player. VMware software can run on Windows, Linux, and on the Mac OS X platform running on Intel processors, where it is sold under the name VMware Fusion. The company's corporate name is a play on words using the traditional interpretation of the acronym "VM" in computing environments as Virtual Machines.
Virtualization software allows you to run (simulate) multiple computers (operating systems) on the same hardware simultaneously, allowing better use of resources. Nevertheless, because an intermediate layer sits between the physical system and the operating system running on it, the execution speed of the latter is reduced.
VMware is similar to its counterpart Virtual PC, although there are differences between them that affect the way the software interacts with the physical system. The virtual system's performance varies depending on the characteristics of the physical system on which it runs and the virtual resources (CPU, RAM, etc.) assigned to the virtual system.
While Virtual PC emulates an x86 platform, VMware virtualizes it, so that most instructions in VMware run directly on the physical hardware, whereas in the case of Virtual PC they are translated into calls to the operating system running on the physical system.
Origins of the Methodist mission
The Wesleyan Methodist movement in England grew rapidly from the 1790s. Lay Methodists, who were mostly of humble backgrounds, wanted their own mission. The Wesleyan-Methodist Missionary Society (WMS) saw the primary goal of the mission as making converts, and it rejected Samuel Marsden’s emphasis on ‘civilising’.
Samuel Leigh, their first missionary, was a Methodist minister to the convict settlement of New South Wales. Marsden encouraged his visit to New Zealand, after which Leigh appealed to the WMS to establish a mission. A team of missionaries arrived in Tonga and New Zealand in 1822.
First Methodist missions
On the advice of the Church Missionary Society (CMS) missionaries, the Wesleyan mission station was based at Kaeo, north of the Bay of Islands. The site proved too close to that of the 1809 Boyd massacre, and the missionaries were forced to flee in January 1827. Three returned a year later to Mangungu on the Hokianga Harbour. The mission baptised its first converts in 1830.
As new missionaries arrived, more stations were established along the west coast of the North Island, at Kāwhia, Manukau, Kaipara and Raglan. The CMS and the WMS agreed that the Wesleyans would focus their activities on the west coast as far south as Taranaki, and the South Island, while the CMS mission took the East Coast and lower North Island. The missions grew quickly in the 1840s. WMS stations were opened at Port Nicholson (Wellington), Cloudy Bay in Marlborough and Waikouaiti in Otago. Further stations opened in Taranaki and Waikato.
First Catholic mission
The Society of Mary (whose members are known as Marists) was a religious movement which emerged as part of a reinvigoration of Catholicism in the aftermath of the French Revolution. The Pope approved the new order in 1836 and it supported the development of a new mission in Western Oceania. Jean Baptiste Pompallier was appointed bishop of this region and he and his small team arrived in the Hokianga in January 1838, at a site near the Methodist station.
130 years later
After 30 years of missionary work in New Zealand, Bishop Pompallier returned to France in 1869 and died two years later. In 2001 his family and the bishops of France agreed to allow his reburial in New Zealand. From December 2001 Pompallier’s bishop’s coffin was taken to all six Catholic dioceses in New Zealand. In April 2002 it was reburied beneath the altar of St Mary’s Church in the tiny Hokianga community of Motutī, near the place where the bishop had first preached in New Zealand.
Competition for converts
Pompallier faced intense hostility from the Anglicans and Methodists, and his initial following came from Māori unhappy with those churches. Unlike the CMS, he viewed the primary responsibility of the Catholic mission as baptising converts, not challenging the lifestyle of Māori. The chanting, rituals and ornamentation of his religion were attractive to Māori.
The mission was strengthened with the arrival of seven priests and five Marist brothers in 1839, the year it set up a new base at Kororāreka (later renamed Russell) in the Bay of Islands. Further bases were established in Northland, Auckland, Bay of Plenty, Wellington, Ōtaki and Akaroa. Priests travelled from these settlements, often on foot over long distances, to preach to other Māori communities.
By 1845 the New Zealand Catholic mission had baptised 5,000 people, but lack of money and the outbreak of the New Zealand wars led to a subsequent decline. Tensions also emerged between Pompallier and the Marist order, and a Marist-staffed diocese of Wellington was formed in 1850. Thereafter Pompallier struggled to find priests for Auckland Catholics.
Missionaries from Germany and Scotland
From 1842 the North German Missionary Society, based in Bremen, sent missionaries to New Zealand, among them Carl Völkner and J. F. Riemenschneider, who worked alongside the existing missions. The Berlin Missionary Society sent missionaries to the Chatham Islands in 1843. In 1844 the Reformed Church of Scotland sent two missionaries.
In that article, the discussion is about one TCP connection being tunnelled over another TCP connection. Basically it comes down to the lower layer buffering and re-sending the TCP datagrams just as the upper layer gives up on hearing a reply and re-sends its own attempt.
Now, end-to-end ACKs have been done on long chains of AX.25 networks before. It’s generally accepted to be an unreliable mechanism. UDP for sure can benefit, but then many protocols that use UDP already do their own handling of lost messages. CoAP for instance does its own ARQ, as does TFTP.
This latter document was apparently the inspiration for 6LoWPAN. Section 4.4.3 discusses the approaches to handling ARQ in TCP. Section 9.6 goes into further detail on how ARQ might be handled elsewhere in the network.
Thankfully in our case, it’s only the network that’s constrained, the nodes themselves will be no smaller than a Raspberry Pi which would have held its own against the PC that Adam Dunkels used to write that thesis!
In short, it looks as if just routing IP packets is not going to cut it, we need to actually handle the TCP side of things as well. As for other protocols like CoAP, I guess the answer is be patient. The timeout settings defined in RFC-7252 are usually tuneable, and it may be desirable to back those off just a little for use over AX.25.
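To get a feel for how much headroom those defaults give, here is a small sketch of the confirmable-message back-off described in RFC 7252 (ACK_TIMEOUT = 2 s, ACK_RANDOM_FACTOR = 1.5, MAX_RETRANSMIT = 4 by default); the "more patient" values at the end are arbitrary examples of backing off for a 1200-baud link, not a recommendation.

    import random

    def coap_backoff(ack_timeout=2.0, ack_random_factor=1.5, max_retransmit=4):
        """Waits after the initial transmission and after each retransmission
        of a CoAP confirmable message (RFC 7252, section 4.2)."""
        # Initial timeout is random in [ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR].
        timeout = random.uniform(ack_timeout, ack_timeout * ack_random_factor)
        waits = []
        for _ in range(max_retransmit + 1):
            waits.append(round(timeout, 2))
            timeout *= 2.0          # binary exponential back-off
        return waits

    print(coap_backoff())                                    # RFC 7252 defaults
    print(coap_backoff(ack_timeout=8.0, max_retransmit=5))   # a more patient profile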
So, doing some more digging here. One question people might ask is what kind of applications would I use over this network?
HTTP really isn’t designed for low-bandwidth links, as Steve Netting demonstrated:
The page itself is bad enough, but even then, it’s loaded after a minute. The real slow bit is the 20kB GIF.
So yeah, slow-scan television, the ability to send weather radar images over, that is something I was thinking of, but not like that!
That request is 508 bytes and the response headers are 216 bytes. It’d be inappropriate on 6LoWPAN as you’d be fragmenting that packet left right and centre in order to squeeze it into the 128-byte 802.15.4 frames.
In that video, ICMP echo requests were also demonstrated, and those weren’t bad! Yes, a little slow, but workable. So to me, it’s not the packet network that’s the problem, it’s just that something big like HTTP is just not appropriate for a 1200-baud radio link.
It might work on 9600 baud packet … maybe. My Kantronics KPC3 doesn’t do 9600 baud over the air.
CoAP was designed for tight messages. It is UDP based, so your TCP connection overhead disappears, and the “options” are encoded as individual bytes in many cases. There are other UDP-based protocols that would work fine too, as well as older TCP protocols such as Telnet.
A CoAP request and reply are compact binary messages: a four-byte header, a short token, options encoded as individual bytes, and then the payload. That exchange also shows another tool for data packing: CBOR. CBOR is basically binary JSON. Just like JSON it is schemaless; it has objects (maps), arrays, strings, booleans, nulls and numbers (CBOR differentiates between integers of various sizes and floats). Unlike JSON, it is tight: the CBOR blob in such a response is a fraction of the size of its most compact JSON representation.
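As a rough illustration of that size difference (the sensor reading below is hypothetical, and the third-party cbor2 package is assumed to be installed), encoding the same map both ways and comparing lengths:

    import json

    import cbor2  # assumed third-party dependency: pip install cbor2

    # Hypothetical payload -- not the actual exchange discussed above.
    reading = {"node": "wx-1", "temp": 23.5, "rh": 61, "vbatt": 13.8}

    as_json = json.dumps(reading, separators=(",", ":")).encode("utf-8")
    as_cbor = cbor2.dumps(reading)

    print(len(as_json), "bytes as compact JSON")
    print(len(as_cbor), "bytes as CBOR")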
The entire exchange is 190 bytes, about a quarter of the combined size of the HTTP request and response headers. I think that would work just fine over 1200 baud packet. As a bonus, you can also multicast; try doing that with HTTP.
So you’d be writing higher-level services that would use this instead of JSON-REST interfaces. There’s a growing number of libraries that can consume this sort of thing, and IoT is pushing that further. I think it’s doable.
Now, on the routing front, I've been digging up a bit on Net/ROM. Net/ROM is actually two parts: Net/ROM Level 3 does the routing and Level 4 does the circuit switching. It's the "Level 3" bit we want.
Coming up with a definitive specification of the protocol has been a bit tough (it doesn't help that there is a company called NetROM), but I did manage to find this document. In a way, if I could make my software behave like a Net/ROM node, I could piggy-back off that to discover neighbours. Thus this protocol would co-exist alongside Net/ROM networks that may be completely oblivious to TCP/IP.
This is preferable to just re-inventing the wheel…yes I know non-circular wheels are so much fun! Really, once Net/ROM L3 has figured out where everyone is, IP routing just becomes a matter of correctly addressing the AX.25 frame so the next hop receives the message.
VK4RZB at Mt. Coot-tha is one such node running TheNet. Easy enough to do tests on as it’s a mere stone throw away from my home QTH.
There’s a little consideration to make about how to label the AX.25 frame. Obviously, it’ll be a UI frame, but what PID field should I use? My instinct suggests that I should just label it as “ARPA Internet Protocol”, since it is Internet Protocol traffic, just IPv6 instead of v4. Not all the codes are taken though, 0xc9 is free, so I could be cheeky and use that instead. If the idea takes off, we can talk with the TAPR then.
So, I’ll admit to looking at AX.25 with the typical modems available (the classical 1200-baud AFSK and the more modern G3RUH modem which runs at a blistering 9600 baud… look out 5G!) years ago and wondering “what’s the point”?
It was Brisbane Area WICEN’s involvement in the International Rally of Queensland that changed my view somewhat. This was an event that, until CAMS knocked it on the head, ran annually in the Imbil State Forest up in the Sunshine Coast hinterland.
There, WICEN used it for forwarding the scores of drivers as they passed through each stage of the rally. A checkpoint would be at the start and finish of each stage, and a packet network would be set up with digipeaters in strategic locations and a base station, often located at the Imbil school.
The organisers of IRoQ did experiment with other ways of getting scores through, including hiring bandwidth on satellites, flying planes around in circles over the area, and other shenanigans. Although these systems had faster throughput speeds, one thing they had which we did not have, was latency. The score would arrive back at base long before the car had left the check point.
In addition to this kind of work, WICEN also help out with horse endurance rides. Traditionally we’ve just relied on good ol’e analogue FM radio, but in events such as the Tom Quilty, there has been a desire to use packet as a mechanism for reporting when horses arrive at given checkpoints and to perhaps enable autonomous stations that can detect horses via RFID and report those “back to base” to deter riders from cheating.
The challenge of AX.25 is two-fold:
- With the exception of Linux, no other OS has any kind of baked-in support for it, so writing applications that can interact with it means either implementing your own AX.25 stack or interfacing to some third-party stack such as BPQ.
- Due to the specialised stack, applications often have to run as privileged applications, can have problems with firewalling, etc.
The AX.25 protocol does do static routing. It offers connected-mode links (like TCP) and a connectionless-mode (like UDP), and there are at least two routing protocols I know of that allow for dynamic routing (ROSE, Net/ROM). There is a standard for doing IPv4 over AX.25, but you still need to manage the allocation of addresses and other details, it isn’t plug-and-play.
Net/ROM would make an ideal way to forward 6LoWPAN traffic, except it only does connected mode, and doing IP over a “TCP-like” link is really a bad idea. (Anything that does automatic repeat requests really messes with TCP/IP.)
I have no idea whether ROSE does the connectionless mode, but the idea of needing to come up with a 10-digit numeric “address” is a real turn-off.
If the address used can be derived off the call-sign of the operator, that makes life a lot easier.
The IPv6 address format has enough bits to do that. To me the most obvious way would be to derive a MAC address from a call-sign and an arbitrarily chosen digit (0-7). It would be reversible of course, and since the MAC address is used in SLAAC, you would see the station’s call-sign in the IPv6 address.
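Here's one possible way that derivation could work; the packing scheme below is made up purely for illustration (6 bits per character, 3 bits for the chosen digit, all inside a locally-administered MAC), and the conversion to the modified EUI-64 used by SLAAC follows the usual RFC 4291 rule of inserting FF:FE and flipping the universal/local bit.

    # Hypothetical call-sign to MAC/EUI-64 packing -- an illustration, not a standard.
    ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"   # 6 bits per symbol is plenty

    def callsign_to_mac(callsign, digit):
        """Pack an up-to-6-character call-sign and a digit 0-7 into a
        locally-administered unicast MAC address."""
        if not 0 <= digit <= 7:
            raise ValueError("digit must be 0-7")
        callsign = callsign.upper().ljust(6)
        if len(callsign) > 6:
            raise ValueError("call-sign too long for this toy scheme")
        value = 0
        for ch in callsign:
            value = (value << 6) | ALPHABET.index(ch)    # 6 bits per character
        value = (value << 3) | digit                     # 36 + 3 = 39 bits used
        return bytes([0x02]) + value.to_bytes(5, "big")  # 0x02 = locally administered

    def mac_to_iid(mac):
        """Modified EUI-64 interface identifier for SLAAC (RFC 4291)."""
        return bytes([mac[0] ^ 0x02]) + mac[1:3] + b"\xff\xfe" + mac[3:6]

    mac = callsign_to_mac("VK4XYZ", 0)        # fictitious call-sign
    print("MAC:", ":".join(f"{b:02x}" for b in mac))
    print("IID:", ":".join(f"{b:02x}" for b in mac_to_iid(mac)))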
The thinking is that there’s a lot of problems that have been solved in 6LoWPAN. Discovery of services for example is handled using mechanisms like mDNS and CoRE RD. We don’t need to forward Internet traffic, although being able to pull up the Mt. Kanigan and Mt. Stapylton radars over such a network would be real handy at times (yes, I know it’ll be slow).
The OS will view the packet network like a VPN, and so writing applications that can talk over packet will be no different to writing any other kind of network software. Any consumer desktop OS written in the last 16 years has the necessary infrastructure to support it (even Windows 2000, there was a downloadable add-on for it).
Linking two separate “mesh” networks via point-to-point links is also trivial. Each mesh will of course see the other as “external” but participants on both can nonetheless communicate.
The guts of 6LoWPAN is in RFC-4944. This specifies details about how the IPv6 datagram is encoded as a IEEE 802.15.4 payload, and how the infrastructure within 802.15.4 is used to route IPv6. Gnarly details like how fragmentation of a 1280-byte IPv6 datagram into something that will fit the 128-byte maximum 802.15.4 frames is handled here. For what it’s worth, AX.25 allows 255 bytes (or was it 256?), so we’re ahead there.
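For a rough feel of the fragmentation cost, the sketch below uses the RFC 4944 fragment headers (4 bytes for the first fragment, 5 for the rest, offsets in 8-byte units) and ignores header compression; the per-frame payload figures are assumptions, since real 802.15.4 frames lose a variable amount to MAC headers.

    def fragment_count(datagram_len, per_frame_payload, frag1_hdr=4, fragn_hdr=5):
        """Rough number of link-layer fragments needed for one IPv6 datagram.
        Fragment payloads (except the last) are held to multiples of 8 bytes,
        since RFC 4944 expresses fragment offsets in 8-byte units."""
        first = (per_frame_payload - frag1_hdr) // 8 * 8
        rest = (per_frame_payload - fragn_hdr) // 8 * 8
        if datagram_len <= first:
            return 1
        remaining = datagram_len - first
        return 1 + -(-remaining // rest)    # ceiling division

    print(fragment_count(1280, 100))   # ~100 bytes left per 802.15.4 frame (assumed)
    print(fragment_count(1280, 255))   # a 255-byte AX.25 UI payload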
Crucially, it is assumed that the 802.15.4 layer can figure out how to get from node A to node Z via B… C…, etc. 802.15.4 networks are managed by a PAN coordinator, which provides various services to the network.
AX.25 makes this “our problem”. Yes the sender of a frame can direct which digipeaters a frame should be passed to, but they have to figure that out. It’s like sending an email by UUCP, you need a map of the Internet to figure out what someone’s address is relative to your site.
Plain AX.25 digipeaters will of course be part of the mix, so having the ability for a node stuck on one side of such a digipeater would be worth having, but ultimately, the aim here will be to provide a route discovery mechanism in place that, knowing a few static digipeater routes, can figure out who is able to hear whom, and route traffic accordingly.
Yesterday’s post was rather long, but was intended for mostly technical audiences outside of amateur radio. This post serves as a brain dump of volatile memory before I go to sleep for the night. (Human conscious memory is more like D-RAM than one might realise.)
So, many in our group use packet radio TNCs already, with a good number using the venerable Kantronics KPC3. These have a DB9 port that connects to the radio and a second DB25 RS-232 port that connects to the computer.
My proposal: we make an audio interface that either plugs into that DB9 port and re-uses the interface cables we already have, or directly into the radio’s data port.
This should connect to an audio interface on the computer.
For EMI’s sake, I’d recommend a USB sound dongle like this, or these, or this as that audio interface. I looked on Jaycar and did see this one, which would also work (and burn a hole in your wallet!).
If you walk in and the asking price is more than $30, I’d seriously consider these other options. Of those options, U-Mart are here in Brisbane; go to their site, order a dongle then tell the site you’ll come and pick it up. They’ll send you an email with an order number when it’s ready, you just need to roll up to the store, punch that number into a terminal in the shop, then they’ll call your name out for you to collect and pay for it.
Scorptec are in Melbourne, so you’ll have to have items shipped, but are also worth talking to. (They helped me source some bits for my server cluster when U-Mart wouldn’t.)
USB works over two copper pairs; one delivers +5V and 0V, the other is a differential pair for data. In short, the USB link should be pretty immune from EMI issues.
At worst, you should be able to deal with it with judicious application of ferrite beads to knock down the common mode current and using a combination of low-ESR electrolytic and ceramic capacitors across the power rails.
If you then keep the analogue cables as short as absolutely possible, you should have little opportunity for RF to get in.
I don’t recommend the TigerTronics Signalink interfaces, they use cheap and nasty isolation transformers that lead to serious performance issues.
For the receive audio, we take the audio from the radio and feed it via a potentiometer to the tip of a 3.5mm TRS ("phono") plug, with the sleeve going to common. This plugs into the Line-In or Microphone input on the sound device.
Push to Talk and Transmit audio
I’ve bundled these together for a good reason. The conventional way for computers to drive PTT is via an RS-232 serial port.
We can do that, but we won’t unless we have to.
Unless you’re running an original SoundBLASTER card, your audio interface is likely stereo. We can get PTT control via an envelope detector forming a minimal-latency VOX control.
Another 3.5mm TRS plug connects to the “headphone” or “line-out” jack on our sound device and breaks out the left and right channels.
The left and right channels from the sound device should be fed into the “throw” contacts on two single-pole double-throw toggle switches.
The select pin (mechanically operated by the toggle handle) on each switch thus is used to select the left or right channel.
One switch’s select pin feeds into a potentiometer, then to the radio’s input. We will call that the “modulator” switch; it selects which channel “modulates” our audio. We can again adjust the gain with the potentiometer.
The other switch's select pin first feeds through a small Schottky diode, then across a small electrolytic capacitor (to 0V), then through a small resistor and finally into the base of a small NPN signal transistor (e.g. BC547). The emitter goes to 0V; the collector is our PTT signal.
This is the envelope detector we all know and love from our old experiments with crystal sets. In theory, we could hook a speaker and a power source up to the collector and listen to AM radio stations, but in this case we'll be sending a tone down this channel to turn the transistor, and thus our PTT, on.
The switch feeding this arrangement we’ll call the “PTT” switch.
By using this arrangement, we can use either audio channel for modulation or PTT control, or we can use one channel for both. 1200-baud AFSK, FreeDV, etc, should work fine with both on the one channel.
If we just want to pass through analogue audio, then we probably want modulation separate, so we can hold the PTT open during speech breaks without having an annoying tone superimposed on our signal.
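To test the arrangement without any radio software, something like the following sketch writes a stereo WAV file with a stand-in "modulation" tone on the left channel and a steady keying tone on the right channel for the envelope detector; the frequencies, levels and channel assignment are arbitrary and depend on how the switches above are set.

    import math
    import struct
    import wave

    RATE = 48000          # sample rate, Hz
    SECONDS = 2
    MOD_HZ = 700.0        # stand-in for the transmit audio (left channel)
    PTT_HZ = 1000.0       # steady tone to key the envelope detector (right channel)

    frames = bytearray()
    for n in range(RATE * SECONDS):
        t = n / RATE
        left = 0.5 * math.sin(2 * math.pi * MOD_HZ * t)
        right = 0.5 * math.sin(2 * math.pi * PTT_HZ * t)
        frames += struct.pack("<hh", int(left * 32767), int(right * 32767))

    with wave.open("ptt_test.wav", "wb") as w:
        w.setnchannels(2)     # stereo: left = modulation, right = PTT tone
        w.setsampwidth(2)     # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(bytes(frames))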
It may be prudent to feed a second resistor into the base of that NPN, running off to the RTS pin on an RS-232 interface. This will let us use software that relies on RS-232 PTT control, which can be added by way of a USB-RS232 dongle.
The cheap Prolific PL-2303 ones sold by a few places (including Jaycar) will work for this. (If your software expects a 16550 UART interface on port 0x3f8 or similar, consider running it in a virtual machine.)
Ideally though, this should not be needed, and if added, can be left disconnected without harm.
There are a few “off-the-shelf” packages that should work fine with this arrangement.
AGWPE on Windows provides a software TNC. On Linux, there’s soundmodem (which I have used, and presently mirror) and Direwolf.
Shouldn’t need a separate PTT channel, it should be sufficient to make the pre-amble long enough to engage PTT and rely on the envelope detector recognising the packet.
FreeDV provides an open-source digital voice platform system for Windows, Linux and MacOS X.
This tool also lets us send analogue voice. Digital voice should be fine: the first frame might get lost, but as a frame is 40ms, we just wait before we start talking, like we would for regular analogue radio.
For the analogue side of things, we would want tone-driven PTT. Not sure if that’s supported, but hey, we’ve got the source code, and yours truly has worked with it, it shouldn’t be hard to add.
The two to watch here would be QSSTV (Linux) and EasyPal (Windows). QSSTV is open-source, so if we need to make modifications, we can.
Not sure who maintains EasyPal these days, not Eric VK4AES as he’s no longer with us (RIP and thank-you). Here, we might need an RS-232 PTT interface, which as discussed, is not a hard modification.
Most is covered by FLDigi. Modes with a fairly consistent duty cycle will work fine with the VOX PTT, and once again, we have the source, we can make others work.
Custom software ideas
So we can use a few off-the-shelf packages to do basic comms.
We need auditability of our messaging system. For analogue FM, we can just use a VOX-like function on the computer to record individual received messages, and to record outgoing traffic. Text messages and files can be logged.
Ideally, we should have some digital signing of logs to make them tamper-resistant. Then we can mathematically prove what was sent.
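One lightweight way to get tamper-evidence, sketched below, is to chain the hash of each log entry into the next one; in practice the chain head (or each entry) would also be signed, for example with an OpenPGP key, and the log messages shown here are made-up examples.

    import hashlib
    import json
    import time

    def append_entry(log, message):
        """Append a timestamped entry whose hash covers the previous entry's
        hash, so any later edit breaks the chain."""
        prev = log[-1]["hash"] if log else "0" * 64
        entry = {"ts": time.time(), "msg": message, "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode("utf-8")).hexdigest()
        log.append(entry)

    log = []
    append_entry(log, "RX: checkpoint 3 reports rider 21 arrived")
    append_entry(log, "TX: acknowledgement sent to checkpoint 3")
    print(json.dumps(log, indent=2))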
In a true emergency, it may be necessary to encrypt what we transmit. This is fine, we’re allowed to do this in such cases, and we can always turn over our audited logs for authorities anyway.
Files will be sent as blocks which are forward-error corrected (or forward-erasure coded). We can use a block cipher such as AES-256 to encrypt these blocks before FEC. OpenPGP would work well here rather than doing it from scratch; just send the OpenPGP output using FEC blocks. It should be possible to pick out the symmetric key used at the receiving end for auditing, which would be done if asked for by Government. DIY is not necessary, the building blocks are there.
Digital voice is a stream; we can use block ciphers, but this introduces latency and there's always the issue of bit errors. Stream ciphers, on the other hand, work by generating a key stream, then XOR-ing that with the data. So long as we can keep sync in the face of bit errors, use of a stream cipher should not impair noise immunity.
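The property we are relying on can be demonstrated with a toy keystream (SHA-256 in counter mode here, for illustration only, not a vetted cipher): flip one bit of the ciphertext and only the corresponding bit of the recovered data is wrong.

    import hashlib

    def keystream(key, length):
        """Toy keystream generator -- illustration only, not a real cipher."""
        out = bytearray()
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:length])

    def xor(data, ks):
        return bytes(a ^ b for a, b in zip(data, ks))

    key = b"key negotiated out of band"
    frame = b"one 40ms codec frame of digital voice"
    ks = keystream(key, len(frame))

    ciphertext = xor(frame, ks)
    corrupted = bytearray(ciphertext)
    corrupted[5] ^= 0x01                   # simulate a single bit error on air
    recovered = xor(bytes(corrupted), ks)

    # Only byte 5 differs; the error has not propagated through the frame.
    print([i for i, (a, b) in enumerate(zip(frame, recovered)) if a != b])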
Signal fade is a worse problem; I suggest a cleartext (3-bit? 4-bit?) gray-code sync field for synchronisation. The receiver can time the length of a fade, estimate the number of lost frames, then use the field to re-sync.
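For the sync field itself, Gray coding the frame counter is a one-liner each way; a sketch follows, with the field width (3 bits here) being one of the open questions above.

    def gray_encode(n):
        """Binary frame counter to Gray code: adjacent counts differ by one bit."""
        return n ^ (n >> 1)

    def gray_decode(g):
        """Recover the frame counter from a received Gray-coded sync field."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    for frame in range(8):                      # a 3-bit field
        g = gray_encode(frame)
        print(frame, format(g, "03b"), gray_decode(g))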
Most (all?) stream ciphers are symmetric. We would have to negotiate or distribute a key somehow: either use Diffie-Hellman or send a generated key as an encrypted file transfer (see above). The key and both the encrypted and decrypted streams could be made available to Government if needed.
The software should be capable of:
- Real-time digital voice (encrypted and clear; the latter being compatible with FreeDV)
- File transfer (again, clear and encrypted using OpenPGP, and using good FEC, files will be cryptographically signed by sender)
- Voice mail and SSTV, implemented using file transfer.
- Radioteletype modes (perhaps PSK31, Olivia, etc), with logs made.
- Analogue voice pass-through, with recordings made.
- All messages logged and time-stamped, received messages/files hashed, hashes cryptographically signed (OpenPGP signature)
- Operation over packet networks (AX.25, TCP/IP)
- Standard message forms with some basic input validation.
Ad-hoc routing between interfaces (e.g. SSB to AX.25, AX.25 to TCP/IP, etc) should be possible.
The above stack should ideally work on low-cost single-board computers that are readily available and are low-power. Linux support will be highest priority, Windows/MacOS X/BSD is a nice-to-have.
GNU Radio has building blocks that should let us do most of the above.
The Yaesu FT-897D has the de-facto standard 6-pin Mini-DIN data jack on the back to which you can plug a digital modem. Amongst the pins it provides is a squelch status pin, and in the past I’ve tried using that to drive (via transistors) the carrier detect pin on various computer interfaces to enable the modem to detect when a signal is incoming.
The FT-897D is fussy however. Any load at all pulling this pin down, and you get no audio. Any load. One really must be careful about that.
Last week when I tried the UDRC-II, I hit the same problem. I was able to prove it was the UDRC-II by construction of a crude adapter cable that hooked up to the DB15-HD connector, converting that to Mini-DIN6: by avoiding the squelch status pin, I avoided the problem.
One possible solution was to cut the supplied Mini-DIN6 cable open, locate the offending wire and cut it. Not a solution I relish doing. The other was to try and fix the UDRC-II.
Discussing this on the list, it was suggested by Bryan Hoyer that I use a 4.7k pull-up resistor on the offending pin to 3.3V. He provided a diagram that indicated where to find the needed signals to tap into.
With that information, I performed the following modification. A 1206 4.7k resistor is tacked onto the squelch status pin, and a small wire run from there to the 3.3V pin on a spare header.
UDRC-II modification for Yaesu FT-897D
I’m at two minds whether this should be a diode instead, just in case a radio asserts +12V on this line, I don’t want +12V frying the SoC in the Raspberry Pi. On the other hand, this is working, it isn’t “broke”.
Doing the above fixed the squelch drive issue and now I’m able to transmit and receive using the UDRC-II. Many thanks to Bryan Hoyer for pointing this modification out.
Well, I’ve been thinking a lot lately about single board computers. There’s a big market out there. Since the Raspberry Pi, there’s been a real explosion available to the small-end of town, the individual. Prior to this, development boards were mostly in the 4-figures sort of price range.
So we’re now rather spoiled for choice. I have a Raspberry Pi. There’s also the BeagleBone Black, Banana Pi, and several others. One gripe I have with the Raspberry Pi is the complete absence of any kind of analogue input. There’s an analogue line out, you can interface some USB audio devices (although I hear two is problematic), or you can get an I2S module.
There’s a GPU in there that’s capable of some DSP work and a CLKOUT pin that can generate a wide range of frequencies. That sounds like the beginnings of a decent SDR, however one glitch, while I can use the CLKOUT pin to drive a mixer and the GPIOs to do band selection, there’s nothing that will take that analogue signal and sample it.
If I want something wider than audio frequencies (and even a 192kHz audio CODEC is not guaranteed above ~20kHz) I have to interface to SPI, and the pickings are somewhat slim. Then I read this article on a DIY single board computer.
That got me thinking about whether I could do my own. At work we use the Technologic Systems TS-7670 single-board computers, and as nice as those machines are, they're a little slow and RAM-limited. Something that could work as a credible replacement there too would be nice, the key needs there being RS-485, Ethernet and an 85 degree temperature rating.
Form factor is a consideration here, and I figured something modular, using either header pins or edge connectors would work. That would make the module easily embeddable in hobby projects.
Since all the really nice SoCs are BGA packages, I figured I’d first need to know how easy I could work with them. We’ve got a stack of old motherboards sitting in a cupboard that I figured I could raid for BGAs to play with, just to see first-hand how fine the pins were. A crazy thought came to me: maybe for prototyping, I could do it dead-bug style?
Key thing here being able to solder directly to a ball securely, then route the wire to its destination. I may need to glue it to a bit of grounded foil to keep the capacitance in check. So, the first step I figured, would be to try removing some components from the boards I had laying around to see this first-hand.
In amongst the boards I came across was one old 386 motherboard that I initially mistook for a 286 minus the CPU. The empty (PLCC) socket is for an 80387 math co-processor. The board was in the cupboard for a good reason, corrosion from the CMOS battery had pretty much destroyed key traces on one corner of the board.
Corrosion on a motherboard caused by a CMOS battery
I decided to take to it with the heat gun first. The above picture was taken post-heatgun, but you can see just how bad the corrosion was. The ISA slots were okay, and so where a stack of other useful IC sockets, ICs, passive components, etc.
With the heat gun at full blast, I’d just wave it over an area of interest until the board started to de-laminate, then with needle-nose pliers, pull the socket or component from the board. Sometimes the component simply dropped out.
At one point I heard a loud “plop”. Looking under the board, one of the larger surface-mounted chips had fallen off. That gave me an idea, could the 386 chip be de-soldered? I aimed the heat-gun directly at the area underneath. A few seconds later and it too hit the deck.
All in all, it was a successful haul.
Parts off the 386 motherboard
I also took apart an 8-bit ISA joystick card. It had some nice looking logic chips that I figured could be re-purposed. The real star though was the CPU itself:
The question comes up, what does one do with a crusty old 386 that’s nearly as old as I am? A quick search turned up this scanned copy of the Intel 80386SX datasheet. The chip has a 16-bit bus with 23 bits worth of address lines (bit 0 is assumed to be zero). It requires a clock that is double the chip’s operating frequency (there’s an internal divide-by-two). This particular chip runs internally at 20MHz. Nothing jumped out as being scary. Could I use this as a practice run for making an ARM computer module?
A dig around dug up some more parts:
In this pile we have…
an AMD 486 DX/4 100MHz (I might do something with that one day too)
I also have some SIMMs laying around, but the SDRAM modules look easier to handle since the controllers on board synchronise with what would otherwise be the front-side bus. The datasheet does not give a minimum clock (although clearly this is not DC; DRAM does need to be refreshed) and mentions a clock frequency of 33MHz when set to run at a CAS latency of 1. It just so happens that I have a 33MHz oscillator. There’s a couple of nits in this plan though:
the SDRAM modules are 3.3V, the CPU is 5V: no problem, there are level-conversion chips out there.
the SDRAM modules are 64-bits wide. We’ll have to buffer the output to eight 8-bit registers. Writes do a read-modify-write cycle, and we use a 2-to-4 decoder to select the CE pins on two of the registers from address bits 1 and 2 from the CPU.
Each SDRAM module holds 32MB. We have a 23-bit address bus, which with 16-bit words gives us a total address space of 16MB. Solution: the old 8-bit computers of yesteryear used bank-switching to address more RAM/ROM than they had address lines for, we can interface an 8-bit register at I/O address 0x0000 (easily decoded with a stack of Schottky diodes and a NOT gate) which can hold the remaining address bits mapping the memory to the lower 8MB of physical memory. We then hijack the 386’s MMU to map the 8MB chunks and use the page faults to switch memory banks. (If we put the SRAM and ROM up in the top 1MB, this gives us ~7MB of memory-mapped I/O to play with.)
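To make the bank-switching idea a little more concrete, here is a very rough sketch in C. Everything in it (the write-only latch decoded at I/O address 0x0000, the 8MB window, the names) is an assumption for illustration, not a tested design:

#include <stdint.h>

#define BANK_PORT 0x0000            /* I/O address of the 8-bit bank latch */
#define BANK_SIZE (8UL << 20)       /* each bank exposes 8MB of SDRAM */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %b0, %w1" : : "a"(val), "Nd"(port));
}

/* Hypothetical hook called from the page-fault handler when the faulting
 * address refers to SDRAM that is not currently in the low 8MB window. */
void switch_bank(uint32_t fault_addr)
{
    uint8_t bank = (uint8_t)(fault_addr / BANK_SIZE);
    outb(BANK_PORT, bank);          /* latch the upper address bits */
    /* ...then remap the low 8MB window in the page tables and reload CR3
     * so the TLB forgets the old mapping. */
}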
So, not show stoppers. There’s an example circuit showing how to interface an ATMega8515 to a single SDRAM chip for driving a VGA interface, and some example code, with comments in German. Unfortunately you’d learn more German in an episode of Hogan’s Heroes than I know, but I can sort-of figure out the sequence used to read and write from/to the SDRAM chip. Nothing looks scary there either. This SDRAM tutorial seems to be a goldmine.
Thus, it looks like I’ve got enough bits to have a crack at it. I can run the 386 from that 33MHz brick; which will give me a chip running at 16.5MHz. Somewhere I’ve got the 40MHz brick laying around from the motherboard (I liberated that some time ago), but that can wait.
A first step would be to try interfacing the 386 chip to an AVR and feeding it instructions one step at a time, checking that it’s still alive. Then the next steps should become clear.
Well, lately I’ve been doing a bit of work hacking the firmware on the Rowetel SM1000 digital microphone. For those who don’t know it, this is a hardware (microcontroller) implementation of the FreeDV digital voice mode: it’s a modem that plugs into the microphone/headphone ports of any SSB-capable transceiver and converts FreeDV modem tones to analogue voice.
I plan to set this unit of mine up on the bicycle, but there are a few nits that I had.
There’s no time-out timer
The unit is half-duplex
With no timeout timer, I really need to hear the tones coming from the radio to tell me it has timed out. Others might find a VOX feature useful, and there’s active experimentation with the FreeDV 700B mode (the SM1000 currently only supports FreeDV 1600), which has been very promising to date.
Long story short, the unit needed a more capable UI, and importantly, it also needed to be able to remember settings across power cycles. There’s no EEPROM chip on these things, and while the STM32F405VG has a pin for providing backup-battery power, there’s no battery or supercapacitor, so the SM1000 forgets everything on shut down.
ST do have an application note on their website on precisely this topic. AN3969 (and its software sources) discuss a method for using a portion of the STM32’s flash for this task. However, I found their “license” confusing. So I decided to have a crack myself. How hard can it be, right?
There are 5 things that a virtual EEPROM driver needs to bear in mind:
The flash is organised into sectors.
These sectors when erased contain nothing but ones.
We store data by programming zeros.
The only way to change a zero back to a one is to do an erase of the entire sector.
The sector may be erased a limited number of times.
So on this note, a virtual EEPROM should aim to do the following:
It should keep tabs on what parts of the sector are in use. For simplicity, we’ll divide this into fixed-size blocks.
When a block of data is to be changed, if the change can’t be done by changing ones to zeros, a copy of the entire block should be written to a new location, and a flag set (by writing zeros) on the old block to mark it as obsolete.
When a sector is full of obsolete blocks, we may erase it.
We try to put off doing the erase until such time as the space is needed.
Step 1: making room
The first step is to make room for the flash variables. They will be directly accessible in the same manner as variables in RAM; however, from the application’s point of view, they will be constant. In many microcontroller projects there’ll be several regions of memory, defined by address in the linker script; the layout comes from the datasheet of your MCU.
The MCU here is the STM32F405VG, which has 1MB of flash starting at address 0x08000000. This 1MB is divided into (in order):
Sectors 0…3: 16kB starting at 0x08000000
Sector 4: 64kB starting at 0x08010000
Sector 5 onwards: 128kB each, starting at 0x08020000
We need at least two sectors, as when one fills up, we will swap over to the other. Now it would have been nice if the arrangement were reversed, with the smaller sectors at the end of the device.
The Cortex-M4 CPU is basically hard-wired to boot from address 0; the BOOT pins on the STM32F4 decide how that address gets mapped. The very first thing the CPU reads there is the interrupt vector table, and it MUST be the thing the CPU sees first. Unless told to boot from external memory or system memory, address 0 is aliased to 0x08000000, i.e. flash sector 0. Thus, if you are booting from internal flash, you have no choice: the vector table MUST reside in sector 0.
Normally the code and the interrupt vector table live together as one happy family. We could use a couple of 128kB sectors, but 256kB is rather a lot for just an EEPROM storing maybe 1kB of data tops. Two 16kB sectors are just dandy; in fact, we’ll throw in a third one for free since we’ve got plenty to go around.
However, the first one will have to be reserved for the interrupt vector table, which will have that sector to itself.
/* Specify the memory areas */
MEMORY
{
  /* ISR vectors *must* be placed here as they get mapped to address 0 */
  VECTOR (rx) : ORIGIN = 0x08000000, LENGTH = 16K
  /* Virtual EEPROM area, we use the remaining 16kB blocks for this. */
  EEPROM (rx) : ORIGIN = 0x08004000, LENGTH = 48K
  /* The rest of flash is used for program data */
  FLASH (rx)  : ORIGIN = 0x08010000, LENGTH = 960K
  /* Main RAM area */
  RAM (rwx)   : ORIGIN = 0x20000000, LENGTH = 128K
  /* Core Coupled Memory */
  CCM (rwx)   : ORIGIN = 0x10000000, LENGTH = 64K
}
This is only half the story, we also need to create the section that will be emitted in the ELF binary:
.isr_vector :
{
  . = ALIGN(4);
  KEEP(*(.isr_vector)) /* the vector table itself */
  . = ALIGN(4);
} >FLASH

.text :
{
  . = ALIGN(4);
  *(.text)           /* .text sections (code) */
  *(.text*)          /* .text* sections (code) */
  *(.rodata)         /* .rodata sections (constants, strings, etc.) */
  *(.rodata*)        /* .rodata* sections (constants, strings, etc.) */
  *(.glue_7)         /* glue arm to thumb code */
  *(.glue_7t)        /* glue thumb to arm code */

  . = ALIGN(4);
  _etext = .;        /* define a global symbol at end of code */
  _exit = .;
} >FLASH
There’s rather a lot here, so I haven’t reproduced all of it, but this is the same file as before at revision 2389, just a little further down. You’ll note the .isr_vector is pointed at the region called FLASH, which is most definitely NOT what we want. The image will not boot with the vectors down there. We need to change it to put the vectors in the VECTOR region.
Whilst we’re here, we’ll create a small region for the EEPROM.
THAT’s better! Things will boot now. However, there is still a subtle problem that initially caught me out here. Sure, the shiny new .eeprom section is unpopulated, BUT the linker has helpfully filled it with zeros. We cannot program zeroes back into ones! Either we have to erase it in the program, or we tell the linker to fill it with ones for us. Thankfully, the latter is easy (stm32_flash.ld at 2395):
.eeprom :
{
  . = ALIGN(4);
  KEEP(*(.eeprom))   /* special section for persistent data */
  . = ORIGIN(EEPROM) + LENGTH(EEPROM) - 1;
  BYTE(0xff)         /* force the very last byte (and the fill) to be emitted */
  . = ALIGN(4);
} >EEPROM = 0xff
We have to do two things. One is to tell it that we want the region filled with the pattern 0xff. Two is to make sure it actually gets filled with ones, by telling the linker to write a one as the very last byte. Otherwise, it’ll think, “Huh? There’s nothing here, I won’t bother!” and leave it as a string of zeros.
Step 2: Organising the space
Having made room, we now need to decide how to break this data up. We know the following:
We have 3 sectors, each 16kB
The sectors have an endurance of 10000 program-erase cycles
Give some thought as to what data you’ll be storing. This will decide how big to make the blocks. If you’re storing only tiny bits of data, more blocks makes more sense. If however you’ve got some fairly big lumps of data, you might want bigger blocks to reduce overheads.
I ended up dividing the sectors into 256-byte blocks. I figured that was a nice round (binary sense) figure to work with. At the moment, we have 16 bytes of configuration data, so I can do with a lot less, but I expect this to grow. The blocks will need a header to tell you whether or not the block is being used. Some checksumming is usually not a bad idea either, since that will clue you in to when the sector has worn out prematurely. So some data in each block will be header data for our virtual EEPROM.
If we don’t care about erase cycles, this is fine and we can just make all blocks data blocks. However, it’d be wise to keep track of the erase count and avoid erasing and attempting to use a depleted sector, so we need somewhere to store that count. 256 bytes gives us enough space to stash an erase counter and a map of what blocks are in use within that sector.
So we’ll reserve the first block in the sector to act as this index for the entire sector. This gives us enough room to have 16-bits worth of flags for each block stored in the index. That gives us 63 blocks per sector for data use.
It’d be handy to be able to use this flash region for a few virtual EEPROMs, so we’ll allocate some space for a virtual ROM ID. It is prudent to do some checksumming, and the STM32F4 has a CRC32 module, so in that goes. We might also choose not to use all of a block, so we should throw in a size field (8 bits, since the size can’t be bigger than 255). If we pad this out a bit to give us a byte for reserved data, we get a header with the following structure:
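The original table isn’t reproduced here. As a rough C sketch of that header, with the CRC32, the 8-bit size field, the reserved byte and the 8-byte total coming from the description above, and the one-byte widths of the ROM ID and block index being my assumption:

#include <stdint.h>

/* Per-block header: 8 bytes, leaving 248 bytes of data in a 256-byte block. */
struct block_header {
    uint8_t  rom_id;    /* which virtual ROM this block belongs to */
    uint8_t  block_idx; /* index of this block within that ROM image */
    uint8_t  size;      /* number of valid data bytes in this block */
    uint8_t  reserved;  /* padding / future use */
    uint32_t crc32;     /* CRC32 of the block, computed with this field zeroed */
} __attribute__((packed));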
So that subtracts 8 bytes from the 256 bytes, leaving us 248 for actual program data. If we want to store 320 bytes, we use two blocks, block index 0 stores bytes 0…247 and has a size of 248, and block index 1 stores bytes 248…319 and has a size of 72.
I mentioned there being a sector header; it looks like this:
Program Cycles Remaining
Block 0 flags
Block 1 flags
Block 2 flags
…and so on, with one 16-bit flags entry for each block in the sector.
No checksums here, because it’s constantly changing. We can’t re-write a CRC without erasing the entire sector, we don’t want to do that unless we have to. The flags for each block are currently allocated accordingly:
When the sector is erased, all blocks show up as having all flags set to ones, so the flags are considered “inverted”. When we come to use a block, we mark the “in use” bit with a zero, leaving the rest as ones. When we erase, we mark the block’s entire flags entry as zeros. We can set other bits here as we need for accounting purposes.
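Putting that description into a sketch (the width of the cycle counter and the exact bit assignment are my assumptions; the 16-bit per-block flags and the inverted sense come from the text):

#include <stdint.h>

/* Sector index block: the first 256-byte block of each sector. */
struct sector_header {
    uint32_t cycles_remaining;  /* program-erase cycles left for this sector */
    uint16_t block_flags[63];   /* one 16-bit flags word per application block */
} __attribute__((packed));

/* Flags are "inverted": erased flash reads back as all ones. */
#define BLOCK_FLAG_IN_USE (1 << 0)  /* cleared to 0 when the block is allocated */
/* A flags word of all zeros marks the block as obsolete, awaiting erase. */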
Thus we have now a format for our flash sector header, and for our block headers. We can move onto the algorithm.
Step 3: The Code
This is the implementation of the above ideas. Our code needs to worry about 3 basic operations: reading, writing and erasing.
This is good enough if the size of a ROM image doesn’t change (the normal case). For flexibility, I made my code work crudely like a file: you can seek to any point in the ROM image and start reading or writing, or you can blow the whole thing away.
It is bad taste to leave magic numbers everywhere, so constants should be used to represent some quantities:
The virtual ROM sector size in bytes. (Those watching Codec2 Subversion will note I cocked this one up at first.)
The number of sectors.
The size of a block
The address where the virtual ROM starts in Flash
The base sector number where our ROM starts
Our maximum number of program-erase cycles
Our programming environment may also define some, for example UINTx_MAX.
From the above, we can determine:
VROM_DATA_SZ = VROM_BLOCK_SZ - sizeof(block_header):
The amount of data per block.
VROM_BLOCK_CNT = VROM_SECT_SZ / VROM_BLOCK_SZ:
The number of blocks per sector, including the index block.
VROM_SECT_APP_BLOCK_CNT = VROM_BLOCK_CNT - 1:
The number of application blocks per sector (i.e. total minus the index block).
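Pulling those quantities together gives something like the following. The values all come from the text; VROM_SECT_SZ, VROM_BLOCK_SZ and the three derived macros are named above, while the other macro names are my own for illustration:

#define VROM_SECT_SZ            16384         /* 16kB per flash sector */
#define VROM_SECT_CNT           3             /* three sectors reserved */
#define VROM_BLOCK_SZ           256           /* bytes per block */
#define VROM_START_ADDR         0x08004000    /* start of the EEPROM region */
#define VROM_START_SECT         1             /* first physical flash sector used */
#define VROM_MAX_CYCLES         10000         /* rated program-erase endurance */

/* Derived quantities (block_header as sketched earlier) */
#define VROM_DATA_SZ            (VROM_BLOCK_SZ - sizeof(struct block_header))
#define VROM_BLOCK_CNT          (VROM_SECT_SZ / VROM_BLOCK_SZ)
#define VROM_SECT_APP_BLOCK_CNT (VROM_BLOCK_CNT - 1)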
I decided to use the STM32’s CRC module for this, which takes its data in 32-bit words. There’s also the complexity of checking the contents of a structure that includes its own CRC. I played around with Python’s crcmod module, but couldn’t find an arithmetic trick that would allow the CRC field to remain in place while being checked.
So I copy the entire block, headers and all to a temporary copy (on the stack), set the CRC field to zero in the header, then compute the CRC. Since I need to read it in 32-bit words, I pack 4 bytes into a word, big-endian style. In cases where I have less than 4 bytes, the least-significant bits are left at zero.
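In code, that scheme reads something like the sketch below. hw_crc32_reset(), hw_crc32_feed() and hw_crc32_result() are stand-ins for however the STM32 CRC peripheral is driven, and block_header/VROM_BLOCK_SZ are as sketched earlier; this is an illustration, not the actual Codec2 source:

#include <stdint.h>
#include <string.h>

extern void     hw_crc32_reset(void);
extern void     hw_crc32_feed(uint32_t word);
extern uint32_t hw_crc32_result(void);

static uint32_t block_crc(const void *block)
{
    union {
        struct block_header hdr;
        uint8_t bytes[VROM_BLOCK_SZ];
    } tmp;

    memcpy(&tmp, block, VROM_BLOCK_SZ);  /* temporary copy on the stack */
    tmp.hdr.crc32 = 0;                   /* CRC field zeroed before computing */

    hw_crc32_reset();
    for (int i = 0; i < VROM_BLOCK_SZ; i += 4) {
        /* pack 4 bytes into a 32-bit word, big-endian style */
        uint32_t word = ((uint32_t)tmp.bytes[i]     << 24)
                      | ((uint32_t)tmp.bytes[i + 1] << 16)
                      | ((uint32_t)tmp.bytes[i + 2] << 8)
                      |  (uint32_t)tmp.bytes[i + 3];
        hw_crc32_feed(word);
    }
    return hw_crc32_result();
}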
We identify each block in an image by the ROM ID and the block index. We need to search for these when requested, as they can be located literally anywhere in flash. There are probably cleverer ways to do this, but I chose the brute force method. We cycle through each sector and block, see if the block is allocated (in the index), see if the checksum is correct, see if it belongs to the ROM we’re looking for, then look and see if it’s the right index.
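The brute-force search might look like this sketch; block_at() and block_in_use() are assumed helpers (one returning a pointer into flash, the other consulting the sector’s index block), not the real API:

extern const struct block_header *block_at(int sector, int block);
extern int block_in_use(int sector, int block);

static const struct block_header *find_block(uint8_t rom, uint8_t idx)
{
    for (int s = 0; s < VROM_SECT_CNT; s++) {
        for (int b = 1; b < VROM_BLOCK_CNT; b++) {  /* block 0 is the index */
            const struct block_header *blk = block_at(s, b);
            if (!block_in_use(s, b))                /* allocated? */
                continue;
            if (block_crc(blk) != blk->crc32)       /* damaged or stale? */
                continue;
            if (blk->rom_id == rom && blk->block_idx == idx)
                return blk;                         /* the block we want */
        }
    }
    return NULL;                                    /* not found anywhere */
}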
To read from the above scheme, having been told a ROM ID (rom), a start offset and a size, the latter two being in bytes, and given a buffer we’ll call out, we first need to translate the start offset to a sector, block index and block offset. This is simple integer division and modulus.
The first and last blocks of our read, we’ll probably only read part of. The rest, we’ll read entire blocks in. The block offset is only relevant for this first block.
So we start at the block we calculate to have the start of our data range. If we can’t find it, or it’s too small, then we stop there, otherwise, we proceed to read out the data. Until we run out of data to read, we increment the block index, try to locate the block, and if found, copy its data out.
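A sketch of that read path, building on find_block() above (again an illustration of the approach, not the Codec2 code):

#include <string.h>

static int vrom_read(uint8_t rom, uint32_t offset, void *out, uint32_t len)
{
    uint8_t *dst    = out;
    uint16_t idx    = offset / VROM_DATA_SZ;  /* first block touched */
    uint16_t boff   = offset % VROM_DATA_SZ;  /* offset into that block's data */
    uint32_t copied = 0;

    while (len) {
        const struct block_header *blk = find_block(rom, idx);
        if (!blk || blk->size <= boff)
            break;                            /* ran off the end of the image */

        uint32_t chunk = blk->size - boff;
        if (chunk > len)
            chunk = len;

        memcpy(dst, (const uint8_t *)blk + sizeof(*blk) + boff, chunk);
        dst += chunk;
        copied += chunk;
        len -= chunk;

        boff = 0;                             /* only the first block is partial */
        idx++;
    }
    return (int)copied;
}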
Writing and Erasing
Writing is a similar affair. We look for each block, if we find one, we overwrite it by copying the old data to a temporary buffer, copy our new data in over the top then mark the old block as obsolete before writing the new one out with a new checksum.
The trickery is in invoking the wear-levelling algorithm on an as-needed basis. We mark a block obsolete by setting its header fields to zero, but when we run out of free blocks, we go looking for sectors that are full of obsolete blocks waiting to be erased. When we encounter a sector that has been erased, we write a new header at the start and proceed to use its first data block.
In the case of erasing, we don’t bother writing anything out, we just mark the blocks as obsolete.
The full C code is in the Codec2 Subversion repository. For those who prefer Git, I have a git-svn mirror (yes, I really should move it off that domain). The code is available under the Lesser GNU General Public License v2.1 and may be ported to run on any CPU you like, not just ST’s.
I’ve been running a station from the bicycle for some time now and I suppose I’ve tried a few different battery types on the station.
Originally I ran 9Ah 12V gel cells, which work fine for about 6 months, then the load of the radio gets a bit much and I find myself taking two with me on a journey to work because one no longer lasts the day. I replaced this with a 40Ah Thundersky LiFePO4 pack which I bought from EVWorks, which while good, weighed 8kg! This is a lot lighter than an equivalent lead acid, gel cell or AGM battery, but it’s still a hefty load for a bicycle.
At the time that was the smallest I could get. Eventually I found a mob that sold 10Ah packs. These particular cells were made by LiFeBatt, and while pricey, I’ve pretty much recouped my costs. (I’d have bought and disposed of about 16 gel cell batteries in this time at $50 each, versus $400 for one of these.) These are what I’ve been running now since about mid 2011, and they’ve been pretty good for my needs. They handle the load of the FT-857 okay on 2m FM which is what I use most of the time.
A week or two back though, I was using one of these packs outside with the home base in a “portable” set-up with my FT-897D. Tuned up on the 40m WICEN net on 7075kHz, a few stations reported that I had scratchy audio. Odd, the radio was known to be good, I’ve operated from the back deck before and not had problems, what changed?
The one and only thing different was that I was using one of these 10Ah packs. (I’ve had fun with RF problems on the bicycle too.) On transmit, the battery was hovering around the 10.2V mark, perhaps a bit low. Could it be the radio was distorting on voice peaks due to input current starvation? After the net I tried swapping it for my 40Ah pack, which improved things. Not totally cleared up, but it was better, and that pack hadn’t been charged in a while, so it was probably a little low too.
I thought about the problem for a bit. SSB requires full power on voice peaks. For a 100W radio, that’s roughly a 20A load demanded right at that instant. Batteries don’t like this. Perhaps there was a bit of internal resistance from age and the nature of the cells? Could I do something to give it a little hand?
Supercapacitors are basically very high capacity electrolytic capacitors with a low breakdown voltage, normally in the order of a few volts and capacitances of over a farad. They are good for temporarily storing charge that needs to be dumped into a load in a hurry. Could this help?
My cells are in a series bank of 4: ~3.3V/cell with 4 cells gives me 13.2V. There’s a battery balancer already present. If a cell gets above 4V, that cell is toast, so the balancer is there to try to prevent that from happening. I could buy these 1F 5.5V capacitors for only a few dollars each, so I thought, “what the hell, give it a try”. I don’t have much information on them other than that Elna Japan made them. The plan was to make some capacitor “modules” that would hook in parallel to each cell.
My 13.2V battery pack, out of its case
For my modules, the construction was simple, two reasonably heavy gauge wires tacked onto the terminals, the whole capacitor then encased in heatshrink tubing and ring lugs crimped to the leads. I was wondering whether I should solder a resistor and diode in parallel and put that in series with the supercap to prevent high in-rush current, but so far that hasn’t been necessary.
The re-assembled pack
I’ve put the pack back together and so far, it has charged up and is ready to face its first post-retrofit challenge. I guess I’ll be trying out the HF station tomorrow to see how it goes.
While it’s not a complete solution to the RF feedback, it does seem to help in other ways. I did a quick test in the driveway, first with the standard Yaesu hand mic and then with the headset. The headset still faces interference problems on HF, but I can wind the power up to about 30W~40W now instead of 20.
More pondering to come but we’ll see what the other impacts are.
I’ve been riding on the road now for some years, and while I normally try to avoid it, I do sometimes find myself riding on the road itself rather than on the footpath or bicycle path.
Most of the time, the traffic is fine. I’m mindful of where everyone is, and there aren’t any problems, but I have had a couple of close calls from time to time. Close calls that have me saying “ode for a horn”.
By law we’re required to have a bell on our bikes. No problem there, I have a mechanical one which is there purely for legal purposes. If I get pulled over by police, and they ask, I can point it out and demonstrate it. Requirement met? Tick to that.
It’s of minimal use with pedestrians, and utterly useless in traffic.
Early on with my riding I developed a lighting system which included indicators. Initially this was silent, as I figured I’d see the lights flashing, but after a few occasions of forgetting to turn the indicators off, I fitted a piezo buzzer. This was an idea inspired by the motorcycles ridden by Australia Post contractors, which have a very audible buzzer. Jaycar sell an 85dB buzzer that’s waterproof, overkill in the audio department but fit for purpose. It lets me know I have indicators on and alerts people to my presence.
That is, if they equate the loud beep to a bicycle. Some do not. And of course, it’s still utterly useless on the road.
I figured a louder alert system was in order. Something that I could adjust the volume on, but loud enough to give a pedestrian a good 30 seconds warning. That way they’ve got plenty of time to take evasive action while I also start reducing speed. It’s not that I’m impatient, I’ll happily give way, but I don’t want to surprise people either. Drivers on the other hand, if they do something stupid it’d be nice to let them know you’re there!
My workplace looks after a number of defence bases in South-East Queensland, one of which has a railway crossing for driver training. This particular boom gate assembly copped a whack from a lightning strike, which damaged several items of equipment, including the electronic “bells” on the boom gate itself. These “bells” consisted of a horn speaker with a small potted PCB mounted to the back, containing an amplifier and bell sound generator. Apply +12V and the units would make a very loud dinging noise. That’s the theory; in practice, all that happened was a TO-220 transistor got hot. Either the board or the speaker (or both) was faulty.
It was decided these were a write-off, and after disassembly I soon discovered why: the voice coils in the horn speakers had been burnt out. A little investigation, and I figured I could replace the blown out compression drivers and get the speakers themselves working again, building my own horn.
A concept formed: the horn would have two modes, a “bell” mode with a sound similar to a bicycle bell, and a “horn” mode for use in traffic. I’d build the circuit in parts, the first being the power amplifier then interface to it the sound effect generator.
To make life easier testing, I also decided to add a line-in/microphone-in feature which would serve to debug construction issues in the power amplifier and add a megaphone function. (Who knows, might be handy with WICEN events.)
Replacing the compression drivers
Obviously it’d be ideal to replace it with the correct part, but looking around, I couldn’t see anything that would fit the housing. That, and what I did see, was more expensive than buying a whole new horn speaker.
There was a small aperture in the back, about 40mm in diameter. The original drivers were 8 ohms, probably rated at 30W, and had a convex diaphragm which matched the concave geometry in the back of the horn assembly.
Looking around, I saw these 2W mylar cone speakers. Not as good as a compression driver, but maybe good enough? It was cheap enough to experiment. I bought two to try it out.
I got them home, tacked some wires onto one of them and plugged it into a radio. On its own, not very loud, but when I held it against the back of a horn assembly, the amplification was quite apparent. Good enough to go further. I did some experiments with how to mount the speakers to the assembly, which required some modifications to be made.
I soon settled on mounting the assembly to an aluminium case with some tapped holes for clamping the speaker in place. There was ample room for a small amplifier which would be housed inside the new case, which would also serve as a means of mounting the whole lot to the bike.
I wasn’t sure what to use for this, I had two options: build an analogue circuit to make the effect, or program a microcontroller. I did some experiments with an ATMega8L, did manage to get some sound out of it using the PWM output, but 8kB of flash just wasn’t enough for decent audio.
A Freetronics LeoStick proved to be the ticket. 32kB flash, USB device support, small form factor, what’s not to like? I ignored the Arduino-compatible aspect and programmed the device directly. Behind the novice-friendly pin names, they’re an ATMega32U4 with a 16MHz crystal. I knocked up a quick prototype that just played a sound repeatedly. It sounded a bit like a crowbar being dropped, but who cares, it was sufficient.
Experimenting with low-pass filters I soon discovered that a buffer-amp would be needed, as any significant load on the filter would render it useless.
A 2W power amplifier
Initially I was thinking along the lines of an LM386, but after reading the datasheet I soon learned that this would not cut it. They are okay for 500mW, but not 2W. I didn’t have any transistors on hand that would do it and still fit in the case, then I stumbled on the TDA1905. These ICs are actually capable of 5W into 4 ohms if you feed them with a 14V supply. With 9V they produce 2.5W, which is about what I’m after.
I bought a couple then set to work with the breadboard. A little tinkering and I soon had one of the horn speakers working with this new amplifier. Plugged into my laptop, I found the audio output to be quite acceptable, in fact turned up half-way, it was uncomfortable to sit in front of.
I re-built the circuit to try and make use of the muting feature. For whatever reason, I couldn’t get this to work, but the alternate circuit provided a volume control which was useful in itself.
For the line-level audio, there’s no need for anything more fancy than a couple of resistors to act as a passive summation of the left and right channels, however for a microphone and for the LeoStick, I’d need a preamp. I grabbed a LM358 and plugged that into my breadboard alongside the TDA1905.
Before long, I had a microphone preamp working using one half of the LM358, based on a circuit I found. I experimented with some resistor values and found I got reasonable amplification if I upped some of them to dial the gain back a little. Otherwise I got feedback.
For the LeoStick, it already puts out 5V TTL, so a unity-gain voltage follower was all that was needed. The second half of the LM358 provided this. A passive summation network consisting of two resistors and DC-blocking capacitor allowed me to combine these outputs for the TDA1905.
One thing I found necessary, the TDA1905 and LM358 misbehave badly unless there’s a decent size capacitor on the 9V rail. I found a 330uF electrolytic helped in addition to the datasheet-prescribed 100nF ceramics.
Since I’m running on batteries with no means of generating power, it’s important that the circuit does not draw power when idle. Ideally, the circuit should power on when I:
plug the USB cable in (for firmware update/USB audio)
toggle the external source switch
press the bell button
We also need two power rails: a 9V one for the analogue electronics, and a 5V one for the LeoStick. A LM7809 and LM7805 proved to be the easiest way to achieve this.
To allow software control of the power, a IRF9540N MOSFET was connected to the 12V input and supplies the LM7809. The gate pin is connected to a wired-OR bus. The bell button and external source switch connect to this bus with signal diodes that pull down on the gate.
Two BC547s also have collectors wired up to this bus, one driven from the USB +5V supply, and the other from a pin on the LeoStick. Pressing the Bell button would power the entire circuit up, at which point the LeoStick would assert its power on signal (turning on one of the BC547s) then sample the state of the bell button and start playing sound. When it detects the button has been released, it finishes its playback and turns itself off by releasing the power on signal.
Sound effect generator
Earlier I had prototyped a bell generator, however it wasn’t much use as it just repeatedly made a bell noise regardless of the inputs. To add insult to injury, I had lost the source code I used. I had a closer look at the MCU datasheet, deciding to start from a clean slate.
The LeoStick provides its audio on pin D11, which is wired up to Port B pin 7. Within the chip, two possible timers hook up to that pin: Timer 0, which is an 8-bit timer, and Timer 1, which is 16 bits. Both are fed from the 16MHz system clock. The bit depth affects the PWM carrier frequency we can generate: the higher the resolution, the slower the PWM runs. You want the PWM frequency as high as possible, ideally well above 20kHz so that it’s not audible in the audio output, and obviously well above the audio sampling rate.
At 16MHz, a 16-bit timer would barely exceed 240Hz, which is utterly useless for audio. A 10-bit timer fares better at about 15kHz; older people may not hear it, but I certainly can hear 15kHz. That leaves us with 8 bits, which gets the carrier up to 62kHz. There’s no point in using Timer 1 if we’re only going to be using 8 bits of it, so we might as well use Timer 0.
Some of you familiar with this chip may know of Timer 4, which is a high-speed 10-bit timer fed by a separate 64MHz PLL. It’s possible to do better quality audio from here, either running at 10-bits with a 62kHz carrier, or dropping to 8-bits and ramping the frequency to 250kHz. Obviously it’d have been nice, but I had already wired things up by this stage, so it was too late to choose another pin.
Producing the output voltage is only half the equation though: once started, the PWM pin will just output a steady stream of pulses, which when low-passed, produces a DC offset. In order to play sound, we need to continually update the timer’s Capture Compare register with each new sample at a steady rate.
The most accurate way to do this is to use another timer. Timer 3 is another 16-bit timer unit, with just one capture compare output available (on Port C pin 6). It is an ideal candidate for a timer that has no external influence, so it gets the job of updating the PWM capture compare value with new samples.
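A minimal sketch of that two-timer arrangement on an ATMega32U4 at 16MHz is below: Timer 0 produces the 62.5kHz carrier on OC0A (PB7, the LeoStick’s D11) and Timer 3 fires at the sample rate and loads the next sample. It illustrates the technique only; the names and the looping behaviour here are mine, not the actual firmware:

#ifndef F_CPU
#define F_CPU 16000000UL
#endif

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>

#define SAMPLE_RATE 6400UL                    /* Hz */

extern const uint8_t  sound[] PROGMEM;        /* 8-bit unsigned samples */
extern const uint16_t sound_len;
static volatile uint16_t pos;

void audio_start(void)
{
    DDRB   |= _BV(PB7);                               /* OC0A as an output */
    TCCR0A  = _BV(COM0A1) | _BV(WGM01) | _BV(WGM00);  /* fast PWM, clear OC0A on match */
    TCCR0B  = _BV(CS00);                              /* no prescaler: ~62.5kHz carrier */

    TCCR3A  = 0;
    TCCR3B  = _BV(WGM32) | _BV(CS30);                 /* CTC mode, no prescaler */
    OCR3A   = (F_CPU / SAMPLE_RATE) - 1;              /* one interrupt per sample */
    TIMSK3 |= _BV(OCIE3A);
    sei();
}

ISR(TIMER3_COMPA_vect)
{
    OCR0A = pgm_read_byte(&sound[pos]);               /* next duty cycle = next sample */
    if (++pos >= sound_len)
        pos = 0;                                      /* loop the recording */
}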
Timer 1 is connected to pins that drive two of the three LEDs on the LeoStick, with Timer 4 driving the remaining one, so if I wanted, I could have LEDs fade in and out with it instead of just blinking. However, my needs are basic, and I need something to debounce switches and visibly blink LEDs. So I use that with a nice long period to give me a 10Hz timer.
Here is the source code. I’ll add schematics and other notes to it with time, but the particular bits of interest for those wanting to incorporate PWM-generated sound in their AVR projects are the interrupt routine and the sound control functions.
To permit gapless playback, I define two buffers which I alternate between, so while one is being played back, the other can be filled up with samples. I define these on line 139 with the functions starting at line 190. The interrupt routine that orchestrates the playback is at line 469.
When sound is to be played, the first thing that needs to happen is for the initial buffer to be loaded with samples using the write_audio function. This can either read from a separate buffer in RAM (e.g. from USB) or from program memory. One of the options permits looping of audio. Having loaded some initial data in, we can then call start_audio to set up the two timers and get audio playback rolling. start_audio needs the sample rate to configure the sample rate timer, and can accept any sample rate that is a factor of 16MHz (so 8kHz, 16kHz up to 32kHz).
The audio in this application is statically compiled in, taking the form of an array of uint8_t‘s in PROGMEM.
Creating the sounds
I initially had a look around to see if I could get a suitable sound effect. This proved futile; I was ideally looking for a simple, openly-licensed audio file. Lots of places offered something, but then wanted you to sign up or pay money. Fine, I can understand the need to make a quid, and if I were doing this a lot, I’d pay up, but this is a once-off.
Eventually, I found some recordings which were sort of what I was after, but not quite. So I downloaded these then fired up Audacity to have a closer look.
The bicycle bell
Bicycle bells have a very distinctive sound to them, and are surprisingly complicated. I initially tried to model it as an exponentially decaying sinusoid of different frequencies, but nothing sounded quite right.
The recording I had told me that the fundamental frequency was just over 2kHz. Moreover, the envelope was amplitude-modulated by a second sinusoid, this one at about 15Hz. As soon as I plugged this second term in, things sounded better. This script was the end result. The resulting bell sounds like this:
So somewhat bell-like. To reduce the space, I use a sample rate of 6.4kHz. I did try a 4kHz sample rate but was momentarily miffed at the result until I realised what was going on: at 4kHz the bell’s fundamental sits above the Nyquist frequency, and 6.4kHz is the minimum practical rate that reproduces the audio.
I used Audacity to pick a point in the waveform for looping purposes, to make it sound like a bell being repeatedly struck.
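For reference, here is a rough reconstruction in C of the model described above: a carrier just over 2kHz with an exponential decay, amplitude-modulated at about 15Hz, rendered as 8-bit unsigned samples at 6.4kHz. The decay constant and modulation depth are guesses, and this is not the original script (which was Python). Compiled with the maths library (cc bell.c -lm), it prints an initialiser list that could be pasted into a PROGMEM array:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS      6400.0   /* sample rate, Hz */
#define F_BELL  2100.0   /* fundamental, "just over 2kHz" */
#define F_MOD   15.0     /* envelope modulation frequency, Hz */
#define DECAY   3.0      /* assumed decay constant, 1/s */
#define SECONDS 1.0      /* length of the generated strike */

int main(void)
{
    int n_samples = (int)(FS * SECONDS);
    for (int n = 0; n < n_samples; n++) {
        double t   = n / FS;
        double env = exp(-DECAY * t) * (0.7 + 0.3 * sin(2.0 * M_PI * F_MOD * t));
        double s   = env * sin(2.0 * M_PI * F_BELL * t);
        uint8_t u8 = (uint8_t)lrint(127.5 + 127.0 * s);   /* 8-bit unsigned */
        printf("%s0x%02x,", (n % 12) ? " " : "\n    ", u8);
    }
    printf("\n");
    return 0;
}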
I wanted something that sounded a little gutsy, like an air-horn on a truck. Once again, I hit the web and found a recording of a train horn. Close enough, but not long enough, and a bit noisy. However, opening it up in Audacity and doing a spectrum analysis, I saw there were about 5 tones involved. I plugged these straight into a Python script and decided to generate those directly. Using a raised cosine filter to shape the envelope at the start and end, I soon had my horn effect. This script generates the horn. The audio sounds like this:
Using other sound files
If you really wanted, you could use your own sound recordings. Just keep in mind the constraints of the ATMega32U4, namely that 32kB of flash has to hold both code and recordings. An ATMega64 would do better. The audio should be mono, 8-bit and unsigned, with as low a sample rate as you can get away with. (6.4kHz proved to be sufficient for my needs.)
Your easiest bet would be to either figure out how to read WAV files (in Python: wave module), or alternatively, convert to raw headerless audio files, then code up a script that reads the file one byte at a time. The Python scripts I’ve provided might be a useful starting point for generating the C files.
Alternatively, you can try interfacing an SD card and embedding a filesystem driver and audio file parser (not sure about WAVE, but Sun Audio is easily parsed); this is left as an exercise for the adventurous.
I’ll put schematics and pictures up soonish. I’m yet to try mounting the whole set up, but so far the amplifier is performing fine on the bench.
Earlier this week I had an idea. We’ve got an old clock radio that picks up interference from the fridge when it turns on and the buttons on it are starting to fail with age.
I thought: “Why not build a new one?”
So the requirements are simple. We need a real-time clock, display driver, and of course, a receiver. The unit we have spends most of its time tuned to 792kHz AM (4QG or “ABC Radio National”), so a simple direct conversion receiver was what I was thinking of. But what about the LO?
Now I do have some clock radio ICs that implement the timing circuitry, alarm function and LED panel driver somewhere in a junk box. You feed them with the 50Hz or 60Hz waveform that comes out of the transformer and they use that as the timing source. It would be easy enough to use a 555 timer as the time source, and I’d make a traditional receiver. Another option is to use an AVR microcontroller: I have a few ATMega8Ls in the junk box, along with an NXP I2C RTC chip which I also have a few of.
The ATMega8L has a couple of PWM channels, one 16-bit and one 8-bit: could they be used as an LO?
So: after digging around and locating my bought-years-ago and not-yet-used AVR programmer, and dusting off a breadboard that had an ATMega8L on it from a previous experiment, I set to work.
This page explains in good detail how the PWM channels work. I started with those examples as a guide and tweaked from there.
For the PWM channel to work as a receiver LO, I want it to cover 540kHz to ~2MHz, with reasonable granularity. Question is, how far can I crank this? I have a 4MHz crystal, not the fastest I can use with this chip, but the absolute top of the range for the ATMegas isn’t much higher: 16MHz or maybe 20MHz. So if you’ve got a 16MHz crystal, you can expect to quadruple what I do here.
I started off with some blink code. If you take out all the delays, you get the following code:
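The original listing isn’t reproduced here, but the idea, the blink example with every delay stripped out so the pin is toggled purely in software, would look something like this (the pin choice is arbitrary):

#include <avr/io.h>

int main(void)
{
    DDRB |= _BV(PB1);        /* make the pin an output */
    for (;;)
        PORTB ^= _BV(PB1);   /* toggle as fast as the software loop allows */
    return 0;
}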
The yellow waveform there is off one of the crystal pins. The cyan one is the PWM pin output, which in this case is a software driven GPIO. Even if this one worked, you wouldn’t want to do it this way unless your chip was doing only this task, and who’d use a programmable chip like an ATMega8L for that?
So, after reading through the documentation and examples, I loaded in the following code:
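That listing isn’t preserved here either. As a sketch of the sort of thing involved, here is Timer 1 on an ATMega8 set up in CTC mode to toggle OC1A (PB1) on each compare match, which gives f = F_CPU / (2 * (OCR1A + 1)); with the 4MHz clock, OCR1A values of 0, 1, 2 and 3 give 2MHz, 1MHz, 667kHz and 500kHz, which already hints at the granularity problem across the broadcast band:

#include <avr/io.h>

int main(void)
{
    DDRB  |= _BV(PB1);                /* OC1A as an output */
    TCCR1A = _BV(COM1A0);             /* toggle OC1A on compare match */
    TCCR1B = _BV(WGM12) | _BV(CS10);  /* CTC mode, no prescaler */
    OCR1A  = 1;                       /* 4MHz / (2 * (1 + 1)) = 1MHz out */
    for (;;)
        ;                             /* the timer free-runs from here */
    return 0;
}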
'Were I called on to define, very briefly, the term Art, I should call it 'the reproduction of what the Senses perceive in Nature through the veil of the soul.' The mere imitation, however accurate, of what is in Nature, entitles no man to the sacred name of 'Artist.'
Edgar Allan Poe
Dubuffet was born in Le Havre. He moved to Paris in 1918 to study painting at the Académie Julian, but after six months he left the Académie to study independently. In 1924, doubting the value of art, he stopped painting and took over his father's business of selling wine. He took up painting again in the 1930s, when he made a large series of portraits in which he emphasized the vogues in art history. Although he stopped once more, he turned to art for good in 1942, when he started to paint figures of nude women in an impersonal and primitive way, in strong colours. Another subject he chose was people in everyday life, such as people sitting in the underground, or just walking in the countryside. His first solo show came in 1944.
In 1945 he became strongly impressed by a show in Paris of Jean Fautrier's paintings, in which he recognized meaningful art which expressed directly and purely the depth of a person. Jean Fautrier (May 16, 1898 – July 21, 1964) was a French painter and sculptor, and one of the most important practitioners of Tachisme. Just like Fautrier, Dubuffet started to use thick oil paint, but mixed with sand and gravel, by which he could model the paint as a skin of the painting. This resulted in the series 'Hautes Pâtes'.
Influenced by Hans Prinzhorn's book Artistry of the Mentally Ill, Dubuffet coined the term Art Brut (meaning "raw art," oftentimes referred to as ‘outsider art’) for art produced by non-professionals working outside aesthetic norms, such as art by psychiatric patients, prisoners, and children. He sought to create an art as free from intellectual concerns as Art Brut, and his work often appears primitive and child-like.
From 1962 he produced a series of works in which he limited himself to the colours red, white, black, and blue. Towards the end of the 1960s he turned increasingly to sculpture, producing works in polystyrene which he then painted with vinyl paint.
In late 1960-1961, Dubuffet began experimenting with music and sound and made several recordings with the Danish painter Asger Jorn, a founding member of the avant-garde movement COBRA. In the same period he started making sculpture, but in a very un-sculptural way. As his medium he preferred to use ordinary materials such as papier mâché and, above all, the light medium polystyrene, in which he could model very fast and switch easily from one work to another, as with sketches on paper. At the end of the sixties he started to create his large sculpture-habitations, such as 'Tour aux figures', 'Jardin d'Hiver' and 'Villa Falbala', in which people can wander, stay, contemplate and so on. In 1969 he became acquainted with the French Outsider Art artist Jacques Soisson.
In 1973 Jean Dubuffet created a piece of art by the name ‘Jardin d’Émail’, an example of an environment. Environment art can be seen as a movement of conceptual art.
In 1978 Dubuffet collaborated with American composer and musician Jasun Martz to create the record album artwork for Martz’s avant-garde symphony entitled The Pillory.
One of Dubuffet's later works was Monument With Standing Beast (1984). Dubuffet died in Paris in 1985.
In early 2012 the Pace Gallery mounted an exhibition of work from the last two years of Dubuffet's life. Writing to Arne Glimcher, in a letter reproduced in the catalog for the exhibition, Dubuffet explained his conception for the paintings as “intended to challenge the objective nature of being. The notion of being is presented here as relative rather than irrefutable: it is merely a projection of our minds, a whim of our thinking. The mind has the right to establish being wherever it cares to and for as long as it likes. There is no intrinsic difference between being and fantasy.”
Artists you may think of are Pierre Alechinsky, Karel Appel, Alberto Burri, Jean Fautrier, Lucio Fontana, Sam Francis, Asger Jorn, Jackson Pollock and Zao Wou Ki. And others, of course…
© pictures: wagenvoorde
Lucio Fontana (19 February 1899 – 7 September 1968) was an Italian painter, sculptor and theorist of Argentine birth. He was mostly known as the founder of Spatialism and for his ties to Arte Povera.
Fontana spent the first years of his life in Italy and came back to Argentina in 1905, where he stayed until 1922, working as a sculptor along with his father, and later on his own. As early as 1926 he participated in an exhibition.
In 1927 Fontana returned to Italy and studied under the sculptor Adolfo Wildt at the Accademia di Brera from 1928 to 1930. (Coming from a strong late nineteenth-century Romantic background, Wildt dedicated himself to sculpture strongly influenced by the Secession and by Art Nouveau, characterized by complex symbolism and almost Gothic in its forms.)
It was there that Fontana presented his first exhibition in 1930. During the following decade he travelled through Italy and France, working with abstract and expressionist painters. In 1935 he joined the association Abstraction-Création in Paris and from 1936 to 1949 made expressionist sculptures in ceramic and bronze. In 1939, he joined the Corrente, a Milan group of expressionist artists.
In 1940 he returned to Argentina. In Buenos Aires (1946) he founded the Altamira academy together with some of his students, and the White Manifesto was published, in which it is stated that "Matter, colour and sound in motion are the phenomena whose simultaneous development makes up the new art". In the text, which Fontana did not sign but to which he actively contributed, he began to formulate the theories that he was to expand as Spazialismo, or Spatialism, in five manifestos from 1947 to 1952.
Following his return to Italy in 1948 Fontana exhibited his first Ambiente spaziale a luce nera (Spatial Environment) (1949) at the Galleria del Naviglio in Milan, a temporary installation consisting of a giant amoeba-like shape suspended in the void in a darkened room and lit by neon light. From 1949 on he started the so-called Spatial Concept or slash series, consisting in holes or slashes on the surface of monochrome paintings, drawing a sign of what he named "an art for the Space Age".
Fontana often lined the reverse of his canvases with black gauze so that the darkness would shimmer behind the open cuts and create a mysterious sense of illusion and depth. He then created an elaborate neon ceiling called "Luce spaziale" in 1951 for the Triennale in Milan. In his important series of Concetto spaziale, La Fine di Dio (1963–64), Fontana used the egg shape.
The End of God (1963)
A pink, egg-shaped canvas is peppered with numerous clusters of puncture marks, revealing the blank space behind. This is one of the series of works in which Fontana made single cuts or holes in the canvas, and represents the artist’s ultimate gestural act. The puncture marks and gashes were made to draw attention to the infinite space behind the canvas, making the void beyond the flat surface as much a part of the work as the canvas itself.
This rejection of the traditional space of the canvas, and the movement of the work into the real space of the gallery is typical of Spazialismo.
Fontana engaged in many collaborative projects with the most important architects of the day, in particular with Luciano Baldessari, who shared and supported his research for Spatial Light – Structure in Neon (1951) at the 9th Triennale and, among other things, commissioned him to design the ceiling of the cinema in the Sidercomit Pavilion at the 21st Milan Fair in 1953.
Around 1960, Fontana began to reinvent the cuts and punctures that had characterized his highly personal style up to that point, covering canvases with layers of thick oil paint applied by hand and brush and using a scalpel or Stanley knife to create great fissures in their surface. In 1961, following an invitation to participate along with artists Jean Dubuffet, Mark Rothko, Sam Francis, and others in an exhibition of contemporary painting entitled "Art and Contemplation", held at Palazzo Grassi in Venice, he created a series of 22 works dedicated to the lagoon city. He manipulated the paint with his fingers and various instruments to make furrows, sometimes including scattered fragments of Murano glass.
In the last years of his career, Fontana became increasingly interested in the staging of his work in the many exhibitions that honored him worldwide, as well as in the idea of purity achieved in his last white canvases. These concerns were prominent at the 1966 Venice Biennale, for which he designed the environment for his work. At Documenta IV in Kassel in 1968, he positioned a large, revelatory slash as the centre of a totally white room (Ambiente spaziale bianco).
Shortly before his death he was present at the "Destruction Art, Destroy to Create" demonstration at the Finch College Museum in New York. He then left his home in Milan and went to Comabbio (in the province of Varese, Italy), his family's home town, where he died in 1968.
Fontana had his first solo exhibition at Galleria del Milione, Milan, in 1931. In 1961, Michel Tapié organized his first show in the U.S., an exhibition of the Venice series, at the Martha Jackson Gallery, New York. He participated in numerous exhibitions around the world.
Bruce Nauman, born in 1941 in Fort Wayne, Indiana, has been recognized since the early 1970s as one of the most innovative and provocative of America's contemporary artists. After leaving school, Nauman had the simple realization that, if he was an artist and he was in the studio, then whatever he was doing in the studio must be art.
I met with the art of Bruce Nauman at the Benesse House Museum in Japan. I saw the installation '100 Live and Die', a neon billboard of flashing phrases, and was intrigued. I went back in the evening, when there was no one around and everything was dark, except this installation. Guests of the hotel are permitted to wander beyond closing time. It was fascinating: this dark, concrete gallery, and 'CRY AND LIVE', and/or 'THINK AND DIE', it read, in large, glowing letters.
A remarkable place to be, on the tiny island of Naoshima in the Seto Inland Sea of southern Japan. A place where art and nature go hand in hand.
The works shown here are of an exhibition in the Bonnefantenmuseum in Maastricht, Holland.
Bill Viola, Flushing, New York, 1951
Catherine’s Room 2001
Color video on five LCD flat panels
Originally from New York, Viola has travelled widely. He studied Zen meditation and advanced video technology during a period of 18 months in Japan, before moving to southern California at the beginning of the 1980s. His experience of Eastern philosophy has informed his artistic investigation into the relationship between an individual's inner life and the experience of his body. In his work with experimental sound and video he therefore aims to create art which operates as a complete 'experience'.
The video artist Bill Viola takes inspiration from painterly traditions. That influence can be discerned both in his choice of subject matter and in his manner of portrayal. The five LCD screens that make up Catherine's Room remind one of a traditional polyptych, which has been used as an altarpiece since the late Middle Ages. The title of this work could refer to St. Catherine, who led a life of spirituality and asceticism. Despite associations with ancient paintings, the woman herself has a thoroughly contemporary appearance.
The Tate Gallery in London has a work of his, titled 'Nantes Triptych' (1992).
© pictures: wagenvoorde
Richard Serra (born November 2, 1939) is an American minimalist sculptor and video artist known for working with large-scale assemblies of sheet metal. Serra was involved in the Process Art movement.
Richard Serra was born in San Francisco. His father, Tony, was a Spanish native of Mallorca, and his mother, Gladys, was a Russian from Odessa. Serra studied English literature at the University of California, Berkeley and later at the University of California, Santa Barbara between 1957 and 1961. While at Santa Barbara, he studied art with Howard Warshaw and Rico Lebrun. On the West Coast, he supported himself by working in steel mills, which turned out to have a strong influence on his later work.
Serra started living in New York in the 1960s, and there his circle of friends included Carl Andre, Walter De Maria, Eva Hesse, Sol LeWitt, and Robert Smithson.
Information about his life and work can be found here: http://en.wikipedia.org/wiki/Richard_Serra
Serra's drawings are not sketches for his sculptures, but autonomous works of art. Although he has been drawing since 1972, his first solo exhibition only came in 1974, in New York. The pictures below are taken at an exhibition in the Bonnefantenmuseum in Holland, in 2011.
‘There is no way to make a drawing – there is only drawing.’
‘I am aware that people call my black drawing installations sculptural. Not only are these drawings flat and flush with the wall, but they do not create any illusion of three-dimensionality. They do, however, involve the viewer with the specific three-dimensionality of the site of the installation.’
To use black is the clearest way of marking against a white field, no matter whether you use lead or charcoal or paint stick. It is also the clearest way of marking without creating associative…
Richard Serra Drawings Zeichnungen 1969-1990, Notes on Drawing p.11, Bentelli AG, Bern, 1990
Site-specific art is artwork created to exist in a certain place. Typically, the artist takes the location into account while planning and creating the artwork.
With two 200-metre long walls and the grass and water in between, Sea Level is Serra's largest work in Europe. If you walk along the wall, the artwork gives you the feeling of being submerged under water while slowly floating back to the surface a little further on.
Depending on the weather conditions, the massive walls will undergo a transformation. On a sunny day the blue sky will be reflected by the shiny silvery wall, whereas on heavily clouded days the wall will take on a dark grey colour and look impenetrable.
Sea Level by Richard Serra, exhibited in Zeewolde, Holland.
Photos © wagenvoorde
Ai Weiwei (1957)
Since his emergence as an artist in the late 1970s, Ai Weiwei has been a prime mover in the Beijing art scene, combining his roles as artist, architect, and instigator to create new institutions as well as new art forms.
When Ai Weiwei returned to China, in 1993, he soon became a central figure in Beijing's East Village.
Hanging Man In Porcelain 2011
Ai Weiwei's first one-man show, in New York in 1988, included a portrait of Marcel Duchamp, consisting of a wire hanger twisted into a silhouette of this French-American artist who, along with Andy Warhol, is one of Ai's greatest idols.
5000 kg Sunflower Seeds 2010
Sunflower Seeds is made up of millions of small sculptures, apparently identical, but each one actually sculpted and painted. These are two smaller versions of the work which, on having its debut in Tate Modern in 2010, made Ai Weiwei known to a wider audience.
These porcelain and wooden sculptures were executed using ancient handcraft traditions. The ceramic stones were made in Jingdezhen, where Chinese porcelain production originates. The two trees have been built from fallen trunks collected in the mountainous regions of southern China. Similar to the construction of Ai Weiwei's previous wood sculptures, the tree fragments have been interlocked using a classic Chinese technique.
Fountain of Light is approximately 23 feet high and was inspired by an ambitious monument to communism that was intended to be built in Russia – but never was.
Mariko Mori lives and works in New York. Oneness is an allegory of connectedness, a representation of the disappearance of boundaries between the self and others. It is a symbol of the acceptance of otherness and a model for overcoming national and cultural borders. It also is a representation of the Buddhist concept of oneness, of the world existing as one interconnected organism.
Little is known about the personal life of Mariko Mori. She is believed to have been born in Tokyo and to be married to the composer Ken Ideka.
"…fallen into a fin-de-siecle period of crisis in which people believe only the things they see right in front of them" - Mariko Mori -
All kinds of fantasy and dreams are very important to our life.
For a short impression, have a look at:
An online magazine is a magazine published on the Internet, through bulletin board systems and other forms of public computer networks. One of the first magazines to convert from a print magazine format to being online only was the computer magazine Datamation. Some online magazines distributed through the World Wide Web call themselves webzines. An ezine (also spelled e-zine) is a more specialized term appropriately used for small magazines and newsletters distributed by any electronic method, for example, by electronic mail (e-mail/email, see Zine). Some social groups may use the terms cyberzine and hyperzine when referring to electronically distributed resources. Similarly, some online magazines may refer to themselves as "electronic magazines" or "e-magazines" to reflect their readership demographics or to capture alternative terms and spellings in online searches.
An online magazine shares some features with a blog and also with online newspapers, but can usually be distinguished by its approach to editorial control. Magazines typically have editors or editorial boards who review submissions and perform a quality control function to ensure that all material meets the expectations of the publishers (those investing time or money in its production) and the readership.
Many large print-publishers now provide digital reproduction of their print magazine titles through various online services for a fee. These service providers also refer to their collections of these digital format products as online magazines, and sometimes as digital magazines.
Some online publishers have begun publishing in multiple digital formats, or dual digital formats, that may include both HTML versions that look like traditional web pages and Flash versions that appear more like traditional magazines, with digital flipping of pages.
Online magazines representing matters of interest to specialists in or societies for academic subjects, science, trade or industry are typically referred to as online journals.
|It's amazing how inexpensive a publication can be if it doesn't need to pay for writing, editing, design, paper, ink, or postage.|
|—Mega 'Zines, Macworld (1995)|
Many general interest online magazines provide free access to all aspects of their online content, although some publishers have opted to require a subscription fee for access to premium online articles and/or multimedia content. Online magazines may generate revenue from targeted search ads served to web-site visitors, banner ads (online display advertising), affiliations with retail web sites, classified advertisements, product-purchase capabilities, advertiser directory links, or other informational or commercial arrangements.
The original online magazines, e-zines, and disk magazines (diskmags), owing to their low cost and initially non-mainstream audiences, may be seen as a disruptive technology for traditional publishing houses. The high cost of print publication and the large Web readership have encouraged these publishers to embrace the World Wide Web as a marketing and content delivery system and as another medium for delivering their advertisers' messages.
In the late 1990s, e-zine publishers began adapting to the interactive and informative qualities of the Internet instead of simply duplicating print magazines on the web. Publishers of traditional print titles and entrepreneurs with an eye to a potential readership in the millions started publishing online titles. Salon.com, founded in July 1995 by David Talbot, was launched with considerable media exposure and today reports 5.8 million monthly unique visitors. In the 2000s, some webzines began appearing in a printed format to complement their online versions.
- Video magazine
- Digital edition
- List of online magazine archives
- News site
- Online newspaper
- Computer magazine
- Electronic journal
- "Datamation". Datamation 4. Retrieved 24 March 2015.
- "Webzine | Define Webzine at Dictionary.com". Dictionary.reference.com. Retrieved 2012-03-02.
- "Definition of 'webzine'". collinsdictionary.com. Retrieved 24 March 2015.
- "Ad Agency Starts New Online Publication". Sfvbj.com/. Retrieved 2012-03-20.
- Pogue, David (May 1995). "Mega 'Zines: Electronic Mac Mags make modems meaningful". Macworld (subscription required): 143–144. Retrieved 2011-02-23.
Update: Immunopathophysiology of Measles
Measles, also known as rubeola, is one of the most contagious infectious diseases, with at least a 90% secondary infection rate in susceptible domestic contacts. It can affect people of all ages, despite being considered primarily a childhood illness. Measles is marked by prodromal fever, cough, coryza, conjunctivitis, and pathognomonic enanthem (ie, Koplik spots), followed by an erythematous maculopapular rash on the third to seventh day. Infection confers life-long immunity.
In temperate areas, the peak incidence of infection occurs during late winter and spring. Infection is transmitted via respiratory droplets, which can remain active and contagious, either airborne or on surfaces, for up to 2 hours. Initial infection and viral replication occur locally in tracheal and bronchial epithelial cells.
After 2-4 days, measles virus infects local lymphatic tissues, perhaps carried by pulmonary macrophages. Following the amplification of measles virus in regional lymph nodes, a predominantly cell-associated viremia disseminates the virus to various organs prior to the appearance of rash.
Measles virus infection causes a generalized immunosuppression marked by decreases in delayed-type hypersensitivity, interleukin (IL)-12 production, and antigen-specific lymphoproliferative responses that persist for weeks to months after the acute infection. Immunosuppression may predispose individuals to secondary opportunistic infections, particularly bronchopneumonia, a major cause of measles-related mortality among younger children.
In individuals with deficiencies in cellular immunity, measles virus causes a progressive and often fatal giant cell pneumonia. In immunocompetent individuals, wild-type measles virus infection induces an effective immune response, which clears the virus and results in lifelong immunity.
Measles virus (MV) infection is responsible for an acute childhood disease that remains the fourth leading cause of infant mortality in the world. Paradoxically, the development of the MV-specific response, which establishes efficient long-term immunity, is associated with a transient but profound immunosuppression. The latter persists for several weeks after infection and contributes to the high frequency of opportunistic infections. MV infection has been implicated in decreased tuberculin skin reactivity, inhibition of the antibody response to Salmonella typhi vaccine, reduced proliferative capacity of T and B lymphocytes in response to mitogens, and dysregulation of cytokine responses with a Th2 polarization (1). Moreover, in vitro studies have suggested that both lymphocytes and APCs may be involved in MV-induced immunosuppression (2, 3). MV-infected DCs become unable to induce either allogeneic or syngeneic T cell proliferation. MV infection of monocytes and dendritic cells (DCs) inhibits their ability to secrete IL-12. Infected T cells, monocytes, and DCs die by apoptosis.
DCs belong to a family of professional APCs responsible for the generation of effector CD4+ and CD8+ T cells. They originate from CD34+ bone marrow progenitors. Immature DCs form a network within all epithelia, as Langerhans cells (LCs) in the skin or DCs in the respiratory mucosa. These immature DCs are able to capture particulate Ags via phagocytosis and soluble Ags via macropinocytosis or receptor-mediated endocytosis. They express low levels of MHC class II (MHC-II) molecules at their cell surface. To become potent APCs, immature DCs need to be activated by stimuli that promote their maturation and migration to the T cell areas of lymphoid tissues. Living bacteria, microbial products (LPS), or various cytokines (TNF-α, GM-CSF, IL-1β) stimulate DC maturation. Upon maturation, MHC-II molecules are delivered to the plasma membrane (12) and the expression of costimulatory membrane molecules is increased, thus favoring T cell activation.
When mature DCs reach secondary lymphoid organs, they interact with T cells, receiving signals that induce their terminal differentiation into mature effector DCs. The CD40-CD40 ligand (CD40L) interaction between DCs and T cells is essential for optimal cytokine production. The best-known consequence of CD40 ligation is IL-12 production by DCs. In humans, the X-linked hyper-IgM immunodeficiency syndrome has been attributed to mutations in the CD40L gene. In recent years it has been recognized that CD40 function accounts not only for the regulation of T-dependent humoral immune responses but also for cellular immune responses. Several immune dysfunctions observed in CD40L-deficient mice and patients can be explained by a failure to properly activate APCs. In vivo studies in mice have demonstrated that CD40 ligation on DCs can replace CD4+ T cells in priming CD8+ cytotoxic responses.
The mechanisms by which MV infection interferes with DC functions long remained unknown. MV replication induces normal maturation of immature monocyte-derived DCs and LCs, but it leads to an abnormal terminal differentiation of CD40L-activated human DCs. Impairment of CD40/CD40L signaling following MV infection has been demonstrated by the reduced level of tyrosine phosphorylation in MV-infected DCs after CD40 activation. This could explain why infected DCs display impaired APC functions and may consequently promote MV-induced immunosuppression.
Measles virus causes a severe systemic illness. The rash occurs simultaneously with the onset of the effector phase of the antiviral immune response and substantial evidence of immune activation. This immune response is effective in clearing virus and in establishing long-term resistance to reinfection but is associated with immune suppression, autoimmune encephalomyelitis, and increased susceptibility to secondary infections. This apparent paradox may be explained in part by preferential long-term activation of type 2 CD4+ T cells by measles virus infection. Preferential stimulation of type 1 CD4+ T cells by inactivated virus vaccines is hypothesized to play a role in subsequent development of atypical measles.
Measles is a highly contagious childhood disease associated with an immunological paradox: although a strong virus-specific immune response results in virus clearance and the establishment of a life-long immunity, measles infection is followed by an acute and profound immunosuppression leading to an increased susceptibility to secondary infections and high infant mortality. In certain cases, measles is followed by fatal neurological complications. To elucidate measles immunopathology, we have analyzed the immune response to measles virus in mice transgenic for the measles virus receptor, human CD150. These animals are highly susceptible to intranasal infection with wild-type measles strains. Similarly to what has been observed in children with measles, infection of suckling transgenic mice leads to a robust activation of both T and B lymphocytes, generation of virus-specific cytotoxic T cells and antibody responses. Interestingly, Foxp3(+)CD25(+)CD4(+) regulatory T cells are highly enriched following infection, both in the periphery and in the brain, where the virus intensively replicates. Although specific anti-viral responses develop in spite of increased frequency of regulatory T cells, the capability of T lymphocytes to respond to virus-unrelated antigens was strongly suppressed. Infected adult CD150 transgenic mice crossed in an interferon receptor type I-deficient background develop generalized immunosuppression with an increased frequency of CD4(+)CD25(+)Foxp3(+) T cells and strong reduction of the hypersensitivity response. These results show that measles virus affects regulatory T-cell homeostasis and suggest that an interplay between virus-specific effector responses and regulatory T cells plays an important role in measles immunopathogenesis. A better understanding of the balance between measles-induced effector and regulatory T cells, both in the periphery and in the brain, may be of critical importance in the design of novel approaches for the prevention and treatment of measles pathology.
A generalized immunosuppression that follows acute measles frequently predisposes patients to bacterial otitis media and bronchopneumonia. In approximately 0.1% of cases, measles causes acute encephalitis. Subacute sclerosing panencephalitis (SSPE) is a rare chronic degenerative disease that occurs several years after measles infection.
After an effective measles vaccine was introduced in 1963, the incidence of measles decreased significantly. Nevertheless, measles remains a common disease in certain regions and continues to account for nearly 50% of the 1.6 million deaths caused each year by vaccine-preventable childhood diseases. The incidence of measles in the United States and worldwide is increasing, with outbreaks being reported particularly in populations with low vaccination rates.
Maternal antibodies play a significant role in protection against infection in infants younger than 1 year and may interfere with live-attenuated measles vaccination. A single dose of measles vaccine administered to a child older than 12 months induces protective immunity in 95% of recipients. Because measles virus is highly contagious, a 5% susceptible population is sufficient to sustain periodic outbreaks in otherwise highly vaccinated populations. A second dose of vaccine, now recommended for all school-aged children in the United States, induces immunity in about 95% of the 5% who do not respond to the first dose. Slight genotypic variation in recently circulating strains has not affected the protective efficacy of live-attenuated measles vaccines.
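The figures above lend themselves to a quick back-of-the-envelope check. The following minimal sketch (a hypothetical helper, assuming the ~95% response rate applies independently to each dose, as the text implies) shows how a two-dose schedule shrinks the residual susceptible fraction among vaccinated children:

```python
# Minimal sketch: residual susceptibility after a one- vs two-dose
# measles vaccination schedule, using the ~95% per-dose response rate
# quoted above. Illustrative arithmetic only, not an epidemiological model.

def residual_susceptible(doses: int, per_dose_response: float = 0.95) -> float:
    """Fraction of vaccinees still susceptible after `doses` doses,
    assuming each dose protects `per_dose_response` of prior non-responders."""
    return (1.0 - per_dose_response) ** doses

if __name__ == "__main__":
    one_dose = residual_susceptible(1)   # ~5% remain susceptible
    two_dose = residual_susceptible(2)   # ~0.25% remain susceptible
    print(f"After one dose:  {one_dose:.2%} of recipients remain susceptible")
    print(f"After two doses: {two_dose:.2%} of recipients remain susceptible")
```

Under these assumptions, the second dose reduces the susceptible fraction among vaccinees from roughly 5% to about 0.25%, which is why a two-dose schedule is recommended when a 5% susceptible pool is enough to sustain periodic outbreaks.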
Unsubstantiated claims that suggest an association between the measles vaccine and autism have resulted in reduced vaccine use and contributed to a recent resurgence of measles in countries where immunization rates have fallen to below the level needed to maintain herd immunity.
Modulation of immune functions by measles virus.
Measles virus remains among the most potent global pathogens killing more than 1 million children annually. A profound suppression of general immune functions occurs during and for weeks after the acute disease, which favors secondary infections. In contrast, virus-specific immune responses are efficiently generated, mediate viral control and clearance and confer a long-lasting immunity. Because they sense pathogen-associated molecular patterns, and subsequently initiate and shape adaptive immune responses, professional antigen-presenting cells (APC) such as dendritic cells are likely to play a key role in the induction and quality of the virus-specific immune response. Key features of immune suppression associated with measles virus, however, are compatible with interference with APC maturation and function and subsequent qualitative and quantitative alterations of T cell activation.
- Sellin CI, Jégou JF, Renneson J, Druelle J, Wild TF, Marie JC, Horvat B. Interplay between virus-specific effector response and Foxp3 regulatory T cells in measles virus immunopathogenesis. PLoS One. 2009;4(3):e4948.
- Griffin DE, Ward BJ, Esolen LM. Pathogenesis of measles virus infection: an hypothesis for altered immune responses. J Infect Dis. 1994 Nov;170 Suppl 1:S24-31.
- Schneider-Schaulies S, ter Meulen V. Modulation of immune functions by measles virus. Springer Semin Immunopathol. 2002;24(2):127-48.
- Schneider-Schaulies S, Niewiesk S, Schneider-Schaulies J, ter Meulen V. Measles virus induced immunosuppression: targets and effector mechanisms. Curr Mol Med. 2001 May;1(2):163-81.
- Servet-Delprat C, Vidalain PO, Valentin H, Rabourdin-Combe C. Measles virus and dendritic cell functions: how specific response cohabits with immunosuppression. Curr Top Microbiol Immunol. 2003;276:103-23.