| text (stringlengths 242-506k) | id (stringlengths 47) | dump (stringclasses 1 value) | url (stringlengths 15-389) | file_path (stringlengths 138) | language (stringclasses 1 value) | language_score (float64 0.65-0.99) | token_count (int64 57-112k) | score (float64 2.52-5.03) | int_score (int64 3-5) |
---|---|---|---|---|---|---|---|---|---|
Structured musical performance support:
The use of computer-driven pitch and rhythm recognition and correction.
Ultimately, the application will aid users in developing their abilities to
express themselves musically through a series of exercises. The reward is the
magic of music created in a group setting. The project could have several
levels of difficulty, each one building on the skills of the previous. The user
should be free to explore. They should be able to skip areas that bore them as
well as explore areas that might presently be beyond their musical grasp.
User input could take the form of a keyboard, touch screen, or rear projection
table. The exercises could start out with simple tasks such as mimicking a
single tempo generated by the machine. The accuracy of the user could be
displayed graphically on a video monitor. The exercises could progress to
playing a counterpoint rhythm to the computer's, to practicing multiple tempos of,
say, "3 on 4" or "2 on 3" between the user's left and right hands. Group rhythmic
exercises could also be written. Advanced exercises could teach paradiddles or
other drumming rudiments.
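As a sketch of how the rhythm feedback might work (the function name and the 50 ms tolerance below are our own illustrative choices, not part of the original proposal), the program can compare the timestamps of the user's taps against an ideal metronome grid and report how far each tap deviates from the nearest beat:

```python
def rhythm_accuracy(tap_times, bpm, tolerance_ms=50):
    """Compare tap timestamps (in seconds) against an ideal metronome grid.

    Returns (tap, deviation_ms, hit) tuples; `hit` is True when the tap falls
    within the tolerance window around the nearest beat.
    """
    beat = 60.0 / bpm  # seconds between beats
    results = []
    for tap in tap_times:
        nearest_beat = round(tap / beat) * beat
        deviation_ms = (tap - nearest_beat) * 1000.0
        results.append((tap, deviation_ms, abs(deviation_ms) <= tolerance_ms))
    return results

# Example: a user tapping along at 120 BPM (one beat every 0.5 s)
if __name__ == "__main__":
    taps = [0.02, 0.51, 0.97, 1.56]
    for tap, dev, hit in rhythm_accuracy(taps, bpm=120):
        print(f"tap at {tap:.2f}s: {dev:+.0f} ms {'ok' if hit else 'off'}")
```

The per-tap deviations are exactly the kind of data that could be plotted on the video monitor described above.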
The pitch of the user's voice could be captured through the use of a headset
microphone. The exercises could start simply by having the user echo a single
pitch that is generated by the computer. The closeness of the pitch of the user's
voice versus the computer's can be monitored with a graphic representation. Once
the user is comfortable with echoing unison pitch, other intervals can be
explored, such as octaves, fifths, thirds, etc. When the user is comfortable
with intervals, they can progress on to short melodies, and then harmonize with
the computer. The whole time, they will be getting feedback as to how they are
doing. If they slip up, the program could encourage them to try again. If they
fail repeatedly, the program might suggest a previous exercise.
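A hedged sketch of the pitch-feedback step: given a detected fundamental frequency from the headset microphone and the target pitch generated by the computer, the offset can be expressed in cents (hundredths of a semitone), which maps naturally onto a graphical meter. The 50-cent "close enough" threshold is an arbitrary illustration, not something specified above.

```python
import math

def cents_off(detected_hz, target_hz):
    """Signed offset of the sung pitch from the target, in cents."""
    return 1200.0 * math.log2(detected_hz / target_hz)

def feedback(detected_hz, target_hz, close_enough_cents=50.0):
    offset = cents_off(detected_hz, target_hz)
    if abs(offset) <= close_enough_cents:
        return f"{offset:+.0f} cents -- good, keep going"
    direction = "sharp" if offset > 0 else "flat"
    return f"{offset:+.0f} cents -- a little {direction}, try again"

# Target A4 = 440 Hz; the user sings 452 Hz (slightly sharp but within tolerance)
print(feedback(452.0, 440.0))
```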
Once the solo exercises have served their purpose, the system could be extended
to support group play. Users could be given short sections of songs to learn a
piece at a time. Some users can sing, others can play rhythms, still others
could do both. The system should be extensible to at least four players. Virtual
rhythmic percussion palettes could be selected by the user or by the computer as
needed by the piece that was being performed. The whole experience could take
place around a touch sensitive, rear projection circular table. Performances can
be written and stored in the system to be taught to players.
Solo learning sessions could be replaced by group learning sessions that focus
solely on the smallest building blocks of a given piece. This would have the
advantage of getting the group making music (and hopefully having fun) as
quickly as possible.
The basic idea is that almost everyone is capable of creating music at some
level. Some people shy away from the activity because of embarrassing results or
because the effort and discipline involved seems overwhelming. By allowing the
computer to play "the ultimately patient conductor", users can progress at their
own rate. By allowing users to practice alone, or with friends who used to be
"non-musical", the embarrassing aspects are hopefully minimized. Focusing on
small blocks of achievement allows the users to "get it right" as quickly as
possible, hopefully minimizing the frustrating aspects of structured group
| <urn:uuid:839f2e2d-d448-4060-94b4-b4cf6158c22d> | CC-MAIN-2013-20 | http://cs1.cs.nyu.edu/~dqu2556/Multimedia/smps.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382360/warc/CC-MAIN-20130516092622-00008-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95494 | 708 | 2.96875 | 3 |
Arts & Informatics
The field that connects Arts and Informatics is very wide and covers many different topics. Here is just an example. When humans are playing music or listening to music, a great deal of data processing is involved. Recognition of phrases, styles, instruments, composers and performers is easily done by humans. Ways to model and program such capabilities by means of grammars, rules, neural nets and genetic algorithms are discussed in this field, as is the synthesis of music, both in terms of algorithmic composition and in terms of composition tools. The impact that computers have on performing art is surveyed, for example in combination with vision, exemplified by virtual orchestras conducted by light batons or moving dancers.
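To make the "rules and grammars" point concrete, here is a minimal illustrative sketch (our own, not drawn from the text) of a first-order Markov chain that learns note-to-note transitions from example phrases and generates a short melody:

```python
import random
from collections import defaultdict

def build_transitions(phrases):
    """Learn note-to-note transition choices from example phrases."""
    transitions = defaultdict(list)
    for phrase in phrases:
        for current, nxt in zip(phrase, phrase[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=8):
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:
            break
        melody.append(random.choice(choices))
    return melody

phrases = [["C", "E", "G", "E", "C"], ["C", "D", "E", "G", "C"]]
print(generate(build_transitions(phrases), start="C"))
```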
| <urn:uuid:0fc89dfe-6da0-4c2b-961a-e666439aec70> | CC-MAIN-2013-20 | http://www.social-informatics.org/c/273/Arts__Informatics/?preid=308 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702454815/warc/CC-MAIN-20130516110734-00036-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.963249 | 149 | 2.546875 | 3 |
In the 1980s, three new ways of doing things moved out of computer laboratories onto the market. The first - parallel computing - uses many processors working together to solve a single problem. Parallel computers are more powerful and cheaper to use than traditional single-processor machines, but are proving more difficult to program.
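As an illustration of the "many processors on one problem" idea (a present-day sketch, not something from the review itself), a data-parallel sum can be split across worker processes, with each worker handling one chunk:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Split the data into one interleaved chunk per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # workers run in parallel
    print(total)
```

The simple case parallelizes cleanly; the programming difficulty the review alludes to appears once the subproblems must communicate.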
The use of mathematically rigorous techniques or 'formal methods' to develop software was the second development. Programmers can now prove their work is correct mathematically, rather than run a program on a computer to see if it works and then check back for errors. The third, object-oriented programming, is a rapidly maturing descendant of the structured programming that swept computing in the 1970s.
While there are now many interesting books on these subjects, most are aimed at researchers and graduate students, and are not suitable as undergraduate texts. An exception is How to Write Parallel Programs: A ...
| <urn:uuid:ed2fc654-c641-4c60-82b8-093e83f61807> | CC-MAIN-2013-20 | http://www.newscientist.com/article/mg13217905.100-review-at-the-computers-parallel-face.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708808767/warc/CC-MAIN-20130516125328-00083-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.946997 | 208 | 3.9375 | 4 |
At the same time, it is spreading into many fields which put demands on its functionality. Fortunately, these all fit in well with the original design concepts.
Collaborative work was the original design goal of W3. This involves everyone working together in a group, able to share knowledge by modifying, annotating, and contributing as well as reading. This is an exciting area. It requires good WYSIWYG hypertext and hypermedia editors (which will probably arrive during the next year) as well as authentication of users.
Object-oriented distributed databases and the web approach each other also. Providing object-oriented features in the web will allow users to manipulate objects other than documents. Examples of this may be scientific data, and simulated worlds, in which multiple users define, and interact with, objects of all conceivable kinds.
While all these developments make life very interesting for internet-connected academics, there is a strong desire to get this access to home and school users. This will involve the development of very efficient protocols for phone lines and caching algorithms to give good performance to dial-in users. This will open up an enormous market, and be a great equalizer across the developed world.
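The "caching algorithms" mentioned above could be as simple as a least-recently-used document cache kept on the near side of the slow phone line. The sketch below is a present-day illustration with made-up names, not part of the original talk:

```python
from collections import OrderedDict

class DocumentCache:
    """Tiny LRU cache: keep recently fetched documents near the slow link."""

    def __init__(self, capacity=32):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, url):
        if url not in self._items:
            return None              # miss: caller must fetch over the phone line
        self._items.move_to_end(url) # mark as most recently used
        return self._items[url]

    def put(self, url, document):
        self._items[url] = document
        self._items.move_to_end(url)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used entry

cache = DocumentCache(capacity=2)
cache.put("http://info.cern.ch/", "<html>...</html>")
print(cache.get("http://info.cern.ch/") is not None)  # True: served locally
```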
One other of the many factors which impinges on the web is the world of libraries. Just as W3 gives libraries a way to open their doors to the public and give them a level of service never before achieved, so the web with its blossoming collections of information requires the skills of librarianship. The technical demands here are for long-lived naming schemes which will survive their creators.
Other extensions being refined include a forms language for complex online queries, and more sophisticated markup languages for structured multimedia documents.
The standardisation work is being done by many W3 development teams in conjunction with the Internet Engineering Task Force.
Meanwhile, the W3 team at CERN is documenting the existing practice as a set of Internet RFCs, and is providing a coordinating role in the development process.
(Back to the www seminar.) Tim BL
| <urn:uuid:d771b0d2-cb09-484c-9f39-bc0efc3ea1c5> | CC-MAIN-2013-20 | http://www.w3.org/Talks/CompSem93/FutureText.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704818711/warc/CC-MAIN-20130516114658-00071-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.936802 | 432 | 2.515625 | 3 |
A software design model in which objects contain both data and the instructions that work on the data. It is becoming widely deployed in distributed computing. Major object-oriented programming languages include C++, Smalltalk, Objective C, Object COBOL, and Eiffel.
The art and science of manipulating data, like programming, in the form of "objects", streamlining ways of identifying and addressing business problems and creating applications. Its applications are built up from objects containing both information and the intelligence needed to process that data in a single unit; particularly useful in workgroups where it lets a document contain its own security and routing information. Standards are being discussed by several bodies including the Object Management Group with its Object Management Architecture. Dogged by acronyms and competing methodologies, object technology is a growing phenomenon.
Any application development that is designed to use data packaged into objects. See CORBA.
Technology--usually programming languages--designed to work with objects.
the languages, environments, tools, and methodologies used to build software applications based on objects.
A computer programming approach that builds software applications through the repeated use of self-contained objects: bits of data that are surrounded with the program information needed to gain access to the data. Objects can perform certain computer functions when they receive messages to do that function.
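A minimal illustration of the common thread in these definitions: an object bundles data with the instructions that operate on that data, and does work when it receives a message (here, a method call). The class and its fields are purely illustrative.

```python
class Invoice:
    """Data (line items) and behaviour (totalling) packaged in one unit."""

    def __init__(self, customer):
        self.customer = customer
        self.lines = []          # data carried inside the object

    def add_line(self, description, amount):
        self.lines.append((description, amount))

    def total(self):             # a "message" the object knows how to answer
        return sum(amount for _, amount in self.lines)

invoice = Invoice("ACME Ltd")
invoice.add_line("Consulting", 1200.0)
invoice.add_line("Travel", 150.0)
print(invoice.total())  # 1350.0
```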
| <urn:uuid:52bc96e2-2d7d-4970-8b2c-0e04feda35d1> | CC-MAIN-2013-20 | http://www.metaglossary.com/meanings/1501812/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707188217/warc/CC-MAIN-20130516122628-00011-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.935661 | 266 | 3.34375 | 3 |
by Paul Rudo on 31/03/11 at 4:40 pm
Over the past several years, tech headlines have been increasingly peppered with the term “NoSQL.” One can quibble about the accuracy and usefulness of that term (and many do), but that conversation can and should be ignored. What can’t be ignored, however, is the dramatic transformation in software systems that is driving the emergence and growing adoption of this new class of non-relational database technology.
Interactive Software Has Changed
Interactive software (software with which a person iteratively interacts in real time) has changed in fundamental ways over the past 35 years. The “online” systems of the 1970s have given way to today’s Web applications and a modern application architecture that addresses the radical differences in users, applications, and underlying infrastructure.
However, relational database technology, which was invented and optimized for the systems of the 1970s, has not kept pace with these changes and, in some regards, is the last domino to fall in the march toward fully distributed software architecture.
- Users: In the early days of interactive systems, an online application that supported 2,000 users was considered huge. Further, the population of users was controlled, worked within well-defined office hours, and was relatively static in size. Today, applications accessed via the public Web – for example, online banking systems, social gaming, e-commerce – support a population of users several orders of magnitude greater in size. A newly launched software system can grow from zero users to millions overnight – and those users can be located anywhere in the world, requiring 24×7 application availability.
- Applications: In the past, interactive software systems were primarily designed to automate existing manual processes and typically mirrored clerical employee tasks that culminated in some sort of “transaction.” These systems accelerated task completion and improved accuracy, but were about automating – not innovating. Modern applications, in contrast, break new ground, changing the nature of communication, shopping, advertising, entertainment, and much more. Change is really the only constant in these systems.
- Infrastructure: Perhaps the most obvious difference between then and now is the infrastructure atop which interactive systems execute. Centralization characterized the computing environment in the 1970s – mainframes and minicomputers with shared CPU, memory and disk subsystems were the norm. Today, distributed computing is the norm.
Application Architecture Has Changed
To address these changes, modern Web applications are built to scale out: just add more Web servers behind a load balancer to support more users. The result is attractive (near-linear) cost and performance curves, but the real win is the flexibility this distributed application architecture affords. Beyond the ability to quickly add or remove Web servers to support user volume and activity levels, distributing the load across servers (and even geographies) is inherently fault tolerant, supporting continuous operations.
RDBMS: Shortcomings and Band-Aids
In contrast to these sweeping changes in application architecture, relational database technology has not fundamentally changed in 40 years.
- It remains a centralized, “scale-up” technology; runs on complex, proprietary, expensive servers; and handling more users requires getting bigger (and even more expensive) servers (for increased CPU, memory and I/O capacity).
- Running RDBMS technology in an otherwise distributed architecture highlights its lack of flexibility for “rightsizing” the database in real time to fit the needs and usage patterns of the application. (The Web logic layer scales out; the relational database, well, can’t).
- The rigidity of the database schema – the fact that changing the schema once data is inserted is A Big Deal – makes it very difficult to quickly change application behavior, especially if it involves changes to data formats and content.
Recognizing these shortcomings of RDBMSs for modern interactive software applications, developers and practitioners have come up with some workarounds – for example, sharding, denormalizing, and distributed caching – which, while useful to a limited degree, are really just Band-Aids that ease symptoms, but don’t fight the disease.
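Sharding, for example, usually amounts to routing each row to one of several database servers by hashing a key in the application layer. The sketch below (server names are hypothetical) shows why it is a band-aid: every read, write, and re-balancing step becomes the application's problem.

```python
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]  # hypothetical servers

def shard_for(user_id: str) -> str:
    """Pick a shard by hashing the key; the application must do this everywhere."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("alice"))   # every query must first compute its shard
print(shard_for("bob"))
# Adding a fourth shard changes the modulus, so most keys map to new servers --
# the re-balancing pain that NoSQL systems aim to absorb automatically.
```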
So Why NoSQL?
Early Web pioneers such as Google and Amazon, faced with the inadequacies of relational technology, and blessed with the ability to invent their own databases, developed (and now depend on) Bigtable and Dynamo respectively to meet their highly distributed database needs.
And the NoSQL database was born.
What makes a NoSQL database a NoSQL database?
- It’s schema-less. Data can be inserted without a defined schema, and the format of the data being inserted can change at any time – providing extreme flexibility in the application and its ability to change with the needs of the business.
- It’s elastic. A NoSQL database automatically spreads data across servers, requiring no participation from the applications. Servers can be added and removed from the data layer without application downtime, with data and I/O spread across servers.
- It’s queryable. Sharding an RDBMS can seriously inhibit the ability to perform complex queries. NoSQL systems retain their full query expressive power, even when distributed across hundreds or thousands of servers.
- It caches for extreme low latency. To reduce latency and increase sustained data throughput, advanced NoSQL database technologies transparently cache data in system memory – a behavior that is completely transparent to the developer and the ops team.
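To make the schema-less point above concrete, here is an illustrative sketch using MongoDB's Python driver. The database, collection, and field names are made up, and a local server is assumed; any document database with a similar API would behave the same way.

```python
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")   # assumes a local server
cars = client["studio"]["cars"]                      # made-up database/collection

# No schema declared in advance: the first document has three fields...
cars.insert_one({"model": "Roadster", "year": 2011, "doors": 2})

# ...and the next one adds a nested field that never existed before,
# with no migration or ALTER TABLE step.
cars.insert_one({"model": "Estate", "year": 2012, "doors": 5,
                 "trim": {"seats": "leather", "colour": "red"}})

print(cars.find_one({"model": "Estate"}))
```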
NoSQL for the Rest of Us
Few companies can afford to develop and maintain their own NoSQL database, but the need for a new approach is nearly universal. Without a doubt, the reason NoSQL is picking up steam is because growing numbers of developers and ops teams recognize its potential for reducing the cost and complexity of data management while increasing the scalability and performance of interactive Web applications.
A number of commercial and open source database technologies such as Couchbase (a database combining the leading NoSQL data management technologies CouchDB, Membase and Memcached), MongoDB, Redis, Cassandra and others are now available and increasingly represent the most frequently selected data management choice behind new interactive Web applications.
For a more complete discussion, visit www.couchbase.com/why-nosql
This was a guest article, contributed by James Phillips, Senior Vice President of Products at Couchbase.
| <urn:uuid:30bfe189-ce72-4a85-926e-17f45e3f8f44> | CC-MAIN-2013-20 | http://enterprisefeatures.com/2011/03/why-you-should-care-about-nosql/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702957608/warc/CC-MAIN-20130516111557-00055-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.926408 | 1,311 | 2.640625 | 3 |
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. In a clear,
straightforward style, Kleinberg and Tardos teach students to analyze and define problems for themselves and, from this,
to recognize which design principles are appropriate for a given situation. The text encourages a greater understanding
of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
| <urn:uuid:edd357e6-a6b8-4a07-84d9-7ba3008178ad> | CC-MAIN-2013-20 | http://www.aw-bc.com/info/kleinberg/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705957380/warc/CC-MAIN-20130516120557-00035-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.911769 | 87 | 2.515625 | 3 |
Above the utilities in Figure 1.1 is the block labeled User Programs. It is at this level where a computer becomes specialized to perform a task to solve a user's problem. Given a task that needs to be performed, a programmer can design and code a program to perform that task using the text editors, compilers, debuggers, etc. The program so written may make use of operating system facilities, for example to do I/O to interact with the program user. It is at this level that the examples, exercises and problems in this text will be written.
However, not everyone who uses a computer is a programmer or desires to be a programmer. As well, if every time a new task was presented to be programmed, one had to start from scratch with a new program, the utility and ease of using the computers would be reduced. These days packages of predefined software, or Applications, are available from many vendors in the industry. Highly functional word processors, desktop publishing packages, spread sheet and data base programs and, yes, games are readily available for computer users as well as programmers. In fact, perhaps most computer users these days access their machines exclusively through these application programs.
A computer system is typically purchased with an operating system, a variety of utilities (such as compilers for high level languages and text editors) and application programs. Without the layers of software in modern computers, computer systems would not be as useful and popular as they are today. While the complexity of these underlying layers has increased greatly in recent years, the net effect has been to make computers easier for people to use.
In the remainder of this Chapter we will take a more detailed look at how data and programs are represented within the machine. We finally discuss the design of programs and their coding in the C language before beginning a detailed description in Chapter .
| <urn:uuid:2a016dfc-8f6d-48c6-b34e-e8252e80bdbe> | CC-MAIN-2013-20 | http://www-ee.eng.hawaii.edu/Courses/EE150/Book/chap1/subsection2.1.1.4.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382185/warc/CC-MAIN-20130516092622-00025-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.950505 | 416 | 3.828125 | 4 |
|United States Patent||5,129,077|
|Hillis||July 7, 1992|
A method and apparatus are described for improving the utilization of a parallel computer by allocating the resources of the parallel computer among a large number of users. A parallel computer is subdivided among a large number of users to meet the requirements of a multiplicity of databases and programs that are run simultaneously on the computer. This is accomplished by dividing the parallel computer into a plurality of processor arrays, each of which can be used independently of the others. This division is made dynamically in the sense that the division can readily be altered and indeed in a time sharing environment may be altered between two successive time slots of the frame. Further, the parallel computer is organized so as to permit the simulation of additional parallel processors by each physical processor in the array and to provide for communication among the simulated parallel processors. These simulated processors may also be stored, in virtual memory. As a result of this design, it is possible to build a parallel computer with a number of physical processors on the order of 1,000,000 and a number of simulated processors on the order of 1,000,000,000,000. Moreover, since the computer can be dynamically reconfigured into a plurality of independent processor arrays, a device this size can be shared by a large number of users with each user operating on only a portion of the entire computer having a capacity appropriate for the problem then being addressed.
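A rough sketch of the virtual-processor idea (ours, not taken from the patent): each physical processor simulates many logical processors by looping over their states within each time step, so a small physical array behaves like a much larger logical one.

```python
class PhysicalProcessor:
    """One real processor simulating many virtual processors by time-slicing."""

    def __init__(self, virtual_count):
        self.virtual_state = [0] * virtual_count   # one state word per virtual CPU

    def step(self):
        # Run one instruction "simultaneously" on every simulated processor.
        for i in range(len(self.virtual_state)):
            self.virtual_state[i] += i             # stand-in for real work

# 4 physical processors, each simulating 1,000 virtual ones -> 4,000 logical CPUs
array = [PhysicalProcessor(1000) for _ in range(4)]
for proc in array:
    proc.step()
print(sum(sum(p.virtual_state) for p in array))
```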
|Inventors:||Hillis; W. Daniel (Cambridge, MA)|
Thinking Machines Corporation
|Filed:||January 16, 1990|
|Application Number||Filing Date||Patent Number||Issue Date|
|Current U.S. Class:||712/13 ; 712/E9.049|
|Current International Class:||G06F 9/46 (20060101); G06F 9/50 (20060101); G06F 15/16 (20060101); G06F 15/76 (20060101); G06F 15/173 (20060101); G06F 15/80 (20060101); G06F 9/38 (20060101); G06F 12/10 (20060101); G06F 11/16 (20060101); G06F 013/00 ()|
|Field of Search:||364/2MSFile,9MSFile|
|4523273||June 1985||Adams, III et al.|
|4639857||January 1987||McCanny et al.|
|4748585||May 1988||Chiarulli et al.|
Ian R. Greenshields, "Dynamically Reconfigurable, Vector-Slice Processor", IEEE Proceedings, vol. 129, Pt. E, No. 5 (Sep. 1982), pp. 207-215. .
Lin et al."Reconfiguration Procedures for a Polymorphic and Partitionable Multiprocessor." IEEE Transactions on Computers, vol. C-35, No. 10 (Oct. 1986), pp. 910-915. .
Tsutomu Hoshino "An Invitation to the World of PAX." IEEE Computer, (May 1986), pp. 68-80, 0018-9162/86/0500-0068$01.00. .
Charles L. Seitz, "The Cosmic Cube", Communications of the ACM, vol. 28, No. 1 (Jan. 1985), pp. 22-33. .
Preparata et al. "The Cube-Connected Cycles: A Versatile Network for Parallel Computation." Communications of the ACM, vol. 24, No. 5 (May 1981), pp. 300-309. .
Hillis W. D. "Chapter 4 The Prototype." In: The Connection Machine (Massachusetts, MIT, 1985), pp. 71-90, 145-172. .
NCR45CG72 GAPP Application Note No. 3, Ohio, NCR Corporation, 1985, pp. 1-23. .
NCR45CG72, Ohio, NCR Corporation, 1984, pp. 1-12. .
Hillis, W. D. "The Connection Machine", Massachusetts, MIT (1981), pp. 1-21, 23-29. A.I. Memo No. 646. .
Kenneth E. Batcher "Design of a Massively Parallel Processor." IEEE Transactions on Computers, vol. C-29, No. 9 (Sep. 1980), pp. 836-840. .
Asbury et al. "Concurrent Computers Ideal for Inherently Parallel Problems." Computer Design, (Sep. 1, 1985), pp. 99-102, 104, 106-107..
| <urn:uuid:3abf3682-803e-4156-a16d-686c8a4030b4> | CC-MAIN-2013-20 | http://patents.com/us-5129077.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705957380/warc/CC-MAIN-20130516120557-00009-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.767966 | 1,013 | 2.671875 | 3 |
There are three parts to this coursework.
Parts one and two must be completed by all students.
Part three must be attempted by all MSc students.
The choice of implementation language is up to you.
You should submit a report in the format specified below for each stage.
This part is a gentle reminder of how to program a conventional PC. It is due to be completed by the end of week 4.
The application you are to program is a simple game. It is based on a higher/lower outcome at each step.
There is a path with four steps. When the game starts, the player is presented with a randomly chosen number in the range 1-10. The player then moves to the first step, and a random number in the range 1-10 is chosen by the application. The player is asked to guess whether the number they have been given is higher or lower than the number allocated to the step.
If they guess correctly, they move to the next step and the behaviour repeats. If they are wrong, they go back one step (to the beginning if this is the first step). The object is to cross the path to the point beyond the final step.
New numbers are generated each time.
Write a program in either C or Java which implements this game for a PC.
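For reference, the game logic described above can be pinned down with a short sketch. This one is in Python purely to illustrate the rules (the submission itself must be in C or Java), and the treatment of ties as incorrect guesses is our own assumption, since the brief does not specify it.

```python
import random

def play(steps=4):
    current = random.randint(1, 10)          # number given to the player
    position = 0                             # steps successfully crossed so far
    while position < steps:
        step_number = random.randint(1, 10)  # number allocated to this step
        guess = input(f"Your number is {current}. Higher or lower than the step's? (h/l): ").strip().lower()
        correct = (current > step_number and guess == "h") or \
                  (current < step_number and guess == "l")   # ties count as wrong
        print(f"The step's number was {step_number}.")
        if correct:
            position += 1
        else:
            position = max(0, position - 1)  # back one step, or to the beginning
        current = random.randint(1, 10)      # new numbers are generated each time
    print("You crossed the path!")

if __name__ == "__main__":
    play()
```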
Now port the application to a handheld device. This is due in by the end of week 8.
Now implement the game as a race between two or more players, each running on a separate PDA.
I will expect a paper hand-in to be submitted through the student office, with a proper frontsheet.
You should include:
If you are very pleased with your program, you may wish to e-mail it to me so that I can share in your pleasure.
| <urn:uuid:b629b002-6a9f-4323-9b79-a4bb4919db46> | CC-MAIN-2013-20 | http://www.macs.hw.ac.uk/~rjp/teaching/PUMA/coursework.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705097259/warc/CC-MAIN-20130516115137-00098-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.960904 | 369 | 2.859375 | 3 |
Sentient Data Access via a Diverse Society of Devices
GEORGE W. FITZMAURICE, ALIAS
AZAM KHAN, ALIAS
WILLIAM BUXTON, BUXTON DESIGN
GORDON KURTENBACH, ALIAS
RAVIN BALAKRISHNAN, UNIVERSITY OF TORONTO
Today’s ubiquitous computing environment cannot benefit from the traditional understanding of a hierarchical file system.
It has been more than ten years since such “information appliances” as ATMs and grocery store UPC checkout counters were introduced. For the office environment, Mark Weiser began to articulate the notion of UbiComp (ubiquitous computing) and identified some of the salient features of the trends in 1991.1, 2 Embedded computation is also becoming widespread.
Microprocessors, for example, are finding themselves embedded into seemingly conventional pens that remember what they have written.3 Anti-lock brake systems in cars are controlled by fuzzy logic. And as a result of wireless computing, miniaturization, and new economies of scale, such technologies as PDAs (personal digital assistants), IM (instant messaging), and mobile access to the Internet are almost taken for granted.
But while many of the components of UbiComp that were described and anticipated by Weiser are now commonplace, major aspects of the vision are still developing. A common language for these devices has not been standardized, nor have current database solutions sufficiently captured the complexities involved in correctly expressing multifaceted data. In particular, XML is only now emerging as a viable backbone for communication within a diverse society of devices. CMSs that are now commercially available would be capable of appropriately expressing the data, but often still need to be custom-built for a given application domain. In this discussion, we focus on modeling the human aspect of interactions in the type of rich computing environment we envisage becoming commonplace.
FRAMING THE PROBLEM
The widespread growth of computational and communications technologies is obvious. But from our perspective, it is not the ubiquitousness of the technology per se that is of primary importance, but rather how its existence fosters changes in who employs “computation” (in the broadest sense), where they do so, how they interact, and what it is used for. Technology is certainly important, but our perspective is shaped by the notion that its importance lies in its potential to serve as a motor-sensory, cognitive, and social prosthesis—not as an end in itself.
Ubiquitous computing is in some ways an everyday reality. However, cooperative ubiquitous computing is still in its infancy. New forms of interaction must be developed for this environment—interaction between two or more parties: people and people (both technologically mediated and not), people and machines, and machines and machines. Implicit in this formulation is the importance of location. Previously, transactions took place where the computer was anchored. The location of the computer was not a design issue. Now distance (both physical and social) and location are key considerations in understanding and designing systems. The underlying concept is perhaps best articulated in a famous quote from the architect Louis I. Khan: “Thoughts exchanged by one another are not the same in one room as in another.”4 This includes “thoughts” exchanged between people and/or machines, and implies that behavior is sensitive to location, and as a consequence of mobility, must adapt to changes in physical and social location.
With respect to location-based design, the particular input and output technologies being considered closely interplay with the choices made for the data formats and the ways to present the data. A wide variety of input technologies was developed during the dawn of UbiComp, and we now see a plethora of output devices also being introduced. Small displays are appearing everywhere, on appliances from watches, to pens, and telephones. Equally interesting is how the increasing penetration of plasma panels has led to large-format displays being used as general-purpose signage, such as electronic movie posters at cinemas. It is clear that this trend will only accelerate, given the progress and promise of organic light-emitting diode (OLED) technology, which is already finding its way into commercial products.5
While UbiComp is increasingly characterized by a growing deployment of small (mainly mobile) and large (mainly embedded) displays, our current store and our investment in interaction techniques are still dominated by the demands of the GUI running on a traditional desktop computer (see figure 1). The classes of devices illustrated are shown along a linear one-dimensional scale in a way that implies that they reflect a series of distinct, independent devices—which is largely consistent with current practice. However, at the PARC in the late 1980s , when we were developing the tabs, pads, and “Liveboards” discussed by Weiser, we were primarily exploring the relationships and interactions among these devices as they related to artifacts, and to people, in the physical world. It is these relationships we intend to explore in detail.
As computing devices expand from the status-quo keyboard and desktop to a variety of form factors and scales, we can imagine workplaces configured to have a society of devices, each designed for a very specific task. As a whole, the collection of devices may act much like a workshop in the physical world, where the data moves among the specialized digital stations. For our society of devices to operate seamlessly, a mechanism will be required to (a) transport data between devices and (b) have it appear at each workstation, or tool, in the appropriate representation.
This vision has two aspects: the system and network architecture to support transport and access, or system model; and the user’s conceptualization of these activities, or user model. An example of this system/user model distinction is the standard desktop system in which file transfers have a “drag-and-drop” user model, while the underlying system model is a “file move” from one directory to another.
Our user model draws inspiration from, and hybridizes, two related fields of research: wearable/mobile computing,6 and embedded ubiquitous computing environments.7 The idea is to use wearable/mobile computers to carry referential data to embedded computing environments at specific locations. This presents three fundamental questions for users:
What do you carry?
We depart from the graspable/tangible approach8,9 in which an individual physical artifact exists for every piece of digital data you wish to carry. Because this approach does not scale well, we take an ecological approach and consider what we reasonably expect a person to carry with them (e.g., a watch, PDA, or phone). While a person would only need to carry a single physical artifact, it would be capable of holding multiple data references.
What is in place at the location you are going to?
We assume there are task-specific devices at special locations. Taking the household as an example, locations such as the kitchen afford and imply a very different set of tasks from other locations such as the family room.
What is the relationship between the things you carry and the equipment at a given location?
We assume that all stationary devices are connected via a network, as are the mobile devices, at least when in proximity to the stationary equipment. Thus, mobile devices need only carry references to data because the network makes the data pervasive. We also assume a mobile device may act as part of a specific user interface to the computational elements at a particular location, as well as a carrier of references to the data to be operated upon.
We illustrate this approach with a simple example from our experimental environment: an automotive design studio. In this example, a designer sees a physical picture of a car posted on a studio art board and would like to see the virtual 3-D model of the same car on the studio’s wall-sized display device (called a Powerwall). The designer uses a PDA equipped with a bar-code scanner to record a bar code printed on the corner of the picture of the car, thereby capturing the reference to the data associated with the sketches. By carrying the PDA to the Powerwall, a 3-D geometric model of the same car is displayed on the screen when the user presses a Send button (see figure 2). Relative to the user, the system is sentient. It senses the relationship between the data and the terminal and acts accordingly, bringing up related yet terminal-specific data that the user would expect at a terminal of this type and location. Therefore, we call our user model sentient data access.
While this example does not show many of the complexities that can arise in different situations, it does demonstrate the basic components of our user model. Formally, this user model contains three components:
- Terminals—Fixed-location devices.
- Identifiers—Either physical or virtual pointers to data (such as a URL or bar code).
- Containers—Mobile wireless devices that can carry identifiers, as well as serve as a personal portable UI to computational devices (terminals) distributed in the physical environment.
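A minimal data-model sketch of these three components (the naming and fields are our own, not taken from the paper's implementation):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Terminal:
    """Fixed-location device with known affordances (e.g. 'powerwall', 'dome')."""
    name: str
    location: str
    supported_media: List[str]          # e.g. ["3d-exterior", "image"]

@dataclass
class Identifier:
    """Pointer to a digital asset: a URL, bar code, or RF tag value."""
    key: str                            # e.g. a scanned bar-code string

@dataclass
class Container:
    """Mobile device that carries identifiers between terminals."""
    name: str
    carried: List[Identifier] = field(default_factory=list)

    def pick(self, identifier: Identifier):
        self.carried.append(identifier)

    def drop(self, terminal: Terminal):
        # Hand every carried identifier to the terminal for resolution.
        return [(terminal.name, ident.key) for ident in self.carried]

pda = Container("symbol-pda")
pda.pick(Identifier("car-1138"))
print(pda.drop(Terminal("powerwall", "studio-floor", ["3d-exterior"])))
```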
As in our automotive design studio example, the terminals are fixed-location devices designed to perform specific, often complex, tasks. These terminals may include desktop workstations, touch-sensitive plasma panels, large-display projection screens, and other more specialized devices. Each has a user interface that enables a person to interact with it directly. Typically, they also afford interaction through the UI of the portable container device. As we shall see, the complexity that would result from having to learn and interact with a number of diverse terminals can be reduced or eliminated by converging on a consistent approach to their user interfaces. Thus, due to its specialized nature, each device is less complex than the general-purpose alternative. At the same time, overall complexity is reduced if one can leverage the transfer of skills from device to device, due to the consistency of their UI design. We hope that our examples will illustrate that, with appropriate design, one can have one’s proverbial cake and eat it too.
Note that we do not need completely different terminals to perform different tasks. For example, identical terminals at different locations may be dedicated to different tasks. This is analogous to an office on one floor being dedicated to accounting, while an identical office on a different floor is used for quality assurance. Departments (i.e., function) can be identified by location and terminal type, or both.
While the terminals are used to display and interact with data, identifiers are keys to access the data. From the user perspective, identifiers include UPC symbols, RF (radio frequency) tags, and Smart Badge ID numbers that allow integration with physical artifacts, as well as URLs, which allow integration with Web assets. When working with virtual assets already in the system, the displayed representation of the asset itself can act directly as an identifier (see figure 3).
Containers, or wireless mobile devices, primarily serve as a mechanism for easily transporting data identifiers among terminals. Sample containers include PDAs, cell phones, bar-code readers, and Smart Cards (see figure 3). Some devices can be both a container and a terminal. These types of devices not only hold and transport an identifier, they can also allow some interaction with the associated data. For example, a PDA transporting an image identifier can also display and allow for machine manipulation of a version of the image itself. A container can also work in concert with a terminal, serving as an extension of the terminal’s user interface. This is particularly useful when working with terminals that have limited input functionality.
There are two fundamental challenges in creating systems of these types. The first is having the system predict, given an identifier, which representation of the data should be loaded onto the terminal. The second is providing a way for the user to choose an alternate representation when the system does not correctly predict which representation the user desires. Given these fundamental concepts, we now explore their use in an experimental environment consisting of a heterogeneous society of devices.
AUTO DESIGN STUDIO AS TRIAL ENVIRONMENT
We have been working with automotive designers for a number of years and have a fairly deep appreciation of the problems they face in their workflow within this media-rich environment. Given this background, the automotive design studio is an appropriate application domain for our trial environment.
A typical automotive design studio supports a workflow that involves a myriad of data types, including: two-dimensional concept sketches; computer-rendered images; animations and movies of cars in various environments; 3-D clay and computer models at various scales; interior textures and fabrics; and engineering data. In addition, the studio needs to facilitate data flow among a divergent set of processes—including conceptual development; interior and exterior specification; engineering designs and constraints; design review and evaluations; and, finally, manufacturing. The different tasks in this workflow are typically performed by different people, at different locations, and often using very different and specialized hardware and software. This is an ideal environment to test our conceptual framework for sentient data access using a society of devices.
To facilitate this diverse workflow, our trial environment contains various terminal types, each suited for a specific task (see figure 4).
The largest terminal is a 6-by-8-foot rear-projection screen (see figure 5a). In real auto design studios, even larger display screens, called Powerwalls (see figure 5a), are being widely installed. These large displays function as awareness servers, which ambiently display imagery of two-dimensional and 3-D content, giving designers in the studio the context of their peers’ work. Powerwalls are also well suited for the evaluation of designs of 3-D car exteriors, especially when full-scale visualizations are desired. They can be used as general-purpose screens for presentations to large audiences.10
While the large scale of the Powerwall display facilitates full-scale viewing, the flat nature of the screen does not provide the viewer with any sense of immersion. Figure 5b shows our second large terminal: the Vision Dome—a 10-by-10-foot hemispherical concave display produced by Elumens.11
The hemispherical display surface provides the viewer with a greater sense of immersion than a typical flat-screen display. When viewing designs for the interior of cars, for example, this enhanced sense of immersion provides a better idea of what it would be like to actually sit inside the car. Furthermore, since this immersion is facilitated without encumbering stereoscopic hardware, subtle human body-language cues, such as eye gaze, are not obscured. Viewers’ ability to interact with one another while using the terminal is thus uncompromised. However, easy interaction with the surface of the display itself is precluded by the size and shape of this terminal, and the fact that viewers should stand several feet away from the display to get maximum immersion. To counteract these factors, we provide an auxiliary 15-inch touch-screen display, mounted at waist height in front of the terminal, to serve as an interaction portal.
Our third terminal is one of medium scale: a high-resolution 51-inch plasma display with an overlaid transparent digitizing surface (see figure 5c). We use this terminal primarily as an asset-awareness server. Running our PortfolioBrowser software, various digital assets such as images, 3-D models, animations, and movies can be easily accessed, compared, sorted, and annotated. Furthermore, when not actively being used, the terminal goes into an ambient mode that cycles though the various assets. Much like the corkboards of the past, only more dynamic, this provides for an ambient display that casually increases awareness of the various assets related to projects being worked on in the studio.
In addition to these three medium- to large-scale terminals, we have a more specialized terminal called the Chameleon12, 13—a high-resolution touch-sensitive LCD panel tracked in 3-D space by an articulated arm (see figure 6). This terminal is a specialized viewer that makes inspection of a 3-D model intuitive by allowing a user to move around in 3-D space by physically moving the display. In effect, the display is a moveable window into the 3-D space.
Along with the specialized terminals described above, our space is populated by various status-quo PC workstations, used for engineering, design, and model-building applications.
Envisioned Usage Scenario. We envision a usage scenario that involves coordinated use of all these terminals. While they are all interconnected at the systems level, from the user’s perspective, a seamless mechanism for transporting work from one device to another is highly desirable. For example, a user may first view a car’s exterior design on the plasma display, and then move to the VisionDome to get a better understanding of the car’s interior.
Using current status-quo user interfaces to accomplish this can be cumbersome. The user would first have to determine the name of the file that is related to the picture of the car’s exterior, then determine the name and location of another file, which contains the data for this car’s interior suitable for display on the Vision Dome. Finally, on the Vision Dome, the user would have to navigate through a file browser to load this file.
The intention of our sentient access user model is to alleviate the complexity of this transaction. A much improved user interface results by using off-the-shelf mobile devices, such as PDAs with wireless connections, as containers for transporting information between terminals. In our previous example, a user could transfer a digital asset’s identifier by tapping on the image of the car’s exterior on the plasma display, then tapping the screen of the handheld PDA device that serves as a container. This pick-and-drop metaphor14 is an extension of the typical drag-and-drop action found on desktop interfaces. The user then walks over to the VisionDome with the container, and uses a similar pick-and-drop gesture from the container to the dome to load the files relevant to the given digital asset’s identifier.
The key here is that the software has to be smart enough to know that the car’s interior designs should be loaded on the VisionDome, despite having received an identifier from the car’s exterior that was being viewed on the plasma display terminal. The representation most appropriate for a given location and a given terminal’s affordances is chosen by default. The user, meanwhile, does not need to be concerned with low-level systems issues such as filenames and directory structures.
Just as we have a diversity of terminals, we also have a diverse set of containers. Some container devices may have mechanisms for dealing with different identifier technologies, such as UPC bar codes and RF tags. A PDA with a wireless network connection and a bar-code reader, for example, can be used to scan bar codes to access digital assets (see figure 7).
This identifier to the asset can then be transported to other terminals as described earlier, resulting in a “scan-and-drop” metaphor. An advantage to using bar codes is that we can also integrate physical assets into our system. For example, bar codes on 3-D clay models can be read and used as identifiers to access associated digital assets on appropriate terminals (see figure 8).
The glue that binds our diverse collection of terminals, containers, and identifiers is a software infrastructure we call PortfolioBrowser (see figure 9).
The PortfolioBrowser currently deals with the traditional scenario in which a user has come to a terminal without an identifier in hand and needs to use the terminal as an asset browser. We envisage extending the PortfolioBrowser’s functionality to address the two challenges mentioned earlier: the need to deal with representation of the data to load, given an identifier, and addressing the case when the system does not correctly predict which representation the user desires.
As figure 10a illustrates, the default UI for our PortfolioBrowser organizes our assets by tabs. This is similar to an image-based file browser. Our intention is to extend this to organize and prioritize the data based on several criteria, including suitability of the data for a given terminal type, recent sessions, and the specific user.
The user can then select any data asset via this user interface for display on the terminal (see figure 10b, c). In contrast, we envision that when a user approaches a terminal with a container and sends an identifier to a terminal, the PortfolioBrowser will respond by automatically choosing the most appropriate representation and displaying the associated digital asset. If there are several choices for the appropriate representation, these choices would be presented to the user by the PortfolioBrowser.
We maintain consistency and simplicity in the interaction by providing a common interface critical to the success of our sentient access user model. The inherent advantages of employing specific terminals for specific tasks would be defeated if moving from one terminal to another was complicated or time-consuming, and required log-in actions, along with learning a multitude of data access interfaces. Thus, the design of our PortfolioBrowser embraces our fundamental goal of minimizing transaction costs at all times, throughout the entire system.
We have focused on a user model of container-terminal interaction. Another scenario we are interested in is terminal-to-terminal communication in which the goal is to use the features of one terminal to enhance the capabilities of another. For example, a user could employ the Chameleon terminal to navigate around a car model in 3-D space, while others view the results of the navigation on a Powerwall terminal (see figure 11b).
Another important aspect to our user model is the management of connections among containers and terminals, based on proximity to one another. While terminals and containers are always implicitly connected via a wireless network, interactions between a specific terminal and container require that an explicit relationship be established. In the simplest case, when a container comes into physical proximity with a terminal, an explicit connection is automatically established without user intervention. In a more complex case, a container is in close proximity to multiple terminals. Ultimately, proximity alone may not be sufficient to determine appropriate connections. In this case, the user will need to be presented with a list of choices and confirm a connection. The important thing is to avoid having the user perform a series of initiation and setup tasks to establish a connection between a container and a terminal.
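One plausible sketch of this proximity rule (the distance threshold, coordinates, and tie-breaking behaviour are our assumptions, not the paper's):

```python
def connect(container_position, terminals, max_distance=3.0):
    """Return the terminal to connect to, or a list of candidates to confirm.

    `terminals` maps terminal name -> (x, y) position in the studio.
    """
    def distance(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    nearby = [name for name, pos in terminals.items()
              if distance(container_position, pos) <= max_distance]
    if len(nearby) == 1:
        return nearby[0]          # unambiguous: connect without user intervention
    return nearby                 # zero or several: ask the user to choose/confirm

terminals = {"powerwall": (0.0, 0.0), "vision-dome": (10.0, 2.0), "plasma": (1.5, 1.0)}
print(connect((0.5, 0.5), terminals))   # powerwall and plasma are both close -> confirm
```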
System Architecture. Given our envisioned usage and system scenario, several key underlying mechanisms are required. At the center of our system will be a relational database. The main objects in this database will be digital assets associated with an automotive design process—including sketches, 3-D models, photo-realistic renders, engineering data, market data, and animations. A content management system15 will present these assets grouped into projects. For example, a project encompasses a particular model of a car. In addition to this, the database needs to hold information on opening an object with a given application for a particular terminal. Associations between data type and application are normally handled by the operating system. Unfortunately, current operating system mappings of data types to application do not factor in the terminal properties. Therefore, an important component of the database will be the mapping between data type and the target application that may depend on the terminal type. Having this terminal information will allow us to retrieve the correct assets for a particular terminal using database queries on given identifiers, as illustrated in figure 12.
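The following sketch shows the kind of lookup figure 12 implies, using an in-memory table in place of the relational database; the identifier, asset paths, application names, and terminal types are illustrative only.

```python
# (identifier, terminal_type) -> asset path and application to launch
ASSET_MAP = {
    ("car-1138", "powerwall"):   ("models/car-1138-exterior.wire", "3d-viewer"),
    ("car-1138", "vision-dome"): ("models/car-1138-interior.wire", "dome-viewer"),
    ("car-1138", "plasma"):      ("images/car-1138-sketches/",     "portfolio-browser"),
}

def resolve(identifier, terminal_type):
    """Pick the representation appropriate to this terminal, if one is known."""
    hit = ASSET_MAP.get((identifier, terminal_type))
    if hit is None:
        return None   # fall back to letting the user browse in the PortfolioBrowser
    asset, application = hit
    return f"open {asset} with {application}"

print(resolve("car-1138", "vision-dome"))
# -> "open models/car-1138-interior.wire with dome-viewer"
```

In the real system this mapping would be a database query keyed on the identifier and the terminal's properties, with the heuristic layer described below choosing among multiple matches.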
The complexity of matching a given identifier with a particular terminal at a certain location, while accounting for a number of contextual states, requires adaptive, programmable heuristics to deliver the appropriate asset. To compound the complexity, the time of day or the presence of other people may influence the choice of asset presented. Initially, a set of preprogrammed rules will offer a default outcome. As usage knowledge is added to the system, a number of approaches may be blended to form an effective heuristic strategy.
Programming in a cooperative ubiquitous environment can be conceptualized as running an object-oriented simulator in which each computational element is abstracted into an object. Objects dynamically enter and leave the environment. A spatial layout consisting of the objects can be constructed to match the location-sensitive nature of the identifier-container-terminal user model. In this abstraction, all of the computational elements can be programmed holistically instead of individually. Furthermore, we speculate that diagnostic tools such as spatially oriented debuggers can be defined to facilitate development of sentient data access for a rich society of devices.
CURRENT STATE OF THE INFRASTRUCTURE
Within our current trial environment we have set up the various terminals and physical stations described earlier (plasma display, Powerwall, Chameleon, VisionDome, traditional physical art board, and physical 3-D model). All of the computational terminals are functional and on a single network. A Symbol PDA acts as our container device, which is currently capable of scanning bar codes from physical artifacts, communicating to our network via a wireless connection, and serving as a portable user interface for terminals using the Pebbles software from Carnegie-Mellon University. The PortfolioBrowser software works on all of the terminals, and the architecture currently supports a shared database. However, more development is needed to fully support identifier transactions. We are continuing to develop the system infrastructure to fully support the sentient data access user model, including complete database support and customized PDA software (to support the pick-and-drop and scan-and-drop actions).
ENHANCING THE COOPERATIVE ENVIRONMENT
To some degree, data access methods have been rooted in the metaphor of accessing files in a hierarchical filesystem. Technological developments such as wireless networks, mobile computing devices, and specialized display terminals can be used to present a different, and possibly more effective, user model for data access in a modern cooperative ubiquitous computing environment. We have proposed a user model called “sentient data access,” which utilizes access context, location, and user information.
While we have used the automotive design studio as an application domain to motivate our discussion, our sentient data access model is clearly not limited to this domain. For example, other environments with a similarly rich set of tasks, assets, and media—including hospitals, biotech labs, special-effects studios, and industrial design companies—could benefit from a similar model. As the complexity in data access increases in these environments, we believe that the benefits of this seamless, intelligent user model will be all the more critical. Q
The authors would like to thank Symbol Technologies, Elumens Corporation, Fakespace Labs, the Pebbles Project at Carnegie-Mellon University, the PortfolioBrowser product team at Alias, Alex Babkin, and Scott Guy for their assistance with this research project.
1. Weiser, M. Some computer science issues in ubiquitous computing, Communications of the ACM 36, 7 (1991), 75–84.
2. Weiser, M. The computer for the 21st century. Scientific American 265, 3 (Dec. 11, 1991), 94–104.
3. The Anoto Group: see http://www.anoto.com/.
4. Toynbee, A. Architecture, Silence, and Light: On the Future of Art. (1970) Viking, NY, 20–35.
5. Kodak EasyShare LS633: see http://www.kodak.com/US/en/corp/display/LS633.jhtml.
6. Mann, S. Wearable intelligent signal processing. Proceedings of the IEEE 86, 11 (Nov. 1998), 2123–2151; see also http://www.eecg.utoronto.ca/~mann/.
7. Weiser, M. The computer for the 21st century. Scientific American 265, 3 (Dec. 11, 1991), 94–104.
8. Fitzmaurice, G.W. Graspable user interface. Ph.D. dissertation, University of Toronto, 1996.
9. Ishii, H. and Ullmer, B. (1997) Tangible bits: Towards seamless interfaces between people, bits and atoms. Proceedings of ACM CHI (1997), 234–241.
10. Balakrishnan, R., Buxton, W., Fitzmaurice, G., and Kurtenbach, G. Large displays in automotive design. IEEE Computer Graphics & Applications 20, 4 (July 2000), 68–75.
11. Elumens Corporation: see http://www.elumens.com/.
12. Fitzmaurice, G.W. Situated information spaces and spatially-aware palmtop computing. Communications of the ACM 36, 7 (1993), 39–49.
13. Buxton, B., Fitzmaurice, G.W., Khan, A., Kurtenbach, G., and Tsang, M. Boom Chameleon: Simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display. ACM CHI Letters 4, 2 (2002) 111–120.
14. Rekimoto, J. Pick-and-drop: A direct manipulation technique for multiple computer environments. Proceedings of ACM UIST (1997), 31–39.
15. Addey, D., Ellis, J., Suh, P., and Thiemecke, D. Content Management Systems, Glasshaus Publishers, Birmingham, UK, 2002.
GEORGE W. FITZMAURICE, Ph.D. ([email protected]) is a senior research scientist in the Interactive Graphics Research Group at Alias and an adjunct professor of computer science at the University of Toronto.
AZAM KHAN ([email protected]) is a Human-Computer Interaction (HCI) researcher at Alias and is currently pursuing his M.Sc. at the University of Toronto.
WILLIAM BUXTON (http://www.billbuxton.com) is principal of Buxton Design, a Toronto-based boutique design and consulting firm, and an associate professor of computer science at the University of Toronto.
GORDON KURTENBACH, Ph.D. ([email protected]) is director of research at Alias and an associate professor of computer science at the University of Toronto.
RAVIN BALAKRISHNAN (http://www.dgp.toronto.edu/~ravin) is an adjunct professor of computer science at the University of Toronto.
Sentient Computing Project
Hopper describes the Sentient Computing project at AT&T Laboratories Cambridge. This project attempts to track the physical environment and the user’s activity, and then react appropriately, depending on the user’s location in the environment. For example, if a user moves into a new room, their terminal log-in session follows them to a local terminal in that room. [Hopper, A. The Royal Society Clifford Paterson Lecture: Sentient Computing. AT&T Laboratories Cambridge Technical Report (1999); see also http://www.uk.research.att.com/abstracts.html.]
Removable Media Metaphor
Ullmer and colleagues propose a “removable media” metaphor for dealing with the transport of data among devices. Their basic idea is to have physical objects, known as mediaBlocks, associated with pieces of data. These mediaBlocks, which need not have any computational power, can then be moved from one computational device to another for processing of the data. For example, to print a document, the document file can be carried on a mediaBlock from a desktop computer to a printer. The act of docking the mediaBlock in the printer initiates the print job. [Ullmer, B., Glas, D., and Ishii H. mediaBlocks: Physical containers, transports, and controls for online media, Proceedings of the ACM SIGGRAPH (1998), 379–386.]
A similar mechanism for transporting data is described by Streitz and colleagues within their i-Land project, which interconnects computationally enabled furniture with large displays. Their mechanism allows for physical objects, called passengers, to act as a temporary container for data transport between these computationally enabled stations. [Streitz, N.A., Geißler, J., Holmer, T., Konomi, S., Müller-Tomfelde, C., Reischl, W., Rexroth, P., Seitz, P., Steinmetz, R. i-LAND: An interactive landscape for creativity and innovation, Proceedings of the ACM CHI (1999), 120–127.]
The pick-and-drop metaphor proposed by Rekimoto allows for transfer of data from device to device in a technique that is an extension of the typical drag-and-drop action found on desktop interfaces. The idea is for users to identify (pick) an item on one device, move the input device to a second device, and insert (drop) the item onto that device, causing the data to be transferred. [Rekimoto, J. Pick-and-drop: A direct manipulation technique for multiple computer environments. Proceedings of ACM UIST (1997), 31–39.]
System with Goal of Intuitive Manipulations
Want and colleagues describe a system whose goal is intuitive manipulations based on the coupling of physical objects to representative virtual objects or actions. They do this by augmenting everyday objects with sensor tags. Actions take place when augmented objects are tapped on computational objects with sensor readers. [Want, R., Fishkin, K. P., Gujar, A., and Harrison, B. Bridging physical and virtual worlds with electronic tags. Proceedings of ACM CHI (1999), 370–377.]
The ubiquitous computing project at Xerox PARC utilized small mobile devices, called ParcTabs, which were designed with four context-specific behaviors in mind: (1) stand-alone unit away from the network, (2) in the building as a networked appliance, (3) in a room with an electronic whiteboard and used as a telepointer, and (4) next to the electronic whiteboard used as a metacontroller in the left hand, while a stylus is used in the right hand. [Want, R., Schilit, B. N., Adams, N. I., Gold, R., Petersen, K., Goldberg, D., Ellis, J. R., and Weiser, M. An overview of the ParcTab ubiquitous computing experiment. IEEE Personal Communications 2, 6 (1995), 28–43.]
In many ways, our work is similar to aspects of all of these previous systems. However, we propose a formal user model, and from an implementation perspective, we use networked computational devices as mobile containers rather than static physical objects to transport identifiers. Furthermore, while the identifiers in the previous systems serve as single, simple links to particular objects or actions, identifiers in our system are more complex because they serve as a pointer to a set of possible actions. From this set, the system intelligently selects the most appropriate action based on context. This context depends on several factors, including the type and location of each terminal, thus leveraging the configuration of our society of devices to promote seamless data access.
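A rough sketch of this idea in Python (an illustration only, not the actual Alias implementation): an identifier maps to a set of candidate actions, and the capability of the terminal that scans it determines which action is chosen.

    # Hypothetical sketch: an identifier points to several candidate actions;
    # the system picks one based on the context of the terminal that reads it.
    ACTIONS = {
        "car-battery-sketch-042": [
            {"action": "open_2d_sketch", "needs": "2d_display"},
            {"action": "open_3d_model",  "needs": "3d_display"},
            {"action": "list_metadata",  "needs": "handheld"},
        ],
    }

    def resolve(identifier, terminal):
        """Return the most appropriate action for this identifier at this terminal."""
        for entry in ACTIONS.get(identifier, []):
            if entry["needs"] == terminal["capability"]:
                return entry["action"]
        return "list_metadata"  # fall back to a generic action

    # A plasma display in the design studio versus a PDA in the hallway:
    print(resolve("car-battery-sketch-042", {"capability": "2d_display"}))
    print(resolve("car-battery-sketch-042", {"capability": "handheld"}))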
Originally published in Queue vol. 1, no. 8—
see this item in the ACM Digital Library
|
<urn:uuid:3a1f709f-a7d6-4edf-9a27-770f5d8de12c>
|
CC-MAIN-2013-20
|
http://queue.acm.org/detail.cfm?id=966721
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706635063/warc/CC-MAIN-20130516121715-00033-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.919026 | 7,520 | 2.515625 | 3 |
This innovative text presents computer programming as a unified discipline in a way that is both practical and scientifically sound. The book focuses on techniques of lasting value and explains them precisely in terms of a simple abstract machine. The book presents all major programming paradigms in a uniform framework that shows their deep relationships and how and where to use them together.
After an introduction to programming concepts, the book presents both well-known and lesser-known computation models ("programming paradigms"). Each model has its own set of techniques and each is included on the basis of its usefulness in practice. The general models include declarative programming, declarative concurrency, message-passing concurrency, explicit state, object-oriented programming, shared-state concurrency, and relational programming. Specialized models include graphical user interface programming, distributed programming, and constraint programming. Each model is based on its kernel language—a simple core language that consists of a small number of programmer-significant elements. The kernel languages are introduced progressively, adding concepts one by one, thus showing the deep relationships between different models. The kernel languages are defined precisely in terms of a simple abstract machine. Because a wide variety of languages and programming paradigms can be modeled by a small set of closely related kernel languages, this approach allows programmer and student to grasp the underlying unity of programming. The book has many program fragments and exercises, all of which can be run on the Mozart Programming System, an Open Source software package that features an interactive incremental development environment.
About the Authors
Peter Van Roy is Professor in the Department of Computing Science and Engineering at Université catholique de Louvain, at Louvain-la-Neuve, Belgium.
Seif Haridi is Professor of Computer Systems in the Department of Microelectronics and Information Technology at the Royal Institute of Technology, Sweden, and Chief Scientific Advisor of the Swedish Institute of Computer Science.
"In almost 20 years since Abelson and Sussman revolutionized the teaching of computer science with their Structure and Interpretation of Computer Programs, this is the first book I've seen that focuses on big ideas and multiple paradigms, as SICP does, but chooses a very different core model (declarative programming). I wouldn't have made all the choices Van Roy and Haridi have made, but I learned a lot from reading this book, and I hope it gets a wide audience."
Brian Harvey, Lecturer, Computer Science Division, University of California, Berkeley
"This book follows in the fine tradition of Abelson/Sussman and Kamin's book on interpreters, but goes well beyond them, covering functional and Smalltalk-like languages as well as more advanced concepts in concurrent programming, distributed programming, and some of the finer points of C++ and Java."
Peter Norvig, Google Inc.
|
<urn:uuid:95caec51-1ebf-41e3-a4f5-a4c3664ac2c5>
|
CC-MAIN-2013-20
|
http://mitpress.mit.edu/books/concepts-techniques-and-models-computer-programming
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.930075 | 581 | 2.8125 | 3 |
Data Structures and Algorithms in Java 3E Wiley International Edition
* Entirely new chapter on recursion
* Additional exercises on the analysis of simple algorithms
* New case study on parenthesis matching and HTML validation
Michael Goodrich received his Ph.D. in computer science from Purdue University in 1987. He is currently a professor in the Department of Computer Science at University of California, Irvine. Previously, he was a professor at Johns Hopkins University. He is an editor for the International Journal of Computational Geometry & Applications and Journal of Graph Algorithms and Applications.
Roberto Tamassia received his Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 1988. He is currently a professor in the Department of Computer Science at Brown University. He is editor-in-chief for the Journal of Graph Algorithms and Applications and an editor for Computational Geometry: Theory and Applications. He previously served on the editorial board of IEEE Transactions on Computers.
In addition to their research accomplishments, the authors also have extensive experience in the classroom. For example, Dr. Goodrich has taught data structures and algorithms courses, including Data Structures as a freshman-sophomore level course and Introduction to Algorithms as an upper level course. He has earned several teaching awards in this capacity. His teaching style is to involve the students in lively interactive classroom sessions that bring out the intuition and insights behind data structuring and algorithmic techniques. Dr. Tamassia has taught Data Structures and Algorithms as an introductory freshman-level course since 1988. One thing that has set his teaching style apart is his effective use of interactive hypermedia presentations integrated with the Web.
Table of Contents
2. Object-Oriented Design.
3. Analysis Tools.
4. Stacks, Queues, and Recursion.
5. Vectors, Lists, and Sequences.
7. Priority Queues.
8. Maps and Dictionaries.
9. Search Trees.
10. Sorting, Sets and Selection.
11. Text Processing.
Appendix: Useful Mathematical Facts.
|
<urn:uuid:023e4426-444a-4352-9e0f-3c838b57cd59>
|
CC-MAIN-2013-20
|
http://www.pcworld.idg.com.au/books/product/data-structures-and-algorithms-in-java-3e-wiley-international-edition/0471644528/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704133142/warc/CC-MAIN-20130516113533-00060-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.841957 | 637 | 2.53125 | 3 |
Before we look at the C language, let us look at the overall organization of computing systems. Figure 1.1 shows a block diagram of a typical computer system. Notice it is divided into two major sections: hardware and software.
|
<urn:uuid:ba3855bc-c70e-416e-b2c5-a1341d14335b>
|
CC-MAIN-2013-20
|
http://www-ee.eng.hawaii.edu/Courses/EE150/Book/chap1/section2.1.1.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383263/warc/CC-MAIN-20130516092623-00029-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.875239 | 86 | 3.375 | 3 |
The 1971 doctoral thesis of Edwin Roger Banks describes a novel way to build an ordinary computer. Instead of creating the computer out of wires and transistors and diodes, Banks creates his machine out of information and rules. This commentary will examine Banks' reasoning and methods, and illustrate the structures he devised.
Part 1 lays out the background in some detail by describing first the system of logic used by computers, and then the common engineering methods of implementing that logic.
Part 2 describes the computing concept known as cellular automata, which is the particular medium that Banks is working with.
Part 3 describes and illustrates the actual methods employed by Banks to build a computer by information transformation according to simple rules.
Part 4 comments briefly on the significance of the work.
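To make the idea of computing with information and rules concrete, here is a minimal two-state, two-dimensional cellular automaton update loop in Python; the majority-style rule used here is purely illustrative and is not Banks' actual rule set.

    def step(grid):
        """Apply one synchronous update to a 2-D grid of 0/1 cells."""
        rows, cols = len(grid), len(grid[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # Count the four orthogonal neighbours (edges wrap around).
                n = (grid[(r - 1) % rows][c] + grid[(r + 1) % rows][c] +
                     grid[r][(c - 1) % cols] + grid[r][(c + 1) % cols])
                # Illustrative rule: become 1 with 3+ live neighbours,
                # keep the current state with exactly 2, otherwise become 0.
                new[r][c] = 1 if n >= 3 else (grid[r][c] if n == 2 else 0)
        return new

    grid = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
    for _ in range(3):
        grid = step(grid)
    print(grid)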
|
<urn:uuid:6c7fcfee-713f-4684-a284-2986431bc5bc>
|
CC-MAIN-2013-20
|
http://www.bottomlayer.com/bottom/banks/banks_commentary.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704075359/warc/CC-MAIN-20130516113435-00030-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.905292 | 177 | 2.78125 | 3 |
LAS CRUCES - A long-time partnership between Jeanine Cook, New Mexico State University electrical and computer engineering associate professor, and Sandia National Laboratories has resulted in a project that will push the limits of computing power to help scientists more quickly and efficiently solve complex problems that are important to areas such as national security and medical research.
The four-year National Science Foundation project aims to develop an entirely new computer system focused on solving complex, graph-based problems, reaching into the next frontier of supercomputing: exascale processing, roughly 1,000 times the power of today's fastest petascale machines, which operate at about one quadrillion operations per second.
"A simple example of a graph-based computer problem is Facebook," explains Cook, who directs the Advanced Computer Architecture Performance and Simulation Laboratory at NMSU. "When you make a profile and begin adding friends, Facebook goes out and identifies other friends that you might add to your network. It's all based on a giant graph of connections showing relationships among people."
Computers are increasingly being used to solve graph-based, data-intensive problems in application areas such as cybersecurity, medical informatics and social networks.
But computers aren't designed specifically to solve these types of problems.
"Our system will be created specifically for solving these types of very complex problems. Intuitively, I believe
Cook specializes in micro-architecture simulation, performance modeling and analysis, workload characterization and power optimization. "I create software models of computer processor components and their behavior and use these models to predict and analyze performance of future designs," Cook said.
It was her work while on sabbatical with Sandia's Algorithms and Architectures group in 2009 that led to the $2.7 million NSF collaborative project. Cook developed processor and simulation tools and statistical performance models that identified performance bottlenecks in Sandia applications. Her work with Sandia also led to her selection by President George W. Bush as one of the recipients of the prestigious Presidential Early Career Award for Scientists and Engineers.
While there, Cook worked with Richard Murphy, a leading expert in the area of computer systems architecture. Murphy was interested in developing a system that would solve graph-based problems faster while consuming less energy. Together they assembled a team of researchers from Sandia, NMSU, Indiana University and Louisiana State University. Cook, Murphy and two NMSU electrical engineering Ph.D. students, Patricia Grubel and Samer Haddad, will collaborate on hardware development. Colleagues Andrew Lumsdaine, at Indiana University, and Thomas Sterling, at Louisiana State University, will develop the software.
A main goal of the design is to incorporate programmable hardware, known as field-programmable gate arrays (FPGAs), to provide customized circuitry for executing graph algorithm operations.
Another goal is to make it available for public license. The system will be described in VHDL, a hardware description language that is an international standard, and the description will be made freely available to a whole gamut of users who perform graph-based computing - government laboratories, commercial enterprises and academia.
"Anyone, anywhere could buy off-the-shelf reprogrammable hardware and download our architecture and software and replicate the system," Cook said.
The biggest goal, however, is improved performance and energy efficiency.
"This system will be faster because the processor will be custom designed to execute these specific applications, so performance will be optimized. The current systems used for running graph-based problems are relatively slow," explained Cook.
"They will also use less energy because FPGAs are very energy-efficient," she added. "This is a significant consideration because it reduces the cost of running these types of large applications. And everyone is having trouble paying their electric bills these days."
"Eye on Research" is provided by New Mexico State University. This week's feature was written by Linda Fresques of the College of Engineering.
|
<urn:uuid:b84931ee-5acf-46e6-b81e-39a51d49794b>
|
CC-MAIN-2013-20
|
http://www.lcsun-news.com/las_cruces-news/ci_18591809
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703293367/warc/CC-MAIN-20130516112133-00096-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.952325 | 806 | 3.1875 | 3 |
Science Fair Project Encyclopedia
Systems design is the process or art of defining the hardware and software architecture, components, modules, interfaces, and data for a computer system to satisfy specified requirements. One could see it as the application of systems theory to computing. Some overlap with the discipline of systems analysis appears inevitable.
Prior to the standardisation of hardware and software in the 1990s, which resulted in the ability to build modular systems, systems design had a more crucial and respected role in the data processing industry. The increasing importance of software running on generic platforms has enhanced the discipline of software engineering at systems design's expense.
Design tools such as UML now address some of the issues of computer systems design and interfacing.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
|
<urn:uuid:850f0a46-1452-4032-bf8e-eabfb43562d3>
|
CC-MAIN-2013-20
|
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Systems_design
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706082529/warc/CC-MAIN-20130516120802-00025-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.927148 | 175 | 3.390625 | 3 |
Software is used for a variety of tasks, from word processing to gaming to sending rockets to the moon. For each type of scenario there is a different type of software: a user cannot, for example, use a word processing package to create a graphical animation, or a graphical animation package to write the program that runs a bank's automated teller machine.
Advanced software is software tailor-made for users with unique requirements, and many software companies are dedicated to developing it. These companies follow several well-established steps. First, analysts visit the client and talk to the employees and the management to understand each party's requirements. The management will, of course, be most interested in the financial side and in whether implementing a new system is worth the company's while; the employees are questioned not about financial issues but about how they will actually use the system. Once these factors are established, the analysts examine the environment in which the system is to be implemented, the hardware resources available, and whether new hardware devices are required for the system to be used effectively.
With this information in hand, the possible approaches to building the system are compared and the most effective one is chosen. Once the system is developed it is implemented and the users are given the required training. Then comes maintenance, or after-sales support, which many software companies consider the most important part of implementing a software system: the software should be flexible enough to expand and bend the way the company expands and bends.
There are also systems that are used to make decisions for the organization. When these are implemented, the analysts visit the organization and observe the behavior of the management, or of the specific person who makes the decisions. Such systems take time to implement because the analysts spend a significant amount of time studying how different people behave, how they coordinate with each other, and how that affects the decisions they make. Once enough information has been gathered, the system is developed and implemented. The analysts then continue observing the system, checking that it coordinates with the other decision makers in the company, and after the full launch they monitor it closely so that it can be expanded according to the requirements of the organization.
These types of software are not made for mass usage by a large number of users. There are advanced packages created for wider audiences, such as graphical animation and program development software, but not everyone can use them: proper training and a good knowledge of the specific field are required. Once the user learns and understands a package of this kind, however, the amount of work that can be done with it is much greater than with a normal software package designed to be used by a majority of people with minimal training.
Summary: This article explains how advanced software packages are developed to meet individual corporate requirements and how useful and productive such advanced software systems are in today's busy world.
|
<urn:uuid:93384f62-2ae6-4124-bfca-2ba43aede9e3>
|
CC-MAIN-2013-20
|
http://www.bizymoms.com/computers-and-technology/advanced-software.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700074077/warc/CC-MAIN-20130516102754-00005-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.947912 | 644 | 3.375 | 3 |
Science 4.0 is a movement resulting from the formation and use of the World Wide Web. Scientists are using the World Wide Web to collaborate and do science in new ways with significant implications. The term Science 4.0 is a special case of the World Wide Web and enhanced by an understanding of Web 2.0. Therefore it is helpful to understand what the Web 2.0 is in order to better understand Science 4.0. The term Web 2.0 was first used at the O’Reilly Media conference and implies a development in the use of the Web. This has been covered in an earlier article in the series (see Appendix). The Web 2.0 had several characteristics according to this definition including software above the level of a single device. In the definition, this aspect of the description is brief. iTunes is given as an example of software that operates across mobile phones, desktop computers and business servers in order to deliver an effective service.
Applying these principles to Science, a simple example is BOINC, which provides analytical software for use on desktop PCs. The end-user downloads the software and installs this onto their desktop computer. Data is provided by the server, downloaded and analysed by the software on the desktop. This has also been covered under harnessing collective intelligence (see Appendix). There are two immediate principles that arise from application of this aspect of the Web 2.0 definition to science:
1. For tasks that require large computational resources, software operating over multiple devices increases the computing resources available and would be expected to reduce the time taken to solve computational problems.
2. If software operates on mobile devices and desktops or servers this can transform workflow for scientists. Mobile devices typically have reduced memory and processing resources compared to servers and desktops. Functionality would need to be tailored to the available resources. Examples of work which would be facilitated by this approach include
a. Data collection in the field and transfer to a database as well as remote initiation of analysis on the data on the server/desktop.
b. Analysis of data on the go. The mobile applications would enable the scientist to instruct the server or remote desktop to undertake analytical tasks and return the results in a format which can be viewed on the mobile application. This would enable flexibility in workflow and potentially improve efficiency.
This approach opens up many possibilities.
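A rough sketch of the volunteer-computing pattern described above, with the server simulated in-process so the example runs anywhere; a real deployment such as BOINC would fetch work units over the network instead.

    # Illustrative only: the "server" here is an in-memory queue of work units.
    import statistics

    work_queue = [
        {"id": 1, "samples": [2.1, 2.3, 1.9, 2.2]},
        {"id": 2, "samples": [5.0, 4.8, 5.2]},
    ]
    results = []

    def fetch_work():
        return work_queue.pop(0) if work_queue else None

    def analyse(unit):
        # Stands in for whatever analysis the downloaded software performs.
        return {"id": unit["id"], "mean": statistics.mean(unit["samples"])}

    def report(result):
        results.append(result)

    while True:
        unit = fetch_work()
        if unit is None:
            break
        report(analyse(unit))

    print(results)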
Appendix – Science 4.0 Articles on the TAWOP Site
|
<urn:uuid:e4ba6e9c-6487-419b-80e6-8f84640f96b1>
|
CC-MAIN-2013-20
|
http://theamazingworldofpsychiatry.wordpress.com/2012/06/02/doing-science-4-0-deconstructing-web-2-0-software-above-the-level-of-a-single-device/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708690512/warc/CC-MAIN-20130516125130-00093-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.93152 | 691 | 3.453125 | 3 |
Onteca are currently undertaking a major research project with the support of the Technology Strategy Board. It is a bit early to announce our results, but included here are the project abstract and a teaser image of the early results we are getting.
Autonomy Orientated Computing (AOC) presents a generalised model which unifies analysis, modelling and simulation of the characteristics of complex systems. AOC uses autonomous entities or agents to simulate and solve complicated problems. Often AOC based solutions are designed to minimise human involvement. For instance autonomic computing systems are computer networks that can be managed without the need for human intervention. AOC systems manage themselves according to an administrator’s goals. The analogy of an ant colony is often used to explain AOC, the systems within the colony combine to create the whole.
We wish to apply Autonomy Orientated Computing (AOC) techniques to the creation of computer game content. Computer games are often built by teams of over 200 people, and a significant part of the development effort is spent on the creation of “levels” or “challenges”. Throughout our current production and research work Onteca have used a variety of generative algorithms to produce game content, both in terms of form and function.
In this project we will examine ways in which AOC can be used to lower the cost of game production.
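As a toy illustration of the general idea (not Onteca's actual technique), a few autonomous "digger" agents can carve a playable cave level out of solid rock, with the overall layout emerging from their combined local behaviour:

    import random

    def carve_level(width=20, height=10, diggers=3, steps=60, seed=1):
        """Each digger wanders randomly, turning rock ('#') into floor ('.')."""
        random.seed(seed)
        grid = [["#"] * width for _ in range(height)]
        agents = [(height // 2, width // 2) for _ in range(diggers)]
        for _ in range(steps):
            for i, (r, c) in enumerate(agents):
                grid[r][c] = "."
                dr, dc = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
                agents[i] = (min(max(r + dr, 0), height - 1),
                             min(max(c + dc, 0), width - 1))
        return "\n".join("".join(row) for row in grid)

    print(carve_level())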
|
<urn:uuid:e8d95fd0-9a13-4ab4-98d0-0359d0bc4871>
|
CC-MAIN-2013-20
|
http://onteca.com/research/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00015-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.941269 | 278 | 2.515625 | 3 |
A data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, B-trees are particularly well-suited for the implementation of databases, while compiler implementations usually use hash tables to look up identifiers.
Data structures provide a means to manage large amounts of data efficiently, such as large databases and internet indexing services. Usually, efficient data structures are a key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory. (Adapted from Wikipedia, last updated 21 May.)
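For instance, looking up identifiers, the job a compiler gives to a hash table, maps directly onto Python's dict type; the sketch below contrasts it with scanning a plain list:

    # A tiny "symbol table": identifier -> information about it.
    symbols = {
        "count": {"type": "int",   "scope": "local"},
        "name":  {"type": "str",   "scope": "global"},
        "total": {"type": "float", "scope": "local"},
    }

    # Hash-table lookup: roughly constant time regardless of table size.
    print(symbols["total"])

    # The same lookup over an unsorted list must scan entries one by one.
    as_list = [("count", "int"), ("name", "str"), ("total", "float")]
    print(next(info for ident, info in as_list if ident == "total"))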
UC Berkeley video course on data structures
Data structures course with animations
Data structure tutorials with animations
An Examination of Data Structures from .NET perspective
Schaffer, C. ''Data Structures and Algorithm Analysis''
Persistent data structure
Linked data structure
Plain old data structure
Concurrent data structure
|
<urn:uuid:1f84d7e6-a625-4095-a36b-44b4c2f1dab2>
|
CC-MAIN-2013-20
|
http://www.empedia.com/topic/data-structure?id=e8e4e89e-3669-474d-9b53-0eeff0025f20
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706637439/warc/CC-MAIN-20130516121717-00028-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.732908 | 425 | 3.34375 | 3 |
SYSTEM PROGRAMMING A.Y. 2008/2009
Hours for week: 5 - Weeks: 14
Hours of lectures: 70 - Hours of laboratory: 0 - Hours of others activities: 0
The first objective of the course is to introduce the use of the C programming language and of Unix system libraries for exploiting the main operating system functionalities that may be of interest for the programmer.
A second goal is to improve the students’ programming skill by showing the inner behavior of a computing system during the execution of their programs. In order to do so, the main computing architecture concepts will be illustrated, discussing how they can influence the correctness, the performance and the utility of application software.
The main objectives of the course are:
• to give students a basic knowledge of the C language and of its use in the Unix programming environment;
• to provide detailed notions on the architecture and inner behavior of computing systems that may be of direct interest for programmers.
Lectures, supported by slides.
Slides, plus various supplementary material indexed at the course web page.
B. Kernighan, D. Ritchie, "Il linguaggio C - Seconda Edizione", Jackson Libri.
R. E. Bryant e D. R. O'Hallaron, "Computer Systems: A Programmer's Perspective", Prentice-Hall.
This course is suitable for second-year students of Computer Science Engineering.
|
<urn:uuid:49a3b765-cbfb-4c88-97bd-34582451d360>
|
CC-MAIN-2013-20
|
http://www.ing.unisannio.it/ects/scheda_en.php?1256
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708783242/warc/CC-MAIN-20130516125303-00017-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.835377 | 295 | 2.765625 | 3 |
Computer system (usually an application) designed to simulate the problem-solving behavior of a human who is an expert in a particular domain or discipline.
- Artificial Intelligence and Expert Systems - syllabus to a course taught by Ruth A. Palmquist at the University of Texas in 1996
- Navigator Interface - designed by David Stern at Yale University to help users determine - and in many cases transparently link to - the most appropriate source for their information need(s) in the sciences
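A minimal sketch of the underlying idea (not a description of either system linked above): encode an expert's knowledge as if-then rules and apply them repeatedly to a set of known facts until nothing new can be concluded.

    # Each rule: if all conditions are known facts, conclude something new.
    rules = [
        ({"engine_cranks", "no_spark"}, "ignition_fault"),
        ({"ignition_fault", "old_plugs"}, "replace_spark_plugs"),
    ]

    def infer(facts):
        """Simple forward chaining over the rule base."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"engine_cranks", "no_spark", "old_plugs"}))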
About this page.
Page last updated: 2006-07-16
|
<urn:uuid:0d61d56c-4330-4b00-b9b3-c905d39dd62f>
|
CC-MAIN-2013-20
|
http://www.ils.unc.edu/callee/ermlinks/exp_sys.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699675907/warc/CC-MAIN-20130516102115-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.877981 | 113 | 2.53125 | 3 |
Letter Design by Yeohyun Ahn and layout design by John Page Corrigan
Processing is a free programming language for the electronic arts and visual design community created by Ben Fry and Casey Reas.
For more information: http://www.processing.org
Processing.org: Programming for Artists and Designers
by Casey Reas and Benjamin Fry
Presented by Yeohyun Ahn and Gregory May at School of Thought III in 2007 at Art Center College of Design
Students have become accustomed to solving design problems through complex commercial software packages that will evolve rapidly and possibly disappear in the near future. How can we provide students with the confidence and broad structural understanding they will need to educate themselves as their field changes?
Former MIT Media Lab collaborators Casey Reas and Benjamin Fry pioneered the open-source project Processing in 2001. Designed to encourage learning code through easy and frequent visual feedback, Processing is a simple but deceptively powerful programming language that can generate startling visual effects. Through the application of basic mathematical concepts (including random processes and rule-based systems), unexpected expressions that might take days to create by hand can be generated in seconds. Virtually any type of data set -- from sound and other "captured" activity to RFID tags and blogs -- can be used to generate work that is not bound to the computer screen or to print. Processing users are finding new ways to use this flexibility every day, sending their interpreted data to objects as varied as drawing machines, architectural facades, and cell phones.
Learning to work with code can be as fundamental to the designer's education as learning to bind a book or print with letterpress, particularly for those who wish to work with non-traditional media. By learning to perform basic operations directly in a programming language, students are exposed to the core structures that underlie the high-level tools used in the profession, while also expanding their abilities and experience in new media.
Yeohyun Ahn is developing a set of on-line resources and teaching tools created especially for designers and design students with limited prior knowledge of computer languages. They are building tutorials around basic design operations such as repeat, rotate, move, invert, cut, and random as well as graphic design functions such as transparency, layer, color, hierarchy, figure/ground.
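A rough Python analogue of such an exercise (Processing itself uses a Java-based syntax): combine repeat, rotate, and random to generate coordinates for a simple rosette pattern.

    import math
    import random

    def rosette(n=12, radius=100.0, jitter=5.0, seed=7):
        """Repeat a point n times around a circle, with a little randomness."""
        random.seed(seed)
        points = []
        for i in range(n):                                   # repeat
            angle = 2 * math.pi * i / n                      # rotate
            r = radius + random.uniform(-jitter, jitter)     # random
            points.append((round(r * math.cos(angle), 1),
                           round(r * math.sin(angle), 1)))
        return points

    for x, y in rosette():
        print(x, y)   # feed these coordinates to any drawing tool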
|
<urn:uuid:e95d9310-648b-4fba-a0c5-f8576c610f48>
|
CC-MAIN-2013-20
|
http://www.yeoahn.com/typecode1/page.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704133142/warc/CC-MAIN-20130516113533-00081-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.941304 | 469 | 2.765625 | 3 |
Uses of a personal computer
Like other computers, personal computers can be instructed to perform a variety of individual functions. A set of instructions that tells a computer what to do is called a program. Today, more than 10,000 application programs are available for use on personal computers. They include such popular programs as word processing programs, spreadsheet programs, database programs, and communication programs.
Word processing programs are used to type, correct, rearrange, or delete text in letters, memos, reports, and school assignments. Spreadsheet programs enable individuals to prepare tables easily. The users of such programs establish rules for handling large groups of numbers. For example, using a spreadsheet program, a person can enter some numbers into a table and the program will calculate and fill in the rest of the table. When the user changes one number in the table, the other numbers will change according to the rules established by that user. Spreadsheets may be used for preparing budgets and financial plans, balancing a chequebook, or keeping track of personal investments.
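In miniature, the rule-driven recalculation a spreadsheet performs looks something like the following simplified sketch (not how any particular spreadsheet program is actually written):

    # Each cell holds either a number or a rule (a function of the other cells).
    cells = {
        "rent":      800,
        "food":      250,
        "transport": 120,
        "income":    1500,
        "total":     lambda c: c["rent"] + c["food"] + c["transport"],
        "leftover":  lambda c: c["income"] - c["total"],
    }

    class Sheet:
        """Looks up a cell, recalculating rule cells on demand."""
        def __getitem__(self, name):
            entry = cells[name]
            return entry(self) if callable(entry) else entry

    sheet = Sheet()
    print(sheet["total"], sheet["leftover"])   # 1170 330
    cells["food"] = 300                        # change one number...
    print(sheet["total"], sheet["leftover"])   # ...and dependent cells update: 1220 280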
Database programs allow a computer to store large amounts of data (information) in a systematic way. Such data might include the name, address, telephone number, salary, and starting date of every employee in a company. The computer could then be asked to produce a list of all employees who receive a certain salary.
Communication programs connect a personal computer to other computers. People can thereby exchange information with one another via their personal computers. In addition, communication programs enable people to link their personal computers with databanks. Databanks are huge collections of information stored in large centralized computers. News, financial and travel information, and other data of interest to many users can be obtained from a databank.
Other programs include recreational and educational programs for playing games, composing and hearing music, and learning a variety of subjects. Programs have also been written that turn household appliances on and off. Some people develop their own programs to meet needs not covered by commercially prepared programs. Others buy personal computers mainly to learn about computers and how to program them.
COMPUTERS IN ASTRONOMY
Using computers is an important part of modern astronomy. Computers aid observational astronomers in many ways. For example, computers guide telescopes, and they control devices that measure the radiation gathered by telescopes. Astronomers also use computers to work out designs for new telescopes and to analyse data collected with telescopes. Computers have a major role in theoretical studies. A theoretical astronomer might use a computer to produce a mathematical model of the history of a star from its birth to its death.
|
<urn:uuid:d8165af6-0832-45e8-828b-e21bd61a82ba>
|
CC-MAIN-2013-20
|
http://library.thinkquest.org/C007091/uses.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696400149/warc/CC-MAIN-20130516092640-00011-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.936986 | 526 | 3.1875 | 3 |
An Introduction to Amoeba
Andrew S. Tanenbaum
Dept. of Mathematics and Computer Science
De Boelelaan 1081
1081 HV Amsterdam, The Netherlands
Sape J. Mullender
Centrum voor Wiskunde en Informatica
1098 SJ Amsterdam, The Netherlands
As the 1990s take hold, it is increasingly clear that computer operating systems designed for single processor systems back in the 1970s and 1980s will no longer be appropriate for the needs of the new decade. UNIX* is now almost 20 years old. Although it has gotten much bigger over the years, the basic ideas have not really changed since it was created in the early 1970s. MS-DOS, although not quite as old, is in many ways even less appropriate than UNIX for the powerful computer systems of the 1990s. Perhaps it is time to start over fresh with something new. In this collection of papers we describe the Amoeba distributed operating system, which has been designed and implemented with the technology of the 1990s in mind.
What are the key characteristics of computing now and in the future? We are convinced that two factors will dominate the next decade:
- The need for physically distributed hardware
- The need for logically centralized software
Let us now discuss these in turn.
First, computers are becoming cheaper at an enormous rate. In the 1970s, it was normal for many people to share a single mainframe or minicomputer by running a timesharing system on it. Each user had a terminal with which to access the computer. The ratio of computers to people was very low, often 20 or 50 or even 100 people per machine.
In the 1980s, the personal computer and personal workstation became popular. By the end of the decade, many universities and companies operated using a model in which each person had his or her own machine, all connected by a local area network. The ratio of computers to people became approximately 1 to 1, as many machines as people.
In the 1990s, hardware prices will continue to drop dramatically. We will soon come to a situation in which it is economically feasible to have 20 or 50 or even 100 computers per person. Clearly the current model of giving each person a personal computer or workstation breaks down under these conditions. Nevertheless, the availability of large numbers of powerful single-chip processors is a given. Any system for the
* UNIX is a Registered Trademark of AT&T Bell Laboratories.
|
<urn:uuid:ddf16257-5bce-4848-af2e-a1bc1066801e>
|
CC-MAIN-2013-20
|
http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cstr--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-about---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&c=cstr&cl=CL1.249&d=HASH636a44a9077d39fa126bde
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382892/warc/CC-MAIN-20130516092622-00014-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.954742 | 523 | 2.734375 | 3 |
- Templates and Data Types: we need to have programmable data sizes (32-bit floats, 16-bit integers), generic containers (linked lists, diagonal matrices), and generic classes (Fourier Transform must operate on all types of signals).
- Data-Driven Programming: specific operations depend on the type of data in a file (recognition on a feature stream vs. an audio stream).
- Centralized Memory Management: efficient, block-oriented memory management is essential.
- Virtual Interfaces: implementations must be algorithm neutral and extensible (by novices).
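A rough Python analogue of the first two requirements, using a type parameter for a generic container and a table of handlers keyed by the kind of data in a stream; the stream names below are invented for illustration:

    from typing import Generic, List, TypeVar

    T = TypeVar("T")

    class Buffer(Generic[T]):
        """A generic container that works for any sample type (float, int, ...)."""
        def __init__(self) -> None:
            self.items: List[T] = []
        def push(self, item: T) -> None:
            self.items.append(item)

    # Data-driven dispatch: the operation depends on the declared stream type.
    def recognize(samples): return f"recognizing {len(samples)} feature vectors"
    def play_audio(samples): return f"playing {len(samples)} audio samples"

    HANDLERS = {"feature_stream": recognize, "audio_stream": play_audio}

    def process(stream_type, samples):
        return HANDLERS[stream_type](samples)

    buf: Buffer[float] = Buffer()
    for x in (0.1, 0.5, 0.9):
        buf.push(x)
    print(process("audio_stream", buf.items))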
|
<urn:uuid:19d6e834-2fe0-4f18-b713-9dd54610dc0b>
|
CC-MAIN-2013-20
|
http://www.isip.piconepress.com/publications/conference_presentations/2001/oscon/software/presentation/html/phil_00.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704658856/warc/CC-MAIN-20130516114418-00058-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.760812 | 129 | 2.859375 | 3 |
Introduction to Mathematica Programming
Wolfram Research, Inc.
The goal of this tutorial is to gain a basic understanding of the Mathematica programming language.
We will examine how Mathematica evaluates input, how to define functions to work with many kinds of expressions, and how to describe and manipulate data.
We will also examine high-level styles of Mathematica programming, how to collect
groups of functions into packages, and some miscellaneous topics, such as how to format
output and generate warning messages.
|
<urn:uuid:a2cee48a-f46e-4861-aac9-5db1ca447a40>
|
CC-MAIN-2013-20
|
http://library.wolfram.com/conferences/conference98/abstracts/introduction_to_mathematica_programming.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705305291/warc/CC-MAIN-20130516115505-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.864664 | 107 | 3.453125 | 3 |
Data Structures and Algorithms in Java
Data Structures and Algorithms in Java, Second Edition is designed to be easy to read and understand although the topic itself is complicated. Algorithms are the procedures that software programs use to manipulate data structures. Besides clear and simple example programs, the author includes workshops, small demonstration programs that can be executed in a Web browser. The programs demonstrate in graphical form what data structures look like and how they operate. In the second edition, the program is rewritten to improve operation and clarify the algorithms, the example programs are revised to work with the latest version of the Java JDK, and questions and exercises are added at the end of each chapter, making the book even more useful.
Suggested solutions to the programming projects found at the end of each chapter are made available to instructors at recognized educational institutions. This educational supplement can be found at www.prenhall.com, in the Instructor Resource Center.
800 pages; ISBN 9780768662603
<urn:uuid:d3eca007-d767-4a33-af76-2f0ff49acf35>
|
CC-MAIN-2013-20
|
http://www.ebooks.com/226901/data-structures-and-algorithms-in-java/lafore-robert/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00091-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.849926 | 307 | 3.140625 | 3 |
Introduction to Object-Oriented Databases provides the first unified and coherent presentation of the essential concepts and techniques of object-oriented databases. It consolidates the results of research and development in the semantics and implementation of a full spectrum of database facilities for object-oriented systems, including data model, query, authorization, schema evolution, storage structures, query optimization, transaction management, versions, composite objects, and integration of a programming language and a database system.
Broadcast media, such as satellite, ground radio, and multipoint cable channels, can easily provide full connectivity for communication among geographically distributed users. One of the most important problems in the design of networks (referred to as packet broadcast networks) that can take practical advantage of broadcast channels is how to achieve efficient sharing of a single common channel.
This collection of original research provides a comprehensive survey of developments at the leading edge of concurrent object-oriented programming. It documents progress—from general concepts to specific descriptions—in programming language design, semantic tools, systems, architectures, and applications. Chapters are written at a tutorial level and are accessible to a wide audience, including researchers, programmers, and technical managers.
Teaching the theory of error correcting codes on an introductory level is a difficult task. The theory, which has immediate hardware applications, also concerns highly abstract mathematical concepts. This text explains the basic circuits in a refreshingly practical way that will appeal to undergraduate electrical engineering students as well as to engineers and technicians working in industry.
This final report of the Stanford Lisp Performance Study, conducted over a three year period by the author, describes implementation techniques, performance tradeoffs, benchmarking techniques, and performance results for all of the major Lisp dialects in use today. A popular high level programming language used predominantly in artificial intelligence, Lisp was the first language to concentrate on working with symbols instead of numbers.
Today's computers must perform with increasing reliability, which in turn depends on the problem of determining whether a circuit has been manufactured properly or behaves correctly. However, the greater circuit density of VLSI circuits and systems has made testing more difficult and costly.
|
<urn:uuid:e8080200-e69d-406c-a874-0b52215251b6>
|
CC-MAIN-2013-20
|
http://www.mitpress.mit.edu/books/series/computer-systems-series
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706469149/warc/CC-MAIN-20130516121429-00093-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.920074 | 425 | 2.515625 | 3 |
Sunday, June 24, 2012
CS 2208 DATA STRUCTURES lab manual (DS lab manual) | for CSE - III semester
download link : "data structure lab manual"
list of experiments:
1. Implement singly and doubly linked lists.
2. Represent a polynomial as a linked list and write functions for polynomial addition.
3. Implement a stack and use it to convert an infix expression to postfix (see the sketch after this list).
4. Implement a double-ended queue (deque) where insertion and deletion operations are possible at both ends.
5. Implement an expression tree. Produce its pre-order, in-order, and post-order traversals.
6. Implement a binary search tree.
7. Implement insertion in AVL trees.
8. Implement a priority queue using binary heaps.
9. Implement hashing with open addressing.
10. Implement Prim's algorithm using priority queues to find the MST of an undirected graph.
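A compact sketch of experiment 3 for single-character operands and the four basic operators, shown here in Python (the lab itself would normally be done in C or C++):

    PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

    def infix_to_postfix(expr):
        """Convert e.g. 'a+b*(c-d)' to 'abcd-*+' using an operator stack."""
        output, stack = [], []
        for token in expr.replace(" ", ""):
            if token.isalnum():                       # operand
                output.append(token)
            elif token == "(":
                stack.append(token)
            elif token == ")":
                while stack and stack[-1] != "(":
                    output.append(stack.pop())
                stack.pop()                           # discard the "("
            else:                                     # operator
                while (stack and stack[-1] != "(" and
                       PRECEDENCE[stack[-1]] >= PRECEDENCE[token]):
                    output.append(stack.pop())
                stack.append(token)
        while stack:
            output.append(stack.pop())
        return "".join(output)

    print(infix_to_postfix("a+b*(c-d)/e"))   # abcd-*e/+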
|
<urn:uuid:86999964-0e02-40f9-95a8-8829dd7950f5>
|
CC-MAIN-2013-20
|
http://www.iannauniversity.com/2012/06/cs-2208-data-structures-lab-manual-ds.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707439689/warc/CC-MAIN-20130516123039-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.769584 | 213 | 2.71875 | 3 |
Nearly every problem encountered in engineering at some time proceeds from the qualitative to the quantitative phase where the results of mathematical analysis must be applied in actual computation. Most often the computation is short enough that automatic means are not necessary. However, more and more problems are requiring powerful aids to calculation. This increase is due as much to expanded thinking encouraged by the mere availability of computers as to any actual backlog of work. Therefore it is to the engineer's advantage to know what computers can do for him, even though he may take his problem to someone else for final preparation and programming. The following text presents some examples in which automatic calculation is being used. The logic used in choosing the computing methods is shown, based on the characteristics of problem and computer. As background for the examples the most important of these characteristics are presented briefly in the next section.
|
<urn:uuid:5311a235-6b8f-47cf-84d4-90c0de374b71>
|
CC-MAIN-2013-20
|
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&tp=&arnumber=1135532&contentType=Journals+%26+Magazines&sortType%3Dasc_p_Sequence%26filter%3DAND(p_IS_Number%3A25254)
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704752145/warc/CC-MAIN-20130516114552-00076-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.958463 | 174 | 3.21875 | 3 |
This thesis divides the concept of member feedback in online communities into three types: conversational feedback, behavioural feedback, and content analysis. We argue for the advantages of user involvement in design and show how the three types of feedback listed above, together with members, could be introduced into the design process.
Computing is usually defined as the activity of using and improving computer hardware and software. It is the computer-specific part of information technology. Computer programming in general is the process of writing, testing, debugging, and maintaining the source code and documentation of computer programs. This section is dedicated to computer projects, computer programming projects, final year computer projects, computing project ideas, project reports, computer project topics, project list, computer projects for students.
Open source software development projects often lack financial support, but nonprofit organizations and hosts provide services and the possibility of funding the development. Several donors willing to support these nonprofit organizations exist, yet there has not been any formal investigation of the decision processes for dividing the financial support within such organizations.
In recent years, IT has come to play an important role in companies. So successful execution of business processes often depends on mission-critical IT-solutions. Managing such IT is challenging. Companies have to keep up with rapid developments, but also consider long-term consequences while doing so. How do they survive in the long run without surrendering in the short run? What should be done in-house? What should be bought from external providers? How should they allocate scarce IT resources?
The Extensible Markup Language (XML) is becoming the de facto standard for information representation and exchange over the Internet. Owing to its hierarchical (recursive) and self-describing syntax, XML is flexible enough to express a large variety of information. To retrieve useful information from XML, queries expressed in a query language such as XPath are used to select the elements that satisfy given criteria. An XPath expression is composed of a sequence of location steps, each consisting of an axis, a node test, and possibly a predicate.
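Python's standard library, for example, supports a useful subset of XPath; the expression below uses the default child axis, a node test (book), and a predicate on an attribute. The element names are invented for the example:

    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <library>
      <book genre="databases"><title>XML Storage</title></book>
      <book genre="networks"><title>Routing Basics</title></book>
    </library>
    """)

    # Axis (child, implicit), node test (book), predicate ([@genre='databases']).
    for book in doc.findall("./book[@genre='databases']"):
        print(book.find("title").text)   # XML Storage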
This work is an investigation of how a software architecture can be changed in order to improve the support of the creation of a customised user interface. The parts of Symbian OS that are of interest for the work are described in detail. Finally, how well the latest architecture supports customisation of the graphical user interface in comparison to the original Symbian OS architecture.
Imagine a taxi driver wanting to watch a football game while working. Events in the game cannot be predetermined, the driver’s available attentional resources vary and network connections change from non-existing to excellent, so it will be necessary to develop a viewing application that can adapt to circumstances.
The World Wide Web is expanding at a surprising rate and is now the greatest information and knowledge archive. Vast numbers of web documents are gathered that need automated processing and evaluation for intelligent applications. In this dissertation, we explore web document analysis techniques and also develop an antiphishing application. For web document analysis, a visual-factor-based page segmentation approach is proposed and implemented.
|
<urn:uuid:99afc7f6-6ad2-4666-90bf-8de426484049>
|
CC-MAIN-2013-20
|
http://freeprojectreports.com/computer
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383259/warc/CC-MAIN-20130516092623-00064-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.924532 | 628 | 2.625 | 3 |
Because of its innovative characteristics, web services are called the nexgen (next generation) web technology. For developing any application or website, the programmers have to create lots of coding, generates many files which produces difficulties in handling those data if it is done manually. The developers have to categorise those data according to its category like data for users, data for administrators, data for login and data for device etc. These functions consume lots of time and manpower and resulting the losses in the productivity.
The eXtensible Markup Language (XML) solves such problems by enabling developers to define the data to be exchanged between PCs, smart devices, applications, and websites. Data arranged and grouped according to format and style definitions can be easily organized, programmed, edited, and exchanged between any websites, applications, and devices.
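As a hypothetical, minimal sketch of this kind of exchange (the element names and helper functions are invented, not taken from the article), two sides can serialise and parse the same XML payload with Python's standard library:

```python
import xml.etree.ElementTree as ET

def export_users(users):
    """Producer side: serialise a list of user dicts to an XML string."""
    root = ET.Element("users")
    for u in users:
        item = ET.SubElement(root, "user", role=u["role"])
        ET.SubElement(item, "name").text = u["name"]
    return ET.tostring(root, encoding="unicode")

def import_users(xml_text):
    """Consumer side: parse the same XML back into plain dicts."""
    root = ET.fromstring(xml_text)
    return [{"name": e.findtext("name"), "role": e.get("role")}
            for e in root.findall("user")]

payload = export_users([{"name": "Alice", "role": "admin"},
                        {"name": "Bob", "role": "user"}])
print(import_users(payload))
```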
|
<urn:uuid:755dd59d-1beb-4d3f-a39b-df5077a8abe9>
|
CC-MAIN-2013-20
|
http://www.javajazzup.com/issue11/page8.shtml
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705955434/warc/CC-MAIN-20130516120555-00087-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.901965 | 165 | 2.53125 | 3 |
Oscar and Aunty Verity, Uncle Jonathan, Granny Gi-Gi, Mummy & Daddy all went to the zoo today. Oscar loved seeing the monkeys, fish and elephants, and there were lots of big cats.
Oscar’s got his first programming book, and he’s loving learning about the ABCs of the web.
Thanks for the present, Godfather of Code!
The government have announced their next plans for replacing ICT in schools. They’ve drafted a curriculum for a new subject ‘Computing’. And it looks great.
The full document is here. Computing is on page 152.
Here’s what kids will be learning in Primary School.
Key Stage 1
Pupils should be taught to:
- understand what algorithms are, how they are implemented as programs on digital devices, and that programs execute by following a sequence of instructions
- write and test simple programs
- use logical reasoning to predict the behaviour of simple programs
- organise, store, manipulate and retrieve data in a range of digital formats
- communicate safely and respectfully online, keeping personal information private, and recognise common uses of information technology beyond school.
Key Stage 2
Pupils should be taught to:
- design and write programs that accomplish specific goals, including controlling or simulating physical systems; solve problems by decomposing them into smaller parts
- use sequence, selection, and repetition in programs; work with variables and various forms of input and output; generate appropriate inputs and predicted outputs to test programs
- use logical reasoning to explain how a simple algorithm works and to detect and correct errors in algorithms and programs
- understand computer networks including the internet; how they can provide multiple services, such as the world-wide web; and the opportunities they offer for communication and collaboration
- describe how internet search engines find and store data; use search engines effectively; be discerning in evaluating digital content; respect individuals and intellectual property; use technology responsibly, securely and safely
- select, use and combine a variety of software (including internet services) on a range of digital devices to accomplish given goals, including collecting, analysing, evaluating and presenting data and information.
And secondary school:
Key Stage 3
Pupils should be taught to:
- design, use and evaluate computational abstractions that model the state and behaviour of real-world problems and physical systems
- understand at least two key algorithms for each of sorting and searching; use logical reasoning to evaluate the performance trade-offs of using alternative algorithms to solve the same problem
- use two or more programming languages, one of which is textual, each used to solve a variety of computational problems; use data structures such as tables or arrays; use procedures to write modular programs; for each procedure, be able to explain how it works and how to test it
- understand simple Boolean logic (such as AND, OR and NOT) and its use in determining which parts of a program are executed; use Boolean logic and wild-cards in search or database queries; appreciate how search engine results are selected and ranked
- understand the hardware and software components that make up networked computer systems, how they interact, and how they affect cost and performance; explain how networks such as the internet work; understand how computers can monitor and control physical systems
- explain how instructions are stored and executed within a computer system
- explain how data of various types can be represented and manipulated in the form of binary digits including numbers, text, sounds and pictures, and be able to carry out some such manipulations by hand
- undertake creative projects that involve selecting, using, and combining multiple applications, preferably across a range of devices, to achieve challenging goals, including collecting and analysing data and meeting the needs of known users
- create, reuse, revise and repurpose digital information and content with attention to design, intellectual property and audience.
If the majority of the population can do everything on those lists, they’ll get so much more out of the web and computers, and the web and computers could become more powerful. For just one example, consider the modifiers and wildcards you can use in searches when trying to find something unusual (like [googling for a web page](http://support.google.com/websearch/bin/answer.py?hl=en&answer=136861), or hunting down an old email in Outlook). And as a bonus, if lots more people knew how to use them, then companies would implement and maintain these operators all the more.
I’m one years old
I eat all my breakfast
I choose what I wear
I take myself to the childminders
It’s been snowing lots, like when Oscar was born! This time he experienced the snow. Maybe next time he’ll remember it.
Oscar demonstrates his technique for getting down the stairs.
Oscar is getting SO good at standing without support. He’s up to about 20s and knows he can do it. He should be asleep though, really :p I guess he’s also learned to save his best tricks for bed time!
We loved seeing Mr & Mrs Lapish getting married. The ceremony and reception were lovely and lots of fun. The best pics we got are here, including plenty of Oscar.
Argh! I just rm -rf'd all my iTunes library. I feel like such a n00b.
The music is backed up (in many places, including iTunes+) and I was in the process of adding Movies and TV shows to a backup, but deleted the wrong copy.
So, me and Caroline have 3 episodes of Battlestar Galactica left to watch on the iPad, and then we’ll have to wait to watch the final season until I’ve managed to recover them.
|
<urn:uuid:ccf9bbe4-5afa-447b-aaa2-84eafa49ffb2>
|
CC-MAIN-2013-20
|
http://fredsherbet.com/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703489876/warc/CC-MAIN-20130516112449-00025-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.940854 | 1,215 | 3.625 | 4 |
World’s First Computational Knowledge Engine Is One of the Most Eagerly Anticipated International Product Launches of 2009
May 18, 2009—Wolfram Alpha LLC today announced the general availability of Wolfram|Alpha, the world’s first computational knowledge engine, offered for free on the web.
Wolfram|Alpha draws on scientist Stephen Wolfram‘s groundbreaking work on Mathematica, the world’s leading technical computing software platform, and on the discoveries he published in his paradigm-shifting book, A New Kind of Science. Over 200,000 people from throughout the world have contacted the company to learn more about Wolfram|Alpha since news of the service first surfaced broadly in March.
The long-term goal of Wolfram|Alpha is to make all systematic knowledge immediately computable and accessible to everyone. Wolfram|Alpha draws on multiple terabytes of curated data and synthesizes it into entirely new combinations and presentations. The service answers questions, solves equations, cross-references data types, projects future behaviors, and more. Wolfram|Alpha’s examples pages and gallery show a few of the many uses of this new technology.
“Fifty years ago,” said Stephen Wolfram, the founder and CEO of Wolfram Research, “when computers were young, people assumed that they’d be able to ask a computer any factual question, and have it compute the answer. I’m happy to say that we’ve successfully built a system that delivers knowledge from a simple input field, giving access to a huge system, with trillions of pieces of curated data and millions of lines of algorithms. Wolfram|Alpha signals a new paradigm for using computers and the web.”
Four Pillars of Wolfram|Alpha
Wolfram|Alpha is made up of four main “pillars” or components:
- Curated Data. Wolfram|Alpha contains terabytes of factual data covering a wide range of fields. Teams of subject-matter experts and researchers collect and curate data, transforming it into computable forms that can be understood and operated on by computer algorithms.
- Dynamic Computation. When Wolfram|Alpha receives a user query, it extracts the relevant facts from its stored computable data and then applies a collection of tens of thousands of algorithms, creating and synthesizing new relevant knowledge.
- Intuitive Language Understanding. To allow Wolfram|Alpha to understand inputs entered in everyday language, its developers examine the ways people express ideas within fields and subject matters and continually refine algorithms that automatically recognize these patterns.
- Computational Aesthetics. Wolfram|Alpha also represents a new approach to user-interface design. The service takes user inputs and builds a customized page of clearly and usefully presented computed knowledge.
Wolfram|Alpha has been entirely developed and deployed using Wolfram Research, Inc.’s Mathematica technology. Wolfram|Alpha contains nearly six million lines of Mathematica code, authored and maintained in Wolfram Workbench. In its launch configuration, Wolfram|Alpha is running Mathematica on about 10,000 processor cores distributed among five colocation facilities, using gridMathematica-based parallelism. And every query that comes into the system is served with webMathematica.
“Wolfram|Alpha is an extremely powerful way of harnessing the world’s knowledge. Now, anyone with web access can tap into that knowledge to find relevant information and discover new insights,” said Theodore Gray, co-founder of Wolfram Research.
The Wolfram|Alpha launch process has been broadcast live on Justin.tv and documented on the Wolfram|Alpha blog and on its Twitter and Facebook accounts. The site first went live for testing on Friday, May 15, 2009, and has been rigorously tested and further performance-tuned since then in preparation for today’s official launch.
Key Partnerships with Dell, Inc. and R Systems NA, Inc.
Powering Wolfram|Alpha’s computational knowledge engine required highly specialized servers and compute clusters. An innovative approach helped speed deployment, lower total cost of ownership, and reduce the environmental impact of the system’s compute clouds.
Dell’s Data Center Solutions (DCS) Division worked with R Systems NA, Inc., Wolfram|Alpha’s data-center hosting partner, to identify the right customized cloud computing solution and tune it to R Systems’ facility, operating processes, and application workload.
“Since Dell created DCS in 2007, we have championed the early adopters at the leading edge of areas like search, service provider, and cloud-computing spaces. Today’s launch of Wolfram|Alpha is significant for both the technology community and to Dell,” said Forrest Norrod, vice president and general manager of Dell’s Data Center Solutions Division. “It is incredible to see the outcome of Dell’s strategy to work with customers to design and build an infrastructure fully optimized for their computing environments.”
Brian Kucic, CEO of R Systems, said, “Working with Wolfram Alpha LLC on a project like Wolfram|Alpha is exactly what we envisioned when we started our company. Being engaged with high-performance computing on a large scale and offering world-class resources to researchers is our mission. As Wolfram|Alpha expands, we’re confident that our resources will satisfy its users and their expectations.”
For more information on Wolfram|Alpha, visit its Media Resources page.
About Wolfram Alpha LLC
Wolfram Alpha LLC is a Wolfram Research company.
Wolfram|Alpha’s long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. The company aims to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything.
Wolfram|Alpha builds on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.
About Wolfram Research
Wolfram Research, Inc. is a powerhouse in technical innovation and pursues a long-term vision to develop the science, technology, and tools to make computation an ever-more-potent force in today’s and tomorrow’s world.
With Mathematica 7, Wolfram Research delivered powerful new capabilities, including image processing, parallel high-performance computing, and new on-demand data—making the software more relevant than ever to everyone from leading researchers to students and other users. Wolfram Research sponsors the world’s largest free network of technical information websites, including MathWorld—the #1 website devoted to mathematics—and the Wolfram Demonstrations Project.
Wolfram Research was founded in 1987 by Stephen Wolfram, who continues to lead the company today. The company is headquartered in the United States, with offices in Europe and Asia. For more information, visit its website.
About Stephen Wolfram
Stephen Wolfram is a distinguished scientist, inventor, author, and business leader. He is the creator of Mathematica, the author of A New Kind of Science, the creator of Wolfram|Alpha, and the founder and CEO of Wolfram Research.
Wolfram has been president and CEO of Wolfram Research since its founding in 1987. In addition to his business leadership, Wolfram is deeply involved in the development of the company’s technology, and continues to be personally responsible for overseeing all aspects of the functional design of the core Mathematica system as well as Wolfram|Alpha.
|
<urn:uuid:ab2cb72a-d86e-46aa-bdc3-25f874bea1a2>
|
CC-MAIN-2013-20
|
http://company.wolfram.com/news/2009/wolframalpha-officially-launched/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710115542/warc/CC-MAIN-20130516131515-00041-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.900876 | 1,582 | 2.8125 | 3 |
Running programs or parts of programs concurrently and in an organized manner on different processors has long fascinated researchers with its promise of a quantum leap in computing power. However, it is only recently that parallel and distributed computing left the laboratory and became the most dynamic component of the computer and networking industry. Presently it is the acknowledged key to solving large computational problems not only in science and engineering, but also in business, medicine and communications.
MET CS 786 Parallel and Distributed Computations
The transition to the world of concurrent distributed computations poses a tremendous challenge to the computer professional. The core of computer science theory and practice is built upon the centralized processing paradigm, i.e. centralized control, memory and computation. Concurrency requires a rethinking of the very basis of computer science.
It is the goal of this course to discuss parallel and distributed computing by taking into account architectural, algorithmic and software aspects and how they impact each other.
Homework / Project Topics
Class e-mail list.
|
<urn:uuid:ab58b439-34a4-41d2-83ed-94419b230a6e>
|
CC-MAIN-2013-20
|
http://people.bu.edu/zlateva/cs786/home.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00006-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.928003 | 205 | 2.5625 | 3 |
Suppose you have an array of 99 numbers. The array contains the integers 1 to 100 with one number missing. Describe four different algorithms to compute the missing number. Two of these should optimize for low storage and two of these should optimize for fast processing.
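One possible answer, sketched in Python, covers two of the four algorithms: an arithmetic-sum method and an XOR method, both single-pass with O(1) extra storage. The remaining two could, for example, sort the array and scan for the gap, or mark seen values in a boolean presence array.

```python
def missing_by_sum(arr, n=100):
    # Expected sum of 1..n minus the actual sum gives the missing value.
    return n * (n + 1) // 2 - sum(arr)

def missing_by_xor(arr, n=100):
    # XOR of 1..n cancels against the XOR of arr, leaving the missing value.
    acc = 0
    for i in range(1, n + 1):
        acc ^= i
    for x in arr:
        acc ^= x
    return acc

nums = [i for i in range(1, 101) if i != 42]
print(missing_by_sum(nums), missing_by_xor(nums))   # -> 42 42
```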
Perl Performance Interview Questions
- Describe how you would store a social graph in a relational database.
- Describe an efficient algorithm to determine whether or not person X is a 2nd degree connection of person Y.
- Describe an efficient algorithm to determine whether or not person X is a 3rd degree connection of person Y.
- How can you make #3 very quick. E.g. how does Linked In compute 3rd degree connections quickly?
Write a function to find a substring in a given string. Do this in O(n) or better
What is a hashtable? Give an example of a type of problem that a hashtable is useful for.
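One way to answer (a minimal sketch, using Python's built-in dict as the hash table): word-frequency counting is a classic problem where a hash table gives average-case O(1) insertion and lookup.

```python
def word_counts(text):
    counts = {}                      # hash table: word -> frequency
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_counts("the cat sat on the mat the end"))
# {'the': 3, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1, 'end': 1}
```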
Explain big-O notation and how it is useful in computer science for classifying algorithms.
- What order is a hash table lookup?
- What order is determining if a number is even or odd?
- What order is finding an item in an unsorted list?
- What order is a binary search?
For the data structures: Array and Linked List explain:
- Where you might use them
- Operations that are commonly supported (add, insert etc)
Design a system to efficiently calculate the top 1MM Google search queries and create a report of these. Additionally:
- You are given twelve servers
- Each has two processors, 4GB of ram and four 400GB hard drives.
- The machines are networked
- The log data has roughly 100 billion log lines in it.
- The log data comes in twelve 320 GB files.
- Each line of the files has roughly 40 search queries
- You can only use open source software or software that you write.
Write a function to efficiently determine the result of a game of Tic Tac Toe.
The function takes as input the game and the sign (x or o) of the player. The function returns whether or not this player has won the game.
Carefully consider both the data structure and the algorithm for your answer.
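A hedged sketch of one possible answer is below; it assumes the board is passed as a 3x3 list of characters (the question leaves the data structure open) and simply checks the eight winning lines.

```python
def has_won(board, sign):
    """board: 3x3 list of 'x', 'o' or ' '; sign: 'x' or 'o'."""
    lines = []
    lines.extend(board)                                                # rows
    lines.extend([[board[r][c] for r in range(3)] for c in range(3)])  # columns
    lines.append([board[i][i] for i in range(3)])                      # main diagonal
    lines.append([board[i][2 - i] for i in range(3)])                  # anti-diagonal
    return any(all(cell == sign for cell in line) for line in lines)

game = [['x', 'o', 'o'],
        ['x', 'x', ' '],
        ['o', ' ', 'x']]
print(has_won(game, 'x'), has_won(game, 'o'))   # True False
```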
Efficiently calculate the shortest distance between two Facebook users given an API endpoint that returns all friends of a given user.
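One common answer is breadth-first search over the friendship graph. In the sketch below, get_friends is a stand-in for the API endpoint mentioned in the question, and the toy graph is invented for illustration.

```python
from collections import deque

def degrees_of_separation(start, target, get_friends):
    """BFS outward from `start`; returns the number of hops to `target`,
    or -1 if no path exists. `get_friends(user)` is the assumed API call."""
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        user, dist = queue.popleft()
        for friend in get_friends(user):
            if friend == target:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return -1

# Toy graph standing in for the real API:
graph = {"ann": ["bob"], "bob": ["ann", "cat"], "cat": ["bob"]}
print(degrees_of_separation("ann", "cat", lambda u: graph.get(u, [])))  # 2
```

In practice, a bidirectional search expanding from both users would cut the number of API calls considerably.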
Given any integer, efficiently find the next highest integer that uses the same digits.
For example, if the number is 15432, you should return 21345.
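A sketch of the standard "next permutation" approach, which runs in O(d) time for d digits (one possible answer, written here in Python):

```python
def next_higher(n):
    d = list(str(n))
    # 1. Find the rightmost digit that is smaller than the digit after it.
    i = len(d) - 2
    while i >= 0 and d[i] >= d[i + 1]:
        i -= 1
    if i < 0:
        return None            # digits are in descending order: no answer
    # 2. Swap it with the smallest larger digit to its right.
    j = len(d) - 1
    while d[j] <= d[i]:
        j -= 1
    d[i], d[j] = d[j], d[i]
    # 3. Reverse the suffix so it becomes as small as possible.
    d[i + 1:] = reversed(d[i + 1:])
    return int("".join(d))

print(next_higher(15432))      # 21345
```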
Write a function that, given a word and a dictionary, will efficiently return the minimal edits (character additions or deletions) needed to reach a word that exists in the dictionary.
Given two arrays of strings, A and B.
B contains every element in A, and has one additional member, for example:
* A = ['dog', 'cat', 'monkey']
* B = ['cat', 'rat', 'dog', 'monkey']
Write a function to find the extra string in B. Do this in O(n)
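One O(n) answer uses a hash map of counts; the sketch below uses Python's collections.Counter and the example arrays from the question.

```python
from collections import Counter

def find_extra(a, b):
    # Count occurrences in B, subtract those in A; the leftover key is the answer.
    diff = Counter(b)
    diff.subtract(a)
    return next(s for s, c in diff.items() if c > 0)

A = ['dog', 'cat', 'monkey']
B = ['cat', 'rat', 'dog', 'monkey']
print(find_extra(A, B))        # rat
```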
|
<urn:uuid:17d638b9-0079-4a33-b332-ba6dd3c2aa3b>
|
CC-MAIN-2013-20
|
http://thereq.com/q/best-perl-software-interview-questions/performance
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704666482/warc/CC-MAIN-20130516114426-00061-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.873352 | 636 | 2.8125 | 3 |
Some systems are too large to be understood entirely by any one human mind. They are composed of a diverse array of individual components capable of interacting with each other and adapting to a changing environment. As systems, they produce behavior that differs in kind from the behavior of their components. Complexity Theory is an emerging discipline that seeks to describe such phenomena previously encountered in biology, sociology, economics, and other disciplines.
Beyond new ways of looking at ant colonies, fashion trends, and national economies, complexity theory promises powerful insights to software development. The Internet—perhaps the most valuable piece of computing infrastructure of the present day—may fit the description of a complex system. Large corporate organizations in which developers are employed have complex characteristics. In this session, we'll explore what makes a complex system, what advantages complexity has to offer us, and how to harness these in the systems we build.
Full-stack JVM generalist. Passionate teacher. GitHubber. Husband of one, father of three. Believer in Christ. bio from Twitter
|
<urn:uuid:c64d9ca1-2aa3-42ec-8331-81e6fd99c67a>
|
CC-MAIN-2013-20
|
http://lanyrd.com/2011/rocky-mountain-software-symposium/skcfb/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699924051/warc/CC-MAIN-20130516102524-00042-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.940692 | 222 | 2.609375 | 3 |
|United States Patent||5,557,775|
|Shedletsky||September 17, 1996|
An expert system is used to design a computer network comprising hardware platforms, applications, data bases, user interfaces, etc. The expert system initially displays questions and receives responsive information from a user as to characteristics of backend data bases and whether copies of said backend data bases should be stored in respective frontend data bases. In response, the expert system "builds" one of a predetermined set of backend models which corresponds to the information. Next, to reduce complexity, the expert system identifies two or more of the frontend data bases of compatible type that can be merged together and then displays questions and receives responsive information indicating whether the mergers should be made. Next, the expert system displays questions and receives responsive information as to characteristics of frontend components including an intermediate server. In response, the expert system builds one of a set of predetermined frontend models which corresponds to the information. Next, the expert system identifies a function of the intermediate server that can be performed on a backend platform within the backend model. Next, the expert system displays questions and receives responsive information whether the function should be performed on the backend platform, and updates the frontend model accordingly. Next, the expert system displays questions and receives responsive information as to characteristics of connections between the updated frontend model and the backend model. In response, the expert system determines a final design of the computer network based on the connection information, backend model and updated frontend model.
|Inventors:||Shedletsky; John J. (Brewster, NY)|
International Business Machines Corporation
|Filed:||February 23, 1994|
|Current U.S. Class:||703/13 ; 706/45|
|Current International Class:||G06F 17/50 (20060101); G06F 009/30 (); G06F 013/10 ()|
|Field of Search:||395/500,325,800,908,75,50,63,21,306,309|
|4965741||October 1990||Winchell et al.|
Ceri et al., "Expert Design of Local Area Networks",IEEE 1990, pp. 23-33 (Oct., 1990). .
Mehdi Owrang O. et al., "An Expert System Based Configuration Design of Hybrid-Ethernet Local Area Network", IEEE 1991, pp. 807-812 (Oct., 1991). .
Merabti et al., "Knowledge-Based Support for Distributed Real-Time System Design", IEEE 1992, pp. 218-225. (Aug., 1992). .
Merabti et al., "Towards a Design Toolset for Lan-Based Real-Time Systems", IEEE 1990, pp. 234-240. (Mar., 1990). .
Schneidewind, "Distributed System Software Design Paradigm with Application to Computer Networks", IEEE 1989, pp. 402-412. (Apr., 1989). .
Shiratori et al., "Using Artificial Intelligence in Communication System Design", IEEE Jan., 1992, pp. 38-46. (Jan., 1992). .
Ma et al., "A Knowledge-Based Planner of LAN", IEEE 1992, pp. 496-501. (Nov., 1992). .
Shedletsky et al., "Application Reference Designs for Distributed Systems", IBM 1993, vol. 32, No. 4 pp. 624-646. (month not available). .
IBM Systems Journal, vol. 32, No. 4, 1993, "Application reference designs for distributed systems", by Shedletsky et al, pp. 625-646. (month is not available)..
|
<urn:uuid:46222ccd-f026-4bfc-9cd8-4f8692180ef9>
|
CC-MAIN-2013-20
|
http://patents.com/us-5557775.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710299158/warc/CC-MAIN-20130516131819-00035-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.767192 | 784 | 2.671875 | 3 |
Victor D. Vianu
Professor, Computer Science and Engineering
Reinventing the database in response to recent developments, including the emergence of the World Wide Web. The Web itself can be seen as a huge distributed database. This fundamentally changes the very concept of a database. An expert in both classical database theory, logic, and data on the Web, Professor Vianu can put this dynamic into perspective. One contribution to classical database theory is the "relational machine," devised to better account for computational complexity in modern databases. Relational databases are accessed through abstract interfaces, making them easier to use, but masking low-level details both from users and classical mechanisms for analyzing complexity, such as Turing machines. Relational machines capture such complexity. Vianu has proven that automata theory provides a valuable tool for analyzing modern query languages on data in XML form, the emerging Web standard for data exchange. XML and the Web increasingly form Vianu's focus. He is: developing type-checking algorithms to guarantee the robustness of applications built using XML; studying database response to queries with only partial information available; and, exploring how useful data can be extracted from streams of XML wrapped data. Vianu has also worked on spatial databases, showing how queries can take advantage of annotations about the spatial data. For example, topological data about a geographic information system can significantly speed up query processing.
|
<urn:uuid:d69bdf69-6205-45d3-a184-bae762125a6f>
|
CC-MAIN-2013-20
|
http://www.jacobsschool.ucsd.edu/faculty/faculty_bios/index.sfe?fmp_recid=126
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697917013/warc/CC-MAIN-20130516095157-00072-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.886326 | 292 | 2.546875 | 3 |
Microsoft changed the management interface of SQL 2008 to the same user interface (UI) used for Visual Studio 2008. Why do you think Microsoft made that change, and why is it important to database developers?
How long has the organism been in the country and state? How did it get here? How has it impacted the environment (both direct impacts, and indirect impacts from eradication efforts)?
Please explain in at least 5 sentences. Describe the differences between the phases and the workflows of the Unified Process.
Give three examples of the client/server model in Linux.
Which term applies to the output of a report containing only specific data arranged in a useful way?
Discuss the guidelines for creating good Web sites. List guidelines for using graphics in designing Web sites. Please include references and at least 100 words.
Case: Holloway Travel Vehicles Holloway Travel Vehicles sells new recreational vehicles and travel trailers. When new vehicles arrive at Holloway Travel Vehicles, a new vehicle record is created. Included in the new vehicle record is a vehicle serial number, name, model, year, manufacturer, and...
1. Write a program to generate and display a table of n and n², for integer values of n ranging from 1 through 10. Be sure to print appropriate column headings. 2. Write a C program that prompts for a variable number of integers, adds them up, averages them, and prints out the average.
if-then statement that assigns 20 to the variable y and assigns 40 to the variable z if the variable x is greater than 20
- 1. Identify and describe Trust/Security Domain boundaries that may be applicable to personal computer (workstation) security in a business context.
2. This is a C++ codelab question.
- The "origin" of the cartesian plane in math is the point where x and y are both zero. Given a variable, origin of type Point-- a structured type with two fields, x and y, both of type double, write one or two statements that make this variable's field's values consistent with the mathematical notion of "origin".
- Assume two variables p1 and p2 of type POINT, with two fields, x and y, both of type double, have been declared. Write a statement that reads values for p1 and p2 in that order. Assume that values for x always precede y.
- In mathematics, "quadrant I" of the cartesian plane is the part of the plane where x and y are both positive. Given a variable, p that is of type POINT-- a structured type with two fields, x and y, both of type double-- write and expression that is true if and only the point represented by p is in "quadrant I".
|
<urn:uuid:1451fdf3-9c54-4851-9479-1bb24c261dec>
|
CC-MAIN-2013-20
|
http://www.coursehero.com/tutors/problems/Computer-Science/11371/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700477029/warc/CC-MAIN-20130516103437-00043-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.898054 | 713 | 2.65625 | 3 |
After completing this self-contained course on server-based Internet applications software, students who start with only the knowledge of how to write and debug a computer program will have learned how to build web-based applications on the scale of Amazon.com. Unlike the desktop applications that most students have already learned to build, server-based applications have multiple simultaneous users. This fact, coupled with the unreliability of networks, gives rise to the problems of concurrency and transactions, which students learn to manage by using the relational database system. After working their way to the end of the book, students will have the skills to take vague and ambitious specifications and turn them into a system design that can be built and launched in a few months. They will be able to test prototypes with end-users and refine the application design. They will understand how to meet the challenge of extreme business requirements with automatic code generation and the use of open-source toolkits where appropriate. Students will understand HTTP, HTML, SQL, mobile browsers, VoiceXML, data modeling, page flow and interaction design, server-side scripting, and usability analysis. The book, which originated as the text for an MIT course, is suitable for classroom use and will be a useful reference for software professionals developing multi-user Internet applications. It will also help managers evaluate such commercial software as Microsoft SharePoint or Microsoft Content Management Server.
|
<urn:uuid:4726d6bb-bc85-4cde-ad38-b35e7e268b4d>
|
CC-MAIN-2013-20
|
http://www.amazon.co.uk/Software-Engineering-Internet-Applications-Andersson/dp/0262511916
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700626424/warc/CC-MAIN-20130516103706-00025-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.940215 | 278 | 2.8125 | 3 |
From Wikibooks, open books for an open world
Acoustics is the science that studies sound, in particular its production, transmission, and effects.
Ada Programming is a tutorial teaching the Ada programming language. Ada puts unique emphasis on, and provides strong support for, good software engineering practices that scale well to very large software systems (millions of lines of code, and very large development teams).
Adventist Youth Honors Answer Book is an unofficial instructor's guide for teaching Adventist Youth (AY) Honors to members of Pathfinder Clubs and Adventist Youth Societies. Pathfinders and Adventist Youth are youth clubs operated by the Seventh-day Adventist Church.
The Algorithms book aims to be an accessible introduction into the design and analysis of efficient algorithms.
The Advanced Certificate and the Advanced Diploma in Applications of ICT in Libraries permit library staff to obtain accreditation for their skills in the use of ICT. Anyone can make use of the materials and assessment is available in variety of modes, including distance learning.
Arimaa is a two-player 64-square board game invented by Omar Syed, a computer engineer trained in artificial intelligence. While its rules are simple, the game has proven to be deep in strategy, and programmers have yet to develop an Arimaa bot which can defeat the best human players.
The Basic Computing Using Windows book introduces the reader to a Windows PC environment.
Nuclear Medicine is a fascinating application of nuclear physics. The first ten chapters of this wikibook are intended to support a basic introductory course in an early semester of an undergraduate program. Additional chapters cover more advanced topics in this field.
Blended learning is combining the best of face to face and Web-based technology (e.g., online discussions, self-paced instruction, collaborative learning, streaming video, audio, and text) to accomplish an educational goal.
Blender 3D is a cross-platform, open source 3D modeling and animation package. It can be used to create photo-realistic images, animated films, CGI special effects and computer games. This book provides an excellent collection of tutorials to help you learn to model, render, rig, animate, and create with Blender 3D. You will be turned from a newbie to a pro in minutes!
The C# Programming Language is an object-oriented programming language developed by Microsoft as part of the .NET initiative. This book will discuss and explain this powerful language.
Chess is an ancient strategy game for two players. In this book, not only will you learn to play chess, but you will also master it. This book is great for beginners, but also for anyone else interested in chess - even a chess master.
Cognitive Psychology is a psychological science which is interested in various mind and brain related subfields such as cognition, the mental processes that underlie behavior, reasoning and decision making.
Communication Theory is about transmitting information from one person to another and the ways in which individuals and groups use the technologies of communication.
Everyone has his or her own view of the nature of consciousness. The intention of Consciousness Studies is to expand this view by providing an insight into the various ideas and beliefs on the subject as well as a review of current work in neuroscience.
Control Systems is an inter-disciplinary engineering text that analyzes the effects and interactions of mathematical systems. This book is for third and fourth year undergraduates in an engineering program.
Spice up your life with the Cookbook, which has all kinds of recipes, plus information about nutrition, ingredients, cuisines, special diets, and more!
Engineering Acoustics is the study of the generation and manipulation of sound waves, from an engineering perspective. Requires knowledge of Calculus and Ordinary Differential Equations .
From Rome to the present day, European History is a sweeping textbook of the continent's history placed in its proper context. Designed for AP European History students.
First Aid covers all topics required for a standard first aid course. The basics covered include: primary assessment, circulatory & respiratory emergencies, internal injuries, and medical conditions. The chapter on advanced topics covers AED operation, oxygen, airway management, and triage.
Formal Logic is a study of inference with purely formal content. The first rules of formal logic were written over 2300 years ago by Aristotle and are still vital to many modern disciplines like Linguistics and Computer Science.
The Guitar is a stringed musical instrument that is played in many different styles of Western music. This book provides lessons on playing styles and techniques.
Haskell is a lazy functional programming language with a state of the art type system. This tutorial aims to be friendly enough for new programmers, yet deep enough to challenge the most experienced. Come stretch your mind with us!
|
<urn:uuid:438cb76b-71dc-4f41-98d1-783aa4cb94ee>
|
CC-MAIN-2013-20
|
http://en.wikibooks.org/wiki/Main_Page/test
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701577515/warc/CC-MAIN-20130516105257-00064-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.930572 | 986 | 3.109375 | 3 |
In computing, C is a general-purpose, block structured, procedural, imperative computer programming language developed in 1972 by Dennis Ritchie at the Bell Telephone Laboratories for use with the Unix operating system.
This is a small utility to calculate net income by checking a person’s salary and monthly expenses; on the basis of this information it calculates the profit or loss. It has a nice graphical user...
Full fingerprint verification and recognition algorithm source code in the C language for developers.
A top-down parser parses a given input string, which may match the table printed at the beginning of the program. This program was done for students of TSEC; originally done by...
Free Pascal binding for the FAM/Gamin library. The archive also contains an example program in both Pascal and the C programming language. The binding comes with an LGPL license that allows the...
This is C source code for a program modeled on Windows XP's Paint, in which you can save or open files and draw anything you want. You will see how good the program is after looking at it.
This is a graphical file browser which can be used in other programs as well. It is made in Turbo C 3.0.
This program shows how to use the graphics header file in C to create a drawing application. Most beginners will be able to understand it.
A simple C application that uses data files to add, search, delete, and update books, members, etc. in a general library system.
This is a chess game I developed in C using C graphics. Each piece has its own movement and the game can be saved. When restarted, you can load your saved game. It also keeps track of whose turn it...
The Flavius Josephus problem is to find the position of the last survivor. It is an important data structures problem. It is implemented using a circular linked list and shows the power of circular linked...
This program converts any number from 0 to 99,99,99,999 to words. This can be useful in situations like receipt printing where we need to specify a number in words. Although the code is C, the...
The Enveria IDE for Rapid Application Development (RAD) is an intuitive platform for programming robust GUI software.
* An interface that maintains a look and feel that is...
The code implements the Huffman variable-length data compression algorithm to achieve minimum-variance encoding. It is pretty well commented, and for enhanced understanding, try uncommenting the...
C Code Library is a powerful multi-language source code Library with the following benefits:
1. Built-in library with 50,000++ lines of code
2. Quick and powerful search engine
Deleaker is a useful add-in for Visual Studio 2003-2005 that helps you to analyze programming errors, many of which are unique to Visual C++. Deleaker is a great tool for Visual C++ developers who...
A message-passing parser makes it easy to evaluate and interpret object-oriented languages, because the parser design allows the language interpreter to evaluate language references and the objects...
The Fileview is a simple file viewer capable of viewing large text files. Designed in C Language.
C API for SCSI device programming under windows. Encapsulates SPTI (SCSI Pass Through Interface) and makes SCSI device programming easy.
Linked list, stacks, queues, FCFS, priority, and SJF process scheduling programs written in C.
|
<urn:uuid:d009548a-b19c-4b40-8d34-be39d8508153>
|
CC-MAIN-2013-20
|
http://www.programmersheaven.com/tags/C/Files/?Cons_Application=Computer+Science
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706474776/warc/CC-MAIN-20130516121434-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.892705 | 716 | 3.0625 | 3 |
Saturday, June 23, 2012
IT 2205 DATA STRUCTURES AND ALGORITHMS lab manual | for IT - III semester |
download link : "data structure and algorithms lab manual"
1. Implement singly and doubly linked lists.
2. Represent a polynomial as a linked list and write functions for polynomial addition.
3. Implement stack and use it to convert infix to postfix expression (see the sketch after this list)
4. Implement array-based circular queue and use it to simulate a producerconsumer
5. Implement an expression tree. Produce its pre-order, in-order, and post-order
6. Implement binary search tree.
7. Implement priority queue using heaps
8. Implement hashing techniques.
9. Implement Dijkstra's algorithm using priority queues
10. Implement a backtracking algorithm for Knapsack problem
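As a hedged illustration of exercise 3 (the manual does not prescribe an implementation language, so Python is used here for brevity), the sketch below converts an infix expression to postfix with an explicit operator stack:

```python
PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2}

def infix_to_postfix(tokens):
    out, stack = [], []
    for t in tokens:
        if t.isalnum():                        # operand
            out.append(t)
        elif t == '(':
            stack.append(t)
        elif t == ')':
            while stack and stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()                        # discard '('
        else:                                  # operator
            while stack and stack[-1] != '(' and \
                  PRECEDENCE[stack[-1]] >= PRECEDENCE[t]:
                out.append(stack.pop())
            stack.append(t)
    while stack:
        out.append(stack.pop())
    return ' '.join(out)

print(infix_to_postfix(list("a+b*(c-d)")))     # a b c d - * +
```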
|
<urn:uuid:3e1ba651-7166-45e6-9ab0-66cbec8e5917>
|
CC-MAIN-2013-20
|
http://www.iannauniversity.com/2012/06/it-2205-data-structures-and-algorithms.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704433753/warc/CC-MAIN-20130516114033-00077-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.712483 | 199 | 2.921875 | 3 |
The lecturer is not able to offer courses in English at this time.
* Load is given in academic hour (1 academic hour = 45 minutes)
Digital systems process in discrete steps real-world values previously converted into numbers. As within digital systems data are given a binary representation, what is based on both theoretical and technological grounds, digital systems are based upon logic circuits. The objective of the course is to introduce students to fundamental principles of digital systems design, starting with the elementary procedures of analysis and design. Elementary combinational and sequential components and modules are elaborated too, as well as the inclusion of digital systems in the real world.
The C Programming Language; Brian W. Kernighan, Dennis M. Ritchie; Prentice Hall
Programming in C; Stephen Kochan; Sams; 2004
C Programming: A Modern Approach; K. N. King; W. W. Norton & Company; 1996
Electrical Engineering and Information Technology and Computing
|
<urn:uuid:ac67887f-277c-4ac6-a773-27825708e491>
|
CC-MAIN-2013-20
|
http://www.fer.unizg.hr/en/course/ppi
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710274484/warc/CC-MAIN-20130516131754-00090-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.902106 | 203 | 2.828125 | 3 |
Smart Card System
Chapter 1: Client- Server Technology
1.1 Client-Server Concept and Architecture
The term "client/server" implies that clients and servers are separate logical entities that work together, usually over a network, to accomplish a task. Client/server is more than a client and a server communicating across a network. Client/server uses asynchronous and synchronous messaging techniques with the assistance of middle-ware to communicate across a network.
Client/Server uses this approach of a client (UI) and the server (database I/O) to provide its robust distributed capabilities. The company, Sigma has used this technique for over 15 years to allow its products to be ported to multiple platforms, databases, and transaction processors while retaining a product's marketability and enhanced functionality from decade to decade.
Sigma's client/server product uses an asynchronous approach of sending a message to request an action and receives a message containing the information requested. This approach allows the product to send intensive CPU processing requests to the server to perform and return the results to the client when finished.
Sigma's architecture is based on re-usability and portability. Sigma currently uses a standard I/O routine, which is mutually exclusive from the
|
<urn:uuid:62b2b083-5b07-45fb-b9a1-2a3745ee05d2>
|
CC-MAIN-2013-20
|
http://www.markedbyteachers.com/as-and-a-level/computer-science/smart-card-system.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701281163/warc/CC-MAIN-20130516104801-00082-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.883027 | 498 | 2.828125 | 3 |
Computer Architecture and Organization
A top-down approach to computer design. Computer architecture: introduction to
assembly language programming and machine language set design. Computer organization:
logical modules; CPU, memory and I/O units. Instruction cycles, the data-path and
control unit. Hardwiring and microprogramming. The memory subsystem and timing.
I/O interface, interrupts, programmed I/O and DMA. Introduction to pipelining and
memory hierarchies. Fundamentals of computer networks.
|
<urn:uuid:8460e5b6-9f50-4ff6-9e34-39ee8ce14e94>
|
CC-MAIN-2013-20
|
http://www.mathematics-academy.com/Computer-Architecture-and-Organization
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.784438 | 106 | 3.140625 | 3 |
Orig. Ed 1987, Reprint Ed. 1993
This book is based on the belief that theoretical computer science should be taught as the basis of applied sciences. Thus, much emphasis is devoted to illustrate how theoretical concepts can be exploited in practice. Besides traditional fields of the theory of computation, such as automata and form languages, the book also covers formal semantics and formal analysis of computer programs, which are considered by the authors as basic for the computer scientist, as are automata and computation theory.
|
<urn:uuid:3b9bbdfa-1101-4382-9bfb-cd2b5a850d2a>
|
CC-MAIN-2013-20
|
http://www.krieger-publishing.com/stackmathematics_31.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706477730/warc/CC-MAIN-20130516121437-00015-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.907088 | 109 | 2.5625 | 3 |
CORVALLIS, Ore. – Someday when military leaders are planning a battle or air traffic controllers are trying to land a dozen aircraft at once, they may benefit from studies now being done at Oregon State University – experts explaining how they play a cool video combat game in order to help artificial intelligence researchers build better virtual combatants.
The idea is to have experienced players of real-time strategy games, such as computerized combat, explain to a novice what they are doing, and why, and how it might work. These “think out loud” approaches allow researchers to design better interfaces and knowledge representations for computers.
Findings on this work were just published in Knowledge Based Systems, an academic journal, by Ron Metoyer, an associate professor in the OSU School of Electrical Engineering and Computer Science, and Simone Stumpf at City University London.
The research is being supported by the Defense Advanced Research Projects Agency, the research and development office for the U.S. Department of Defense which – among other things – helped create the Internet.
“We had groups of people playing Wargus, an open-source video game that’s very involved,” Metoyer said. “As they played, we asked them to explain their actions to a novice, while we did both audio and video recordings. Our goal was to find out the tricks the best human players use and let machines learn from them.”
Such “real time strategy,” researchers say, is applicable to military operations, air traffic control, emergency response team management, or other demanding tasks in which many different elements have to be considered and decided quickly. While playing the computer games, people have to determine how to allocate their resources, where to place armies, how to time their battles, deal with uncertainty, what units to create, and other challenges.
In this exercise, the expert players were soon chatting about how to place their archers, build a farm, anticipate the enemy attacks, be aware of surroundings. The player warns, “I wanna not cut down those trees.” Translated, the challenge is how to tell the computer which objects are part of the environment that should be left alone, as well as when and where to do something in a specific game situation.
“These are pretty complex games and there’s a lot going on at once,” Metoyer said. “Trying to build an army, feed it, form communities and carry on battles demands a lot of strategy and coordination. Very good game players can do that, and we think we can tap into these human insights to help machine learning.”
Previous approaches, he said, were often based on computers playing the games multiple times and trying to figure out strategy on their own. The new concept uses traditional human-computer interaction techniques to inform machine intelligence, and helps the computer learn more quickly.
And it should be possible, Metoyer said, to even pay back the favor a little - and create more fun video games.
|
<urn:uuid:e8fc04e0-08df-4c6f-a0f2-3c9e5dd29dcc>
|
CC-MAIN-2013-20
|
http://oregonstate.edu/ua/ncs/archives/2010/feb/beyond-computerized-chess-%E2%80%93-teaching-computers-more-complex-strategies
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708144156/warc/CC-MAIN-20130516124224-00060-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.968315 | 620 | 2.609375 | 3 |
Data-Intensive Supercomputing: The case for DISC
Google and its competitors have created a new class of large-scale computer systems to support Internet search. These “Data-Intensive Super Computing” (DISC) systems differ from conventional supercomputers in their focus on data: they acquire and maintain continually changing data sets, in addition to performing large-scale computations over the data. With the massive amounts of data arising from such diverse sources as telescope imagery,medical records, online transaction records, and web pages, DISC systems have the potential to achieve major advances in science, health care, business efficiencies, and information access. DISC opens up many important research topics in system design, resource management, programming models, parallel algorithms, and applications. By engaging the academic research community in these issues, we can more systematically and in a more open forum explore fundamental aspects of a societally important style of computing.
|
<urn:uuid:779ca494-af6f-4f38-bd3d-2d57be133cfc>
|
CC-MAIN-2013-20
|
http://repository.cmu.edu/isr/283/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706933615/warc/CC-MAIN-20130516122213-00088-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.907973 | 202 | 3.03125 | 3 |
Decision Support Systems
Decision support systems are one of the core technology areas developed at the Research Applications Laboratory (RAL). Generally speaking, a decision support system (DSS) is a computerized system designed to help a user make decisions. RAL has developed DSS's to support users making decisions in a wide variety of settings including aviation, surface transportation, and national security. RAL uses similar processes and components to develop many of its decision support systems.
When Research Applications Laboratory develops a decision support system, the first step in the process is to assess the user's needs for the system and develop a concept of how the system will work. The next step in development of a DSS at RAL is to perform research to identify and evaluate the best components (data and methods) for the given decision support system. This assessment often involves evaluating a combination of mathematic, computer, and atmospheric sciences capabilities. After this research is completed, the decision support system is designed, developed, and transferred into operation. This process can be iterated, as necessary, to address changing user's needs, new data sources, and advancing technology.
Many of RAL's decision support systems contain common sets of components. These components include data related components, algorithm related components, and user interface and display related components. The data related components are made of modules that ingest data, format data, store data, transfer data, and archive data. The algorithm related components utilize diverse methods such as atmospheric models, rule based algorithms, fuzzy logic algorithms, statistical algorithms, and data mining algorithms.
The user interface and display components, which are heavily customized for the user's work environment and operational needs, often employ web technologies to speed development and facilitate access. RAL is often able to reuse or modify components from previous decision support systems for use in a new DSS, greatly reducing the schedule and cost for the new system.
A few examples of decision support systems developed at RAL include:
- Pentagon Shield an airborne-hazard assessment and prediction system to protect the people working in and around the Pentagon.
- MDSS a tool for decision support for winter road maintenance managers.
- LLWAS used to warn of microbursts that could be hazardous to aircraft landing and departing at an airport.
- 4DWX used at army test ranges to provide more accurate go/no-go guidance to testers.
- WSDDM depicts accurate, real-time nowcasts of snowfall rate, plus current weather conditions for use by aircraft de-icing and airport operations users.
RAL continues to develop a wide variety decision support systems, often by leveraging technology from existing systems. If you have interest in utilizing an existing RAL DSS or development of a new decision support system, please contact us.
The primary role of the nowcast environment is to collect weather data; execute algorithms for producing a combined thunderstorm forecast; and provide a graphical display tool for viewing the various datasets.
|
<urn:uuid:0e8e86ca-5cdb-4a0f-b811-74f8d988661a>
|
CC-MAIN-2013-20
|
http://www.rap.ucar.edu/technology/dss.php/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698104521/warc/CC-MAIN-20130516095504-00081-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.92998 | 599 | 3.09375 | 3 |
'Computer Software and Applications' has been designed for students who have a penchant for computers or are pursuing computer education. This book consists of twenty-nine chapters and focuses on every detail required by students. Chapters such as Input and Output Devices, DOS, Microsoft Word, Excel, and HTML give students the basic details. An honest attempt has been made to make this book very useful.
The salient features of this book are:
# Written in easy language
# Plenty of examples and exercises have been added
# 'Solved MCQs' and 'Let us revise' have been given in each chapter.
# Clear diagrams and figures help students gain a clear understanding.
|
<urn:uuid:5c8bfbd6-47e2-4bdd-a09a-c481b641a18c>
|
CC-MAIN-2013-20
|
http://koolskool.in/computer-software-and-applications
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132729/warc/CC-MAIN-20130516113532-00071-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.960676 | 144 | 2.765625 | 3 |
Welcome to AS/A2 Computing Level 3
This course is designed to allow students the opportunity to develop their understanding of problem solving using the principles of computing.
This course develops students' capacity for critical thinking. They will develop an understanding of the range of applications of computers and the effects of their use. Students will form an understanding of the organisation of computer systems, including software, data, hardware, communications and people.
Students will need to acquire the skills, including computer programming, to apply this understanding to developing computer-based solutions to problems.
In doing so they will be introduced to systems analysis and design, methods of problem formulation and planning of solutions using computers, and systematic methods of development, testing, implementation and documentation.
Mr M Bull Mr A Keeley
Terms that are in use on this site. There are 493 entries in this glossary.
Optical Character Recognition is the machine recognition of characters by light-sensing methods.
An Original Equipment Manufacturer provides final systems made from assemblies and subassemblies from other manufacturers.
The difference in temperature between the set point and the actual process temperature. Also referred to as droop.
Operator interface: the hardware and software that shows an operator the state of a process.
Object linking and embedding
A controller whose action is fully on or fully off.
Object Oriented Programming
OLE for process control
OPC data access
OPC data exchange
The lack of electrical contact in any part of the measuring circuit. An open circuit is usually characterized by rapid large jumps in displayed potential, followed by an off-scale reading.
An approach to computing that allows the interconnectability of systems based on compliance with established standards.
A structured set of system programs that controls the activities of a computer system by managing memory, tasks, and communications.
A physical link between the human operator and a computer system, typically consisting of a graphical representation.
A process of orchestrating the efforts of all components toward achievement of the stated aim so everyone gains.
This class is an introduction to Processing and the Arduino. Processing is a powerful, open-source programming language developed by artists for artists. Coding or programming gives artists a cutting-edge palette of tools to explore the expressive potential of emerging technologies. Processing can be used to create interactive installations or generative compositions that shift and change over time. It can be used to visualize data, and manipulate images in real time. Arduino is an open-source electronics platform based on flexible, easy-to-use hardware and software. The Arduino can be used to read sensors and control actuators to create tangible and responsive objects. Technical instruction will be peppered with discussions about the ramifications of the open source movement. Topics include: creative commons, open distribution, sharing, and open capitalism, to name a few. It should be emphasized that this is an entry-level class. The only requirements are that you are not intimidated by computers and are eager to be challenged by new concepts and ideas.
Interview Questions for IT Freshers
Q1. What do you understand by the term ‘normalization’?
Q2. How would you define WLAN?
Q3. What do you know about gigabyte Ethernet?
Q4. Do you know what page segmentation is?
Q5. Have you ever managed PPC campaigns?
Q6. What are meta tags?
Q7. What are the PHP data types?
Q8. How can functions be defined?
Q9. What do you understand by cache memory?
Q10. What are the various types of printers?
Q11. What is port monitoring?
Q12. What are the ten TCP/IP protocols?
Q13. How does the authentication method work?
Q14. What are the components of an operating system?
Q15. What do you understand by distributed system?
Q16. How can audio be captured in PHP?
Q17. How can a variable be defined in PHP?
Q18. How is SEM different from SEO?
Q19. What do you understand by latent Semantic analysis?
Q20. Do you know how to write an HTML code?
Q21. What is an inline style?
Q22. How can files be downloaded using PHP?
Q23. How can a video be captured in PHP?
Q24. How would you differentiate between a file and a script?
Q25. How does SNMP work?
Q26. What is mirroring?
Q27. How would you define a motherboard?
Q28. Differentiate between print and echo.
Q29. Which are the SEO tools that you are familiar with?
Q30. What do you know about the BIOS battery?
Q31. What is a LAN card?
Q32. How many types of errors are there in PHP?
Q33. How can communication with sockets be made possible?
Q34. How would you define a motherboard?
Q35. What are subnets?
Q36. What do you understand by ‘frame relay’?
Q37. Explain the functions of VPN.
Q38. Tell us what you know about virtual memory.
Q39. How can a FAT be converted to NTFS?
Q40. What is root directory?
Q41. In Windows XP, how can a built-in firewall be configured?
Q42. Differentiate between thread and process.
Q43. What is DRAM?
Q44. Differentiate between UNIX and Windows.
Q45. What do you understand by page fault?
Q46. How would you define VOIP?
Q47. How can variables be passed to forms?
Q48. What are the most important factors of SERP?
Q49. What is off-page optimization?
Q50. What are the various web technologies that you know of?
At the Wednesday, November 15 Lunch ‘n Learn seminar, Computer Science Professor and Chair Larry Peterson discussed PlanetLab, an open platform for developing, deploying, and accessing planetary scale internet services.
A prototype of the next internet architecture, PlanetLab is a set of more than 700 servers spread across the globe, connected to the internet at 340 sites in more than 35 countries. You can think of PlanetLab as an entry point onto a new internet, supporting a diversity of services and bringing together a range of new and known ideas and techniques within a comprehensive design. Just as Princeton users are within a short hop of local routers to access the information they need, each of the approximately 3 million users on PlanetLab is within a short hop of the information and services they require.
The key idea, explains Peterson, is “distributed virtualization.” Each of these PlanetLab servers can be shared between research groups, and virtualized such that each user thinks that they have access to the resources on all of the machines. The servers become dedicated to their research, and they themselves can deploy experiments of various kinds on those virtual machines.
Many institutions of higher education are involved, and there are currently on the order of 600 research projects and 2,500 researchers making use of PlanetLab. Some are simply short-term experiments… just long enough to achieve a result for a scholarly paper. Some are running continuously. For example, there are projects in anomaly and fault detection, seeing when the internet is or is not behaving as it should, and there is research in different methods for routing. Researchers are able to observe hiccups. If just one of the 25,000,000 daily requests has a timeout, researchers are able to trace routing paths to key nodes, triangulate to the point of failure, and by so doing, learn how to route around such potential points of failure. There are also experiments in probing the various characteristics of the internet and its behavior. New management services are being explored.
But Peterson emphasized that this is not a self-contained laboratory. It was the intent from the beginning to use PlanetLab to deploy long running services. Researchers are therefore encouraged to attract real clients to access whatever value-added content can be provided. Users can take advantage of the services, and often find that through PlanetLab, they are obtaining better performance essentially by using a new access mechanism to the content of the internet. PlanetLab transfers 3-4 terabytes of data and touches about one million unique IP addresses every day… from users reaching out to download content or to access a PlanetLab service.
PlanetLab works particularly well with very large file transfers. It is used now, for example, to distribute video lectures from the University Channel and reaches many endpoints efficiently. Users need not necessarily know from where the lecture is being served. PlanetLab will often automatically locate the nearest server and more responsively provide the desired content.
Just a few weeks ago, for example, Princeton Professor Ed Felten published a video about American voting systems. He put a two megabyte video on the web and, given the interest, the number of hits would have swamped Princeton's internet connection. Instead, PlanetLab distributed that content, sustaining about 700 megabits/second from clients all over the world. PlanetLab essentially broke the file into discrete chunks and distributed it among PlanetLab servers, balancing the load over the number of sites on which the content was loaded.
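The talk did not include implementation details; the following is only a hedged sketch of the general idea of chunked, replicated content distribution (invented server names, not PlanetLab's actual services):

```python
# Toy sketch of chunked content distribution: split a file into fixed-size
# chunks, assign each chunk to several servers round-robin, and let a client
# fetch each chunk from the least-loaded replica. Not PlanetLab code.

CHUNK = 4          # bytes per chunk (tiny, for illustration)
SERVERS = ["pl1.example.edu", "pl2.example.org", "pl3.example.net"]
REPLICAS = 2

def place_chunks(data):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    placement = {idx: [SERVERS[(idx + k) % len(SERVERS)] for k in range(REPLICAS)]
                 for idx in range(len(chunks))}
    return chunks, placement

def fetch(chunks, placement, load):
    out = []
    for idx, chunk in enumerate(chunks):
        server = min(placement[idx], key=lambda s: load[s])  # pick least-loaded replica
        load[server] += 1
        out.append(chunk)
    return b"".join(out)

chunks, placement = place_chunks(b"lecture-video-bytes!")
print(fetch(chunks, placement, {s: 0 for s in SERVERS}) == b"lecture-video-bytes!")  # True
```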
Peterson explained that from the start, PlanetLab’s founders had a very strong conviction that such real life experiences were necessary to test assumptions. Only when the system was so deployed would researchers begin to be able to view the impact of hidden assumptions and to understand fully the problems associated with such deployment.
Says Peterson: “Build it, learn, build more, learn more. And at the end of the day you wind up with a service that people can use.”
Posted by Lorene Lavora
Although computers have advanced dramatically over the last 50 years, they still do not possess the basic conceptual intelligence that most humans take for granted. By leveraging human skills and abilities in a novel way we can solve large-scale computational problems and collect training data to teach computers many basic human talents. Professor Luis von Ahn — of the Computer Science Department at Carnegie Mellon University — discusses how human brains can act as processors in a distributed system, each performing a small part of a massive computation.
CODE (Computationally-Oriented Display Environment) is a visual programming language and system for parallel programming, letting users compose sequential programs into parallel ones. The parallel program is a directed graph, where data flows on arcs linking the nodes representing the sequential programs. The programs may be written in any language, and CODE outputs parallel programs for a variety of architectures, as its model is architecture independent.
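As a loose illustration of the dataflow idea (not CODE's visual notation), a node in such a graph can be modeled as an ordinary sequential function that fires once all of its input arcs carry data:

```python
# Toy dataflow execution: each node is an ordinary sequential function; a node
# runs as soon as every incoming arc has delivered a value. Not CODE syntax.

graph = {
    "double":  {"fn": lambda x: 2 * x,    "inputs": ["source"]},
    "square":  {"fn": lambda x: x * x,    "inputs": ["source"]},
    "combine": {"fn": lambda a, b: a + b, "inputs": ["double", "square"]},
}

def run(graph, source_value):
    values = {"source": source_value}
    pending = dict(graph)
    while pending:
        for name, node in list(pending.items()):
            if all(dep in values for dep in node["inputs"]):   # all arcs full?
                values[name] = node["fn"](*[values[d] for d in node["inputs"]])
                del pending[name]
    return values

print(run(graph, 3)["combine"])  # 2*3 + 3*3 = 15
```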
____ is the delivery of education at one location while the learning takes place at other locations.
Using ink-jet printer technology, but on a much larger scale, a(n) ____ creates photo-realistic-quality color prints.
____ contain the formats for how a particular object should display in a Web browser.
Technical certification programs are offered by many vendors, called ____ organizations, that develop and administer the examinations to determine whether a person is qualified for certification; a Web site showing certification percentages is shown in the accompanying figure
Double Data Rate SDRAM (DDR SDRAM) chips are even faster than SDRAM chips because they ____.
c. transfer data twice for every clock
d. large format printer
5 Multiple Choice Questions
RIMM (Rambus Inline Memory Model)
if-then-else control structure
5 True/False Questions
Together, the four basic operations or a processor (fetching, decoding, executing, and storing) comprise a(n) ____ cycle. → machine
a type of communications device that connects a communication chanel to sending or receving device is a ___ → router
A(n) ____ program interacts with a DBMS, which in turn interacts with the database. → hypercube
All of the following use direct access EXCEPT ____. → a collection of related Web pages
You can erase a DVD-ROM. → a collection of related Web pages
Library/Software Engineering/Program Structure
Essentially just an algorithm. Example: directions from location A to B. Linear and sequential.
Sequential with Resources
Example: a recipe. Sequential algorithm but requires acquisition and use of resources and produces new resources as an output.
Response to an action dependent on the current state. Messages can arrive in any order.
Concurrency. Message processing. Lost events. Error handling.
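A minimal sketch of the state-based case, assuming invented states and messages, just to show that the response depends on the current state and that out-of-order or unexpected messages need explicit handling:

```python
# Toy event-driven structure: the response to a message depends on the current
# state, and unexpected messages are handled explicitly rather than lost.

TRANSITIONS = {
    ("idle",    "start"): "running",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("running", "stop"):  "idle",
    ("paused",  "stop"):  "idle",
}

def handle(state, message):
    new_state = TRANSITIONS.get((state, message))
    if new_state is None:                        # error handling for odd ordering
        print(f"ignored '{message}' while {state}")
        return state
    return new_state

state = "idle"
for msg in ["pause", "start", "pause", "stop"]:  # messages may arrive in any order
    state = handle(state, msg)
print(state)  # idle
```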
Programming languages, Concurrent programming, Communicating agents, Multiprocessors, Language implementation, Joyce
Joyce is a programming language for parallel computers based on CSP and Pascal. A Joyce program defines concurrent agents which communicate through unbuffered channels. This paper describes a multiprocessor implementation of Joyce.
Hansen, Per Brinch, "A Multiprocessor Implementation of Joyce" (1988). Electrical Engineering and Computer Science Technical Reports. Paper 29.
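Joyce syntax is not reproduced here; as a loose illustration of the CSP-style model the abstract describes, the sketch below runs two concurrent "agents" that communicate over a small channel (a capacity-1 Python queue only approximates an unbuffered, rendezvous-style channel):

```python
# Loose illustration of CSP-style agents communicating over a channel.
# A capacity-1 queue only approximates Joyce's unbuffered (rendezvous) channels.
import threading, queue

def producer(ch):
    for n in range(5):
        ch.put(n * n)          # send
    ch.put(None)               # end-of-stream marker

def consumer(ch, result):
    while True:
        value = ch.get()       # receive
        if value is None:
            break
        result.append(value)

channel = queue.Queue(maxsize=1)
received = []
threads = [threading.Thread(target=producer, args=(channel,)),
           threading.Thread(target=consumer, args=(channel, received))]
for t in threads: t.start()
for t in threads: t.join()
print(received)  # [0, 1, 4, 9, 16]
```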
Parallel and Distributed Computation
For decades, the speed of processors was growing exponentially, but this has abruptly stopped. Instead, now the number of processor cores on a chip is growing exponentially. A graphics processing unit (GPU) in a laptop may have 100 cores, and supercomputers may have 1,000,000. At Michigan, we are developing algorithms and data structures that use parallelism to help solve large problems such as climate modeling and the design of ethical clinical trials. Abstract models of parallelism are also investigated, such as having a vast array of tiny processors all working synchronously on the same problem. We also study abstract models of distributed computation, where a large number of independent, unsynchronized computers are arranged in a (possibly unknown) network and must solve a problem only through local communication.
Stout, Quentin F.
Related Labs, Centers, and Groups
Software Systems Laboratory
Resource Sharing Computer Networks
Dr. Lawrence G. Roberts, Advanced Research Projects Agency
Just as time-shared computer systems have permitted groups of hundreds of individual users to share hardware and software resources with one another, networks connecting dozens of such systems will permit resource sharing between thousands of users. Each system, by virtue of being time-shared, can offer any of its services to another computer system on demand. The most important criterion for the type of network interconnection desired is that any user or program on any of the networked computers can utilize any program or subsystem available on any other computer without having to modify the remote program.
The objective of this program is twofold:
- to develop techniques and obtain experience on interconnecting computers in such a way that a very broad class of interactions are possible, and
- to improve and increase computer research productivity through resource sharing.
By establishing a network tying ARPA-sponsored computer research centers together, both goals are achieved. In fact, the most efficient way to develop the techniques needed for an effective network is by involving the research talent at these centers in prototype activity.
Currently there are thousands of computer centers in the country, each of which operates almost completely autonomously. There is some trading of programs between those machines, which are sufficiently similar to allow this, and there is technical communication through publications of technical meetings describing techniques developed. However, since the computer field is growing at such a rapid rate, a more immediate mechanism must be developed if there is to be significant cross-fertilization in sharing between these many centers. Although the same problem exists in many technological areas, the solution is most easily found and implemented by the computer community. If a sufficiently reliable and capable network were established linking these centers, many improvements could be obtained. There would be less duplication of large programs and systems, some of which require hundreds of man-months of effort. Currently such programs must be reprogrammed for each machine where they are needed even if they are only required occasionally. It is estimated that such duplicative efforts more than double the national costs of creating and maintaining the software. A network will not eliminate all of this duplication but can be used for those functions which are only infrequently called and those which only need to be tested. Further, there are large data files available at individual locations which are not valuable enough to warrant duplication at every computer center, but from which segments could be obtained at any network location. For example, within the ARPA research centers there are files of speech samples, digitized pictures and the semantic definitions of most English words.
Often it is important at a research establishment to test a new language developed at another installation to determine what features should be incorporated into local languages. Currently one either reprograms the language on his local machine or obtains sufficient remote console time to evaluate the language. Although it may be preferable to use the original system via remote consoles, this is often difficult or impossible due to console incompatibility. With an interactive network it is possible to use one's local consoles through the local computer to access the remote system, thus eliminating the need for compatible consoles and at the same time reducing the communications costs by several orders of magnitude.
Another important application of a network is to link specialized computers to general purpose computer centers. ILLIAC IV is an outstanding example of such a specialized machine. With recent improvements in the hardware area, it will become more cost effective to design and construct computers particularly efficient at other specialized tasks (e. g. compiling, list processing and information retrieval). Making such machines available to all the computer research establishments would significantly increase the capability at these other centers.
The military environment, like the scientific environment, includes thousands of computers of various vintages and vendors. The traditional staff elements (Personnel, Intelligence, Operations (Command and Control), Logistics, and Communications) throughout all the Services use various machines with varying degrees of success. With the current fractionation of computer resources in the absence of any technology permitting the interconnection and sharing of these resources, the current situation can only get worse. Those data files and programs which have common utility to many military organizations and installations must be stored, created and maintained separately at each different machine. Military systems interconnected in a distributed interactive network would obviate such constraints.
Relatively little work has been done in the past on interactive computer networks, and it is mainly with the advent of widespread time-sharing that such nets have become feasible. Most previous work has concentrated on either load-sharing or message-handling goals. Several attempts at load sharing have been made, including ones at Bell Telephone Laboratories and UCLA. The desire was to improve processor utilization through load equalization, but unfortunately the precise compatibility required is almost impossible to maintain.
More recently, experiments have been carried out between Lincoln Laboratory and System Development Corporation to test the feasibility of more general computer-computer interaction. This experiment demonstrated the relative ease of modifying time-sharing systems to permit network interactions and provided some statistics on the message lengths encountered. This experience has been added to through the introduction of the 338 display computer at ARPA tied to the Lincoln system. Although the requests in this communication link are totally in one direction, the form of communication utilized is identical with that expected in network activities and has extended the techniques to include graphic display interactions.
Preliminary Network Planning
In early 1967 preliminary plans for an interactive computer network were discussed with ARPA Information Processing Techniques contractors. Working groups were established to design standardized communications protocol and to specify their network requirements. A preliminary protocol was developed and discussed with interested parties during the summer of 1967.
Network Information Center
In order for people to utilize the envisioned computer network effectively, it was necessary to provide extremely good documentation on what programs and files are available throughout the net. This information should be available online to any individual in the network. It should be possible for him to add new program descriptions, edit previous descriptions, retrieve relevant information based on keyword searches and affix comments to program descriptions, which he has used. To achieve this goal, Stanford Research Institute has agreed to develop such a facility. This is an extension of the capability already achieved at SRI and is already in progress in order that it may become available concurrently with the network.
Multi-point, fast response, high capacity, reliable communications are required for an interactive computer network. The traffic between nodes is expected to consist mainly of short digital messages with a wide dispersal of destinations. Initially, message length will vary from one to one thousand characters, with an expected average length of 20 characters. Since a cross-country 50 kb communication line has a delay equivalent to 150 characters, messages must be continuously multiplexed into each line in order to maintain reasonable efficiency. Since the dispersion of destinations is large, messages with different origins and destinations must be concentrated into the same line. This can only be achieved with a store and forward system.
Message delay for on-line interactive work should be well below one second (origin to destination). This cannot be achieved with voice grade communication lines in a store and forward system. However, with 50-kilobit communication lines, the required response speed can be attained. The additional capacity obtained with 50 kb lines is also important, but is not the prime factor dictating the choice of these lines.
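A back-of-the-envelope check of that argument, assuming 8 bits per character, taking the paper's "150 characters on a 50 kb line" as roughly 24 ms of propagation delay, and using a nominal 2 kb/s figure for a voice-grade line (both figures are assumptions for illustration):

```python
# Rough serialization-delay arithmetic for a store-and-forward hop.
# Assumes 8 bits/character and ignores nodal processing time (illustrative only).

def hop_delay_s(chars, bits_per_second, propagation_s=0.024):
    return (chars * 8) / bits_per_second + propagation_s

for line_bps in (2_000, 50_000):           # nominal voice-grade vs. 50 kb line
    for msg_chars in (20, 1_000):          # average and maximum message sizes
        d = hop_delay_s(msg_chars, line_bps)
        print(f"{line_bps:>6} b/s, {msg_chars:>4} chars: {d:.3f} s per hop")
# A 1000-character message needs ~4 s per hop at 2 kb/s but ~0.18 s at 50 kb/s,
# which is why sub-second end-to-end delay points to the faster lines.
```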
After considering the trade-offs associated with the communications subsystem, it was decided to design and build a store and forward net using message processors at each research center interconnected with 50 kb communication lines. Such a distributed communication system will be revolutionary, providing vastly reduced transmission costs, fast response and high reliability. The effect of providing such an efficient communication capability to the computer researchers should be to inspire the development of creative and effective network techniques.
Copyright © 2001 Dr. Lawrence G. Roberts
Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in hardware and software and in computer theory, design, and applications. It has also provided contributors with a medium in which they can examine their subjects in greater depth and breadth than that allowed by standard journal articles. As a result, many articles have become standard references that continue to be of significant, lasting value despite the rapid growth taking place in the field.
This volume is organized around engineering large scale software systems. It discusses which technologies are useful for building these systems, which are useful to incorporate in these systems, and which are useful to evaluate these systems.
Researchers and graduate students in computer science.
Mar. 13, 2013 - Large parts of our lives are now being monitored and analysed by computers. Log on to Amazon and intelligent data analysis software can recommend a selection of books you might like to read. Far from being a sinister intrusion into people's privacy, the purpose of these systems is to improve our lives, experts say.
Professor Bogdan Gabrys, chair in computational intelligence at BU's Smart Technology Research Centre explains: "There is a huge explosion in the amount and availability of data we are generating on a daily basis but unless you use it in the right way that information is not going to be very useful. We have been working with a number of companies who want to use the data they obtain during their daily business to make predictions about sales and model customer behaviour."
Known as predictive analysis, the work by Professor Gabrys and his colleagues goes beyond simply crunching numbers. They are developing computer programs capable of learning. With this intelligent software, computers can make judgements about the quality and reliability of the data they gather. They look for patterns and adapt according to what the information will be used for.
"We are trying to design adaptive algorithms that learn on the basis of the data they receive," says Professor Gabrys. This has led to the Centre's work supporting businesses in the tourism and communications industries:
"We have been working with Lufthansa Systems so the airline can accurately forecast demand for different types of plane tickets. Customers going on holiday in economy class tend to book their tickets a few months in advance. If the planes fill up with economy customers, they have to turn away lucrative business and we've found first class customers tend to book late."
"Communications companies like BT also want to be able to predict whether a customer is going to switch providers as it costs BT between five to eight times more to get a new customer than to retain an existing one. So we have been helping them detect if someone is likely to change service provider, so they can then be proactive, contact such customers with a good offer or just give them more of a personal touch."
Building a learning computer system capable of adapting according to the information that is fed into it is no easy task. Most prediction software until recently has been tailor made to solve specific problems. This can make them expensive to maintain and hard to adapt.
For this, Professor Gabrys and his team have turned to one of the most successful problem solvers on the planet for inspiration -- Mother Nature herself. They are building systems which process information in a similar way to the human brain, with its networks of neurons that constantly rewire themselves as we learn.
They have also drawn inspiration from genetics and natural evolution, from the behaviour of insects such as bees and ants, and from flocking and swarming behaviour in birds and fish to devise robust learning and optimisation algorithms.
Professor Gabrys says: "We are trying to build more flexible systems and push the boundaries of how intelligent these systems are."
So with ever more intelligent systems, will computers soon take the guesswork out of our everyday lives with accurate and reliable predictions?
Professor Gabrys is not so sure: "If someone tells you they can reliably predict really complex systems such as economies or financial markets one year ahead, do not believe them. Some things are predictable and some are not. The critical aspect in what we do is knowing the difference between them."
Mike Addlesee, Rupert Curwen, Steve Hodges, Joe Newman, Pete Steggles, Andy Ward, and Andy Hopper
Sentient computing systems, which can change their behavior based on a model of the environment they construct using sensor data, may hold the key to managing tomorrow's device-rich mobile networks.
As computer users become increasingly mobile and the diversity of devices with which they interact increases, the authors note that the overhead of configuring and personalizing these systems must also increase. A natural solution to this problem involves creating devices and applications that appear to cooperate with users, reacting as though they are aware of the context and manner in which they are being used, and reconfiguring themselves appropriately.
At AT&T Laboratories Cambridge, the authors built a system that uses sensors to update a model of the real world. The model describes the world much as users themselves would, and they can use it to write programs that react to changes in the environment according to their preferences. The authors call their approach sentient computing because the applications appear to share the user's perception of the environment. Sentient computing systems create applications that appear to perceive the world, making it easy to configure and personalize networked devices in ways that users can easily understand.
But sentient computing offers more than a solution to the problems of con-figuration and personalization. When people interact with computer systems in this way, the environment itself becomes the user interface--a natural goal for human-computer interaction.
Publisher IEEE Computer Society
Copyright © 2001 IEEE. Reprinted from IEEE Computer Society. This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.
Computer modeling is the use of computers to create replicas of real-life objects or to simulate processes. Computer models are valuable because they allow observers to study the functioning of an entire system. Models can also impose conditions that are not easily or safely applied in a real situation.
Computer models can handle complex evaluations involving huge numbers of variables. Models may be intricately detailed but easily modified and manipulated. Processes can be sped up or slowed down. Individual components can be altered to reveal their impact. With such flexibility, computer models are capable of exploring and studying a great variety of topics and concepts. Conventional mathematical modeling cannot begin to achieve the range of possibilities provided by computer modeling and simulations.
How They Work
A computer model is usually defined in mathematical terms by a computer program. The program applies concepts of algorithms to software. Step-by-step procedures are written as mathematical logic. The procedures instruct a machine to perform in a prescribed way. The mathematical equations are constructed to represent the functioning of a system being studied, from hurricane formation to cell growth. When the program is running, the mathematical dynamics become comparable to the dynamics of the real system. Data are fed into the program. The results are displayed or printed.
Models provide a flexibility that enables predictions of future and hypothetical events. Predictions are based on a set of known conditions. These are established at the beginning of a modeling sequence. For example, a model may analyze the interaction of weather, pollution, and ozone for a city. It requires initial information regarding the city's size, industries, and range of weather conditions. From this information the model can project future smog intensity over a period of time. Hypothetical events are studied by introducing different kinds of data. By manipulating data for a model of a rocket launch, the model can determine the conditions required for the safest launch.
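A minimal sketch of that idea: a single difference equation stepped forward from known initial conditions, then rerun under a hypothetical scenario (the pollutant model and constants are invented, not a real smog model):

```python
# Toy simulation: a single difference equation stepped forward from known
# initial conditions. Emission and removal rates are invented for illustration.

def simulate_pollutant(days, initial_level, daily_emission, removal_fraction):
    levels = [initial_level]
    for _ in range(days):
        current = levels[-1]
        levels.append(current + daily_emission - removal_fraction * current)
    return levels

# Same model, two hypothetical scenarios: business as usual vs. halved emissions.
for emission in (10.0, 5.0):
    result = simulate_pollutant(days=30, initial_level=50.0,
                                daily_emission=emission, removal_fraction=0.1)
    print(f"emission {emission}: level after 30 days = {result[-1]:.1f}")
```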
The processes of many systems are extremely complicated. For example, consider the branching structures of neurons of the human brain. The mathematical representation of the neuron would have to be extremely complex to accommodate the many possible factors involved.
Types of Models
The earliest computer models presented data in the form of spreadsheets. They resembled typical bookkeeping ledgers of columns and rows. Spreadsheet programs were widely applicable to business concerns, especially those involving financial information. Later developments included interactive spreadsheets, which provided automatic recalculation and linking between multiple worksheets.
Another type of model involves computer-aided design (CAD). CAD provides a representation of an object by computer graphics. Such models display exact replicas of real-life objects but on a smaller scale.
A simulation is a CAD representation of ongoing real or imagined situations or phenomena. The simulation is sometimes achieved by combining a sequence of models, creating the effect of movement and motion. For example, one of the earliest computer simulations was developed during the Manhattan Project to depict the process of nuclear detonation. A realistic simulation incorporates many parameters. It helps viewers appreciate the multiple cause-and-effect relationships in a situation. Simulations allow people to study structures and behaviors by interacting with an artificial model. People hoping to become surgeons, airplane pilots, and train engineers are among the many students whose education includes simulator training.
The variety and sophistication of computer models have increased as computers have become more powerful. Most personal and office computers are now connected to the Internet. Yet most computers are also idle during some hours of the day or night. The collective power of these idle computers can be harnessed through distributed computing. The idle computers become part of the public distributed network for various modeling efforts. Some of these distributed computing projects work as screensavers or in the background. The idle processors perform together as a virtual supercomputer to tackle complex modeling and simulation projects.
Most people are familiar with simulations provided in computer and video games, and students may have used computer simulations and animations in the classroom. The scientific community uses models and simulation in numerous practical applications.
Occasionally, computer technology makes a break with the past. The relational model of database management, with its simple, tabular data structures and powerful data manipulation operations, was one such revolution. Another revolution in computing technology, client/server computing, took place in the last decade with the spread of minicomputers and microcomputers and a network to support intermachine communication. These highly cost-effective and flexible open systems have made client/server computing possible. Client/server computing delivers the benefits of the network computing model along with the shared data access and high performance characteristics of the host-based computing model.
Clients and servers are characterized by endless access to each other's resources. They provide advanced communications and raw computing power to handle demanding applications, as well as graphic user interfaces (GUIs).
In a client/server database system there are three distinct components, each focusing on a specific job required to provide a user's environment:
- a database server that manages the database among a number of clients,
- a client application that desires services of the database, and
- a network that supports the communication between the client (front end) and the server that deals with the network and the database system (back end).
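A minimal sketch of these three components in ordinary socket code (a toy one-entry "database" on localhost, not a real DBMS or any particular product):

```python
# Toy client/server exchange: the "back end" answers queries from the "front
# end" over a network socket. The one-entry database and port are invented.
import socket, threading

HOST, PORT = "127.0.0.1", 5151
DATABASE = {"name": "Ada", "role": "analyst"}

def serve_one(listener):
    conn, _ = listener.accept()
    with conn:
        key = conn.recv(1024).decode().strip()          # the client's query
        conn.sendall(DATABASE.get(key, "not found").encode())

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind((HOST, PORT))
listener.listen(1)
t = threading.Thread(target=serve_one, args=(listener,))
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as front_end:
    front_end.connect((HOST, PORT))
    front_end.sendall(b"role")
    print(front_end.recv(1024).decode())   # analyst

t.join()
listener.close()
```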
Learning To Think Spatially
of working and final results (the functional equivalent of short-term memory)
Assistance: provides prompts, feedback, hints, and suggestions to guide the choice of data analysis steps and to manage the flow of work
Display: provides a flexible display system for the representation of working and final results to oneself and to others—in physical form (e.g., a graph on paper, a three-dimensional model of molecular structure) or in virtual form (e.g., on-screen, for hard-copy printing, for export to other software packages)
For example, throughout history, we have developed, taught, and used a suite of tools—abacuses, compasses, Cuisenaire rods, protractors, graph paper, measuring and slide rules, and mechanical and electronic calculators—to facilitate calculations in the process of mathematical problem solving. Today, with the advent of sophisticated computer technologies, we are beginning to teach students to use software such as spreadsheets, database management programs, computer programming languages, and statistical analysis programs to perform calculations and to solve mathematical problems.
By routinizing basic mathematical operations (simple—such as addition, subtraction, multiplication, and division, complex—such as percentages, square roots, exponentiation, or generating trigonometric functions), tools and technologies provide ways of performing calculations and tracking the flow of sequences of chained operations. They can speed up the process of problem solving and increase the chances of arriving at a correct answer. They also provide ways of representing the working and final results to oneself and to others. Similar suites of tools and technologies can support spatial thinking in other knowledge domains (e.g., in architecture: pencil sketches, colored perspective drawings, sections [plans, elevations, etc.], balsa wood and cardboard models, CAD systems, virtual reality displays; in sea navigation; portolan charts, astrolabes, compasses, sextants, modified Mercator projection maps, chronometers, celestial tables, Loran, GPS).
In any knowledge domain, the components of a suite of support systems serve different functions in different contexts, for example, trading off speed and simplicity (in terms of data needs and the execution of operations) for depth and complexity. The elements of a suite of tools are not necessarily built in a coordinated fashion, either in terms of a division of functions or in terms of common design principles: they are assembled over time, with new tools adding to or replacing existing ones. However, their alignment along a low- to high-technology continuum is not necessarily synonymous with worse to better. Because of their simplicity, transparency, and intuitive nature, low-technology tools are often taught and used as precursors for understanding the complex, nontransparent, and non-intuitive operations of high-technology tools. Indeed, in many instances, the “back-of-the-envelope” answers generated by low-technology tools are perfectly adequate to the task at hand. However, with the increasing link between workforce demands and digital information technology, familiarity with and indeed mastery over high-technology tools is increasingly important.
6.4 TOOLS FOR THOUGHT: THE LIMITS TO POWER
Support systems, especially those that are computer based, are the cognitive equivalent of power tools. With the promise of access to such power comes costs and challenges. These include the time and committed effort it takes to learn to use a support system (and continuously upgrade to new versions), the need to understand the system’s range of appropriate and inappropriate uses, and the need to appreciate the system’s characteristic limits and idiosyncrasies.
There is a wide range of support systems available in science, mathematics, and design (see Box 7.1 for a description of hi-tech support systems for spatial thinking). These support systems can be a boon or a bane to the learner. Experts in a knowledge domain start with an understanding
Avoiding Surprise in an Era of Global Technology Advances
undergraduate computer science courses and so are available in many parts of the world. This implies that opposing forces could readily encrypt their own transmissions if they chose to do so. Such encryption can also be expected to be very strong, and difficult or impossible to break using known methods. Programs such as Pretty Good Privacy (PGP) and GNU Privacy Guard (GnuPG), which can be obtained over the Internet by anyone, provide essentially unbreakable security if used properly.
Computational systems are used to process the data gathered by sensor systems and human agents and to produce information that can be used by decision makers in command centers and on the field. As the armed forces become increasingly networked and more data become available online, BLUE forces will rely increasingly on computational systems to sift through the available information to provide situational awareness and to identify patterns. Computational systems are also used to automate difficult or tedious decision processes. Logistic operations, for example, can be optimized through the use of automated planning and scheduling systems.
The economies of scale and competitive pressures in the commercial computer sector have produced a situation where processing power has become a commodity. Powerful 32- and 64-bit microprocessors are produced at low cost both domestically and internationally. One result of this trend is that it has become much simpler for other nations to acquire state-of-the-art computational capabilities. Consider the fact that the majority of the supercomputers in the Top 500 listing are composed of collections of standard microprocessors lashed together with high-performance networks.
It will always be the case that the most demanding computational applications such as image interpretation, automated language translation, data mining, and so on will fuel the drive for ever increasing processing power. However, the skills and components required to construct powerful computational clusters are now widely available internationally.
It is not only access to high-performance computing that is a concern. Increasingly, low-power electronics is an area of active research and development that will be especially important in the area of sensor networks. To provide increased intelligence at the sensor and to reduce the demands on the network, a significant amount of processing will have to occur at the sensor; such processing power is enabled by low-power electronics.
As the price of computing hardware has dropped, the relative importance of software has increased. At this point, some of the most significant technical challenges in implementing the vision of the Future Combat Systems program center on the issue of developing reliable software systems that can coordinate distributed networks of sensors, actuators, and computers into a seamless whole. This task is complicated by the fact that the systems are expected to work in a dynamic environment in which elements may be added or removed unexpectedly and communications are not assured. In this regard, research and development being carried out in distributed systems, grid computing, and sensor networks should be viewed as germane to the military context.
The ability to produce and maintain sophisticated software systems relies on the availability of skilled personnel, programmers, analysts, testers, and others. Here again, it is the case that human resources are available internationally. China, for example, currently graduates five times more engineers than does the United States. The Indian city of Bangalore now has more technology jobs than Silicon Valley. In the face of current worldwide trends, it is unlikely that BLUE forces will have a significant advantage in terms of their ability to design, deploy, and operate the computational infrastructure required to support information collection and exploitation. The number of trained software engineers is declining in the United States but is increasing rapidly in countries in Asia.
- Our programs develop outstanding computer professionals. Graduates are well prepared for careers in business, government, education, or research. Students become thoroughly grounded in programming languages, computer architecture, computer systems, and theory of computation. In addition, they gain experience in applied computer science areas such as computer graphics, compilers, databases, and networking.
- Our students are enjoying notable professional success in industry and education, including Microsoft, IBM, AT&T Bell Labs, Cisco Systems, First Data Corp., Caterpillar, SITA, Bluestem Systems, Amteva Technologies, Commerce Clearing House, Sterling Software, Marathon Photo, LHS Communications Systems, Software Artisans, and the University of Texas.
- Students develop the essentials for success in the computer science profession, and all areas of life — problem-solving ability, logical thinking, creativity, broad comprehension, and fine focus of attention.
- Students gain experience with the most advanced operating systems and computer environments including Microsoft Windows and Linux.
- Students study the unifying theory of programming languages and explore a variety of modern languages and approaches to programming in various classes, for example, Java and C# (for enterprise and large-scale systems), “Scheme/LISP” (for expert systems), and “ML” (for research in the functional approach to programming). Other specialized languages are taught as needed.
- Our faculty use an effective teaching approach that creates a learning environment of ease and enjoyment without the stress and strain that commonly accompany a rigorous discipline.
- Students study the basic principles underlying all computer hardware, and examine principles that have given rise to the most recent advances in high-performance and super computing systems, including networked, parallel, distributed, and highly concurrent approaches. Each of these systems use many computers in combination to solve a large computational task, but they differ in their scope and approach.
- The Department of Computer Science has several very well equipped computing laboratories, which provide Internet access, as well as the departmental network, and campus network. A variety of servers provide support for classes, development, and research activities. Students can also access a wide variety of resources, including scanners, printers, and other campus services including the library online catalogue and materials.
- High-speed campus and Internet access is provided to student housing, all student labs, and several other access places around campus.
- Occasional field trips and guest lectures by successful computer professionals are offered to provide students with the latest developments in computer science and their practical applications in science and industry.
- The electronic computer is amazingly powerful, and yet is limited compared to the parallel processing capability of the human brain's 100 billion neurons. This vast capability of the brain physiology is directly cultured through the University's curriculum, so that graduates not only master computer science, but also grow in the ability to spontaneously operate from the total potential of their own brain physiology and make right decisions without mistakes.
A User Machine in a Time-Sharing System,
B. W. LAMPSON, W. W. LICHTENBERGER, MEMBER, IEEE, AND M. W. PIRTLE
Note: This web page was converted automatically from a Word original. There may be problems with the formatting and the pictures. To see the intended form, follow one of the links below.
Citation: B. Lampson, M. Pirtle and W. Lichtenberger. A user machine in a time-sharing system. Proc. IEEE 54, 12 (Dec. 1966), pp 1766-1774. Reprinted in Computer Structures, ed. Bell and Newell, McGraw-Hill, 1971, pp 291-300.
Email: [email protected]. This paper is at http://www.research.microsoft.com.
Abstract—This paper describes the design of the computer seen by a machine-language programmer in a time-sharing system developed at the University of California at Berkeley. Some of the instructions in this machine are executed by the hardware, and some are implemented by software. The user, however, thinks of them all as part of his machine, a machine having extensive and unusual capabilities, many of which might be part of the hardware of a (considerably more expensive) computer.
Among the important features of the machine are the arithmetic and string manipulation instructions, the very general memory allocation and configuration mechanism, and the multiple processes which can be created by the program. Facilities are provided for communication among these processes and for the control of exceptional conditions.
The input-output system is capable of handling all of the peripheral equipment in a uniform and convenient manner through files having symbolic names. Programs can access files belonging to a number of people, but each person can protect his own files from unauthorized access by others.
Some mention is made at various points of the techniques of implementation, but the main emphasis is on the appearance of the user's machine.
A characteristic of a time-sharing system is that the computer seen by the user programming in machine language differs from that on which the system is implemented. In fact, the user machine is defined by the combination of the time-sharing hardware running in user mode and the software which controls input-output, deals with illegal actions which may be taken by a user's program, and provides various other services. If the hardware is arranged in such a way that calls on the system have the same form as the hardware instructions of the machine, then the distinction becomes irrelevant to the user; he simply programs a machine with an unusual and powerful instruction set which relieves him of many of the problems of conventional machine-language programming.
In a time-sharing system that has been developed by and for the use of members of Project Genie at the University of California at Berkeley, the user machine has a number of interesting characteristics. The computer in this system is an SDS 930, a 24-bit, fixed-point machine with one index register, multi-level indirect addressing, a 14-bit address field, and 32 thousand words of 1.75 μs memory in two independent modules. Figure 1 shows the basic configuration of equipment. The memory is interleaved between the two modules so that processing and drum transfers may occur simultaneously. A detailed description of the various hardware modifications of the computer and their implications for the performance of the overall system has been given in a previous paper.
Briefly, these modifications include the addition of monitor and user modes in which, for user mode, the execution of a class of instructions is prevented and replaced by a trap to a system routine. The protection from unauthorized access to memory has been subsumed in an address mapping scheme: both the 16 384 words addressable by a user program (logical addresses) and the 32 768 words of actual core memory (physical addresses) have been divided into 2048-word pages. A set of eight six-bit hardware registers defines a map from the logical address space to the real memory by specifying the real page that is to correspond to each of the user's logical pages. Implicit in this scheme is the capability of marking each of the user's pages as unassigned or read-only, so that any attempt to access such a page improperly will result in a trap.
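The translation implied by this map can be sketched in a few lines of C. This is not the hardware, only an illustration of the arithmetic: the page size comes from the text, while the flag layout, the structure fields, and the function name translate are assumptions made for the example.

#include <stdint.h>
#include <stdio.h>

#define PAGE_WORDS   2048u          /* 2048-word pages                   */
#define NUM_PAGES    8u             /* eight logical pages = 16K words   */
#define FLAG_UNASSIGNED 0x40u       /* illustrative flag bits            */
#define FLAG_READ_ONLY  0x80u

/* One map entry: a real page number plus illustrative flag bits. */
typedef struct {
    uint8_t real_page;              /* selects one of the 2K real pages  */
    uint8_t flags;
} map_entry;

/* Translate a 14-bit logical address through the map, trapping on
 * unassigned pages or on writes to read-only pages. Returns -1 on trap. */
long translate(const map_entry map[NUM_PAGES], uint16_t logical, int is_write)
{
    unsigned page   = (logical >> 11) & 0x7;   /* top 3 bits pick the page */
    unsigned offset = logical & 0x7FF;         /* low 11 bits within page  */

    if (map[page].flags & FLAG_UNASSIGNED) return -1;              /* trap */
    if (is_write && (map[page].flags & FLAG_READ_ONLY)) return -1; /* trap */

    return (long)map[page].real_page * PAGE_WORDS + offset;
}

int main(void)
{
    map_entry map[NUM_PAGES] = {{3,0},{7,0},{12,FLAG_READ_ONLY},
                                {0,FLAG_UNASSIGNED},{0,FLAG_UNASSIGNED},
                                {0,FLAG_UNASSIGNED},{0,FLAG_UNASSIGNED},
                                {0,FLAG_UNASSIGNED}};
    printf("%ld\n", translate(map, 0x0803, 0));  /* page 1, offset 3          */
    printf("%ld\n", translate(map, 0x1005, 1));  /* write to read-only -> -1  */
    return 0;
}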
Fig. 1. Configuration of equipment.
All memory references in user mode are mapped. In monitor mode, all memory references are normally absolute. It is possible, however, with any instruction in monitor mode, or even within a chain of indirect addressing, to specify use of the user map. Furthermore, in monitor mode the top 4096 words are mapped through two additional registers called the monitor map. The mapping process is illustrated in Fig. 2.
Fig. 2. The hardware memory map. (a) Relation between virtual and real memory for a typical map. (b) Construction of a real memory address.
Another significant hardware modification is the mechanism for going between modes. Once the machine is in user mode, it can get to monitor mode under three circumstances:
1) if a hardware interrupt occurs,
2) if a trap is generated by the user program as outlined, and,
3) if an instruction with a particular configuration of two bits is executed. Such an instruction is called a system programmed operator (SYSPOP).
In case 3), the six-bit operation field is used to select one of 64 locations in absolute core. The current address of the instruction is put into absolute location zero as a subroutine link, the indirect address bit of this link word is set, and another bit is set, marking the memory location in the link word as having come from user-mapped memory. The system routine thus invoked may take a parameter from the word addressed by the SYSPOP, since its address field is not interpreted by the hardware. The routine will address the parameter indirectly through location zero and, because of the bit marking the contents of location zero as having come from user mode, the user map will be applied to the remainder of the address indirection. All calls on the system that are not inadvertent are made in this way.
A monitor mode program gets into user mode by transferring to an address with mapping specified. This means, among other things, that a SYSPOP can return to the user program simply by branching indirect through location zero.
As the above discussion has perhaps indicated, the mode-changing arrangements are very clean and permit rapid and natural transfers of control between user and system programs. Advantage has been taken of this fact to create a rather grandiose machine for the user. Its features are the subject of this paper.
Basic Features of the Machine
A user in the Berkeley time-sharing system, working at what he thinks of as the hardware language level, has at his disposal a machine with a configuration and capability that can be conveniently controlled by the execution of machine instruction sequences. Its simplest configuration is very similar to that of a standard medium-sized computer. In this configuration, the machine possesses the standard 930 complement of arithmetic and logic instructions and, in addition, a set of software interpreted monitor and executive instructions. The latter instructions, which will be discussed more fully in the following, do rather complex input-output of many different kinds, perform many frequently used table lookup and string processing functions, implement floating point operations, and provide for the creation of more complex machine configurations. Some examples of the instructions available are:
1) Load A, B, or X (index) registers from memory or store any of the registers. Indexing and indirect addressing are available on these and almost all other instructions. Double word load and store are also available.
2) The normal complement of fixed-point arithmetic and logic operations.
3) Skips on various arithmetic and logic conditions.
4) Floating point arithmetic and input-output. The latter is in free format or in the equivalent of Fortran E or F format.
5) Input a character from a teletype or write a block of arbitrary length on a drum file.
6) Look up a string in a hash-coded table and obtain its position in the table.
7) Create a new process and start it running concurrently with the present one at a specified point.
8) Redefine the memory of the machine to include a portion of that which is also being used by another program.
It should be emphasized that, although many of these instructions are software interpreted, their format is identical to the standard machine instruction format, with the exception of the one bit which specifies a system interpreted instruction. Since the system interpretation of these instructions is completely invisible to the machine user, and since these instructions do have the standard machine instruction format, the user and his program make no distinction between hardware and software interpreted instructions.
Some of the possible 192 operation codes are not legal in the user machine. Included in this category are those hardware instructions which would halt the machine or interfere with the input-output if allowed to execute, and those software interpreted instructions which attempt to do things which are forbidden to the program. Attempted execution of one of these instructions will result in an illegal instruction violation. The effect of an illegal instruction violation is described later.
The memory size and organization of the machine is specified by an appropriate sequence of instructions. For example, the user may specify a machine that has 6K of memory with addresses from 0 to 13777₈; alternatively, he may specify that the 6K should include addresses 0 to 3777₈, 14000₈ to 17777₈, and 34000₈ to 37777₈. The user may also specify the size and configuration of the machine's secondary storage and, to a considerable extent, the structure of its input-output system. A full discussion of this capability will be deferred to a later section.
The next few paragraphs discuss the mechanism by which the user's program may specify its memory size and organization. This mechanism, known as the process map to distinguish it from the hardware memory address mapping, uses a (software) mapping register consisting of eight 6-bit bytes, one byte for each of the eight 2K blocks addressable by the 14 bit address field of an instruction. Each of these bytes either is 0 or addresses one of the 63 words in a table called the private memory table (PMT). Each user has his own private memory table. An entry in this table provides information about a particular 2K block of memory. The block may be either local to the user or it may be shared. If the block is local, the entry gives information about whether it is currently in core or on the drum. This information is important to the system but need not concern the user. If the block is shared, its PMT entry points to an entry in another table called the shared memory table (SMT). Entries in this table describe blocks of memory that are shared by several users. Such blocks may contain invariant programs and constants, in which case they will be marked as read-only, or they may contain arbitrary data which is being processed by programs belonging to two different users.
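The relationship between the process map, the private memory table, and the shared memory table can be pictured with a few C structures. The field names and table sizes below are illustrative assumptions, not the system's actual layout.

#include <stdint.h>

#define MAP_BYTES   8      /* one byte per 2K logical page             */
#define PMT_SIZE    64     /* byte value 0 = unassigned; 1..63 index PMT */
#define SMT_SIZE    256    /* illustrative size for the shared table   */

/* Where a local block currently lives (system bookkeeping). */
enum residence { ON_DRUM, IN_CORE };

/* One private memory table entry: either a local block or a
 * reference into the shared memory table. */
typedef struct {
    int  in_use;
    int  shared;             /* 0 = local block, 1 = shared block      */
    union {
        struct { enum residence where; unsigned addr; } local;
        unsigned smt_index;  /* index into the shared memory table     */
    } u;
} pmt_entry;

/* One shared memory table entry, possibly marked read-only. */
typedef struct {
    int      in_use;
    int      read_only;
    unsigned drum_addr;
    unsigned ref_count;      /* how many PMTs point here               */
} smt_entry;

/* Per-user state: a process map (8 bytes) plus a private memory table. */
typedef struct {
    uint8_t   map[MAP_BYTES];      /* 0 = unassigned, else PMT index   */
    pmt_entry pmt[PMT_SIZE];
} user_memory;

static smt_entry smt[SMT_SIZE];    /* one table shared by all users    */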
Fig 3. Layout of virtual memory for a typical process.
A possible arrangement of logical or virtual memory for a process is shown in Fig. 3. The nature of each page has been noted in the picture of the virtual memory; this information can also be obtained by taking the corresponding byte of the map and looking at the PMT entry specified by that byte. The figure shows a large amount of shared memory, which suggests that the process might be a compilation, sharing the code for the compiler with other processes translating programs written in the same source language. Virtual pages one and two might hold tables and temporary storage which are unique to each separate compilation. Note that, although the flexibility of the map allows any block of code or data to appear anywhere in the virtual memory, it is certainly not true that a program can run regardless of which pages it is in. In particular, if it contains references to itself, such as branch instructions, then it must run in the same virtual pages into which it was loaded.
Two instructions are provided which permit the user to read and modify his process map. The ability to read the process mapping registers permits the user to obtain the current memory assignment, and the ability to write the registers permits him to reassign memory in any way that suits his fancy. The system naturally checks each new map as it is established to ensure that the process is not attempting to obtain unauthorized access to memory that does not belong to it.
When the user's process is initiated, it is assigned only enough memory to contain the program and data as initially loaded. For instance, if the program and constants occupy 3000₈ words, two blocks, say blocks 0 and 1, will be assigned. At this point, the first two bytes of the process mapping register will be nonzero; the others will be zero. When the program runs, it may address memory outside of the first 4K. If it does, and if the user has specified a machine size larger than 4K, a new block of memory will be assigned to him which makes the formerly illegal reference legal. In this way, the user's process may obtain more memory. In fact, it may easily obtain more than 16K of memory simply by addressing 16K, reading and preserving the process mapping register, setting it with some of the bytes cleared to zero, and grabbing some more memory. Of course, only 16K can be addressed at one time; this is a limitation imposed by the address field of the machine.
There is an instruction that allows a process to specify the maximum amount of memory that it is allowed to have. If it attempts to obtain more than this amount, a memory violation will occur. A memory violation can also be caused by attempts to transfer into or indirect through unassigned memory, or to store into read-only memory. The effect of this violation is similar to the effect of an illegal instruction violation and will be discussed.
The facilities just described are entirely sufficient for programs which need to reorganize the machine's memory solely for internal purposes. In many cases, however, the program wishes to obtain access to memory blocks which have been created by the system or by other programs. For example, there may be a package of mathematical and utility routines in the system which the program would like to use. To accommodate this requirement, there is an instruction which establishes a relationship between a name and a certain process mapping function. This instruction moves the PMT entries for the blocks addressed by the specified process mapping function into the shared memory table so that they are generally accessible to all users. Once this correspondence has been established, there is another instruction which allows a different user to deliver the name and obtain in return the associated process map. This instruction will, if necessary, make new entries in the second user's PMT. Various subsystems and programs of general interest have names permanently assigned to them by the system.
Fig. 4. Process and memory configuration for two users. (The processes are numbered for each user and are represented by their process mapping registers. Memory blocks are identified by drum addresses, which are written M1, M2, ...)
The user machine thus makes it possible for a number of processes belonging to independent users to run with memory which is an arbitrary combination of blocks local to each individual process, blocks shared between several processes, and blocks permanently available in the system. A complex configuration is sketched in Fig. 4. Process 1.1 was shown in more detail in Fig.3. Each box represents a process, and the numbers within represent the eight map bytes. The arrows between processes show the process hierarchy, which is discussed in the next section. Note that the PMT's belong to the users, not to the processes.
From the above discussion, it is apparent that the user can manipulate the machine memory configuration to perform simple memory overlays, to change data bases, or to perform other more complex tasks requiring memory reconfiguration. For example, the use of common routines is greatly facilitated, since it is necessary only to adjust the process map so that 1) memory references internal and external to the common routine are correct, and 2) the memory area in which the routine resides is read-only. In the simplest case, in which the common routine and the data base fit into 16K of memory, the map is initially established and remains static throughout the execution of the routine. In other cases where the routine and data base do not fit into 16K, or where several common routines are concurrently employed, it may be necessary to make frequent adjustment to the map during execution.
An important feature of the user machine allows the user program, which in the current context will be referred to as the controlling process, to establish one or more subsidiary processes. With a few minor exceptions, to be discussed, each subsidiary process has the same status as the controlling process. Thus, it may in turn establish a subsidiary process. It is therefore apparent that the user machine is in fact a multi-processing machine. The original suggestion which gave rise to this capability was made by Conway; more recently the Multics system has included a multi-process capability.
A process is the logical environment for the execution of a program, as contrasted to the physical environment, which is a hardware processor. It is defined by the information which is required for the program to run; this information is called the state vector. To create a new process, a given process executes an instruction that has arguments specifying the state vector of the new process. This state vector includes the program counter, the central registers, and the process map. The new process may have a memory configuration which is the same as, or completely different from, that of the originating process. The only constraint placed on this memory specification is that the total memory available to the multi-process system is limited to 128K by the process mapping mechanism, which is common to all processes. Each user, of course, has his own 128K.
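A minimal sketch of a state vector and of process creation, in C. The register names follow the 930 (A, B, X); everything else, including the function name create_process, the hierarchy pointer fields, and the decision to start the new process suspended, is an assumption made for illustration.

#include <stdint.h>
#include <stdlib.h>

/* The state vector: everything needed to run a program on this machine.
 * Sizes and field names are illustrative. */
typedef struct process process;
struct process {
    uint32_t pc;            /* program counter                          */
    uint32_t a, b, x;       /* central registers                        */
    uint8_t  map[8];        /* process map, one byte per 2K page        */
    int      suspended;     /* start/stop are separate operations       */
    process *up, *down, *ring;   /* hierarchy pointers (see below)      */
};

/* Create a subsidiary process from an explicit state vector.  The new
 * process may share the creator's map or use a different one.
 * (The ring is simplified here to a NULL-terminated sibling list.) */
process *create_process(process *parent, uint32_t start_pc, const uint8_t map[8])
{
    process *p = calloc(1, sizeof *p);
    if (!p) return NULL;
    p->pc = start_pc;
    for (int i = 0; i < 8; i++) p->map[i] = map[i];
    p->suspended = 1;
    p->up = parent;
    p->ring = parent->down;       /* link in front of existing children */
    parent->down = p;
    return p;
}

int main(void)
{
    process root = {0};
    uint8_t map[8] = {1, 2, 0, 0, 0, 0, 0, 0};
    process *child = create_process(&root, 0400, map);
    return child ? 0 : 1;
}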
This facility was put into the system so that the system could control the user processes. It is also of direct value, however, for many user processes. The most obvious examples are input-output buffering routines, which can operate independently of the user's main program, communicating with it through memory and with interrupts (see the following). Whether the operation being buffered is large volume output to a disc or teletype requests for information about the progress of a running program, the degree of flexibility afforded by multiple processes far exceeds anything which could have been built into the input-output system. Furthermore, the overhead is very low: an additional process requires about 15 words of core, and process switching takes about 1 ms under favorable conditions. There are numerous other examples of the value of multiple processes; most, unfortunately, are too complex to be briefly explained.
A process may create a number of subsidiary processes, each of which is independent of the others and equivalent to them from the point of view of the originating process. Figure 4 shows two simple multi-process structures, one for each of two users. Note that each process has associated with it pointers to its controlling process and to one of its subsidiary processes. When a process has two immediate descendants, as in the case of processes 1.2 and 1.3, they are chained together on a ring. Thus, three pointers, up, down, and ring, suffice to define the process structure completely. The up pointers are, of course, redundant, but are convenient for the implementation. The process is identified by a process number which is returned by the system when it is created.
Fig. 5. Hierarchy of processes
A complex structure such as that in Fig. 5 may result from the creation of a number of subsidiary processes. The processes in Fig. 5 have been numbered arbitrarily to allow a clear description of the way in which the pointers are arranged. Note that the user need not be aware of these pointers: they are shown here to clarify the manner in which the multiple process mechanism is implemented.
A process may destroy one of its subsidiary processes by executing the appropriate instruction. For obvious reasons this operation is not legal if the process being destroyed itself has subsidiary processes. It is possible to find out what processes are subsidiary to any given one: this permits a process to destroy an entire tree of sub-processes by reading the tree from the top down and destroying it from the bottom up.
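The top-down read, bottom-up destroy order follows directly from the three pointers. The sketch below simplifies the ring to a NULL-terminated sibling list; it is an illustration, not the system's code.

#include <stdlib.h>

/* Minimal view of the process hierarchy: each process points to its
 * controlling process (up), one subsidiary (down), and a sibling (ring). */
typedef struct proc proc;
struct proc {
    int   id;
    proc *up, *down, *ring;
};

/* Destroy an entire subtree: read it from the top down, destroy it from
 * the bottom up, so that no process is destroyed while it still has
 * subsidiaries. */
void destroy_tree(proc *p)
{
    proc *child = p->down;
    while (child) {
        proc *next = child->ring;   /* remember the sibling before freeing */
        destroy_tree(child);
        child = next;
    }
    p->down = NULL;
    free(p);                        /* now legal: no subsidiaries remain   */
}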
The operations of creating and destroying processes are entirely separate from those of starting and stopping their execution, for which two more operations are provided. A process whose execution has been stopped is said to be suspended.
To assure that these various processes can effectively work together on a common task, several means of inter-process communication exist. The first allows the controlling process to obtain the current status of each of its subsidiary processes. This status information, which is read into a table by the execution of the appropriate system instruction, includes the current state vector and operating status. The operating status of any process may be
1) running,
2) dismissed for input-output,
3) terminated for memory violation,
4) terminated for illegal instruction violation, or
5) terminated by the process itself.
A second instruction allows the controlling process to become dormant until one of its subsidiary processes terminates. Termination can occur in the following three ways:
1) because of a memory violation,
2) because of an illegal instruction violation,
3) because of self-termination.
Interactions described previously provide no method by which a process can attract the attention of another process that is pursuing an independent course. This can be done with a program interrupt. Associated with each process is a 20-bit interrupt mask. If a mask bit is set, the process may, under certain conditions (to be described in the following), be interrupted: i.e., a transfer to a fixed address will be simulated. The program will presumably have at this fixed address the location of a subroutine capable of dealing with the interrupt and returning to the interrupted computation afterwards. The mechanism is functionally almost identical to many hardware interrupt systems.
A process may cause an interrupt by delivering the number of the interrupt to the appropriate instruction. The process causing the interrupt continues undisturbed, but the nearest process which is either on the same level as the one causing the interrupt or above it in the hierarchy of processes, and which has the appropriate interrupt armed, will be interrupted. This mechanism provides a very flexible way for processes to interact with each other without wasting any time in the testing of flags or similar frivolous activities.
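A hedged sketch of how the target of an interrupt might be located: search the originator's own level, then walk up the hierarchy, taking the first process with the interrupt armed. The exact traversal order and the non-circular sibling list are simplifying assumptions made for the example.

#include <stdint.h>
#include <stddef.h>

typedef struct proc proc;
struct proc {
    uint32_t armed_mask;      /* 20-bit interrupt mask, one bit per interrupt */
    proc    *up, *ring;       /* controlling process and same-level sibling   */
};

/* Find the process to interrupt for interrupt number int_no.
 * Returns NULL if nobody has that interrupt armed. */
proc *interrupt_target(proc *origin, unsigned int_no)
{
    uint32_t bit = 1u << int_no;
    for (proc *level = origin; level; level = level->up) {
        for (proc *p = level; p; p = p->ring)      /* scan that level */
            if (p->armed_mask & bit)
                return p;
    }
    return NULL;
}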
Interrupts may be caused not only by the explicit action of processes, but also by the occurrence of several special conditions. The occurrence of a memory violation, attempted execution of an illegal instruction, an unusual input-output condition, the termination of a subsidiary process, or the intervention of a user at a console (by pushing a reserved button) all may cause unique interrupts (if they have been previously armed). In this way, a process may be notified conveniently of any unusual conditions associated with other processes, the process itself, or a console user.
The memory assignment algorithm discussed previously is slightly modified in the presence of multiple processes. When a process is activated, one of three options may be specified:
1) Assign new memory to the process entirely independently of the controlling process.
2) Assign no new memory to the process. Any attempt to obtain new memory will cause a memory violation.
3) If the process attempts to obtain new memory, scan upward through the process hierarchy until the topmost process is reached. If at any time during this scan a process is found for which the address causing the trap is legal, propagate the memory assigned to it down through the hierarchy to the process causing the trap.
Option 3) permits a process to be started with a subset of memory and later to reacquire some of the memory which was not given to it initially. This feature is important because the amount of memory assigned to a process influences the operating efficiency of the system and thus the speed with which it will be able to respond to teletypes or other real-time devices.
The Input-Output System
The user machine has a straightforward but unconventional set of input-output instructions. The primary emphasis in the design of these instructions has been to make all input-output devices interface identically with programs and to provide as much flexibility in this common interface as possible. Two advantages result from this uniformity: it becomes natural to write programs that are essentially independent of the environment in which they operate, and the implementation of the system is greatly simplified. To the user the former point is, of course, the important one.
It has been common, for example, for programs written to be controlled from a teletype to be driven instead from a file on, let us say, the drum. A command exists which permits the recognizer for the system command language and all of the subsystems to be driven in this way. This device is particularly useful for repetitive sequences of program assemblies and for background jobs that are run in the absence of the user. Output which normally goes to the teletype is similarly diverted to user files. Another application of the uniformity of the file system is demonstrated in some of the subsystems, notably the assembler and the various compilers. The subsystem may request the user to specify where he wishes the program listing to be placed. The user may choose anything from paper tape to drum to his own teletype. In the absence of file uniformity each subsystem would require a separate block of code for each possibility. In fact, however, the same input-output instructions are used in all cases.
The input-output instructions communicate with files. The system in turn associates files with the various physical devices. Programs, for the most part, do not have to account for the peculiarities of the various actual devices. Since devices differ widely in characteristics and behavior, the flexibility of the operations available on files is clearly critical. They must range from single-character input to the output of thousands of words.
A file is opened by giving its name as an argument to the appropriate instruction. Programs thus refer to all files symbolically, leaving the details of physical location and organization to the system. If authorized, a program may refer to files belonging to other users by supplying the name of the other user as well as the file name. The owner of a file determines who is authorized to access it. The reader may compare this file naming mechanism with a more sophisticated one, bearing in mind the fact that file names can be of any length and can be manipulated (as strings of characters) by the program.
Access to files is, in general, either sequential or random in nature. Some devices (like a keyboard-display or a card reader) are purely sequential, while others (like a disk) may be either sequentially or randomly accessed. There are accordingly two major I/O interfaces to deal with these different qualities. The interface used in conjunction with a given file depends on whether the file was declared to be a random or a sequential file. The two major interfaces are each broken down into other interfaces, primarily for reasons of implementation. Although the distinction between sequential and random files is great, the subinterfaces are not especially visible to the user.
The three instructions CIO (character input-output), WIO (word input-output), and BIO (block input-output) are used to communicate with a sequential file. Each instruction takes as an operand a file number. This number is given to the program when it opens a file. At the time of opening a file it must be specified whether the file is to be read from or written to. Whether any given device associated with the file is character-oriented or word-oriented is unimportant: the system takes care of all necessary character-to-word assembly or word-to-character disassembly.
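One way to picture the uniform interface is a per-file table of operations behind the three instructions. The packing of four 6-bit characters into a 24-bit word, the function names cio, wio_read and bio_read, and the table size are all assumptions made for this sketch, not the actual implementation.

#include <stdio.h>

/* Every open file is identified by a small file number, and the same
 * calls work whether the file is a teletype, a drum file, or anything
 * else.  Device details hide behind per-file operations. */
typedef struct {
    int  (*get_char)(void *dev);          /* returns -1 on end of record/file */
    int  (*put_char)(void *dev, int c);
    void *dev;
} open_file;

#define MAX_FILES 16
static open_file files[MAX_FILES];

/* CIO: character input-output on file fn. */
int cio(int fn, int c_out, int writing)
{
    open_file *f = &files[fn];
    return writing ? f->put_char(f->dev, c_out) : f->get_char(f->dev);
}

/* WIO: word input-output, built from characters; the system does the
 * packing and unpacking (here, four 6-bit characters per 24-bit word). */
long wio_read(int fn)
{
    long w = 0;
    for (int i = 0; i < 4; i++) {
        int c = cio(fn, 0, 0);
        if (c < 0) return -1;
        w = (w << 6) | (c & 077);
    }
    return w;
}

/* BIO: block input-output; returns the number of words transferred. */
long bio_read(int fn, long *buf, long nwords)
{
    long n = 0;
    while (n < nwords) {
        long w = wio_read(fn);
        if (w < 0) break;                  /* end of record or end of file */
        buf[n++] = w;
    }
    return n;
}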
There are actually three separate, full-duplex physical interfaces to devices in the sequential file mechanism. Generally, these interfaces are invisible to programs. They exist, of course, for reasons of system efficiency and also because of the way in which some devices are used. The interfaces are:
1) character-by-character (basically for low-speed, character-oriented devices used for man-machine interaction),
2) buffered block I/O (for medium-speed I/O applications),
3) block I/O directly from user core (for high-speed situations).
It should be pointed out that there is no particular relation between these interfaces and the three instructions CIO, WIO, and BIO. The interface used in a given situation is a function of the device involved and, sometimes, of the volume of data to be transmitted, not of the instruction. Any interface may be driven by any instruction.
Of the three subinterfaces under discussion, the last two are straightforward. The character-by-character interface is, however, somewhat different and deserves some elaboration. Devices associated with this interface are generally (but not necessarily) used for man-machine interaction. Consider the case of a person communicating with a program by means of a keyboard-display (or a teletype). He types on the keyboard and the information is transmitted to the computer. The program may wish to make an immediate response on the display screen. In many cases this response will consist of an echo of the same character, so that the user has the feeling of typing directly onto the screen (or onto the teleprinter).
Fig. 6. The character-oriented interface.
So that input-output can be carried out when the program is not actually in main memory, the character-by-character input interface permits programs a choice of a number of echo tables. It further permits programs a choice of grade of service by permitting them to specify whether a given character is an attention (or break) character. Thus, for example, the program may specify that each character typed is to be echoed immediately and that all control characters are to result in activation of the program regardless of the number of characters in the input buffer. Alternatively, the program may specify that no characters are echoed and every character is a break character. By changing the specification the program can obtain an appropriate (and varying) grade of service without putting undue load on the system. Figure 6 shows the components of the character-by-character interface; responsibility for its operation is split between the interrupt routine called when the device signals for attention and the routine which processes the user’s I/O request.
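A toy model of the echo table and break (attention) characters, assuming 7-bit codes and a tiny buffer; the real interface lives partly in an interrupt routine and is considerably more involved, and the names used here are invented for the sketch.

#include <stdio.h>
#include <string.h>

/* An echo table maps each incoming character to the character echoed back
 * (0 = no echo), and a break table marks the characters that activate the
 * program immediately instead of just accumulating in the input buffer. */
struct tty_state {
    char          echo[128];
    unsigned char brk[128];
    char          buffer[256];
    int           count;
};

/* Called when a character arrives.  Returns 1 if the program should be
 * activated now. */
int on_char(struct tty_state *t, unsigned char c)
{
    c &= 0x7f;                                     /* assume 7-bit codes  */
    if (t->echo[c]) putchar(t->echo[c]);           /* immediate echo      */
    if (t->count < (int)sizeof t->buffer)
        t->buffer[t->count++] = (char)c;
    return t->brk[c] != 0;                         /* break => activate   */
}

int main(void)
{
    struct tty_state t;
    memset(&t, 0, sizeof t);
    for (int c = 32; c < 127; c++) t.echo[c] = (char)c;  /* echo printables  */
    t.brk['\r'] = 1;                                     /* return activates */
    const char *typed = "list\r";
    for (const char *p = typed; *p; p++)
        if (on_char(&t, (unsigned char)*p))
            printf("\n[program activated with %d buffered chars]\n", t.count);
    return 0;
}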
The advantage of the full-duplex, character-by-character mode of operation is considerable. The character-by-character capability means that the user can interact with his program in the smallest possible unit, the character. Furthermore, the full-duplex capability permits, among other things
1) the program to substitute characters or strings of characters as echoes for those received,
2) the keyboard and display to be used simultaneously (as, for example, permitting a character typed on a keyboard to pre-empt the operation of a process. In the case of typing information in during the output of information, a simple algorithm prevents the random admixture of characters which might otherwise result), and
3) the ready detection of transmission errors.
Instructions are included to enable the state of both input and output buffers to be sensed and perhaps cleared (discarding unwanted output or input). Of course, it is possible for a program to use any number of authorized physical devices; in particular, this includes those devices used as remote consoles. A mechanism is provided to permit output which is directed to a given device to be copied on all other devices that are output linked to it (and similarly for input). This is useful when communication among users is desired and in numerous other situations.
The sequential file has a structure somewhat similar to that of an ordinary magtape file. It consists of a sequence of logical records of arbitrary length and number. On some devices, such as a card reader or the teletype, a file may have only one logical record. The full generality is available for drum files, which are the ones most commonly used. The logical record is to be contrasted with the variable length physical record of magtape or the fixed length record of a card. Instructions are provided to insert or delete logical records and increase or decrease them in length. Other instructions permit the file to be "positioned" almost instantaneously to a specified logical record. This gives the sequential file greater flexibility than one which is completely unaddressable. This flexibility is only possible, of course, because the file is on a random-access device and the sequential structure is maintained by pointers. The implementation is discussed in the following.
When reading a sequential file, CIO and WIO return certain unusual data configurations when they encounter an end of record or end of file, and BIO terminates transmission on either of the conditions and returns the address of the last word transmitted. In addition, certain flag bits are set by the unusual conditions, and an interrupt may be caused if it has been armed.
Fig. 7. Index blocks and pointers to data blocks.
The implementation of the sequential file scheme for auxiliary storage is illustrated in Fig. 7. Information is written on the drum in 256-word physical records. The locations of these records are kept track of in 64-word index blocks containing pointers to the data blocks. For the file shown, the first logical record is more than 256 words long but ends in the second 256-word block. The second logical record fits in the third 256-word block and the third logical record—in the 4th data block—is followed by an end of file. If a file requires more than 64 index words, additional index blocks are chained together, both forward and backward. Thus, in order to access information in the file it is necessary only to know the location of the first index block. It may be worthwhile to point out that all users share the same drum. Since the system has complete control over the allocation of space on the drum, there is no possibility of undesired interaction among users.
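Locating a word in such a file reduces to walking the index-block chain. The sketch below ignores the logical-record structure and end-of-file marks and only maps a word offset to a data block; the names and types are assumptions.

#include <stdint.h>

#define BLOCK_WORDS 256          /* physical records on the drum          */
#define INDEX_WORDS 64           /* pointers per index block              */

/* An index block holds up to 64 pointers to 256-word data blocks, plus
 * forward and backward chaining to further index blocks. */
typedef struct index_block index_block;
struct index_block {
    uint32_t     data[INDEX_WORDS];   /* drum addresses; 0 = unused       */
    index_block *next, *prev;
};

/* Map a word offset within a file to (drum block, offset in block),
 * walking the index-block chain as needed.  Returns 0 on success. */
int locate(const index_block *first, long word_offset,
           uint32_t *drum_block, unsigned *offset_in_block)
{
    long block_no = word_offset / BLOCK_WORDS;
    const index_block *ib = first;
    while (ib && block_no >= INDEX_WORDS) {     /* skip whole index blocks */
        ib = ib->next;
        block_no -= INDEX_WORDS;
    }
    if (!ib || ib->data[block_no] == 0) return -1;   /* hole or past end   */
    *drum_block = ib->data[block_no];
    *offset_in_block = (unsigned)(word_offset % BLOCK_WORDS);
    return 0;
}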
Fig. 8. Bit table for allocation of space on the drum.
Available space for new data blocks or index blocks is kept track of by a bit table, illustrated in Fig. 8. In the figure, each column represents one of the 72 physical bands on the drum allocated for the storage of file information. Each row represents one of the 64 256-word sectors around a band. Each bit in the table thus represents one of the 4608 data blocks available. The bits are set when a block is in use and cleared when the block becomes available. Thus, if a new data block is required, the system has only to read the physical position of the drum, use this position to index in the table, and search a row for the appearance of a 0. The column in which a 0 is found indicates the physical track on which a block is available. Because of the way the row was chosen, this block is immediately accessible. This scheme has two advantages over its alternative, which is to chain unused blocks together:
1) It is easy to find a block in an optimum position, using the algorithm just described.
2) No drum operations are required when a new block is needed or an old one is to be released.
It may be preferable to assign the new block so that it becomes accessible immediately after the block last assigned for the file. This scheme will speed up subsequent reading of the file.
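The rotational-position trick can be sketched as a row scan of the bit table. The choice of "next sector to arrive" for the row, the use of a whole byte per bit, and the function names are assumptions made for clarity, not the system's code.

#include <stdint.h>

#define BANDS   72     /* columns: physical bands reserved for files */
#define SECTORS 64     /* rows: 256-word sectors around each band    */

/* One entry per data block: 1 = in use, 0 = free. */
static uint8_t bit_table[SECTORS][BANDS];

/* Allocate a block near the drum's current rotational position: index the
 * row by the sector about to pass under the heads and scan it for a free
 * band, so the new block is accessible almost immediately.  Returns the
 * band number, or -1 if that row is full. */
int allocate_block(int current_sector)
{
    int row = (current_sector + 1) % SECTORS;   /* next sector to arrive */
    for (int band = 0; band < BANDS; band++) {
        if (bit_table[row][band] == 0) {
            bit_table[row][band] = 1;
            return band;
        }
    }
    return -1;    /* a real system would keep looking in later rows */
}

void release_block(int sector, int band)
{
    bit_table[sector][band] = 0;
}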
Auxiliary storage files can also be treated as extensions of core memory rather than as sequential devices. Such files are called random files. A random file differs from a sequential file in that there is no logical record structure to the file and that information is extracted from or written into the random file by addressing a specific word or block of words. It may be opened like a sequential file; the only difference is that it need not be specified as an output or an input file.
Four instructions are used to input and output words and blocks of words on a random file. To permit the random file to look even more like core memory, an instruction enables one of the currently open random files to be specified as the secondary memory file. Two instructions, LAS (load A from secondary memory) and SAS (store A in secondary memory), act like ordinary load and store instructions with one level of indirect addressing (see Fig. 9) except, of course, that the data are in a random file instead of in core memory.
Fig. 9. Load and store from main and secondary memory. (a) Instructions. (b) Addressing.
Random files are implemented like sequential files except that end of record indicators are not meaningful. Although as many index blocks are used up as required by the size of a random file, only those data blocks that actually contain information will be attached to a random file. As new locations are accessed, new data blocks are attached.
Whereas it makes little sense to associate, say, a card reader with a random file, a sequential file can be associated with any physical device in the system. In addition, a sequential file may be associated with a subroutine. Such a file is called a subroutine file, and the subroutine may thus be thought of as a "nonphysical" device. The subroutine file is defined by the address of a subroutine together with information indicating whether it is an input or an output file and whether it is word or character oriented. An input operation from a subroutine file causes the subroutine to be called. When it returns, the contents of the A register are taken to be the input requested. Correspondingly, an output operation causes the subroutine to be called with the word or character being output in A. The subroutine is completely unrestricted in the kinds of processing it can do. It may do further input or output and any amount of computation. It may even call itself if it preserves the old return address.
Recall that for sequential files the system transforms all information supplied by the user to the format required by the particular file; hence the requirement that the user, in opening a subroutine file, must specify whether the file is to be character or word oriented. The system will thereafter do all the necessary packing and unpacking.
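In C, a subroutine file is essentially a function pointer standing in for a device. The generator and filter below are invented examples; only the idea that input calls the subroutine for a value and output hands it a value comes from the text.

#include <stdio.h>

/* The "device" behind the file is just a subroutine. */
typedef int  (*input_sub)(void);        /* returns next character/word     */
typedef void (*output_sub)(int value);  /* consumes one character/word     */

/* Example: an input subroutine that generates a stream of line numbers,
 * and an output subroutine that filters what it prints. */
static int next_line_no = 100;
static int gen_line_no(void) { int n = next_line_no; next_line_no += 10; return n; }

static void print_if_even(int v) { if (v % 2 == 0) printf("%d\n", v); }

int main(void)
{
    input_sub  in  = gen_line_no;    /* "open" the subroutine files */
    output_sub out = print_if_even;

    /* A program reading from `in` and writing to `out` neither knows nor
     * cares that no physical device is involved. */
    for (int i = 0; i < 5; i++)
        out(in());
    return 0;
}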
Subroutine files are the logical end-product of a desire to decouple a program from its environment. Since they can do arbitrary computations, they can provide buffers of any desired complexity between the assumptions a program has made about its environment and the true state of things. In fact, they make it logically unnecessary to provide an identical interface for all the input-output devices attached to the system; if uniformity did not exist, it could be simulated with the appropriate subroutine files. Considerations of convenience and efficiency, of course, militate against such an arrangement, but it suggests the power inherent in the subroutine file machinery.
The user machine described was designed to be a flexible foundation for development and experimentation in man-machine systems. The user has been given the capability to establish configurations of multiple processes; and the processes have the ability to communicate conveniently with each other, with central files, and with peripheral devices. A given user may, of course, wish only to use a subsystem of the general system (e.g., a compiler or a debugging routine) for his particular job. In the course of using the subsystem, however, he may become dissatisfied with it and wish to revise or even rewrite the subsystem. The features of the user machine not only permit this activity but make it easier.
The software portion of the system was designed and written in part by L. P. Deutsch, who is entitled to equal credit with the authors for the ideas in this paper. L. Barnes also contributed significantly to the final result.
H. S. Bright, "Philco multiprocessing system," 1964 Proc. AFIPS Conf., vol. 26, Pt. II, pp.97-141.
W. T. Comfort, "A computing system design for user service," 1965 Proc. AFIPS Conf., vol. 27, Pt. I, pp.619-626.
M. Conway, "A multiprocessor system design." 1963 Proc. AFIPS Conf., vol. 24, pp.139-146.
F. J. Corbato and V. Vyssotsky, "Introduction and overview of the MULTICS system," 1965 Proc. AFIPS Conf., vol. 27, Pt. II, pp.185-196.
J. Dennis and F. Van Horn, "Programming semantics for multiprogrammed computations," Commun. ACM, vol. 9, pp. 143-155, March 1966.
J. Forgie, "A time-and memory-sharing executive program for quick response on-line applications," 1965 Proc. AFIPS Conf., vol. 27, Pt. I, pp.599-609.
W. Lichtenberger and M. W. Pirtle, "A facility for experimentation in man-machine interaction," 1965 Proc. AFIPS Conf., vol. 27, Pt. I, pp. 589-598.
B. W. Lampson, "Interactive machine-language programming," 1965 Proc. AFIPS Conf., vol. 27, Pt. I, pp.473-481.
J. McCarthy, S. Boilen, E. Fredkin, and J. Licklider, "A time-sharing debugging system for a small computer," 1962 Proc. AFIPS Conf., vol. 23, pp. 5l-57.
J. D. McCullogh, K. Speierman, and F. Zurcher, "Design for a multiple user multiprocessing system," 1965 Proc. AFIPS Conf., vol. 27, Pt. I, pp.611-617.
J. I. Schwartz, "A general-purpose time-sharing system," 1964 Proc. AFIPS Conf., vol. 25, pp. 397-411.
R. C. Daley and P. G. Neumann, "A general-purpose file system for secondary storage," 1965 Proc. AFIPS Conf., vol. 27, Pt. I, pp. 215-229.
J. H. Saltzer, "Traffic control in a multiplexed computer system," Mass. Inst. Tech., Cambridge, Mass., Tech. Rept. MAC-TR-30. July 1966.
|
<urn:uuid:6252dbde-c525-47d0-a77c-d1b83d0f8f99>
|
CC-MAIN-2013-20
|
http://research.microsoft.com/en-us/um/people/blampson/02-UserMachine/WebPage.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700074077/warc/CC-MAIN-20130516102754-00071-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939184 | 8,736 | 2.921875 | 3 |
MCA Software Projects in C Language
1. Write a program to demonstrate a concept. Some examples are listed below:
a. Show how a binary number is added and subtracted using one’s and two’s complement
b. Show how two ions, one positive and one negative, can be combined to form a stable
substance. Show how an inert gas is stable on its own
c. Show how a cell divides itself
d. Show how a stack has an element pushed into and popped out from
e. Show how a single adder works
2. Implement a compression algorithm and compare the result with zip and other programs (there are many algorithms, including JPEG, MPEG and a few others; the teacher needs to specify one)
3. Implement a B-tree program to provide indexed file access to a C program, or B+-tree program to provide indexed sequential file access to a C program
4. Implement a spell checker using a trie implementation (a minimal trie sketch appears after this list)
5. Implement a graphics related algorithm (for example a clipping algorithm)
6. Simulate a physics experiment (for example, implement a Wheatstone bridge)
7. Implement a program which parses a typed sentence, finds a keyword and generates an answer from a file, producing a human-like interface. For example, if the user types "MCA is interesting", the program parses that sentence, finds the keyword 'interesting' and produces a response like "What in MCA do you find interesting?"
8. Implement a program to play tic-tac-toe (Shunya Chokdi)
9. A supermarket has more than 100 items for sale. They are divided into five major groups.
1. Cosmetic Items
2. Household items
3. Bakery products and beverages
4. Toys and Chocolates
5. Cigarettes and other items for men
The customer may come at any time. He is given a bag at the time of entry. He can enter each of the above sections at any time and can leave without visiting a single section as well. Whenever he
chooses an item, the item is added to the bag. He can drop any item before visiting the checkout
counter. No one can leave without visiting the checkout counter. At checkout counter, the customer is
billed according to the prices of items in the bag.
The program is to be made for processing the customer from entry to exit. Take care of all possible
scenarios that may take place in the supermarket.
10. A railway timetable inquiry system is to be prepared. Ahmedabad railway station handles a host of trains arriving and departing daily. Every train has a certain schedule of stations to visit. An inquiry about a train going to a particular station must be answered by the program. Assume all possible queries related to trains, schedules and timings.
11. The Ahmedabad city bus network is to be programmed in such a way that, for travel from any place to any other place by city bus, the program helps the person out. Take care of providing nearby bus stands and of showing solutions that involve more than one bus when a direct route is not available.
12. A teacher's timetable is to be prepared. Take all teachers' data first. Store constraints with every teacher. There are two types of constraints. One type is the hard constraint, which cannot be violated; for example, a lecturer cannot lecture at two places simultaneously. Soft constraints can be violated if no other solution exists; for example, a lecturer should not have two lectures one immediately after another.
13. A restaurant has 20-odd items to offer to its customers. The ingredients for all twenty items number 100 in total. Every item has its own unique set of ingredients. The consumption of items varies to a large extent. That is why a safety stock is decided for every ingredient, and when the quantity of an ingredient falls to its safety stock level, it is reordered. Write the program to process customers' orders and maintain the inventory as well.
14. MCA book details are to be maintained for each semester and each subject. The program should be able to provide details about all text and reference books and the syllabus, show the relation between a particular book and the syllabus, and support topic search.
15. An automobile service center has many vehicles coming in daily for service and repair work. Whenever a new accessory is needed as a replacement, the center gets some amount as a concession from the manufacturer. The labor charge is also fixed for each kind of repair work. There are two categories of customers. For one type of customer the bill is prepared at actual cost; for the other, the concession from the manufacturer is passed on to the customer as well. Write a program to bill the customer based on the repair work, the accessories fitted and the category of customer. Consider all possible real scenarios.
16. A manufacturing division is manufacturing various parts. Root parts are atomic parts, which are
not made up of any other parts. Finished parts are final parts, which are not used to construct any
other parts. Apart from that, every other part is in turn made up of several other parts. Write a program to read details about each part. Provide ways to answer questions like:
1. List the parts which constitute a given part
2. List all parts in which a given part is used as an assembly
3. List all root or finished parts.
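As promised under project 4, here is a minimal trie sketch in C for a spell checker: insert dictionary words, then look up a candidate word. It assumes lowercase words only, uses invented example data, and makes no attempt at suggestions.

#include <stdlib.h>
#include <stdio.h>

#define ALPHA 26

typedef struct node node;
struct node {
    node *child[ALPHA];   /* one branch per lowercase letter */
    int   is_word;
};

static node *new_node(void) { return calloc(1, sizeof(node)); }

/* Add one lowercase word to the trie. */
static void insert(node *root, const char *w)
{
    for (; *w; w++) {
        int i = *w - 'a';
        if (!root->child[i]) root->child[i] = new_node();
        root = root->child[i];
    }
    root->is_word = 1;
}

/* A word is spelled correctly if it is present in the trie. */
static int lookup(const node *root, const char *w)
{
    for (; *w && root; w++) root = root->child[*w - 'a'];
    return root && root->is_word;
}

int main(void)
{
    node *dict = new_node();
    const char *words[] = { "stack", "queue", "trie", "index" };
    for (size_t i = 0; i < sizeof words / sizeof *words; i++)
        insert(dict, words[i]);
    printf("%s\n", lookup(dict, "trie") ? "ok" : "misspelled");
    printf("%s\n", lookup(dict, "tire") ? "ok" : "misspelled");
    return 0;
}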
|
<urn:uuid:c6fb04cd-7c5a-46a4-beb2-b03a65a1a1a3>
|
CC-MAIN-2013-20
|
http://mcanotes.com/gtu-mca-notes/mca-projetcs/mca-software-projects-in-c-language/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703306113/warc/CC-MAIN-20130516112146-00023-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.928606 | 1,147 | 3.875 | 4 |
When: Saturday 27th August, from 10am to 5pm
Where: miLKLabs, Franciscan friary, Lower Henry Street, Limerick
Hosted by: Members of miLKLabs. Cost: €10 (for non-members). Limited to 20 participants (pre-booking mandatory).
What is Arduino? Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It’s intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments. (www.arduino.cc) Arduino makes it easy to learn how to read sensors, control electronic devices, and communicate between various hardware and your computer.
- Laptop (Mac OS X, Windows or Linux) with a USB port
- Software installed (Optional):
- Arduino IDE (http://Arduino.cc/en/Main/Software)
- Processing (http://processing.org/download/)
Good to bring (optional):
- Some experience with a programming language. If you know what if statements and loops are, you’ll be in great shape.
- Some basic knowledge of electric circuits
- Wire cutters / strippers
Aimed at students, artists, and designers or anyone who wants to learn the basics of Arduino, simple electronics and building interactive projects. The workshop covers the basics of Physical Computing using Arduino and Processing. Participants will be able to control media (graphics, video & sound) in Processing using a variety of sensors (distance sensor, light sensor, temperature sensor, potentiometer, etc.). This initial session will be followed by group or individual follow-up sessions to help participants complete a personal project or expand their knowledge in specific areas.
Session 1 (10am-12)
We will begin with a brief introduction to some basic electrical principles (no math, just how things get hooked up and how lights and switches do their thing). We’ll talk about the role of a microcontroller (such as an Arduino board) in an electronic circuit. Once everyone has the Arduino development software up and running we’ll start controlling LEDs or tiny motors by writing some simple code.
Lunch (12 -1pm)
Session 2 (1pm-3pm)
We’ll get information from sensors, and see how to make some sense from that information by filtering it. We’ll send that data to a program running on your laptop, and then use that program to control some devices connected to the Arduino.
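As a taste of the kind of filtering mentioned for this session, here is a plain-C moving-average sketch. It is not workshop material and not Arduino-specific; the fake ADC values and the window size are invented for the example.

#include <stdio.h>

/* A simple moving-average filter: each new reading replaces the oldest
 * one in a small window, and the reported value is the window average. */
#define WINDOW 8

typedef struct {
    int  samples[WINDOW];
    int  next;
    long sum;
    int  filled;
} avg_filter;

int filter_update(avg_filter *f, int reading)
{
    if (f->filled == WINDOW) f->sum -= f->samples[f->next];
    else                     f->filled++;
    f->samples[f->next] = reading;
    f->sum += reading;
    f->next = (f->next + 1) % WINDOW;
    return (int)(f->sum / f->filled);
}

int main(void)
{
    avg_filter f = {0};
    int noisy[] = { 512, 530, 498, 505, 700, 510, 508, 515, 502 };  /* fake ADC values */
    for (unsigned i = 0; i < sizeof noisy / sizeof *noisy; i++)
        printf("raw %3d -> smoothed %3d\n", noisy[i], filter_update(&f, noisy[i]));
    return 0;
}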
Session 3 (3pm-5pm)
Once everyone has mastered what we’ve covered, we will look at examples of projects that use the Arduino. Participants will then be offered the opportunity to create a small project on their own or with a group. Finally we will spend a little time talking about slightly more advanced concepts to give you a starting point for your next steps.
Register for the miLK Labs Arduino Fest
|
<urn:uuid:6811fa35-4e23-40cf-aa49-c3ae563063fc>
|
CC-MAIN-2013-20
|
http://eirepreneur.blogs.com/eirepreneur/2011/08/arduino-fest-at-milklabs-limerick-register-now.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704134547/warc/CC-MAIN-20130516113534-00047-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.881742 | 619 | 2.796875 | 3 |
At a time when mobile phones are becoming increasingly sophisticated and customised, Monash University scientists have developed a new application that could speed up emergency response in time of disaster.
For six years, Dr. Shonali Krishnaswamy and her team at the Centre for Distributed Systems and Software Engineering at Monash University have been wrestling with the potential of mobile data mining - where information is collected from any number of sources, analysed and displayed via the limited real estate of an individual's mobile phone screen.
The team is one of many around the world trying to best display complex, constantly-changing information in a way that is simple and easily understood.
They recently filed a provisional patent for their "clutter-aware visualisation technique" that allows changing information to be constantly updated, analysed and displayed.
Dr Krishnaswamy said the application had a wide range of possibilities but the team initially focused on healthcare and disaster management systems, hoping to cut response times and enable critical decisions to be made faster.
"In one example our technique can analyse calls made to emergency services during a wind storm or heavy rains, provide a bird's eye view of where most calls are coming from and then display this information on a map to mobiles that ground personnel are carrying.
This allows them to see where the trouble spots are and quickly reach the areas that need help most urgently," she said.
"The real-time data and analysis are immediately available to ground personnel, rather than first being transmitted to a command centre and then relayed back. This way, personnel on the field and in central command can understand an emerging situation and best respond."
In another example under development, physiological indicators like blood pressure or heart rate could be collected by state-of-the-art biosensors and relayed via a mobile phone to warn supervisors of escalating stress or fatigue levels at the scene of an emergency, warning them when to rotate staff.
The mobile phone screen visualiser uses an algorithm to consider the information transmitted and automatically adjusts the way it is presented to reduce clutter and facilitate understanding.
The user can personalise the display depending on the size of their screen and its computational capability, their ability to process the information on the screen, how much clutter they can tolerate and how frequently updates are required.
Dr. Krishnaswamy is now beginning to showcase the application to commercial organisations she believes will benefit from it in an emergency situation.
"The possibilities of mobile data mining are unlimited," she adds. "We're just beginning to explore the usefulness of this cost-effective technology."
|
<urn:uuid:fedf9611-b905-411f-b66d-046e9e31ec5a>
|
CC-MAIN-2013-20
|
http://www.monash.edu.au/pubs/monmag/issue24-2009/features/mobile-mining.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704054863/warc/CC-MAIN-20130516113414-00076-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.938316 | 519 | 3.125 | 3 |
From Unofficial BOINC Wiki
A system where the computing work is distributed to multiple independent computers. The computers may be co-located or geographically dispersed. Other terms used to describe this are grid computing, distributed processing, cooperative computing, parallel processing, collective computing, and mesh computing. The computers within a system may be homogeneous (all of the machines are of the same or nearly the same configuration) or heterogeneous (the computers may be of various types and capabilities).
Many new applications are being developed to perform computing tasks of this nature and even a new class of supercomputer has emerged where the computer is a collective of off-the-shelf computers brought into a room and interconnected with high-speed network connections.
An essential element within the concept of distributed computing is the ability to sub-divide the computing problem into smaller and more manageable "chunks" of stuff to do. If the problem is of a nature that there is no clear way to sub-divide the problem, then this type of computer system will not be useful in the solution of the problem. The good news is that there are not very many problems within this category.
Some of the essential components within the management of a system of this nature are:
- The tracking of the Work Units and ensuring that all of the Work Units that must be processed are, in fact, processed.
- Result Validation to ensure that the processing did produce a Valid Result.
- Result integration to gather the returned Results and aggregate them to create the final work product, usually called a Canonical Result.
- Communication connections so that the work can be distributed and the results collected.
- Validation of the Results to ensure that, within non-homogeneous systems, identical Results are obtained from computers that are not identical (a minimal sketch of these pieces follows this list).
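A toy model of the first two components, work-unit tracking and result validation, under the assumption that a result is accepted once a quorum of replicas agree within a tolerance. The replica count, quorum, tolerance, and all names are invented for the sketch and are not tied to any particular system such as BOINC.

#include <stdio.h>

#define REPLICAS 3
#define QUORUM   2

/* Each work unit is sent to several computers; the returned Results are
 * compared, and one agreed on by a quorum becomes the Canonical Result. */
typedef struct {
    int    id;
    double results[REPLICAS];
    int    returned;              /* how many replicas have reported     */
    int    validated;
    double canonical;
} work_unit;

/* Record one returned result; when enough have arrived, try to validate. */
void report_result(work_unit *wu, double value)
{
    if (wu->returned < REPLICAS) wu->results[wu->returned++] = value;
    if (wu->returned < QUORUM || wu->validated) return;

    /* Count agreement within a small tolerance (heterogeneous machines
     * rarely produce bit-identical floating point results). */
    for (int i = 0; i < wu->returned; i++) {
        int agree = 0;
        for (int j = 0; j < wu->returned; j++)
            if (wu->results[i] - wu->results[j] < 1e-9 &&
                wu->results[j] - wu->results[i] < 1e-9) agree++;
        if (agree >= QUORUM) { wu->validated = 1; wu->canonical = wu->results[i]; return; }
    }
}

int main(void)
{
    work_unit wu = { .id = 42 };
    report_result(&wu, 3.14159);
    report_result(&wu, 3.14159);      /* second machine agrees: validated */
    if (wu.validated) printf("unit %d canonical result %.5f\n", wu.id, wu.canonical);
    return 0;
}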
|
<urn:uuid:44636dbc-3397-433b-aeb8-47e1e38fa363>
|
CC-MAIN-2013-20
|
http://www.boinc-wiki.info/Distributed_Computing
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704658856/warc/CC-MAIN-20130516114418-00045-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.928134 | 367 | 3.21875 | 3 |
C++ Personal HomePages
Allan Clarke's Home Page -
This is a compilation of information gathered from various sources below. The purpose of this free service is to keep developers on their toes with interesting, useful, and some non-useful tidbits.
Brad Appleton's Home Page -
Principal software engineer for Motorola Global Telecom Solutions Sector in Arlington Heights. Personal projects. Huge programming links collection.
Nitin Gupta Personal Web Site -
Student at UC San Diego. This web site contains information about his research on bioinformatics, his personal likes and dislikes, and a collection of links to other sites.
Liang Cheng's Personal Page -
Liang Cheng is from the Department of Computer Science, Georgia State University. His site has some interesting links and papers on networking, C++, and blogging.
Gary Scavone’s Home Page -
Web site containing information about Gary Scavone's research and tutorials on playing music through C++.
Christian Tenllado van der Reijden Home Page -
This site contains publications on "Algorithm Tuning on High Performance Architecture ".
Tokkee’s Home Page -
Sebastian Harl, a.k.a. tokkee, created this site; it consists of information on Linux and Unix.
Yixin_Yu’s Home Page -
Contains the Java game projects Yixin_Yu has built.
Gianluca Monaci's Home page -
This site discusses the papers Gianluca Monaci has published on signal processing and image processing.
Kim Mittendorf Home Page -
Several Links collection.
parsy's Tutorials -
Large collection of tutorials on different languages.
KSU Computer Science Lab 218 Home Page -
KSU Computer Science Lab 218 home page.Located at 218 Hathaway Hall on the Kentucky State University campus in Frankfort, Kentucky
rogimeister's Home Page -
This is an informational site about Classic Arcade Gaming through Emulation.
Kevin Alejandro Roundy Home page -
Personal web site of Kevin Alejandro Roundy. Contains teaching assignments and answers on OS, compilers and programming languages.
JUAN CONTRERAS home page -
Contains personal information and his teaching and research experience.
Francesco Nidito Home page -
Contains information about his papers on Sensor Networks.
Gholamreza Haffari's home page -
This site contains the papers published by Gholamreza Haffari on "Statistical Machine Learning " and " Natural Language Processing ".
jeffrey's Home page -
Collection of links.
Jason Gallicchio's home page -
This is a wonderful site developed by Jason Gallicchio. Please have a look at it; it is worth seeing.
Xavier Cavin's Home page -
Contains papers published by Xavier Cavin on memory management and graphics acceleration, among much other material.
Truman Collins' Home Page -
Good collection of links on various topics.
Nidhi Kalra homepage -
Ph.D. student at the Robotics Institute at Carnegie Mellon University. Within the RI, part of the Field Robotics Center and am advised by Tony Stentz.
Free Games and Open Source Game Programming - PatrickAvella.Com -
Free Games and Open Source Programming, brought to you by a broke College Student.
Tim Love's help pages -
C++, LaTeX, Matlab, shell scripts, talks, downloadable documents, making movies. University of Cambridge, Department of Engineering.
Oliver White - Home -
Free Software, including GraphicExplorer, IndexDirectory (HTML indexes), and website tools.
Archana Bharathidasan's Home page -
Contains the papers she has published on sensor networks and many interesting links.
Lingyin Zhu's Home page -
Lingyin Zhu's personal page and links collection.
Maciej Sobczak Homepage -
Maciej Sobczak is a programmer. This site contains information about the various software packages and projects he has developed.
Web site of K. N. King -
K. N. King is the author of C Programming: A Modern Approach and Java Programming: From the Beginning. He received his Ph.D. in computer science from the University of California at Berkeley in 1980 and was a faculty member at Georgia Tech from 1980 to 1987. Here you'll find information about his books and short courses, his recommendations for books on programming and other computer-related subjects, and his links to programming-related sites.
Piotr Luszczek's Home page -
Contains papers Piotr Luszczek has published on the "Self-Adapting Numerical Software (SANS)" effort and other mathematical topics.
Aharon Feigelstock's Home page -
Aharon Feigelstock's personal web site.
Haluk POLAT's Home page -
Haluk POLAT's personal web site. Contains information about his projects and question papers/answers.
Mark Michel Atallah's Home page -
Mark Michel Atallah's personal web site.
Archana Sharma's Home page -
Archana Sharma's personal web site. Contains information about different courses and links.
Damon Turney's Home page -
Damon Turney's personal web site. Resources on various topics related to water, gas turbines, remote sensing, etc.
Marko Juhani Repo's Home page -
Marko Juhani Repo's personal web site.
Miron Vranjes's Home page -
Miron Vranjes's personal web site. He is a teenage web designer, and this site contains his web design portfolio.
J. A. Moore's Home page -
J. A. Moore's personal web site. Contains a description of the paper he published on mathematical morphology.
Prashant N Mhatre's Home page -
Prashant N Mhatre's personal web site. Very large collection of good links.
Alex Vinokur's C++ link collection -
Huge collection of C++ links.
|
<urn:uuid:f7e32c6e-535d-4209-8ce6-0cdcd52ae1bf>
|
CC-MAIN-2013-20
|
http://www.cpp4u.com/homepages.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704664826/warc/CC-MAIN-20130516114424-00064-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.841002 | 1,418 | 2.71875 | 3 |
- After reading the question, click on the answer that you think is
correct to go to the whatis.com definition. If the answer you've chosen
is correct, you will see the question text (or a paraphrase of it) somewhere
in the definition.
- After reading the question, write down the letter of your answer
choice on scrap paper. Check your answers by using the answer key at the
end of the quiz, where you'll also find additional resources related to the correct answer.
1. This is a document that states in writing how a company plans to protect the company's physical and IT assets.
b. security policy
2. This is a type of security management system for computers and networks.
a. intrusion detection
c. Common Information Model
d. eye-in-hand system
3. This is a self-managing computing model patterned after the human body's nervous system.
a. organic transistor
b. autonomic computing
d. brain-machine interface
4. This is a computer system that's designed so that in the event that a component fails, a backup component or procedure can immediately take its place without loss of service.
c. waterfall model
5. This type of information technology system can accommodate multiple remote computers and devices.
a. remote wakeup
b. distributed computing
d. parallel sysplex
6. This is the job title commonly given to the person in charge of a company's information technology and computers.
7. This is an administrative approach to systems management that establishes procedures in advance to deal with situations that are likely to occur.
a. partner relationship management
b. Environmental Resource Management
c. policy-based management
d. adaptive technology
8. This is a type of calculation designed to help enterprise managers assess both the direct and indirect costs and benefits of an IT component.
c. RFM analysis
9. This provides remote offices or individual users with secure access to their organization's network.
10. This is a holistic approach to management that focuses on how a system's parts interrelate over time within the context of larger systems.
b. systems thinking
Scroll down for Answer Key
Because quizzes are so much fun-go here for more!
Answer key: 1b; 2a; 3b; 4d; 5b; 6a; 7c; 8a; 9d; 10b
This was first published in June 2005
|
<urn:uuid:2bee63eb-4b91-4c1c-bf9c-4abe54988431>
|
CC-MAIN-2013-20
|
http://searchdatacenter.techtarget.com/quiz/Ruling-your-IT-universe
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710196013/warc/CC-MAIN-20130516131636-00098-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.896923 | 512 | 3.125 | 3 |
Sistem Himpunan Doa-doa Harian Dan Peristiwa Bersejarah Dalam Islam Secara Atas Talian (An Online Collection System of Daily Prayers and Historic Events in Islam).
Other thesis, Universiti Teknologi Malaysia.
Computer technology has changed the situation, especially in the broadcasting, management and education fields. The system is developed using an evolutionary prototyping approach and the Unified Modeling Language (UML) as the modeling technique for analysis and design. The result of the project is a system that can be used by users of any level to search for and learn doas (prayers) and Islamic history. Users can also take quizzes to assess their performance. A module for the system administrator is also provided for updating the contents of the system.
|
<urn:uuid:8ec524fa-b1ca-495e-a0a0-a5e181d101e1>
|
CC-MAIN-2013-20
|
http://ir.fsksm.utm.my/2569/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705956263/warc/CC-MAIN-20130516120556-00028-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.854267 | 145 | 2.578125 | 3 |
After completing this self-contained course on server-based Internet applications software, students who start with only the knowledge of how to write and debug a computer program will have learned how to build web-based applications on the scale of Amazon.com. Unlike the desktop applications that most students have already learned to build, server-based applications have multiple simultaneous users. This fact, coupled with the unreliability of networks, gives rise to the problems of concurrency and transactions, which students learn to manage by using the relational database system.
After working their way to the end of the book, students will have the skills to take vague and ambitious specifications and turn them into a system design that can be built and launched in a few months. They will be able to test prototypes with end-users and refine the application design. They will understand how to meet the challenge of extreme business requirements with automatic code generation and the use of open-source toolkits where appropriate. Students will understand HTTP, HTML, SQL, mobile browsers, VoiceXML, data modeling, page flow and interaction design, server-side scripting, and usability analysis.
The book, which originated as the text for an MIT course, is suitable for classroom use and will be a useful reference for software professionals developing multi-user Internet applications. It will also help managers evaluate such commercial software as Microsoft Sharepoint or Microsoft Content Management Server.
About the Authors
Philip Greenspun, a software developer, author, teacher, pilot, and photographer, originated the Software Engineering for Internet Applications course at MIT. He is the author of Philip and Alex's Guide to Web Publishing.
Andrew Grumet received his Ph.D. in Electrical Engineering and Computer Science from MIT and builds Web applications as an independent software developer.
"Filled with practical advice for elegant and effective websites."--Edward Tufte, author of *The Visual Display of Quantitative Information*
|
<urn:uuid:2687370a-8284-4316-abf6-965005b544cb>
|
CC-MAIN-2013-20
|
http://mitpress.mit.edu/books/software-engineering-internet-applications
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703306113/warc/CC-MAIN-20130516112146-00061-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.930537 | 387 | 2.8125 | 3 |
Copyright (c) 2000-2007, Peter J. Denning.
You may make one copy for personal use only.
All other uses require written permission of the author.
Operating systems fulfill two functions: managing the resources of a computing system among the competing demands of the system's users, and providing a high-level environment for programming and program execution. Current operating systems control components operating on time scales of one event every trillionth of a second (the gate speeds of the chips) all the way up to the time scale of one event every few days (on-going computations). Those 15 orders of magnitude rank operating systems among the most complex systems built by human hands.
To help us build operating systems that work as expected we organize the components into levels of abstraction. The objects visible at a given level are composed of smaller objects defined at lower levels. Our levels chart gives a road map of the levels.
Since the first operating systems were built in the 1950s, designers have been seeking the best methods of implementing the many components at the different levels. They have created models of the operation of these levels and used the models as guides to implementation and performance evaluation. The best of these models are technology art forms because of their simplicity, elegance, and effectiveness. Although most real systems do not implement the models faithfully, their designers often refer to these models as ideals.
The purpose of The Art of Operating Systems is to present the best models for each level of an operating system and to help you see how they fit together into a working whole. This will help your understanding of operating systems.
|
<urn:uuid:de89ac2a-4f14-474b-80cb-70326e9cb112>
|
CC-MAIN-2013-20
|
http://cs.gmu.edu/cne/pjd/ArtOS/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701760529/warc/CC-MAIN-20130516105600-00079-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955272 | 324 | 3.59375 | 4 |
The Scientific Programmer's Toolkit: Turbo Pascal Edition presents a complete software environment for anyone writing programs in mathematical, engineering, or science areas. This toolkit package is designed for use with Turbo Pascal, the de facto standard Pascal system for PC and compatible machines.
The book and its software provides an integrated software library of programming tools. The programs and routines fall into three categories: graphical, mathematical, and utilities. Routines are further subdivided into three levels that reflect the experience of the user. For graphics and text handling routines there is also a Level 0, which provides an interface to the machine operating system. By using hierarchically structured routines, the clearly written text, and a wide range of example programs, software users can construct a user-friendly interface with minimal effort. The levels structure makes it easy for newcomers to use the Toolkit, and with growing experience, users can achieve more elaborate effects.
The Scientific Programmer's Toolkit will be useful to consultants, researchers, and students in any quantitative profession or science, in private or public sector research establishments, or in secondary and higher education.
"anyone with a good working knowledge of Pascal will find here a wealth of programmes and units which would enable them to write highly sophisticated routines for a very wide range of scientific application...well written with a very wide range of examples throughout." LMS Newsletter with a good working knowledge of Pascal will find here a wealth of programmes and units which would enable them to write highly sophisticated routines for a very wide range of scientific application...well written with a very wide range of examples throughout." LMS Newsletter
Number Of Pages: 448
Published: 1st January 1991
Dimensions (cm): 22.9 x 15.2 x 2.3
Weight (kg): 1.63
|
<urn:uuid:596d0fdf-1619-4c7e-9d08-4a0b9ebdc353>
|
CC-MAIN-2013-20
|
http://www.booktopia.com.au/scientific-programmer-s-toolkit-m-h-beilby/prod9780750301275.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701409268/warc/CC-MAIN-20130516105009-00041-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.898723 | 361 | 2.734375 | 3 |
In the current teaching context, where students work and require flexible schedules, e-learning is presented as a didactic proposal which adapts to students' time and abilities and which, at the same time, challenges traditional teaching modalities. In an interview with Argentina Investiga, Marcela Chiarani, a specialist in computing sciences applied to education, states that "virtual environments make it easier to follow up on students".
In Bahía Blanca, researchers and advanced students developed a small cardiograph which can be connected to a mobile phone and send the medical exam over the Internet. Due to its low cost, simplicity of use and economy, it is ideal for health posts, schools, sports clubs and the permanent monitoring of outpatients, and it might replace expensive and sophisticated non-portable medical equipment.
A group of researchers created a mobile robot which, by means of software applications, can work independently. The robot, which is prepared to carry out tasks such as cleaning dangerous areas and which could be used in factories in La Matanza, goes back to its base to be recharged when it finishes its work.
The software was created by a multinational research group which includes scientists from the UNSL. It was presented at an international competition which seeks to improve the quality of Wikipedia content, and it won first place. The system automatically predicts whether a website has flaws, which enables better performance by Wikipedia's established editors.
Computer attacks are more and more frequent. They put at risk the security of companies and public government agencies. Through specialized malicious programs, these attacks cause serious problems which range from access to secret state information to the interruption of a country's banking system. In a study on computer security, specialists recommend taking on technological challenges in order to prevent these risks.
A research project aims to integrate artificial intelligence technologies in a new electronic government platform which permits intelligent processing of the citizens' opinions expressed on social networks such as Facebook and Twitter. Through data mining, the researchers filter significant patterns out of the information, which can serve the authorities as a reference tool for citizens' opinion.
Through the use of CAD programs and videogames, university students created virtual images to tour the Fortaleza Protectora Argentina, the fort of Bahía Blanca which was created in 1828 and gave origin to the city. The initiative will be available for free download from the Internet and is part of a project to spread the regional culture with new technologies.
Through the project "Reconstruction of antique sound recording technologies", researchers were able to reproduce an original recording made for the photoliptophone, a sound recording system patented in Argentina in the thirties whose objective was the massive diffusion of music. Although it was internationally recognized, the invention was forgotten. The researchers plan to build a photoliptophone and to record sounds on pages in order to reproduce them.
By means of a research project from the Transdepartmental Area of Multimedia Arts of the IUNA, researchers created a table which permits users to create music scores through an interactive interface to generate, transform and interpret sound structures. It is expected that this prototype, which will be very cheap and will use free software, will contribute to the understanding of music and to the development of the students' creative capacity.
Researchers from Rosario are working on a 3D head model which is activated by the human voice. The development will permit any person to communicate with a computer in the same way they communicate with another person. The model's potential applications cover a wide range, from the development of techniques for the film and videogame industries to assistance in clinical treatments.
Scientists from Bahía Blanca are part of a pioneering group in the world working on argumentation in artificial intelligence. They developed a system which has been used for an intelligent Facebook application, the social network which gathers 700 million users around the world. Artificial intelligence is present in almost all the processes used nowadays in computers. The research group has become a national reference center for this topic.
In order to encourage scientific activity among students in the final years of the BS in Computing Sciences, a group of teachers is carrying out an innovation project to improve undergraduate teaching. Through different activities which place the student in the roles of researcher and teacher, learning is brought into play as a social activity and an active process.
A study carried out by the UNGS shows that most of the enterprises in the sector are devoted to the development of applications or customized software. Some 89% have university-educated employees, and 70% received training during the last few years. In general, the enterprises are small and were created only recently; however, most of them export their products. There are strong links among them for carrying out joint commercial actions, offering or receiving technical assistance, and training in human resources.
Researchers from Mendoza analyzed data about the collection, treatment and disposal of electronic waste generated by companies, institutions and users in general. This waste puts human health and the environment at risk, given that it contains toxic substances which leach into the groundwater, causing a high level of contamination. The researchers plan to elaborate a diagnosis and to project the evolution of this waste and its management in order to avoid environmental impact.
An interdisciplinary team at the UNNOBA is working on computing methods and mathematical applications to simulate the effects of pesticides and resistance mechanisms. By means of representational models, the aim is to facilitate learning for students of the agrarian sciences and genetics programs. The phenomena described by these models are very complex and can give rise to serious consequences for the environment.
|
<urn:uuid:021934fd-b526-48d8-ba8f-111f77e56881>
|
CC-MAIN-2013-20
|
http://infouniversidades.siu.edu.ar/english/categorias.php?id=13
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702127714/warc/CC-MAIN-20130516110207-00028-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.948414 | 1,138 | 2.59375 | 3 |
EVOLUTION OF COMPUTER BUILDING BLOCKS
The concept of using high level building blocks is not new, but we think this particular implementation of a set of simple blocks is quite useful to many digital systems engineers. The design time using this approach is significantly less than with conventional logical design. The modules are especially useful for teaching digital system design. We have solved many benchmark designs with reasonably consistent results. The modules can be applied quickly and economically where there are between 4 and 100 control steps, a small read-write memory (100 words), and perhaps some read-only memory. Larger system problems are usually solved better with a stored program computer, although such a computer can be designed using RTMs. The user need only be familiar with the concept of registers and register operations on data, and have a fundamental understanding of a flowchart.
These modules were formally proposed in March 1970 in a form essentially described herein by one of the authors, C. G. Bell. In June 1970 the project was seriously started by constructing the computer of the previous example using them. The authors gratefully acknowledge the organization and management contributions of F. Gould, A. Devault, and S. Olsen (Digital Equipment Corporation) without whose goal-oriented commitment the RTMs could not have been built. The authors are also indebted to Mrs. D. Josephson of Carnegie-Mellon University for typing the manuscript.
|
<urn:uuid:2137e0be-402e-4d69-b10b-58c9cc8931af>
|
CC-MAIN-2013-20
|
http://research.microsoft.com/en-us/um/people/gbell/Computer_Engineering/00000470.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705543116/warc/CC-MAIN-20130516115903-00065-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955311 | 287 | 3.140625 | 3 |
From iA wiki
An on-going problem with detailed models is that it is difficult to take into account the many interrelationships among relevant factors such as varying grades and dynamic types of interdependencies. It is also difficult to make completely accurate measurements and then place these initial variables into a model.
A computer is a good tool for the visualization of models and changes within. For example, computer models are used to make predictions regarding near-future weather conditions. Logistic modeling, a tool with many social applications should match up with reality as much as possible.
Many models exist for engineering, psychoacoustics, for generating profit with a business, and for content distribution with RSS; in computer networking there is the client-server model. Other models of interest include:
|
<urn:uuid:272bf287-1e8a-41be-9ab8-d3cc9c3698b3>
|
CC-MAIN-2013-20
|
http://www.infoanarchy.org/en/Model
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699684236/warc/CC-MAIN-20130516102124-00080-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955708 | 156 | 2.5625 | 3 |
This paper was invited by the Information Technology Institute, Singapore, to form part of their HCI feature articles. In due course it will probably become available via their website (follow the link to Special Feature and then, if necessary, to Previous Features). Meanwhile, the APA Guidelines suggest it should be cited as:
MODELLING IN OSM - AN OVERVIEW
INFORMAL REASONING FROM AN OSM
FORMAL REASONING FROM AN OSM
TESTING OUR CLAIMS
Computer Based Learning Unit
School of Computing Science
We introduce a new HCI modelling technique, 'Ontological Sketch Modelling', which identifies certain types of usability problem by exposing misfits between the user's conceptual model and the conceptual model imposed by the device. These types of problem are not addressed by previous techniques of HCI analysis. To make the approach widely accessible and easy to use we are developing an 'OSM editor'; after describing a device using the editor, the user can obtain a computer-generated analysis containing 'usability alerts', which warn of potential misfits.
Meeting a new application or a new IT device for the first time can be like starting to watch a soap after missing half the episodes. What are all these buttons and lights? What do their labels mean - TD, TA, RDS, and the like? How does it all make sense and how does it relate to what you already know?
Then, when you understand the basic functionality, different problems arise. You've got started - you've drawn some curves, or started to organise your home finance, or begun your new song. Fine stuff, but you want to make a small change - and now is when you find out how usable your new gizmo really is. Perhaps you change one of the points on your curve and everything else moves around in apparently unpredictable ways. Or you discover that to make a small change to your home finance system, you'll have to do so much work that you might as well start again. Or maybe you find that although you can change the tune and the chords you've started to compose, it means understanding a whole new set of obscure tools.
Conventional approaches to HCI concentrate on how fast you can get the buttons pressed, or whether the system messages are confusing. We want to find a way to go deeper.
We are pioneering an approach to HCI that concentrates on the user's conceptual model of the device -- and of the domain as well. Analysing the misfit between these can reveal potential problems in learning and use, problems of a semantic type that are not revealed by existing HCI approaches. Here are some common types of misfit.
Inexpressiveness
Definition: the user wishes to do something that is part of the conceptual domain but cannot be expressed using the device.
Example: electronic organizers cannot indicate the relative importance of engagements, nor links between them (see below).

Indirectness
Definition: the user thinks of a single direct operation on a conceptual entity, but the device requires an indirect operation.
Example: trying to lay out graphics aesthetically, command-based interfaces are indirect while direct manipulation interfaces are direct - BUT trying to lay out graphics in predefined positions, the opposite is true!

Viscosity
Definition: the user thinks of a single direct operation but the device requires an indirect operation, or a large number of operations.
Example: updating section and figure references in a standard word-processor.

Premature commitment
Definition: the user wants to do an operation but, before it can be done, the device requires a commitment to something that can only be guessed at, or that can only be determined by lookahead.
Example: starting to draw a family tree or a map, the first mark must be made on the paper before knowing how far the map will extend.
Second example: using an algebraic calculator to solve a problem stated in words, the user has to look ahead to discover whether parentheses will be needed at the beginning of the calculation.
Misfits cannot be revealed by any approach to HCI that focuses solely on either the user or the device. Traditional task-centered user-modelling for HCI has some very effective results, but it cannot reveal misfits because it does not explicitly consider how the user's domain model relates to the domain model imposed by the device.
In this paper we shall introduce Ontological Sketch Modelling (OSM). The idea is that the modeller describes the entities that are visible, and their attributes and how they are linked within the device; and also describes the entities contained in the user's conceptual model. The resulting entities may be private to the device (the user cannot alter them), or they may be private to the user (the device does not know about them), or they may be shared (accessible to both the device and the user). All communication between the two worlds of user and device takes place through the shared entities. If the user-private entities do not fit well onto the shared entities, the device will have usability problems.
Aspirations
Ontological Sketch Modelling is still a developing approach, but we hope that it will have the following virtues; if we meet all our aims, we believe it will be an approach that is useful and usable.
* OSM will be easy to learn and easy to do, because it is directly concerned with entities and concepts. Traditional task-centered approaches to HCI are harder because they are indirect, like trying to describe a teapot by describing the tasks you could do with it.
* OSM will avoid 'death by detail'. Many HCI techniques generate a huge mass of details: OSM is succinct.
* OSM will reveal problems of a different sort from traditional task-centered modelling, because it focuses on fit or misfit at the conceptual level instead of on the surface features of devices.
* OSM will lend itself equally to informal pencil-and-paper modelling and to computational analysis of formal models.
Background
Our approach rests on a number of points that have been established in previous research by ourselves or others.
Sketchy models are needed
There is a place for detailed analytical models, but they take time to apply and time to learn, and the high time-investment has deterred designers from using many existing HCI techniques (Bellotti, 1989). At present, the few techniques that are quick to use focus on surface features, not on deep problems.
Misfits can be identified
Misfit analysis has been attempted at least twice in the HCI literature (although not by that name), but has not become a strong tradition. Moran's ETIT analysis (1983) mapped the 'external' task of the domain onto the 'internal' task of the device, from which an efficiency metric could be computed, essentially the number of device actions required to achieve one domain-level goal. Payne (1993) drew on ETIT for his 'Task-Entity Analysis' by which he explained the low usage of early electronic diaries and calendars.
"A task entity analysis begins with an enumeration of the conceptual objects in the task domain and their interrelations and inspects the degree to which these entities and relationships can be represented in the device." (Payne, 1993, p. 95)
Intentions to do things formed the main class of things-to-be-remembered. Intentions are nestable ("an intention to phone a colleague is part of a broader intention to organise a conference") and they have dependencies ("you cannot book the conference dinner until you have an idea of the expected number of delegates"). Some intentions are more important than others, and some have to be performed at a precise time while others merely have to be done sometime.
Payne found that users of paper diaries could make use of variations in writing size, scribbles and arrows, and vaguely-specified times to convey all those different attributes of intentions. Electronic diaries were unsatisfactory because their entities and attributes were too limited to express the user's conceptions of the domain - there was no way to indicate all the subtle but important differences between types of intention that he identified. As a result, even people who used electronic diaries also used paper ones as support.
Payne's approach was entirely informal and could not be attempted without knowledge of HCI and cognitive psychology; also, it was not powerful enough to yield alerts for viscosity, etc. Nevertheless, explaining the weakness of electronic diaries by relating conceptual entities to the expressiveness of the device was an important result.
Identifying misfits suggests design improvements
The best-developed work on misfits appears to be the 'cognitive dimensions' framework developed by Green and others (Green, 1989, 1990, 1996; Green and Petre, 1996). These dimensions, including terms such as viscosity, premature commitment and other examples described above, are easy to understand, and they describe the real difficulties that users talk about and complain about. By providing a richer language in which to express user problems they create 'discussion tools' which allow users and designers to communicate at a higher level than merely describing the surface details of an application.
Moreover, thinking in these terms shows up the trade-offs between dimensions. Designers can alleviate viscosity by introducing new 'power tools', such as style-sheets in word processors, but by doing so they increase the abstraction level of the device. If the new user has the option of working the device without having to master the power tools, then the result will be quite good; but if the new user has to master the power tools at an early stage of learning, then probably the entry cost of getting to use the device will be a serious deterrent.
Experience in several domains has shown that the terms used in the cognitive dimensions framework are comprehensible and that designers can gain a better understanding of possible user difficulties from analysing their work in these terms; moreover, they can find themselves prompted to redesign their systems in the light of their analyses (Yang et al. 1998).
But the cognitive dimensions framework is not a complete answer. If traditional task analytic approaches are too detailed for some purposes, cognitive dimensions are too broad for some purposes. If traditional HCI is too highly precise, cognitive dimensions are too undefined and intuitive.
The Ontological Sketch Model is meant to stand midway between these poles. It is precise but not over- detailed. It yields analyses of some of the cognitive dimensions, but does not try to achieve complete coverage.
User knowledge can be modelled
The attempt to characterise the user's knowledge and to reason about the consequences of that knowledge is also found in the work on Programmable User Modelling (see, amongst others, Blandford and Young, 1996). PUM concentrates on fine detail analysis of the user's knowledge of the interaction device, producing computational models written in Lisp or Soar which simulate the user's cognitive processes in reasoning out how to perform a task.
Like almost all modelling approaches in HCI, PUM is task-centered, but it is unusual in being an executable model. While the results are impressive within their scope, it must be noted that the fine level of detail required makes it extremely tedious to construct a PUM. Moreover, although specific problems can be identified with particular designs, there is no easy way to generalise the problems identified, because there is no classification scheme to which they can be referred. In contrast, the approach taken in OSM analysis, although more limited in what problems it can identify, allows the problem to be classified and therefore is capable of prompting the designer to choose one of the standard remedies for such a problem.
Nevertheless, the PUM work is an important part of the background, because it has shown that HCI can model users at the knowledge level, rather than at the action level that characterises so many approaches to HCI.
Entity modelling reveals misfits
Task-centered models such as GOMS do not reveal misfits, but the analysis performed by Payne (1993) introduced the idea of task entities rather than task procedures. Green and Benyon (1996) applied entity-relationship modelling to HCI, modified to include some of the important aspects of information artefacts, in a scheme called ERMIA (entity-relationship modelling for information artefacts). Entity-relationship modelling, a technique long practised in the design of information systems, lists entities and their attributes and the relationships between those entities. It is a relatively weak expressive system which can nevertheless help to ensure that records are adequate and that information can be found when needed.
In Green and Benyon's modified version, the models included conceptual entities as well as device entities. By this means some types of misfit could be identified, although not all the types that OSMs can identify. On the other hand, ERMIA can produce results that are beyond OSM, such as estimates of the 'cost of knowledge' (originally defined by Card et al., 1994) and the 'cost of update' (a type of viscosity misfit).
On the plus side, the ERMIA analyst could choose the preferred level of detail in a way that was not possible in many of the other HCI modelling approaches. We have succeeded in preserving this important characteristic. On the negative side, although ERMIA was a successful language for collaborative modelling (Whitelock et al. 1994), it forced the analyst to work in terms of very abstract relationships, which is not easy.
Expert explanation starts with entities
At this point we turn to the activity of modelling, rather than the contents of the model. Making a model can be compared to explaining a device. Recent research (Cawsey, 1993) shows that when experts give explanations of devices they frequently start by identifying the device type and then go on to list the components, describing their constituents and their functions, possibly at some length, ending with causal-event descriptions at the process level.
Modelling in OSM takes almost exactly that course. The entities are described and their constituents. Causal events are modelled as dependencies. Moreover, those entities that are part of the device itself are likely to be visible and will therefore prompt the modeller; while those entities that are part of the conceptual domain are at least likely to have well understood names.
In contrast, other modelling techniques are much less direct. Task-centered models obviously require a task analysis. Doing a good task analysis requires practice. Modelling internal relationships at an abstract level, as in ERMIA, is downright difficult.
OSM draws on the sources above by modelling entities rather than tasks, by adopting a deliberately sketchy representation, by representing the features that lead to misfits, and by allowing the level of detail to be chosen by the analyst. OSM models represent entities (and their attributes), constraints between entities, and to a lesser extent the actions that affect those entities. The most important feature of OSMs is that they include both domain entities and device entities.
Entities are linked together in several possible ways. Some links are dependencies: changing an attribute of one entity may cause a variety of changes (e.g. adding a word to a document may change its length, and may cause the pagination to change, so the table of contents may no longer be in synch, and so on). These dependencies may cross the bounds between domain entities and device entities -- in fact, if the user is to be able to change the domain-relevant entities, the dependencies must cross the bounds, leading to device entities that affect the domain entities.
To see how this works, consider drawing a figure, such as a whale (which is what we asked our subjects to do in the experiment mentioned below). The whale is a domain-relevant entity, consisting of lines and curves; the lines and curves are themselves also domain-relevant, but unlike the whale itself, they have a device representation, so whereas the whale is user-private, the lines are shared. If the user wants to alter some part of the whale, such as its tail, then each line of the tail must be altered, because there is no shared entity that is made up of [all the lines of the tail]. So changing the tail will be relatively viscous.
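To make this concrete, the whale example can be written out as a small machine-readable model. The sketch below is purely illustrative: the entity names, ownership labels and multiplicity marking are our own shorthand for this one example rather than the OSM notation itself, and it is given in Python only for readability (the project's own tooling, described below, works over Prolog assertions).

# A toy rendering of the whale model: entities carry an ownership label, and
# dependency links record which lower-level items affect which conceptual items.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    ownership: str                 # "user-private", "shared" or "device-private"
    attributes: dict = field(default_factory=dict)

@dataclass
class Dependency:
    source: str                    # changing this entity...
    target: str                    # ...affects this one
    multiplicity: str              # "1:1" or "1:M", seen from the target's side

# The whale and its tail exist only in the user's head; the lines are shared with the device.
whale_tail = Entity("whale tail", "user-private", {"position": "not directly modifiable"})
line = Entity("line", "shared", {"endpoints": "directly modifiable"})

whale_model = {
    "entities": [whale_tail, line],
    "dependencies": [Dependency("line", "whale tail", "1:M")],
}

# No shared entity stands for "all the lines of the tail", so one conceptual change
# (move the tail) maps onto many separate line-level edits: repetition viscosity.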
Other links expressible in OSM include hierarchical composition, a simple form of inheritance, and constraint. Constraints are the most interesting, especially constraints that exist in the conceptual domain but are not inherent in the device. Nothing in a typical WYSIWYG word-processor constrains the figures to be numbered in sequence, but that is a constraint imposed by the domain, and it is one that is potentially quite time-consuming to fulfil, as is evident from OSM analysis.
Compared to writing with a word-processor, a drawing package offers rather few device entities and the user has to translate the domain-relevant entities into device terms. But on the plus side, a drawing package usually contains few dependencies between its entities: adding a line usually affects very few other parts of the drawing, so that the user can add, delete, or move individual lines very freely. In contrast, text can contain many types of dependency both within sentences and between them, so that when parts of the text are moved around the writer has to spend time repairing broken dependencies.
In short, drawing packages and word-processors have many differences at the fit/misfit level. These differences will affect their usability for different types of work.
The differences could be modelled in reasonably faithful detail by some of the advanced modelling and knowledge representation languages, but to do so would submerge the analyst in the 'death by detail' that we are anxious to avoid. We have therefore adopted the sketchiest of approaches that can reveal a reasonable amount about the dependencies and their possible consequences. There are many results that can be obtained from our sketchy level of analysis, but there are also some that cannot be analysed; however, to get deeper results would greatly increase the labour of modelling and would also greatly increase the training required to use our system.
One of the great benefits of our sketchy approach is that much of the activity of modelling can be performed by inspection. When the modeller thinks of an entity, they just write the name down, without deep analysis into its nature.
The purpose of writing out the OSM is to help the analyst to spot potential usability difficulties. While writing the OSM, or afterwards, the analyst can check for potential difficulties. These include:
The small study described below shows that informal OSM analysis is quite successful. However, at least some degree of expertise is required, and even though the approach uses sketchy modelling, some degree of reflection is required to find the usability features inherent in a design. For these reasons we also wished to develop an algorithmic approach that could be used in a simple mechanical fashion, at least for a first pass.
The apparatus of entities and attributes can readily be represented in more formal terms, and the usability properties listed above can then be represented as conditions on the formal representation. We have developed proof-of-concept programs to show that many of the usability conditions can be extracted algorithmically.
Our work has used Prolog as the formal vehicle. Because writing multitudinous Prolog assertions is slow and error-prone we have developed an 'OSM editor' in Hypercard (again at a proof-of-concept level) with a table-based interface, generating the appropriate Prolog assertions (see Figure 1). The usability alerts can then be generated automatically by scanning the Prolog model. For example, a usability alert for repetition viscosity would be generated if the program detected the following conditions:
there is an entity-attribute pair E(A) such that:
E is domain relevant
[and therefore the user may want to change it]
E(A) is not directly modifiable
[so the user will have to change it indirectly]
for each P(Q) that affects E(A):
P(Q) is modifiable
E(A):P(Q) :: 1:M
[so each individual P(Q) may need to be modified]
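Purely as a hypothetical illustration (the OSM editor itself scans Prolog assertions, and the data below are invented), the same conditions can be phrased as a short check over a table of entity-attribute pairs and the dependencies that affect them:

# Invented example data: each entity-attribute pair records whether it is domain
# relevant and whether it can be modified directly.
entity_attributes = {
    ("whale tail", "position"): {"domain_relevant": True, "directly_modifiable": False},
    ("line", "endpoints"): {"domain_relevant": False, "directly_modifiable": True},
}

# Each entry says that changing P(Q) (the source) affects E(A) (the target),
# with the E(A):P(Q) multiplicity seen from the target's side.
affects = [
    {"source": ("line", "endpoints"), "target": ("whale tail", "position"), "multiplicity": "1:M"},
]

def repetition_viscosity_alerts():
    alerts = []
    for ea, props in entity_attributes.items():
        if not props["domain_relevant"] or props["directly_modifiable"]:
            continue  # either the user has no reason to change it, or it can be changed directly
        deps = [d for d in affects if d["target"] == ea]
        if deps and all(entity_attributes[d["source"]]["directly_modifiable"]
                        and d["multiplicity"] == "1:M" for d in deps):
            alerts.append("repetition viscosity alert for %s(%s)" % ea)
    return alerts

print(repetition_viscosity_alerts())  # ['repetition viscosity alert for whale tail(position)']

On the invented whale data this flags whale tail(position): moving the tail means editing each line of the tail one by one, which is exactly the difficulty noted earlier.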
Our intention is to produce a modelling technique that is demonstrably usable and useful. At an early stage in the project we conducted an experiment using the first version of the OSM approach (subsequently revised, partly as a result of the experiment). Two drawing packages were compared, ClarisWorks and JSketch, in a study conducted with a group of 20 final-year undergraduate students who were enrolled on a module on HCI and Graphics (Blandford and Green, 1997).
To assess the usefulness of the OSM approach, we did an OSM-based usability analysis of each of the software packages we were using, and then compared our predictions against empirical data. Students were put into pairs, matched as closely as possible for prior experience. Each pair was allocated to one of the two drawing programs, one partner making a drawing and thinking aloud, while the other partner noted what difficulties were encountered. Students were asked to draw a whale (pictures were provided as a guide), then to modify it by moving its tail. The following week, the same procedure was repeated, with each pair of students using the other program.
The difficulties encountered by the students were compared with the difficulties that we predicted from our prior OSM analyses. As predicted, most students learned to use the basic facilities of J-Sketch readily. The results for ClarisWorks were less clear-cut. Of the nine specific difficulties encountered by subjects, 4 had been predicted, but 5 had not. Additional predictions were made about aspects of the program that subjects never got around to exploring. Some of the unpredicted difficulties were ones that an OSM analysis would not be expected to highlight; for example, students commented that they had difficulty getting the shape acceptable -- a point that might emerge from several aspects of the system being difficult to work with, but not one that would emerge directly from an OSM analysis. Other difficulties could have been predicted but our own OSM models had been inadequate, we later realised, because we had based our descriptions on documents that gave incomplete information.
To assess the usability of the OSM we asked the same 20 subjects to produce their own OSM descriptions of the same systems, after the "usefulness" study had been completed. We found that:
Their subsequent usability reports on OSM confirm the evidence from the data -- that entities and actions were easily comprehended and described, but that relationships presented more difficulties, and that few of the subjects really understood how the modelling was meant to be used to derive usability assessments of the system.
While these initial results are promising, since they represent modelling activity after a very short period of training and practice, their greatest value lies in the way they have been used to inform re-design of the OSM. It should also be noted that the applications that we studied, being characteristic of their class, did not contain certain types of potential user problem.
There are many different HCI modelling approaches. Very few of them are in serious use, for a variety of reasons. To make OSM genuinely useful, at least three steps are needed. We want to make it highly accessible; we want to demonstrate its effectiveness in real contexts; and we want to extend its coverage to collaborative situations.
Accessibility
One way to improve OSM accessibility is by developing a web-site devoted to it. We have started such a site by giving short OSMs of a number of familiar devices (see Further Reading). In the future, funding permitting, we shall greatly extend this site with more examples and with a guide to using the approach.
Our existing prototype OSM editor needs to be further developed (and its own usability needs to be tested!). Having done so, it should be possible in principle to develop an interactive web-based version which can be used remotely by any designer. The model can be set up over the web and when complete it can be submitted to the Prolog analysis program. Usability alerts will then be readily available as part of every designer's toolkit.
Real contexts
The purpose of setting up an interactive, web-based site is, of course, to examine OSM's effectiveness in real use. If not accessible, it will not be used. By making it accessible, we shall be able to determine its strengths and weaknesses. At one extreme, it may transpire that nobody uses it twice; at the other, it could become a regularly-used design step.
Before trying to make it accessible in such a wide context, we intend to study its use in more conventional ways, by testing it out with student designers and by exposing it to the critical eyes of practising designers.
Coverage
There can be person-person misfits as well as user-device misfits -- e.g., misfits between the conceptual models held by different participants in a work-system. For instance, one person may need to satisfy constraints that another person is unaware of or unconcerned with. So our approach could in principle be applied to collaborative work. At present we have not explored that possibility, but future developments will, we hope, take us in that direction.
More detail about OSM and the experiment mentioned above.
Blandford, A. and Green, T. R. G. (1997) OSM: an ontology-based approach to usability evaluation. Workshop on Representations, Queen Mary College London, 1997.
Postscript version: http://www.uclic.ucl.ac.uk/annb/RepWkshp.ps
Rich Text Format (RTF) version: http://www.ndirect.co.uk/~thomas.green/workStuff/OSMs_Workshop_paper.rtf
Blandford, A. and Green, T. R. G. (1997) Design and redesign of a simultaneous representation of conceptual and device models. Submitted.
Postscript version: http://www.uclic.ucl.ac.uk/annb/OSM-DR.ps
An OSM web site
In addition, examples of OSM models for some simple objects such as analogue watches are available from this site. (The models are presented using the first version of the OSM approach rather than the redesigned version described here, but the differences are not substantial.)
Bellotti, V. (1989) Implications of current design practice for the use of HCI. In D. Jones & R. Winder (Eds.) People and Computers IV, Proceedings of HCI '89, 13-34. Cambridge University Press.
Blandford, A. E. & Young, R. M. (1996) Specifying user knowledge for the design of interactive systems. Software Engineering Journal. 11.6, 323-333.
Card, S. K., Pirolli, P. and Mackinlay, J. D. (1994) The cost-of-knowledge characteristic function: display evaluation for direct-walk dynamic information visualizations. In Adelson, B., Dumais, S. and Olson, J. (Eds.) CHI '94: Human Factors in Computing Systems. New York: ACM Press.
Cawsey, A. (1993). Explanation and Interaction: The Computer Generation of Explanatory Dialogues. Cambridge, MA: MIT Press.
Green, T. R. G. (1989) Cognitive dimensions of notations. In R. Winder and A. Sutcliffe (Eds), People and Computers V. Cambridge University Press
Green, T. R. G. (1990) The cognitive dimension of viscosity - a sticky problem for HCI. In D. Diaper and B. Shackel (Eds.) INTERACT '90. Elsevier.
Green, T. R. G. (1996) The visual vision and human cognition. Invited talk at Visual Languages '96, Boulder, Colorado. In W. Citrin and M. Burnett (Eds.) Proceedings of 1996 IEEE Symposium on Visual Languages. Los Alamitos, CA: IEEE Society Press, 1996. ISBN 0 - 8186 - 7508 - X
Green, T. R. G. and Benyon, D. (1996.) The skull beneath the skin: entity-relationship models of information artifacts. Int. J. Human-Computer Studies, 44(6) 801-828.
compressed (80 kB): ftp://ftp.mrc-apu.cam.ac.uk/pub/personal/tg/Skull_v2.ps.gz
uncompressed (305 kB): ftp://ftp.mrc-apu.cam.ac.uk/pub/personal/tg/Skull_v2.ps
Green, T. R. G. and Petre, M. (1996) Usability analysis of visual programming environments: a 'cognitive dimensions' framework. J. Visual Languages and Computing, 7, 131-174.
compressed (180 kB): ftp://ftp.mrc-apu.cam.ac.uk/pub/personal/thomas.green/VPEusability.ps.gz
uncompressed (2.2 MB): ftp://ftp.mrc-apu.cam.ac.uk/pub/personal/tg/VPEusability.ps
Moran, T. P. (1983) Getting into a system: external-internal task mapping analysis. Proc. CHI 83 ACM Conf. on Human Factors in Computing Systems, pp 45-49. New York: ACM.
Payne, S. J. (1993) Understanding calendar use. Human-Computer Interaction, 8, 83-100.
Whitelock, D., Green, T. R. G., Benyon, D., and Petre, M. (1994) Discourse during design: what people talk about and (maybe) why. In R. Oppermann, S. Bagnara and D. Benyon (Eds.) Proceedings of ECCE- 7, Seventh European Conference on Cognitive Ergonomics. Sankt Augustin: Gesellschaft für Mathematik und Datenverarbeitung MBH. GMD-Studien Nr 233.
Yang, S., Burnett, M. M., DeKoven, E. and Zloof, M. (1998) Representation design benchmarks: a design- time aid for VPL navigable static representations. Journal of Visual Languages and Computing, 8 (5/6), 563-599.
|
<urn:uuid:bf3c1b9d-b8cc-49f3-ab5b-223cb562bc6e>
|
CC-MAIN-2013-20
|
http://homepage.ntlworld.com/greenery/workStuff/Papers/OSMsIntro/OSMsIntro.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00059-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.938817 | 6,293 | 2.53125 | 3 |
Describing the world, or a segment of it, is one of the key problems in communication. People of all historic eras have been looking for solutions. An array of remarkable approaches has emerged, like Egyptian engravings used to record historical events, Greek and medieval astronomers' survey of the Universe, or Carl von Linne's taxonomy of the world's living creatures.
There is no "best" technique, especially because different human activities call for different approaches. Art deliberately maintains a wide variety of techniques to communicate subtle impressions, beliefs, or affections; the style of the description is often equally as important as the subject itself. On the other hand, science, especially the natural sciences and technology, is looking for precision, conciseness, and unambiguity instead.
Most scientific description techniques like databases, computer programming languages, or even formal description methods are based on the modeling approach. Still, the most well-known representatives of modeling are graphical description systems. These tools enhance the illustrative, easy-to-understand nature of charts and diagrams with the precision of modeling concepts and rules. Things represented by these models can be as diverse as organizational hierarchy charts, genetic maps, and community sewer network diagrams.
Modeling is not only about description and illustration for human use. Their adherence to rules and patterns makes models different from drawings, figures, or free format textual descriptions. This not only reduces data size and disambiguates interpretation, but also makes the model suitable for automatic processing. For example, given an up-to-date company hierarchy chart, ordering correct business cards for each employee is a very straightforward process.
Second, computers are also suitable for automatic processing of models, thus making maximal use of them. While manual processing is tedious, all kinds of information (lists, reports, executable code, documentation, etc.) can be extracted and formatted from model data "at the press of a button" once the data has been inserted into a computer system.
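As a small, purely illustrative example of such "press of a button" processing (the hierarchy data, field names and output format below are invented, and this is not GME code), a few lines of Python can walk a company-hierarchy model and emit one business-card entry per employee:

# Invented company-hierarchy model: each node names an employee, a title and their reports.
hierarchy = {
    "name": "A. Example", "title": "Director",
    "reports": [
        {"name": "B. Sample", "title": "Engineer", "reports": []},
        {"name": "C. Person", "title": "Accountant", "reports": []},
    ],
}

def business_cards(node, company="Example Corp"):
    """Depth-first walk of the hierarchy, yielding one business-card line per employee."""
    yield "%s | %s | %s" % (node["name"], node["title"], company)
    for child in node["reports"]:
        yield from business_cards(child, company)

for card in business_cards(hierarchy):
    print(card)

The point is not the script itself but that, once the chart exists as structured model data rather than as a free-form drawing, this kind of extraction becomes mechanical.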
Finally, information technology is an application area as well as a provider for computer modeling. Designing, building, programming, and configuring computer systems are tasks with a level of complexity that has been previously unseen. Human limits become obvious, and modeling again proves to be instrumental in building reliable systems.
There are two tutorials presented here. The first tutorial on modeling and the Generic Modeling Environment (GME) takes its application example from the IT domain. Throughout the first tutorial's lessons, a design, configuration, and analysis tool for network infrastructures will be developed. The first tutorial gives an in-depth, detailed perspective on GME and modeling. For those users who want a quick, simple tutorial to get up and running with GME in no time at all, there is a second set of tutorial lessons, much shorter than the first. The second tutorial uses the computer science concepts of Finite State Machines (FSM), Signal Flow (SF), and Boolean logic circuits as its application of GME.
GME can export to and import from XML. Each figure will have a link to an exported XML file containing the GME project data used to create that screenshot, for users who want to check their own progress against the author's save files. For more on exporting to XML, see Lesson 7 of the long tutorial.
|
<urn:uuid:52725ff4-16be-4d90-ac93-47730c53848b>
|
CC-MAIN-2013-20
|
http://w3.isis.vanderbilt.edu/Projects/gme/Tutorials/Index.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702414478/warc/CC-MAIN-20130516110654-00025-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.904303 | 670 | 3.5625 | 4 |
Tuesday, January 24, 2012
IST Lunch Bunch
Gerard Holzmann, Chief Scientist, Laboratory for Reliable Software, JPL/NASA
Within the last few years, multi-core systems have become ubiquitous, as expected. Virtually every desktop and laptop system sold today has at least a dual-core chip inside, and a graphics card that may have hundreds of powerful processing engines -- all capable of processing massive amounts of data in parallel. If only we could learn how to program these systems well, we could tackle many more highly interesting practical problems. One such fundamentally important problem is to develop new highly scalable algorithms for program analysis, but there are many others. Computing science education has traditionally focused only on sequential languages and sequential algorithms, which are increasingly becoming technologies of the past. In this talk, I'll discuss what we need to do to prepare for our parallel future, and what types of problems we are up against.
|
<urn:uuid:94306a19-67a3-4f17-a9ab-ac63ba160b9d>
|
CC-MAIN-2013-20
|
http://www.caltech.edu/content/ist-lunch-bunch-34
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708711794/warc/CC-MAIN-20130516125151-00017-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.954381 | 194 | 2.703125 | 3 |
RAL is a scientific R&D organization tasked to develop real world weather-related decision support applications for clients. The technology produced here results from merging scientific research with algorithm and software development.
The technology transfer methods fall into the following categories:
- scientific findings documented in scientific papers
- algorithms and models documented in scientific papers
- algorithms documented using pseudo-code
- individual software applications
- small software systems (a few applications running on a single machine)
- moderate software systems (a moderate number of applications running on a few machines)
- large software systems (a large number of applications running on many machines)
The software systems generally comprise components designed for some or all of the following tasks:
- data acquisition and transfer
- scientific algorithms and models
- dissemination of results
- display and visualization of results
The methods used for the transfer of technology and know-how vary significantly from one project to another.
For example, the Low Level Windshear Alert System (LLWAS), one of RAL's early successes, was delivered to the Federal Aviation Administration (FAA) in 1992 in the form of algorithm specifications and pseudo-code. By today's standards the LLWAS windshear detection algorithm is a relatively small software system and this was a practical way to perform technology transfer for this application. For most of the current RAL projects this relatively simplistic form of technology transfer is not a practical option.
More commonly, RAL develops large, complex software systems which are delivered for clients as fully-functional (turn-key) systems. These are often developed prototypes that mature to become production systems. These fully-functional systems are generally delivered to the client as both source code and compiled applications. Frequently, the compilation step is performed at RAL. Some clients perform the compilation step on their own hardware. Almost all clients require source code delivery.
Since RAL software systems are typically large and complex, transferring the knowledge about how to install, run and maintain them is a major challenge. Client training has become an important step in the process. Formal documentation frequently only covers the installation and use of the system, and does not cover the details of individual components or system design.
For long term projects, on-going annual contracts generally cover maintenance for installed systems. A major part of this maintenance covers modifying the data acquisition components to keep up with changes in how data is delivered from the outside world.
|
<urn:uuid:5af8c61b-9858-4ed3-8b28-99e681d32215>
|
CC-MAIN-2013-20
|
http://www.rap.ucar.edu/technology/techtransfer/index.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709135115/warc/CC-MAIN-20130516125855-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.932777 | 493 | 3 | 3 |
This paper was originally published in: NATO Advanced Study Institute on Computer Communication Networks, University of Sussex, held in 1973. Proceedings were published by Noordhoff - Leyden in 1975.
PRESENTATION AND MAJOR DESIGN ASPECTS
OF THE CYCLADES COMPUTER NETWORK(1)
by Louis POUZIN
Institut de Recherche d'lnformatique et d'Automatique (IRIA)
This paper has been originally published in the proceedings of the Third Data Communications Symposium, Tampa, Nov. 1973. It is reproduced with kind permission of ACM-IEEE.
A computer network is being developed in France, under government sponsorship, to link about twenty heterogeneous computers located in universities, research and D.P. Centers. Goals are to set up a prototype network in order to foster experiment in various areas, such as : data communications, computer interaction, cooperative research, distributed data bases. The network is intended to be both an object for research, and an operational tool.
In order to speed up the implementation, standard equipment is used, and modifications to operating systems are minimized. Rather, the design effort bears on a carefully layered architecture, allowing for a gradual insertion of specialized protocols and services tailored to specific application and user classes.
A particular objective, for which CYCLADES should be an operational tool, is to provide various departments of the French Administration with access to multiple data bases located in geographically distant areas.
Host-host protocols, as well as error and flow control mechanisms are based on a simple message exchange procedure, on top of which various options may be built for the sake of efficiency, error recovery, or convenience. Depending on available computer resources, these options can be implemented as user software, system modules, or front end processor package. For each of them, network-wide interfaces are defined to conserve consistency in human communications.
CYCLADES uses a packet-switching sub-network, which is a transparent message carrier, completely independent of host-host conventions. While in many ways similar to ARPANET, it presents some distinctive differences in address and message handling, intended to facilitate interconnection with other networks. In particular, addresses can have variable formats, and messages are not delivered in sequence, so that they can flow out of the network through several gates toward an outside target.
Terminal concentrators are mini-hosts, and implement whatever services users or applications require, such as sequencing, error recovery, code translation, buffering, etc... Some specialized hosts may be installed to cater for specific services, such as mail, resource allocation, information retrieval, mass storage. A control center is also being installed and will be operated by the French PTT.
CYCLADES is one of the more recent computer network projects, launched in France at the beginning of 1972. Its conception carries most of the characteristics found in the type of general-purpose heterogeneous computer network such as experimented with by ARPA, or proposed by NPL.
Our goals are to construct a prototype network in order to foster experiments in various areas, such as : data communications, computer interaction, cooperative research, distributed data bases. This action is two-fold. In order to acquire valid experience, the network must also be used in a realistic environment, which requires a variety of operational services acceptable by customer standards.
In order to speed up the implementation, standard equipment is used, and modifications to operating systems are minimized. Rather, the design effort bears on a carefully layered architecture, providing for an extensible structure of protocols and network services, tailored to various classes of traffic and applications.
This concern for built-in evolutionism translates itself in putting as few features as possible at levels buried in the sensitive parts of the network. With experience gradually building up, and depending on trends in international standards, more stable characteristics will eventually emerge. By putting them at some lower system level, it will be possible to obtain higher efficiency and reduce duplication, at the cost of freezing a few more parameters.
The Cyclades design attempts to be both precise and independent from the implementation at the user level, so that heterogeneous sites can have their way, and still communicate with others in a consistent manner.
II. PARTICIPANTS AND EQUIPMENT
Cyclades is sponsored by the Délégation à l'Informatique, a government agency in charge of coordinating all activities related to computing. Participating centers are only partially funded and put their own contribution on a voluntary basis. In a first stage, all network centers are research oriented organizations, universities, or engineering schools. In a second stage some D.P. centers of the French Administration will be connected to phase in real applications.
Participating centers are ;
- Institut de Recherche en Informatique et Automatique (IRIA), (2 centers)
- Compagnie Internationale pour l'Informatique (CII), (2 centers)
- Météorologie Nationale (METEO)
- Institut de Recherche des Transports (IRT)
- Université de Grenoble (IMAG)
- Centre Universitaire de Calcul de Lyon (CCILS)
- Ecole des Mines de Saint-Etienne (MINES)
- Université de Toulouse (TOU)
- Centre d'Etudes et de Recherches de Toulouse (CERT)
- Centre Electronique de l'Armement (CELAR)
- Université de Rennes (REN)
- Centre Commun d'Etudes de Télécommunications et Télévision (CCETT)
- Ecole Supérieure d'Electricité (ESE)
- Centre National d'Etudes des Télécommunications (CNET)
Computers are :
9 CII - 10070, 2 CII - IRIS/80, 2 CII - IRIS/50, 1 IBM 360/67, 1 CDC 6600, 1 PHILIPS - 1200. Communications computers are CII-MITRA/15.
The Cyclades topology is shown on Fig. 1. Transmission lines range from 4.8 kb up to 48 kb. The French PTT are providing lines and modems free of charge till end 1975. Also they will run the network control center.
The Cyclades project was launched at the beginning of 1972. The first host-host communications were tested in June 1973, without packet switching, which started working in August 1973, on one node. Thereafter the network will come up gradually, until all hosts are connected in April 1974. More centers will be introduced in 1975, along with real applications.
IV. GENERAL OBJECTIVES
1. Incremental implementation :
Systems such as computer networks are still in the mainstream of research, and it would be inappropriate, if not unrealistic, to delay implementations until all issues are entirely understood, evaluated, and all possible functions completely designed. Some experimentation is necessary to gain insight, acquire know-how, and test hypotheses that appear initially in a most subjective context.
Furthermore, building a computer network is by essence a distributed effort, in order to create the motivations and common understanding so necessary for coordinating tasks and achieving network standards. Involving users in a proper way is a guarantee to have productive feedback and imaginative suggestions to cure the deficiencies of the network and extend its capabilities in a useful manner.
For all these reasons, Cyclades is being brought up stepwise and should be capable of providing some services at an early stage of implementation. Versatility, convenience, efficiency, will be phased in gradually, with the introduction of new components, and substitution of old ones.
2. Design approach :
Since Cyclades was not the first of its kind, it was more than advisable to study other networks before starting out. Most available documents were originating from ARPA 8 and NPL 10. A few ones were centered on other networks, MERIT 2, TYMNET 11, INTENET 9.
From this preliminary study and various live discussions with "networkers", one could draw some tentative conclusions :
a - Data communications should be an independent sub-problem. Its main virtues are simplicity, reliability, transparency.
b - Packet switching can work.
c - Computer-computer protocols are still toddlers.
d - Homogeneous computers are a lot easier.
e - Ill defined protocols mean distributed headache.
f - Computer communications require human communications.
Bearing these headlines in mind, the Cyclades design concentrated initially on the communications interface as seen at the basic user level. A common user interface was felt to be a keystone for building more elaborate functions. From there on, the design proceeded inwards down to the packet switching interface, and outwards up to virtual terminal protocols (TELNET-like) 3.
By basic user level is meant in a broad sense a process executing a user program in a conventional operating system. Since Cyclades computers were deliberately heterogeneous, there were to be unavoidable variations in implementing the user-network interface.
Consequently, it was all the more important to produce specifications such that these local variations would not introduce ambiguities and misfits between any pair of users.
3. Data transfer :
Access to multiple data bases is a major operational objective for Cyclades, while time-sharing will take only a modest share. Even though a sophisticated system of distributed data bases may require some time to emerge, there will be a rising demand for file transfer, mainly because users tend to minimize adversity by splitting their tasks. Thus, basic protocols should provide for efficiency in using whatever bandwidth is available.
4. Standards :
Emphasis is put on using standards wherever possible, so as to protect present or future investments. A standard may be a set of recommendations promulgated by an official body. By default, it can be a widely accepted convention among network users. Specifically, communications hardware and procedures should conform to manufacturer or CCITT standards.
On the other hand, when standards do not exist , or are ill-suited, proper interfaces should insulate the domain, in order to allow for future adjustment, and defer commitment.
5. Private user groups :
In any large conglomerate of persons or associations, some groups tend to develop special ties and preferred relationship, based on common interest or necessity. Such a phenomenon should be expected as a natural ingredient of computer network sociology. Consequently, basic communications procedures should leave enough flexibility, at the user level, to allow for private conventions tailored to specific applications. On the other hand, standard network communications should be compatible with this customization.
6. Inter-network communications :
The motivations for computer networks apply as well to networks of networks, which means that interconnecting with other networks should be a capability built in Cyclades. Presently, some networks communicate at terminal level. Although this may suit well some types of interactions, it is too restrictive for a broad class of applications. Interconnection at user, or communications network level should be anticipated.
V. COMMUNICATIONS ENTITIES
1. Network model (Fig 2) :
All host computers communicate with one another through a host software called ST (transfer station), and a communications network 7. There may be several ST's within a host. Except for this latter characteristic, ST's correspond to Arpanet NCP's. They are local network subsidiaries within a host town 5.
Host entities, such as processes, users, devices, etc... communicate by exchanging letters, which are handed over to a local ST, shipped to the appropriate addressee's ST, and finally delivered to the destination entity.
Not every host entity may enjoy the privilege of sending letters using network services. To do so, one has to be formally introduced to the network as a subscriber. Roughly speaking, a subscription is a badge that allows its bearer to obtain network services, presumably at a cost some day. It is network business, viz. ST, to manage subscriptions, but it is host business to manage their association with local entities, and enforce rules for proper sharing and privacy. In other words, as seen from host, a subscription is usually a capability attached to a local process or user under host operating system protection.
2. Subscribers :
As seen from within the network, they are permanent names known network-wide. Opening and cancellation of subscriptions are administrative procedures which require some agreement from the network Authority. At a future stage of design, subscribers might be given capabilities and resource credit. For the moment, they are just global names.
Usual subscribers are attached to a particular ST : <global subscriber name> ::= <ST name> <local name>. But there can be general subscribers whose location might change in time.
Typically, a subscriber could be a software processor, a subsystem, a human user, a device, a special answering service, etc... But this association is immaterial as far as the network is concerned, provided that basic exchange protocols are adhered to.
For the sake of convenience in network operation, particularly in human communications, it is expected that associations between subscribers and host entities will be rather stable, like the pair : person name - telephone number. It should be worthwhile to print and disseminate subscriber directories reasonably up to date. If at all necessary, administrative delays or costs will be tacked on subscription changes to make them sufficiently infrequent.
Since most subscribers will use network services only occasionally, it would be wasteful to maintain subscription information at all times within ST's in high speed memory. Therefore, a subscription can be enabled or disabled, very much like login-logout for a time-sharing user. This operation is executed dynamically on subscriber's request.
3. Ports :
Many software systems deal with data exchange in terms of flows, streams, channels, or similar concepts. One could argue whether this is a so-called natural way or whether it is just bequeathed by a persistent addiction to card readers, magnetic tapes, and other sequential devices. Nevertheless, I/O-like techniques permeate most forms of inter-process communications. The concept of port has come to be commonly used to designate an abstract entity on which data flows may be anchored, and addressed.
To that effect, a subscriber can apply to its ST for port names. They are created dynamically and are local to the subscriber. They can also be exchanged between ST's, as part of specific protocols, to set up links between subscribers.
In other words, subscribers and ports make up a hierarchical name space, network-wide. The subscriber component is global and basically stable, while the port component is local and basically transient.
So far we have not felt the need for further levels. Should it appear useful, growing sub-ports would not be a technical problem.
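As a concrete, purely illustrative reading of this two-level name space, the C fragment below models a global subscriber name as the pair <ST name, local name> and a port name as a transient number attached to such a subscriber. The type names and field widths are assumptions of this sketch, not part of the Cyclades specification.

    #include <stdint.h>

    /* Illustrative only: a global subscriber name is the pair <ST name,
     * local name>; ports are transient names local to one subscriber.
     * Field widths are arbitrary placeholders. */
    typedef struct {
        uint16_t st_name;      /* transfer station the subscriber is attached to */
        uint16_t local_name;   /* name local to that ST */
    } subscriber_t;

    typedef struct {
        subscriber_t owner;    /* global, basically stable component */
        uint16_t     port;     /* local, basically transient component */
    } port_name_t;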
4. Letters :
It is a piece of information exchanged between two subscribers. There may be several varieties of letter mechanisms. Presently, 4 have been designed.
a) Regular letter :
It can be sent at any time to any subscriber, as long as both subscriptions are enabled. A priority may be specified, and an acknowledgement may be requested. By acknowledgment is meant a return message sent back to the sender subscriber, after the letter has been delivered to the receiver subscriber. A letter can contain up to 240 octets of text (1920 bits).
b) Liaison :
Letters are sent over a port, and delivered from a port. An initial set up is necessary to open a liaison, and exchange port names, which are only paired by order of creation. The liaison machinery includes error and flow control, and it is bi-directional. A symmetrical procedure solves all contention problems.
c) Connection :
It has the same properties as a liaison. But letters are delivered to the receiver subscriber in the same order as they have been sent. Furthermore, letters can be indefinite strings of bits. The connection machinery includes error and flow control, and it is bi-directional. The same symmetrical procedure as for liaisons applies to connections.
d) Events :
They are short letters (16 bits) transmitted with higher priority. They may be sent separately or over an existing liaison or connection amidst text flow, as out-of-band messages.
The previous set of mechanisms is intended to provide basic user facilities on top of which more sophisticated services may be built. Each one is aimed at a particular class of traffic which is expected to be frequently encountered in the network.
Regular letters are intended for conversational traffic between slow terminals and server processes. They can also be used as control messages between several processes cooperating within a distributed activity.
Liaisons are intended for bulk traffic such as file transfer or data base processing, where letters contain self-identifying items of information, and are well suited for parallel processing.
Connections are intended for I/O streams, typically remote sequential devices or files, as well as conventional interprocess communications.
Events are intended for control information when it is desirable to send it asynchronously with data flow. A typical case is attention or diagnostics messages to be used by a control environment rather than the normal receiver process.
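To make the four mechanisms easier to compare, the C sketch below renders them as a tagged union. Apart from the two sizes quoted above (240 octets of text for a regular letter, 16 bits for an event), every name and field in it is an assumption introduced for illustration, not part of the Cyclades specification.

    #include <stddef.h>
    #include <stdint.h>

    enum letter_kind { LETTER_REGULAR, LETTER_LIAISON, LETTER_CONNECTION, LETTER_EVENT };

    struct letter {
        enum letter_kind kind;
        union {
            struct {                    /* regular letter */
                uint8_t  priority;
                int      want_ack;      /* acknowledgement requested? */
                uint8_t  text[240];     /* at most 240 octets of text */
                size_t   len;
            } regular;
            struct {                    /* liaison or connection letter */
                uint16_t port;          /* port the letter is sent or delivered on */
                const uint8_t *text;    /* connections allow indefinite bit strings */
                size_t   len;
            } stream;
            uint16_t event;             /* 16-bit, higher-priority event */
        } u;
    };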
VI. FUNCTIONAL COMPONENTS
1. Component hierarchy (Fig. 3) :
Starting with the communications network, one finds ;
a) A host communications interface, implementing a line transmission procedure. Initially, it will be one of the bi-synchronous family, depending on the host at hand. In the future, an ISO standard procedure of the HDLC type 12 will be installed, when I-O adapters become available on the market. One may notice that a host can have more than one physical link with the communications network, to provide for more reliability in case of node or line failure.
b) A transfer station (ST), implementing the subscriber name space, ports, and a basic letter handling at an intermediate system interface. There may be several ST's, for testing new versions, implementing special services, and communicating with foreign host protocols, e.g. Arpanet, or COST-11 13. Monitoring and diagnostic aids are also introduced at this level.
c) A set of user oriented letter handling functions implementing error and flow control, queue management, and liaison/connection management when applicable. This approach results from the recognition that there is no ideal way of handling data exchange. It depends on user environment. Rather than piling layer upon layer of functions, with the associated overhead and duplication, it appeared more efficient to leave room for expansion not only upwards as usual, but also sideways. Again, it becomes a casual matter to try out new options, and to develop private network access methods, without losing the benefit of standard interfaces.
2. Transfer station structure :
Our objective was not limited to specify a set of rules for exchanging messages between hosts. Rather, it was ideally to write the specifications of a complete ST, including various letter handling, so that every network user would see a common interface, regardless of the host type.
It is clear that this is a trivial problem in homogeneous networks. One possible approach for a heterogeneous network would be to use a portable programming language. But operating system peculiarities introduce a variety of discrepancies and inefficiencies. Consequently, the design could not be so ideal ; it could only attempt to define functions without referring to specific host facilities. This objective resulted in the following scheme.
An ST is thought of as an abstract machine (Fig. 4), driven by commands, and exchanging information with the external world through a communications area. Some internal states may be observed through a glass window. Communications mechanisms are implementation dependent, but they should not bear any relationship with the ST internal logic. On the other hand, they can be implemented using well known engineering techniques.
An ST is then further broken down into individual components given maximum autonomy (Fig. 5). So as to allow for implementation freedom, individual components can be thought of as asynchronous machines cooperating via state variables, or queues. These constituent machines are listed below.
- Command : checks arguments and signals appropriate machine ; 1 mach
- Subscription : enables/disables subscriptions ; 1 mach/subscriber
- Regular letter : send/receive regular letters ; 1 mach/subscriber
- Port : handles ports ; 1 mach/subscriber
- Liaison : handles liaisons ; 1 mach/subscriber/liaison
- Connection : handles connections ; 1 mach/subscriber/connection
- Communications : send/receive packets ; 1 mach
- Debug : special modes ; 1 mach
- Operator : manual/automatic control ; 1 mach
Readers will not have failed to notice the structured programming approach used as a design methodology 4. Of course, implementation may deviate somewhat in making machines less autonomous, such as sub-routines. But the design structure should be kept highly visible, or else unanticipated interferences may well creep in.
The logic of each machine is specified in natural language algorithms, using a loose form of pseudo-Algol. It was not felt at this point that a genuine programming language would have helped human communications.
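The decomposition into autonomous machines cooperating through state variables or queues can be pictured with a toy dispatcher such as the one below. Everything in it (the names, the work-item type, the polling loop) is invented for illustration; as noted above, the actual cooperation mechanism is deliberately left to each implementation.

    #include <stddef.h>

    struct work_item { struct work_item *next; /* a command, letter, or packet */ };

    struct machine {
        const char *name;               /* e.g. "liaison", "port", "command" */
        struct work_item *queue;        /* items handed over by other machines */
        void (*step)(struct machine *); /* run one autonomous step */
    };

    /* Poll every constituent machine; only machines with pending work advance. */
    static void run_st(struct machine *machines, size_t n)
    {
        for (;;)
            for (size_t i = 0; i < n; i++)
                if (machines[i].queue)
                    machines[i].step(&machines[i]);
    }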
3. Subscriber interface :
Although communications between a user process and an ST machine are implementation dependent, it was considered important to specify ST commands in a non-ambiguous way, resembling a subroutine or macro-call. Therefore, all commands have been given some sort of formal representation, using mnemonics and argument names, as they should be passed over to the ST. Message formats and states are also specified.
E.g. : OPEN, LI, LOC-SUB, DIS-SUB, LI-X, MIN, MAX
meaning : open a liaison from local subscriber to distant subscriber, liaison number, minimum and maximum characteristics proposed in terms of buffer allocation, letter length, bandwidth.
In an actual implementation, OPEN LIAISON could be a system primitive, DIS-SUB and LI-X could be arguments in registers, MIN and MAX packed into a liaison control block, and LOC-SUB supplied by the operating system.
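For readers more comfortable with code than with mnemonics, OPEN LIAISON could be rendered as a host primitive along the following lines. Only the argument meanings come from the text above; the C types, the grouping of MIN and MAX into a characteristics structure, and the integer return convention are assumptions of this sketch.

    struct liaison_char {               /* proposed liaison characteristics */
        unsigned buffers;               /* buffer allocation */
        unsigned letter_len;            /* maximum letter length, in octets */
        unsigned bandwidth;             /* requested bandwidth */
    };

    /* OPEN, LI, LOC-SUB, DIS-SUB, LI-X, MIN, MAX as a callable primitive. */
    int st_open_liaison(unsigned loc_sub,          /* local subscriber */
                        unsigned dis_sub,          /* distant subscriber */
                        unsigned li_x,             /* liaison number */
                        struct liaison_char min,   /* minimum proposal */
                        struct liaison_char max);  /* maximum proposal */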
4. Communications network interface :
The ST makes packets out of letters, and vice versa. Letters are stitched with control information, and if at all possible several letters are blocked within a single packet towards the same destination. Infinite letters are fragmented. Thereafter, the packet is passed to a line handler to be delivered to the communication network. The packet format is :
Header : 72 bits (9 octets), Text : 2040 bits (255 octets) max. As usual, additional bits are inserted when output to transmission lines takes place : sync, CRC, etc...
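The only packet parameters fixed here are the 9-octet (72-bit) header and the 255-octet (2040-bit) maximum text length; a host-side buffer might hold them as follows. The header is kept as opaque octets because its internal field layout is not given in this text.

    #include <stdint.h>

    #define CIGALE_HEADER_OCTETS 9      /* 72 bits of control information */
    #define CIGALE_TEXT_MAX      255    /* up to 2040 bits of letter data */

    struct cigale_packet {
        uint8_t header[CIGALE_HEADER_OCTETS];
        uint8_t text[CIGALE_TEXT_MAX];
        uint8_t text_len;               /* actual number of text octets */
    };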
VII. PACKET SWITCHING
Packet switching technology is just emerging, and building a computer network is an adequate opportunity to experiment and acquire know-how in this domain. So far, well-defined problems have been solved quite satisfactorily, e.g. fault detection, remote loading, packet ordering, etc... On the other hand, there are ill-defined problems, such as congestion, routing, topology, which are only partially understood, and most likely interdependent. Our concern in a first stage is not to make breakthroughs in packet switching technology, but to build a reliable communications tool for Cyclades, while preserving the possibility of a major redesign when more experience becomes available.
Consequently we have been very strict in insulating logically, and even physically, functions related to computer network on one hand, and those germane to packet switching on another hand. E.g. terminal concentration will be done by mini-hosts containing a stripped down ST, implementing unsophisticated connections.
Some specific features of our packet switching network, called CIGALE 7, are presented in the following.
1. Addressing :
The basic purpose of a packet switching network is to deliver messages to an addressee located outside of the network, and not to reach its own components. Therefore, there is a need for a global name space network-wide, to designate source and destination of messages. In Arpanet such a name space maps onto network components, viz. node and line number. There are two consequences:
a - addressees can only be reached through a unique gateway,
b - topology changes may require address changes.
Let us assume that an addressee is not a single computer, but a distributed computer, i.e. a network, then it will likely be required to link them by multiple paths, for reliability, traffic smoothing, response time, etc... An addressee becomes an outside target, which may be reached through several possible gateways. Thus, we need an independent name space.
In Cigale, names are ST's, which can be reached from several nodes. Furthermore, there may be several ST's on one line. In a large network, it would be a severe constraint if every node should know all possible addressees. Therefore, we use a hierarchical name space : region - ST.
Each node has only to know region names, and ST names within its own region. Any ST belongs to only one region. But let us note that this does not prevent from reaching an ST directly from a node in a different region, as long as this ST name is also listed in the node name space. But this practice should be restricted to isolated ST's, as it tends to make address look up more costly.
International communications will have to deal with a jumble of address formats, and the only practical way out will be to introduce variable formats as a way of switching. This will bring another hierarchical structure, for which every network should be prepared. In anticipation of that, an address type component is provided in Cigale.
The general address format would be : type (3 bits), region (5 bits), ST (8 bits). But Cyclades does not need such a large name space, and some bits may be set aside for future use, leaving region (4 bits), ST (4 bits).
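Packing and unpacking the general 16-bit address (type, region, ST) could look like the macros below; the bit ordering within the word is an assumption of this sketch, since only the field widths are given. A node would then normally route on the region field alone, looking up the ST field only within its own region, mirroring the two-level scheme described above.

    #include <stdint.h>

    /* type: 3 bits, region: 5 bits, ST: 8 bits -- 16 bits in total. */
    #define ADDR_PACK(type, region, st) \
        ((uint16_t)((((type) & 0x7u) << 13) | (((region) & 0x1Fu) << 8) | ((st) & 0xFFu)))

    #define ADDR_TYPE(a)   (((a) >> 13) & 0x7u)
    #define ADDR_REGION(a) (((a) >> 8) & 0x1Fu)
    #define ADDR_ST(a)     ((a) & 0xFFu)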
2. Internal ST (STI) :
Some special functions are useful within a packet switching network, e.g. collecting bad messages, echoing, traffic generation, ... Rather than having specially formatted packets along with the decoding software, it is much easier to reserve a subset of the ST name space to address those special components. Since the general addressing mechanism applies, STI's may be either located within nodes, or be within real hosts considered as extensions of the network, and supported by the network Authority. Also some services may be experimented within a real host, and once approved, integrated within the network functions.
In commercial networks some services should be offered to attract customers and add more convenience : e.g. data conversion, mailboxes, broadcasting, file editing, etc... Using STI's is a handy way to hook those services without disturbing network operation.
In Cigale, some address variations allow a few STI's to be located : - at only one node, - at every node, - at some nodes. Through that facility, STI's may be distributed according to traffic requirements, and even moved about the network during operation.
3. Message reassembly :
Cigale does not fragment messages. It only takes in packets.
4. Message ordering :
Cigale delivers packets as soon as they arrive at a destination gateway. There is no ordering. On the other hand ordering does not appear compatible with multipaths to a host.
5. Flow control :
Cigale does not apply flow control to any specific flow. On the other hand, it will attempt to resist congestion, but the techniques to be used are not yet clear. An approach would be to allocate input traffic according to available buffer space, using exponential smoothing. Simulation studies are planned.
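One possible reading of that suggestion (purely a guess at a mechanism the text explicitly leaves open) is to keep an exponentially smoothed estimate of free buffer space and to admit new input traffic in proportion to it:

    /* The smoothing constant alpha and the admission rule are assumptions. */
    static double smoothed_free;

    static unsigned admit_quota(unsigned free_buffers, double alpha)
    {
        smoothed_free = alpha * (double)free_buffers + (1.0 - alpha) * smoothed_free;
        return (unsigned)smoothed_free;  /* packets accepted in the next interval */
    }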
VIII. INTER-NETWORK COMMUNICATIONS
Inter-network communications have still to demonstrate their practical feasibility, if one excepts the present situation where a network mimics a terminal to the other. It seems that key points include simplicity and open-endedness.
The more sophisticated a network, the less likely it is going to interface properly with another. In particular, any function except sending packets is probably just specific enough not to work in conjunction with a neighbor. The result is an intersection of properties rather than a union.
In this respect Cigale does not present any excess properties. All functions are self-contained, none extends across network boundary. As long as packets are within the maximum size, with proper format, they will be delivered to a known ST. Some mismatch may result from error messages sent back to the source in case of wrong progress. A possible solution would be to use an STI as middle man, in charge of the necessary format conversions.
Trans-network communications bring up another problem. Assuming that interface problems are solved, intermediate networks, supposed to carry messages along, do not possess the final destination in their name space. Thus a new function arises : international routing. But this is a general problem unrelated to specific network characteristics.
Cyclades hosts are also well suited to inter-network exchange, since : - their basic letter protocol is simple, - more ST's or protocols can be added. In the worst case a special ST must be built to interface with a foreign host. However there may be some devious-timing problems that can probably be solved on an ad hoc basis. But this would provide only a straight-forward type of exchange. The whole set of procedures and practices that make up a computer network environment is a much larger task.
Cyclades is one of the largest computer projects in France. Major universities and research centers are actively working on its development. It is expected that techniques and insight acquired during the project will benefit research and industry, specifically at a moment when several communications networks are being planned by large corporations, and the French Administration. Its extensible structure at several levels makes it well suited to all sorts of experiments on a national and international scene.
The Cyclades design is largely a team effort, and it is quite difficult to trace back the genesis of ideas. Our work stemmed mainly from earlier research accomplished at NPL and within the ARPA community. Among individuals who contributed major parts of the design are M. ELIE (CII) and J.L. GRANGÉ (Cyclades). A particular acknowledgment is due to H. ZIMMERMANN (Cyclades) for an outstanding contribution on host protocols (ST). Stimulating discussions with D. WALDEN (BBN) brought substantial improvement and simplification.
1 - CARR C.S., CROCKER S.D., CERF V.G. - Host-Host communication protocol in the Arpa network. SJCC (1970), 589-597.
2 - COCANOWER A.B. - Functional characteristics of CCOS. Merit computer network, (Jun. 1971), 39 p.
3 - CROCKER S. et al. - Function oriented protocols for the Arpa computer network. SJCC (1972), 271-279.
4 - DIJKSTRA E.W. - Notes on structured programming. (Aug. 1969), 84 p.
5 - ELIE M., ZIMMERMANN H. et al. - Spécifications fonctionnelles des stations de transport du Réseau Cyclades. SCH 502, (Nov. 1972), 105 p.
6 - GIRARDI S. - SOC project, an experimental computer network. Intern. Comp. Symp. Venice, (Apr. 1972), 210-220.
7 - GRANGÉ J.L., POUZIN L. - Cigale, la machine de commutation de paquets du Réseau Cyclades. Congrès AFCET 1973, 24 p.
8 - ROBERTS L.G., WESSLER B.D. - Computer network development to achieve resource sharing. SJCC (1970), 543-549.
9 - RUTLEDGE R.M., et al. - An interactive network of timesharing computers. 24th ACM Nat. Conf. (Aug. 1969), 431-441.
10 - SCANTLEBURY R.A. - A model for the local area of a data communication network. Objectives and hardware organization. ACM Symp. on problems in the optimization of data communications systems (1969), 179-201.
11 - TYMES L.R. - Tymnet, a terminal oriented communication network. SJCC (1971), 211-216.
12 - ISO/TC 97/SC 6. Doc. 731 - HDLC procedures. Proposed draft international standard on frame structure (Feb. 1973), 4 p.
13 - BARBER D.L.A. - The European computer network project. ICCC, Washington D.C., (Oct. 1972), 192-200.
|
<urn:uuid:af4a25a9-b146-4d3a-9233-d77a9cf9ed18>
|
CC-MAIN-2013-20
|
http://rogerdmoore.ca/PS/CYCLB.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704253666/warc/CC-MAIN-20130516113733-00045-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.913888 | 6,796 | 2.515625 | 3 |
Issue: Volume 23, Issue 6 (June 2000)
Computer, Visualize Thyself
Diana Phillips Mahoney
The rapid, substantial advances in computational and graphics performance over the past several years have allowed researchers in countless application areas to gain unique perspectives on their numerical and statistical data through the use of specialized information-visualization techniques.
Recently, the driving force behind these techniques, high-performance computing, has begun to reap the technology's benefits as well. Numerous researchers are exploring the use of visualization capabilities to manage and manipulate the massive datasets representative of various computer systems' operations. An innovative effort in this regard is a powerful, general-purpose computer-systems visualization and analysis framework under development at Stanford University.
Called Rivet, the unique visualization environment was born out of a collaboration between the university's computer graphics and computer systems groups, whose driving objective has been to develop tools for understanding complex computer systems while also furthering the state of the art in information visualization. Succeeding on both fronts, Rivet enables the rapid development of interactive visualizations for studying a range of data-intensive computer systems components, including operating systems, processor and memory systems, compilers, multiprocessing architectures, and network technologies.
Figure: A Rivet visualization analyzing mobile-network usage in the San Francisco Bay area relies on visual metaphors to show both user-mobility patterns and usage patterns over time (inset). The mobility view uses four scatter plots of the same dataset, each with ...
The need for such technology is becoming particularly acute as today's increasingly complex computer systems outgrow traditional data-analysis methods. "Computer-systems data has typically either been analyzed using statistics, which can obscure interesting behavior by aggregating large amounts of data into a single measure, or by manually searching through large text files, which can easily leave analysts lost in the details," says Rivet principal researcher Robert Bosch.
In contrast, by combining elements of scientific visualization, human-computer interaction, data mining, imaging, and graphics, information visualization is able to transform abstract, voluminous data that often has no obvious physical representation into understandable pictures. "Visualization exploits the high perceptual bandwidth and pattern-recognition capabilities of the human visual system, enabling analysts to explore large data sets and find the information that is of particular interest to them," Bosch notes.
The computer-systems challenge, however, can bring existing information-visualization techniques to their knees because, says Bosch, many of these tools have been developed to handle very specific problems and are not easily adaptable to broader applications. Rivet is intended to serve as a single, cohesive, general-purpose computer-systems visualization environment. Bosch and colleagues Chris Stolte and Diane Tang, under the direction of professors Pat Hanrahan, Mendel Rosenblum, and Mary Baker, are developing the system in the context of a range of diverse real-world problems, such as the study of application behavior on superscalar processors and the development of a performance analysis system for evaluating parallel applications.
Rivet is built on the premise that a single data set and visualization is often not sufficient for answering some of the complex questions that computer-systems analyses invite. Instead, says Bosch, "a given data visualization often suggests a new set of data to be collected and incorporated into the visualization." Thus, he says, "one of our goals for Rivet was to support an iterative and integrated analysis and visualization process. Consequently, we have focused on supporting rapid prototyping and incremental development of visualizations."
To facilitate this approach, the re searchers employ a "compositional" or modular architecture made up of individual visualization building blocks and interfaces that users can define and assemble in whichever way best meets the needs of their specific applications. Because in most cases it's impossible to visually represent both the huge number of entities involved in a specific analysis as well as the events to which the entities are subjected (hundreds of processors engaged in millions or billions of transactions, for example), Rivet relies on a system of visual "metaphors" to capture the essence of the data.
Figure: In the San Francisco Bay mobile-network visualization, a different visual metaphor provides a window into overall network statistics. The inset detail focuses on a particular region of interest. Using the control panel, a user can dynamically select the n...
At Rivet's core are data elements called tuples, which are unordered collections of data attributes conceptually similar to a row of data in a spreadsheet table. Each tuple contains information about the entity being analyzed. If it's a processor, for example, the tuple might contain such information as the nature of the transaction it's engaged in and the degree to which it's being utilized at a given time step. Tuples with a common format are grouped into tables, accompanied by metadata describing the tuple contents. This organization allows the same dataset to be visualized in many different ways. For example, an analyst might want to evaluate a processor's activity level at a given point in time or its performance relative to other processors.
Once organized into tables, the data can be passed through a transformation network to perform such standard relational-database operations as sorting, filtering, aggregation, grouping, and merging. Rivet also lets users incorporate their own operations, such as clustering and data-mining algorithms. The transformed data is then mapped to a visual representation using graphical metaphors that depict the data tables and the individual tuples using primitive shapes.
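As a language-neutral illustration of that data model (Rivet's actual building blocks are C++ objects scripted from Tcl, as noted below), a tuple can be pictured as a set of named attributes, a table as a collection of tuples sharing a format plus metadata, and a transformation node as a function from tables to tables. All names and types in this C sketch are invented:

    #include <stddef.h>

    struct attribute { const char *name; double value; };
    struct tuple     { struct attribute *attrs; size_t n_attrs; };
    struct table     { struct tuple *rows; size_t n_rows; const char **metadata; };

    /* One node of a transformation network: keep only the tuples accepted by a
     * predicate, writing them into a caller-supplied output array. */
    static size_t filter_table(const struct table *in, struct tuple *out,
                               int (*keep)(const struct tuple *))
    {
        size_t n = 0;
        for (size_t i = 0; i < in->n_rows; i++)
            if (keep(&in->rows[i]))
                out[n++] = in->rows[i];
        return n;
    }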
The metaphors rely on numerous visualization attributes, ranging from simple luminance variations to full-color animations, and support multiple levels of detail and interactivity to optimize the data representation. The animation capabilities are particularly useful, notes Bosch, because they provide "a relatively natural means of representing the evolution of systems over time." For example, in the visualization environment designed to study application behavior on superscalar processors, a "pipeline" view animates instructions as they traverse the stages of pipeline utilization. Additionally, says Bosch, "during interactive navigation of the data, animation helps preserve the user's context and prevent disorientation." In the superscalar application, the animated pipeline behavior is correlated to specific regions of interest identified in the timeline view, which displays pipeline utilization and occupancy statistics for the entire period of study.
The key to Rivet's success is its ability to provide the flexibility necessary to enable rapid prototyping of visualizations for exploring complex, real-world problems without sacrificing high-performance graphics and support for large datasets. This is achieved through its reliance on both compiled and interpreted programming languages. "All of our basic building blocks are written in C++ and OpenGL. The interfaces to these objects are exported to Tcl or Perl [standard scripting languages], enabling visualizations to be rapidly assembled and modified using scripts," says Bosch. "Once the visualizations are assembled, the interpreter is out of the main loop, providing a good mix of performance and flexibility."
One of the ongoing challenges the researchers face is finding the optimal decomposition of visualizations into simple building blocks. "Our collection of visualization building blocks has evolved as we have had more experience with the system," says Bosch. "Early on, we discovered that it was critical that the visual components be distinct from the data components. Recently, we have further decomposed the visual and data components to an even finer level to provide more flexibility in how we compose and create visualizations."
The Stanford researchers consider Rivet a work-in-progress. Upcoming research efforts include expanding the environment's data-management capabilities. "We want to support even larger datasets," says Bosch. "We can currently handle hundreds of megabytes of data, but many interesting datasets are much larger."
Much of the Rivet researchers' attention of late has been focused on validating the system through its use in individual case studies, such as the superscalar processor study. The system has also been successfully applied in studies of parallel applications, memory systems, and wireless networks. In addition, the re searchers are using Rivet to develop two more general visualization frameworks. One, called Polaris, is an environment for the visualization of high-dimensional relational data. The second, called DataGrove, is an interface for hierarchically structured data.
The application opportunities for Rivet are vast. And as long as computer systems continue to grow in complexity, so will the application space of Rivet.
Diana Phillips Mahoney is chief technology editor of Computer Graphics World.
|
<urn:uuid:6c1ce2fe-33e0-4560-ae29-0912b14b9c82>
|
CC-MAIN-2013-20
|
http://www.cgw.com/Publications/CGW/2000/Volume-23-Issue-6-June-2000-/Computer-Visualize-Thyself.aspx?LargeFonts=true
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705300740/warc/CC-MAIN-20130516115500-00032-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.92165 | 1,766 | 2.828125 | 3 |
Role of Computers in Management
January 23, 2008
We are going to talk about computers in management in this blog. An agency management system (AMS) is a good example of the use of computers in management. Prof. H.A. Simon views the computer as the fourth great breakthrough in history to aid man in his thinking process and decision-making ability. The first was the invention of writing, which gave man a memory in performing mental tasks.
The remaining two events prior to the computer were the devising of the Arabic number system with its zero and positional notation, and the invention of analytic geometry and calculus, which permitted the solution of complex problems in scientific theory.
We will talk about the need for information handling in our next post.
|
<urn:uuid:28bcce3b-95a3-4ac2-a5ae-6808f89a0083>
|
CC-MAIN-2013-20
|
http://computersinmanagement.wordpress.com/2008/01/23/hello-world/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00059-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.968996 | 147 | 2.578125 | 3 |
Terms that are in use on this site. There are 493 entries in this glossary.
contact among or between CPUs for identification.
An interface procedure that is based on status/data signals that assure orderly data transfer as opposed to asynchronous exchange.
the physical, manufactured components of a computer system, such as the circuit boards, CRT, keyboard, and chassis.
Continuous distortion of the normal sine wave, occurring at frequencies between 60 Hz and 3 kHz.
Describes an approach based on common sense rules and trial and error, rather than on comprehensive theory.
A problem-oriented programming language in which each instruction may be equivalent to several machine-code instructions.
1) A central controlling computer in a network system.
the primary computer in a multi-processor network that issues commands, accesses the most important data, and is the most versatile processing element in the system.
exchange of components during operation.
Hyper text markup language
An interactive on-line documentation technique that allows users to select (typically via a mouse click) certain words or phrases to immediately link to information related to the selected item.
1) The effect of residual magnetism whereby the magnetization of a ferrous substance lags the magnetizing force because of molecular friction. 2) The property of magnetic material that causes the magnetic induction for a given magnetizing force to depend upon the previous conditions of magnetization. 3) A form of nonlinearity in which the response of a circuit to a particular set of input conditions depends not only on the instantaneous values of those conditions, but also on the immediate past of the input and output signals.
|
<urn:uuid:8c74d61b-f3d2-4e05-ae8f-95865f9eb8ab>
|
CC-MAIN-2013-20
|
http://www.automationmag.com/component/option,com_glossary/glossid,51/letter,H/task,list/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698493317/warc/CC-MAIN-20130516100133-00025-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.861256 | 339 | 3.171875 | 3 |
EU project to smooth the path through data-intensive environments
Scientists in Europe are working hard at strengthening cooperation and decision-making in data-intensive and cognitively complex settings by advancing information systems. Helping fuel this technology drive is a new EU-funded project that is exploiting and building high-performance computing paradigms and broad data processing technologies, such as cloud computing and column databases, to search, assess and aggregate data in varied, extensive and evolving sources. DICODE ('Mastering data-intensive collaboration and decision making') is supported under the 'Information and communication technologies' (ICT) Theme of the EU's Seventh Framework Programme (FP7) to the tune of EUR 2.6 million.
Coordinated by the Research Academic Computer Technology Institute in Patras, Greece, the DICODE project ultimately aims to integrate the reasoning capacities of man and machine.
Experts say that cooperation and decision-making settings are generally linked to massive amounts of multiple data types that have a low signal-to-noise ratio. Various sources are used to gather these data, which are not only different in terms of subjectivity and importance, but are characterised by people's opinions, practices, indisputable measurements and scientific results.
According to the eight-strong team, data types can be of varied levels as far as human understanding and machine interpretation are concerned.
Nowadays, large volumes of data are added to databases with as few problems as possible. Throbbing headaches result when people attempt to consider and use data gathered over longer periods of time, and analyse them to help in their decision making. The DICODE partners say more complex situations require the identification, understanding and use of data patterns. Large volumes of data must be aggregated from multiple sources, and then mined for insights that would not materialise from manual inspection or analysis of any single data source, according to the team.
DICODE is set to provide solutions with its technological advances. The consortium is developing and consolidating services to be released under an open source licence. The partners characterise the DICODE solution as an innovative workbench incorporating and coordinating a set of interoperable services that ease the data-intensiveness and complexity overload at critical decision points to bring them down to a manageable level. At the end of the day, stakeholders will benefit immensely, as their production and creativity levels will rise.
The DICODE partners point out that the project's success will be validated through three use cases, which will test the transferability of DICODE solutions in diverse cooperation and decision-making settings.
The consortium consists of experts from Biomedical Research Foundation in Greece; Neofonie GmbH, Publicis Frankfurt GmbH and the Fraunhofer Society for the Advancement of Applied Research (FHG) in Germany; the University of Leeds and Image Analysis Limited in the UK; and Spain's Universidad Politécnica de Madrid (UPM).
In a statement, the UPM's Biodemical Informatics Group said its role in DICODE is to integrate services, tools and project resources. The Spanish team is also helping develop the new tools and services for the project platform.
Kicked off in 2010, the DICODE project is set to end in 2013.
For more information, please visit:
Universidad Politécnica de Madrid (UPM):
Research Academic Computer Technology Institute:
Related stories: 32687
Data Source Provider: Universidad Politécnica de Madrid (UPM); DICODE
Document Reference: Based on information from Universidad Politécnica de Madrid (UPM) and DICODE
Subject Index: Information and communication technology applications ; Telecommunications; Scientific Research; Innovation, Technology Transfer; Information Processing, Information Systems; Coordination, Cooperation
|
<urn:uuid:043ddd16-1517-4ba2-b79d-0a2b07128e53>
|
CC-MAIN-2013-20
|
http://cordis.europa.eu/fetch?CALLER=EN_NEWS&ACTION=D&DOC=19&CAT=NEWS&QUERY=01370d8c8f1c:0169:21a1ffd5&RCN=32971
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00049-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.880414 | 776 | 2.53125 | 3 |
For anyone who would like to apply their technical skills to creative work ranging from video games to art installations to interactive music, and also for artists who would like to use programming in their artistic practice.
Learn everything you need to know to get started building a MongoDB-based app.
Join the data revolution. Companies are searching for data scientists. This specialized field demands multiple skills not easy to obtain through conventional curricula. Introduce yourself to the basics of data science and leave armed with practical experience programming massive databases.
Examines key computational abstraction levels below modern high-level languages.
In this course we will learn how to apply patterns, pattern languages, and frameworks to alleviate the complexity of developing concurrent and networked software.
The course is an introduction to linear and discrete optimization - an important part of computational mathematics with a wide range of applications in many areas of everyday life.
This course covers the essential information that every serious programmer needs to know about algorithms and data structures, with emphasis on applications and scientific performance analysis of Java implementations.
In this course you will learn several fundamental principles of algorithm design: divide-and-conquer methods, graph algorithms, practical data structures, randomized algorithms, and more.
The Internet is a computer network that millions of people use every day. Understand the design strategies used to solve computer networking problems while you learn how the Internet works.
|
<urn:uuid:2203c4b2-a117-4cee-a45d-af9bdbd5516b>
|
CC-MAIN-2013-20
|
http://www.mooc-list.com/tags/java
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704517601/warc/CC-MAIN-20130516114157-00041-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.911722 | 278 | 2.8125 | 3 |
This class will open a port in Houdini and emulate a terminal with the ability to send and receive commands.
Imagemagick is a command line tool that you can use to manipulate images.
This java class emulates a signal that can be convolved with other signals in discrete time.
Creating the set of all possible subsets, the powerset of a set with Scheme.
Redirecting the input/output stream using the << symbol in the C shell or bash. This is useful when you wish to run another program that is initialized in the shell through a command.
Example environment diagrams in Scheme.
Message passing is the underlying theme of object oriented programming. Data directed programming uses tagged-data.
Y-combinator is a way of defining a function that can call itself without the define function.
This demonstrates a mathematical concept of a perfect number in a computational abstraction utilizing scheme.
Implementing tree recursion with Scheme.
This python program fits n-Dimensional data.
This bash script helps to align images, or define boundaries for external postscript files within a tex file that uses pstricks.
This generates the LaTeX code for creating vectors on a unit circle.
This PHP script generates LaTeX code for a matrix.
This PHP script generates Stem Plots in LaTeX using PStricks.
A simple BASH script for LaTeX rendering.
This bash script creates README files within a php folder hierarchy and determines all functions, their arguments, and which php files use them.
Create an arbitrarily sized array in a for loop.
Distinguishable and Indistinguishable boxes and objects. Combinations and permutations. Placing objects into boxes.
Using LaTeX to generate pseudocode-type display for algorithms.
Using scheme to compute derivatives and integrals.
A sorting function in C++ that uses a bubble sort algorithm.
Creating a vector-filter function without the use of lists as an intermediate value.
How to create a vector append procedure in Scheme.
Return the path of a given element within a tree.
Utilizing environments to make classes and objects.
Binary trees, binary search trees, and algorithms for searching these trees.
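As a quick illustration of the search idea (not the code from the linked page), a binary search tree stored as nested (left, value, right) tuples can be searched like this in Python:

    def bst_search(tree, target):
        # Empty subtree: the target is not present.
        if tree is None:
            return False
        left, value, right = tree
        if target == value:
            return True
        # Smaller targets can only live in the left subtree, larger ones in the right.
        return bst_search(left if target < value else right, target)

    tree = ((None, 1, None), 3, ((None, 5, None), 7, None))
    print(bst_search(tree, 5), bst_search(tree, 4))   # True False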
Reversing a list in Scheme using iterative and recursive processes.
The Euclidean algorithm is used to find the greatest common divisor. Written in Scheme.
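The linked version is in Scheme; the algorithm itself is only a few lines in any language. A Python sketch for comparison:

    def gcd(a, b):
        # gcd(a, b) == gcd(b, a mod b); repeat until the remainder is zero.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))   # 21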
How to compute the base-b expansion of a number n.
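A sketch of the usual repeated-division method, written here in Python (the function name and example values are illustrative):

    def base_expansion(n, b):
        # Collect remainders, least significant digit first, then reverse.
        digits = []
        while n > 0:
            digits.append(n % b)
            n //= b
        return list(reversed(digits)) or [0]

    print(base_expansion(241, 2))   # [1, 1, 1, 1, 0, 0, 0, 1]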
Modular Exponentiation is an important algorithm in cryptography and computer science.
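The standard square-and-multiply approach, sketched in Python rather than taken from any particular source on this page (Python's built-in pow(base, exp, mod) does the same job):

    def mod_exp(base, exponent, modulus):
        result = 1
        base %= modulus
        while exponent > 0:
            if exponent & 1:                 # low bit set: fold this power of base in
                result = (result * base) % modulus
            base = (base * base) % modulus   # square for the next bit
            exponent >>= 1
        return result

    print(mod_exp(5, 117, 19), pow(5, 117, 19))   # both print 1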
Recursive string substitutions in lists in Scheme.
Writing and opening files in Scheme.
Generate audio files with Python for programming music!
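The original program isn't reproduced here; a minimal sketch of the idea using only the Python standard library might write one second of a 440 Hz sine tone to a WAV file (the file name and parameters are arbitrary):

    import math, struct, wave

    RATE, FREQ, SECONDS = 44100, 440.0, 1.0

    with wave.open('tone.wav', 'wb') as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(2)      # 16-bit samples
        wav.setframerate(RATE)
        for i in range(int(RATE * SECONDS)):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * i / RATE))
            wav.writeframes(struct.pack('<h', sample))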
Python music! Scales, keys, music, beats, and Python!
Using fscanf() to read characters from a file into a multi-dimensional character array.
String functions in C involving arrays of characters.
How to pass arguments into a C program from the command line.
Generate the open-source ABC format in C for MIDI music.
Using malloc() with multi-dimensional arrays in C.
This program contains functions that sort arrays, and opens and reads files. Written in C.
Comparison operators, <, <=, >, >=, ==, etc.
Introduction to the srand() function in C, with a specified minimum and maximum used as the range for random numbers.
Two methods of using the getchar() function in C to input characters and echo them to the textport.
Dividing floats with true division as opposed to floor division.
Create a string by concatenating a character and an integer.
Get all subfolders in a MySQL table using PHP.
How to create a modular function in C to print stars.
How to use scanf() within a function by using pointers.
Adds two fractions and reduces the result using the modulus operator.
This code will determine the musical relationship between different notes based on a tonic. Includes inversions.
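The page's own code isn't shown here; as a rough Python sketch of the idea, the interval above a tonic and its inversion can be read off the semitone distance (the MIDI note numbers and interval names below are just for illustration):

    INTERVALS = ['unison', 'minor 2nd', 'major 2nd', 'minor 3rd', 'major 3rd',
                 'perfect 4th', 'tritone', 'perfect 5th', 'minor 6th',
                 'major 6th', 'minor 7th', 'major 7th']

    def interval(tonic_midi, note_midi):
        semitones = (note_midi - tonic_midi) % 12
        inversion = (12 - semitones) % 12   # an interval and its inversion sum to an octave
        return INTERVALS[semitones], INTERVALS[inversion]

    print(interval(60, 64))   # ('major 3rd', 'minor 6th'): C up to E inverts to a minor 6th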
Combining programming and music through intervals.
Returning character arrays from a function in C.
Mathematically moves any notes into a scale.
Mathematics, music theory, and programming to create a table of the 88 keys of a grand piano.
Enter a number of seconds and return hours, minutes, and seconds.
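A compact Python equivalent of the conversion using divmod (the sample value is arbitrary):

    def hms(total_seconds):
        hours, remainder = divmod(total_seconds, 3600)
        minutes, seconds = divmod(remainder, 60)
        return hours, minutes, seconds

    print(hms(7384))   # (2, 3, 4)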
Counting the number of pennies and converting to other change in C.
Mathematically generate all of the frequencies of the 88 keyboard piano.
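The usual equal-temperament formula, sketched here in Python with A4 taken as key 49 at 440 Hz (this mirrors the standard convention; the original page's code may differ):

    def key_frequency(n):
        # Key n of the 88-key piano, each semitone a factor of 2**(1/12) apart.
        return 440.0 * 2 ** ((n - 49) / 12)

    frequencies = [key_frequency(n) for n in range(1, 89)]
    print(round(frequencies[0], 2),    # 27.5 Hz    (A0, key 1)
          round(frequencies[39], 2),   # 261.63 Hz  (middle C, key 40)
          round(frequencies[87], 2))   # 4186.01 Hz (C8, key 88)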
The system() function in C allows programmers to communicate with the computer's operating system.
A program that generates a blank C file with the specified filename, then opens the file in the vi editor in the terminal.
How to use Python's built-in help() function when you need assistance.
The dir() function returns the names defined inside a module.
Emulate the setenv command and retrieve all environment variables.
Methods for reading files and lines of files.
Some string methods along with string replace, find, and where.
The very basics of for and while loops in Python.
Different methods for listing folders in a directory.
Setting up a Ruby on Rails (RoR) website.
This creates a dictionary from two lists, and redirects the input output stream to a file.
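A minimal Python sketch of both pieces (the key/value data and the file name are made up):

    keys = ['do', 're', 'mi']
    values = [1, 2, 3]

    mapping = dict(zip(keys, values))    # {'do': 1, 're': 2, 'mi': 3}

    # Send the output stream to a file instead of the terminal.
    with open('mapping.txt', 'w') as out:
        print(mapping, file=out)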
A brief yet in-depth overview of the string-replacement capabilities of the sed command.
This includes some terminal key commands, bracing, basic math, and creating variables.
A trick using the head and tail commands to pull out a single line of a file.
|
<urn:uuid:f58a6e97-8f4b-4661-b419-0094281c48ed>
|
CC-MAIN-2013-20
|
http://www.3daet.com/cat/59/programming/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00023-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.796855 | 1,140 | 2.671875 | 3 |
High-Performance Distributed Computing
Dr. Greg Wolffe, [email protected]
This project involved learning and using a new, open-source, high-performance distributed computing framework called Hadoop. As it is a relatively recent release from Yahoo, the first step in the process was researching this cutting-edge technology. Since the project was intended to be a complete investigation from hardware to results, the next step was to set up a distributed computing platform using a blade server running the Ubuntu operating system. The infrastructure stage was completed with the installation and configuration of the Hadoop framework and filesystem. The next step, learning and using the features of the framework, was approached by writing a simple Hadoop application that implemented the well-known Traveling Salesman Problem. This naturally helped in learning the basics of distributed computing with Hadoop, although it did not stress the file-handling capabilities of the system. To test that aspect, a second and much more complex application was developed: a social-networking research application that performed data mining on a large data set drawn from the Wikipedia website. The final step involved gathering metrics to show the improvement in execution time of the distributed Hadoop applications compared with their serial versions. The project was very time-consuming and complex, but it offered numerous learning opportunities. Quite a few problems had to be overcome, ranging from hardware issues to language incompatibilities. This afforded a deep reflection on the project, the framework, and the lessons learned.
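The project's applications are not reproduced in the abstract, and they were presumably written against Hadoop's Java API. Purely as an illustration of the MapReduce model the abstract describes, a minimal word-count job for Hadoop Streaming can be written as two small Python scripts (all names and paths here are hypothetical):

    #!/usr/bin/env python
    # mapper.py: emit "word<TAB>1" for every word read from standard input.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(word + "\t1")

    #!/usr/bin/env python
    # reducer.py: sum the counts for each key. Hadoop sorts mapper output by key,
    # so all lines for a given word reach the reducer consecutively.
    import sys

    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(current + "\t" + str(total))
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(current + "\t" + str(total))

A job of this shape would typically be launched through the hadoop-streaming jar, passing -mapper, -reducer, -input, and -output options; the exact jar path depends on the installation.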
Alofs, Vinay, "High-Performance Distributed Computing" (2008). Technical Library. Paper 27.
This document is currently not available here.
|
<urn:uuid:2fddc530-8d56-4629-8cb1-e18270e682bf>
|
CC-MAIN-2013-20
|
http://scholarworks.gvsu.edu/cistechlib/27/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701281163/warc/CC-MAIN-20130516104801-00070-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.948699 | 339 | 2.75 | 3 |