| text | id | dump | url | file_path | language | language_score | token_count | score | int_score |
|---|---|---|---|---|---|---|---|---|---|
| stringlengths 242-506k | stringlengths 47 | stringclasses 1 value | stringlengths 15-389 | stringlengths 138 | stringclasses 1 value | float64 0.65-0.99 | int64 57-112k | float64 2.52-5.03 | int64 3-5 |
Artificial intelligence (AI) is a branch of computer science and engineering that deals with intelligent behavior, learning, and adaptation in machines. Research in AI is concerned with producing machines that automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, answering diagnostic and consumer questions, and handwriting, speech, and facial recognition. AI has thus become an engineering discipline, focused on providing solutions to real-life problems in areas ranging from software applications to traditional strategy games such as computer chess and other video games.
SimPy is an object-oriented, process-based discrete-event
simulation language based on standard Python and released under
the GNU GPL. It provides the modeller with components of a
Dnote (Duncan's Notepad) is a fast text editor designed for people who use a lot of files at once, or who want to quickly view the contents of a file. It keeps a list of your commonly used...
Hundreds of real-time graphical traffic-metrics reports can be
generated by this free PHP hit counter script. An advanced
interactive Flash Reporter tracks virtually every statistic
about your site...
This is a simple Tic-Tac-Toe game which has 3 levels of
computer AI. 1: Very easy (almost completely random); 2:
Moderate (you have to get the hang of the game before beating
this one); 3:...
Search Engine Optimizer is a Windows (all versions) software program that offers specialized checks on Web pages in an effort to achieve higher search engine rankings. Users can run their Web pages...
GRKda runs under Windows Vista/XP/2000/NT/Me/9x and assists you in achieving high relevancy scores for your Web pages with regard to the various search engines by allowing you to analyze and duplicate...
Sorting and searching program using different methods / sorts a 1-D
array by different sorting methods // handles numbers in the range
-2,147,483,648 to 2,147,483,647 // 1. sort by bubble sort // 2.
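The listing's first method, bubble sort over the full 32-bit integer range, can be sketched as follows (an illustrative sketch, not the listed program's code):

```cpp
#include <utility>
#include <vector>

// Bubble sort: repeatedly swap adjacent out-of-order elements until a
// full pass makes no swaps. Works for any int, including INT_MIN/INT_MAX.
void bubbleSort(std::vector<int>& a) {
    bool swapped = true;
    while (swapped) {
        swapped = false;
        for (std::size_t i = 1; i < a.size(); ++i) {
            if (a[i - 1] > a[i]) {
                std::swap(a[i - 1], a[i]);
                swapped = true;
            }
        }
    }
}
```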
This program is written in assembly. It is a simulation of
DOS. The program creates and deletes files, creates and deletes
folders, executes a program, and shows the list of files in a folder.
ProAddress is an easy-to-use electronic address book.
ProAddress can manage all your contacts, business or personal.
ProAddress is logical in functionality, making it a sensible
|
<urn:uuid:c9572b56-c659-489c-9d91-03523a58e094>
|
CC-MAIN-2013-20
|
http://www.programmersheaven.com/tags/AI/Files/?Page=23
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705926946/warc/CC-MAIN-20130516120526-00045-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.896786 | 506 | 2.6875 | 3 |
Biju Patnaik University of Technology-MCA 1st Sem-Programming in C Exam – Download Previous Year’s Question Papers
Computer Applications is an important subject, especially in today's times when the computer is an indispensable part of life. The subject of computer applications usually deals with topics related to the different applications, software, programming, and languages that help the computer function and display information after data is fed into it. Among the computer language papers, the Programming in C paper is very significant. Some of the topics taught under this course and paper are Hamming distance; the compiler and interpreter and the differences between them; the purpose of the malloc function; evaluation of binary expressions; source code, object code, and executable code; the different storage classes available in C; the different data types in C with examples; control structures; the advantages and disadvantages of a recursive algorithm; structured data files and the steps involved in creating a new file; and pointer variables and their features. The paper consists of 2-mark and 10-mark questions: ten 2-mark questions and five 10-mark questions should be attempted. Below are attached some of the previous years' question papers for your help.
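Since the syllabus lists the purpose of the malloc function, here is a minimal sketch of dynamic allocation (our illustration, not taken from any past paper); note that plain C allows assigning malloc's void pointer without the cast that C++ requires:

```cpp
#include <cstdio>
#include <cstdlib>

int main() {
    int n = 5;
    // malloc reserves n * sizeof(int) bytes on the heap and returns
    // a pointer to the block, or NULL if the allocation fails.
    int *a = (int *)std::malloc(n * sizeof(int));
    if (a == NULL)
        return 1;                 // always check for allocation failure
    for (int i = 0; i < n; ++i)
        a[i] = i * i;
    for (int i = 0; i < n; ++i)
        std::printf("%d ", a[i]);
    std::free(a);                 // release the block when done
    return 0;
}
```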
|
<urn:uuid:44d38170-b500-4b9f-a80d-77bbf9c768e6>
|
CC-MAIN-2013-20
|
http://entrance-exam.net/forum/question-papers/biju-patnaik-university-technology-mca-1st-sem-programming-c-exam-download-previous-year-s-question-papers-729358.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702525329/warc/CC-MAIN-20130516110845-00062-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.921072 | 243 | 2.921875 | 3 |
HARDWARE - contains information on the physical parts of a computing system, including electrical and mechanical components.
SOFTWARE - covers the use of popular programs such as word processing, spreadsheets, databases and DTP.
COMPUTER LANGUAGES - explains the development, different conventions, and basic terminology used within programming languages.
DEVELOPING AN INFORMATION SYSTEM - contains information on how to create a system to organise, catalogue, store, retrieve and maintain information.
INFORMATION REPRESENTATION - highlights the different ways that information is represented covering topics such as binary, hexadecimal, octal and other number systems.
NETWORKS - explains the different ways in which information systems are connected together.
INTERNET - describes the basic functions of the largest network of computers called the Internet.
|
<urn:uuid:04eadee2-5a4c-401d-ad25-84be23ab73fa>
|
CC-MAIN-2013-20
|
http://doit.ort.org/course/introduction.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703662159/warc/CC-MAIN-20130516112742-00022-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.849391 | 166 | 3.6875 | 4 |
Lecture 35: Linked Lists
Lecture duration: 44 min
This video lecture series on Higher Computing by Richard Buckland of the University of New South Wales, Australia is an introductory course for computer science. This course consists of three strands: programming, systems, and general computer science literacy. The programming strand is further divided into two parts: in the first half of the course we cover small-scale programming, and in the second half we look at how to effectively use teams to produce more substantial software. In the systems strand we will look at how computers work, concentrating on microprocessors, memory, and machine code. In the literacy strand we will look at topics drawn from computing history, algorithms, WWW programming, ethics and law, cryptography and security, and other topics of general interest. The stran
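As a taste of the lecture's subject, a minimal singly linked list (an illustrative sketch, not code from the course):

```cpp
#include <iostream>

// A singly linked list: each node stores a value and a pointer to the
// next node; the list itself is just a pointer to the first node.
struct Node {
    int value;
    Node *next;
};

// Push a new node onto the front of the list in O(1).
Node *push_front(Node *head, int value) {
    return new Node{value, head};
}

int main() {
    Node *head = nullptr;
    for (int i = 3; i >= 1; --i)
        head = push_front(head, i);        // builds 1 -> 2 -> 3
    for (Node *p = head; p != nullptr; p = p->next)
        std::cout << p->value << " ";
    std::cout << "\n";
    while (head) {                         // free all nodes
        Node *next = head->next;
        delete head;
        head = next;
    }
}
```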
|
<urn:uuid:529906cf-f4b8-4661-8c5a-bb2691c949ed>
|
CC-MAIN-2013-20
|
http://www.learnerstv.com/video/Free-video-Lecture-11471-Computers.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711240143/warc/CC-MAIN-20130516133400-00078-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.893854 | 214 | 2.65625 | 3 |
Comparing Operating Systems
Operating systems are the programs that create the environment in which other programs run on a computer; that is why they are also referred to as platforms. The programs that run on these platforms range from simple office-automation software that lets us do word processing, to games, and to device drivers.

All major companies make their software for multiple platforms. By platform we mean the base environment that enables communication between onboard devices such as the hard disk, memory, and the various input/output ports, and that carries out its functions using the other programs that run in that environment. For example, Microsoft makes the MS Office software used on most personal computers not only for its own operating system, Windows, but also for the Mac and for UNIX/Linux.

The function of the operating system is to provide an environment and background on which other applications run. This involves the use of hardware such as the display card, network card, sound card, printers, scanners, and other input and output devices.

The hardware is linked to the computer through
|
<urn:uuid:e4062388-2077-476d-ab31-d106f598fa17>
|
CC-MAIN-2013-20
|
http://www.markedbyteachers.com/as-and-a-level/computer-science/comparing-operating-systems.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702185502/warc/CC-MAIN-20130516110305-00065-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.890502 | 488 | 3.53125 | 4 |
The purpose of Vista (VISualization Tool and Analyzer) is to provide data retrieval, management, manipulation, and visualization. The overall philosophy is to give users a tool to access, manipulate, and visualize data with ease. A graphical user interface is provided for first-time and occasional users. A scripting language will be provided for power users to automate batch production.

Data retrieval is accomplished using a two-tier client-server architecture. The data resides on a server and the bulk of the application resides on the client. The server can serve data both locally and over the network.

Data management is accomplished using the concept of a data reference. A data reference is a reference to the location of a data set together with its characteristics. For instance, a time series is referred to by a server address, filename, pathname, time window, and time interval. Some data references do not refer to actual data but to a set of data references and the operations to be performed on them to construct the data set. This provides transparency to the user: for the user there is no difference between such virtual data sets and actual data sets.

Data references can be aggregated into a Group. The default view on a database file is a Group. Furthermore, one or more Groups form a Session. A Session, once created, can be saved to and loaded from a file. The initial Session is created by opening a connection to a server and directory. The directory of database files then becomes a Session, and each file becomes a Group containing data references.

Data manipulation is done by creating virtual data references, which contain the set of data references and the operations to be performed on them. The actual operations on the data are performed only when the data for the reference is requested. Math operations such as division, multiplication, addition, and subtraction are available between data sets. Period averages, moving averages, and merging are some other examples of manipulations on data references.

Data visualization is done via two-dimensional plots. Examples of such plots are time-series plots and scatter plots. Zooming in and out and paging while zoomed are some of the tools that are currently available. Printing is available in GIF and PostScript formats. The user has complete control over the attributes of each element in the graph; for instance, the user can change the text, font, size, color, and background color of the title. Most of these attributes can be saved to a file and applied to subsequent plots. Data can also be displayed and manipulated in tabular format.

A graphical user interface is used to display a group of data references. The GUI is a view onto the application and contains no information about the application other than the way the application desires to be displayed. This separation enables support of undo/redo commands and the recording of macros, which can then be replayed.

Scripting is an efficient way of accomplishing repetitive tasks. Scripting will use the same application as the GUI and could use some of the GUI components as
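The deferred evaluation described here, where operations are recorded when defined but executed only when the data is requested, can be sketched as follows (an illustration of the idea, not Vista's implementation):

```cpp
#include <functional>
#include <vector>

// A "virtual data reference": it stores how to produce the data
// (a source plus pending operations) and evaluates only on request.
class DataRef {
    std::function<std::vector<double>()> produce;
public:
    explicit DataRef(std::function<std::vector<double>()> p) : produce(std::move(p)) {}

    // Record an element-wise operation without executing it.
    DataRef map(std::function<double(double)> op) const {
        auto src = produce;
        return DataRef([src, op] {
            std::vector<double> v = src();
            for (double &x : v) x = op(x);
            return v;
        });
    }

    // Only here is any actual work performed.
    std::vector<double> fetch() const { return produce(); }
};

int main() {
    DataRef raw([] { return std::vector<double>{1, 2, 3}; });
    DataRef scaled = raw.map([](double x) { return x * 10; }); // nothing computed yet
    std::vector<double> data = scaled.fetch();                 // computation happens now
}
```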
|
<urn:uuid:c39cf967-61f5-49f3-a9f8-d21a5e06a86d>
|
CC-MAIN-2013-20
|
http://www.techarena.in/download/office-applications/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00093-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.897326 | 654 | 2.8125 | 3 |
A computer science text written by Andrew Tanenbaum. It is copyrighted 1992 by Prentice-Hall. Its ISBN is 0-13-588187-0.
The book describes different concepts, principles, and examples relevant to the purpose and design of a computer operating system. Issues such as deadlock, synchronization, task models, and memory management are dealt with at an introductory level.
The cover of the book features a representation of the dining philosophers problem based on the spaghetti and forks version of this classical computer science problem.
Various philosophers are dueling around the spaghetti, in deadlock.
To illustrate certain concepts, examples are taken from the DOS, UNIX, and Amoeba operating systems. The modern aspect of the book is that it discusses issues relevant for distributed systems.
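The deadlock pictured on the cover is the classic hazard here; a minimal sketch (ours, not the book's) shows how locking the forks in a fixed global order avoids it:

```cpp
#include <algorithm>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

std::mutex forks[5];   // one fork (mutex) between each pair of philosophers

void philosopher(int id) {
    // Naive code locks "left then right" and can deadlock in a cycle.
    // Locking the lower-numbered fork first imposes a global order,
    // so a circular wait (and thus deadlock) cannot occur.
    int left = id, right = (id + 1) % 5;
    int first = std::min(left, right), second = std::max(left, right);
    for (int meal = 0; meal < 3; ++meal) {
        std::lock_guard<std::mutex> a(forks[first]);
        std::lock_guard<std::mutex> b(forks[second]);
        std::printf("philosopher %d eats\n", id);
    }
}

int main() {
    std::vector<std::thread> diners;
    for (int i = 0; i < 5; ++i) diners.emplace_back(philosopher, i);
    for (std::thread &t : diners) t.join();
}
```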
|
<urn:uuid:0702ee58-3ba7-4543-92ed-3a7f9f13f603>
|
CC-MAIN-2013-20
|
http://everything2.com/title/Modern+Operating+Systems
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132729/warc/CC-MAIN-20130516113532-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.89873 | 160 | 3 | 3 |
An Introduction to Data Structures with C++
The web site provides an introduction to data structures, an essential field in computer programming that deals with various methods of storing and retrieving data in computer memory. The concepts are demonstrated using C++ but can be applied to other programming languages as well. It is expected that the user have some background knowledge of C++, although a brief review is provided. The site covers linear structures such as lists, stacks, and queues. Tutorials on binary trees, heaps, sorting, and searching algorithms are also available. Implementation, discussion of efficiency, and applications are included.
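As a flavor of the linear structures covered, a minimal linked-node stack (our sketch, not the site's tutorial code):

```cpp
#include <stdexcept>

// A stack: last in, first out. Push and pop both touch only the
// top node, so each operation runs in O(1).
class IntStack {
    struct Node { int value; Node *next; };
    Node *top = nullptr;
public:
    void push(int v) { top = new Node{v, top}; }
    int pop() {
        if (top == nullptr) throw std::underflow_error("empty stack");
        Node *n = top;
        int v = n->value;
        top = n->next;
        delete n;
        return v;
    }
    bool empty() const { return top == nullptr; }
    ~IntStack() { while (!empty()) pop(); }
};
```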
Alec, Parsippany High School, Parsippany, NJ, United States
Tom, Piedmont Academy, Covington, GA, United States
Antonio, Parsippany High School, Lake Hiawatha, NJ, United States
19 & under
Eric Berkowitz, Parsippany High School, Parsippany, NJ, United States
|
<urn:uuid:22ce9b6a-e597-4dae-85f6-cf72db901924>
|
CC-MAIN-2013-20
|
http://thinkquest.org/pls/html/f?p=52300:100:3745085596971571::::P100_TEAM_ID:501576697
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00032-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.910625 | 244 | 2.84375 | 3 |
Computer Science Research Focus Areas
- The Engagement and Collaboration in Human-Robotic Interaction group works with Melvin, a humanoid robot connected to several computers running various kinds of artificial intelligence software, to make his interaction with humans totally autonomous.
- Professor Neil Heffernan’s development of ASSISTments, a web-based system that blends tutoring assistance and performance assessment, is revolutionizing the way the nation’s schools teach the subjects of math and science.
- The Database Systems Research Group develops database, data mining, and data visualization techniques to detect and explore patterns in massive data streams in real time. Targeted uses include fraud detection, medical tracking, and emergency management.
- A recent study co-authored by computer science Department Head/Professor Craig Wills finds that existing and proposed safeguards against leakage and linking of private information currently being used by popular websites are inadequate.
- Associate Professor Rob Lindeman's research into virtual reality is intended to enhance the gaming experience by letting players not only see and hear artificial worlds but also touch, taste, and smell them.
- With the explosion of the mobile device market, mobile computing is more powerful and popular than ever. Associate Professor Emmanuel Agu’s research focuses on the design and performance evaluation of wireless data link and transport protocols.
- Professor Mark Claypool, a noted expert in the field of network gaming, focuses his work on the area of the effect of latency on online gaming, and how system settings such as frame rate, resolution, and graphics settings influence game play.
- The Software Engineering Research Group applies principles of mathematics, engineering, and business to software and systems development. Their research includes software process, agile software development, and requirements management.
- The Robot Autonomy and Interactive Learning group (RAIL) develops interactive robotic and software systems. The work, led by Professor Sonia Chernova, aims to provide everyday people with the ability to customize the functionality of autonomous devices.
At WPI, we believe that the best way to learn is by taking a hands-on approach, which is why students in the Computer Science program at WPI are introduced to research techniques early and often.
Faculty members, undergraduate students, and graduate students are integral to cutting-edge research under way in core computer science areas such as computer intelligence, applications, and performance.
Our groundbreaking research is supported by agencies such as the National Science Foundation, the National Institutes of Health, the U.S. Department of Education, U.S. Army, Office of Naval Research, National Security Agency, IBM, and Microsoft.
Making Sense of Data Streams in Real Time
Elke Rundensteiner, professor of computer science, is developing novel techniques for extracting information from large-scale distributed databases in real time. Her work makes it possible to find meaning in enormous volumes of constantly changing data.
|
<urn:uuid:f765576d-7a0b-47b1-9ce6-218d1357031c>
|
CC-MAIN-2013-20
|
http://www.wpi.edu/academics/cs/research.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702957608/warc/CC-MAIN-20130516111557-00093-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.92895 | 587 | 2.609375 | 3 |
I figure that there would be two components to the system, a server, and a database.
A requirement is that the server be written in C++. I'm not sure exactly what that entails. I suppose it's writing server code to deal with listening to ports, sockets, or some form of communication through the website. Also, I'm guessing (from some googling) that the physical server (tower, hardware) will store the webpages.
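For concreteness, "listening to ports" with POSIX sockets looks roughly like the sketch below (illustrative only, with most error handling omitted):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);        // create a TCP socket
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);        // any local interface
    addr.sin_port = htons(8080);                     // listen on port 8080
    if (bind(fd, (sockaddr *)&addr, sizeof(addr)) != 0) return 1;
    listen(fd, 16);                                  // queue up to 16 clients
    int client = accept(fd, nullptr, nullptr);       // block until one connects
    const char *reply = "HTTP/1.0 200 OK\r\n\r\nhello\r\n";
    write(client, reply, strlen(reply));             // send a tiny response
    close(client);
    close(fd);
}
```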
Where I get lost is in figuring out what are the subsystems of this system. That is, what goes into making this website project's architecture. This is how I break it down. I'm not sure if it's correct or even in the right direction.
Database: Contains tables, for
--user data (username, pass, bio, transaction history)
--product data (user who posted, description, etc)
--transaction data (history, users involved, etc)
Server: Deals with everything else
--stores server code (server.cpp, for example) to deal with communication requests (website stuff)
--stores code for an automated system (registration confirmation emails, emails to users, and the like)
--communicates with the database
I'm not sure if I have the components for each subsystem correct. How is an automated system involved? Is the flow as follows: Webpages are loaded from server, server sends data from database to website, user sees website and all the data?
Thanks in advance. All I need is a fundamental understanding of this web-development architecture. I will continue to search online for help. Again, thanks in advance.
Edit: Sorry, didn't notice there was a web-development board. Instead of reposting, I will assume a moderator will move this topic to the appropriate board. Again, sorry for my lack of scrolling.
|
<urn:uuid:794abbc9-f34a-4d93-ad0d-9f50fc819e16>
|
CC-MAIN-2013-20
|
http://www.dreamincode.net/forums/topic/138696-web-architectureflow/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00083-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.92747 | 403 | 2.65625 | 3 |
Chapter 1. Introduction
There's no doubt about it: Software is expensive. The United States alone devotes at least $250 billion each year to application development of approximately 175,000 projects involving several million people. For all of this investment of time and money, though, software's customers continue to be disappointed, because over 30 percent of the projects will be
The demand for software also continues to rise. The developed economies rely to a large extent on software for telecommunications, inventory control, payroll, word processing and typesetting, and an ever-widening set of applications. Only a
There's no end in sight. A Star Trek world of tiny communications devices, voice-recognition software, vast searchable databases of human (for the moment, anyway) knowledge, sophisticated computer-controlled sensing devices, and
These techniques have undoubtedly improved productivity, but as we bring more powerful tools to bear to solve more difficult problems, the
MDA takes the ideas of raising the levels of abstraction and reuse up a
Raising the Level of Abstraction
The history of software development is a history of raising the level of abstraction. Our industry used to build systems by soldering wires together to form hard-wired programs. Machine code let us store programs by manipulating switches to enter each instruction. Data was stored on drums whose rotation time had to be taken into account so that the head would be able to read the
At some point, programming languages, such as FORTRAN, were born and "formula translation" became a reality. Standards for COBOL and C enabled portability among hardware platforms, and the profession developed techniques for structuring programs so that they were easier to write, understand, and maintain. We now have languages like Smalltalk, C++, Eiffel, and Java, each with the notion of object-orientation, an approach for structuring data and behavior together into classes and objects.
As we moved from one language to another,
Over time, however, the new
As the profession has raised the level of abstraction at which developers work, we have developed tools to map from one layer to the next automatically. Developers now write in a high-level language that can be mapped to a lower-level language automatically, instead of writing in the lower-level language that can be mapped to assembly language, just as our predecessors wrote in assembly language and had that translated automatically into machine language.
Clearly, this forms a pattern: We formalize our knowledge of an application in as high a level a language as we can. Over time, we learn how to use this language and apply a set of conventions for its use. These conventions become formalized and a higher-level language is born that is mapped automatically into the lower-level language. In turn, this next-higher-level language is perceived as low level, and we develop a set of conventions for its use. These
The next level of abstraction is the move, shown in Figure 1-1, to model-based development, in which we build software-platform-independent models.
Figure 1-1. Raising the level of abstraction
Software-platform independence is analogous to hardware-platform independence. A hardware-platform-independent language, such as C or Java, enables the writing of a specification that can execute on a variety of hardware platforms with no change. Similarly, a software-platform-independent language enables the writing of a specification that can execute on a variety of software platforms, or software architecture designs, with no change. So, a software-platform-independent specification could be mapped to a multiprocessor/multitasking CORBA environment, or a client-server relational database environment, with no change to the model.
In general, the organization of the data and processing
Raising the level of abstraction changes the platform on which each layer of abstractions depends. Model-based development relies on the construction of models that are independent of their software platforms, which include the likes of CORBA, client-server relational database environments, and the very structure of the final code.
|
<urn:uuid:307b656c-4e57-4b8c-9786-8e5d28452401>
|
CC-MAIN-2013-20
|
http://flylib.com/books/en/2.862.1.13/1/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705926946/warc/CC-MAIN-20130516120526-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.937648 | 831 | 3.15625 | 3 |
Write an application that reads five numbers between 1 and 30. For each number that is read, your program should display the same number of adjacent asterisks. For example, if your program reads the number 7, it should display *******.
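One possible solution sketch (assuming console input; out-of-range values are simply skipped):

```cpp
#include <iostream>
#include <string>

int main() {
    for (int k = 0; k < 5; ++k) {
        int n;
        std::cin >> n;
        if (n < 1 || n > 30) continue;             // skip invalid input
        std::cout << std::string(n, '*') << "\n";  // n adjacent asterisks
    }
}
```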
A multifunction printer generally includes several devices. Choose one answer: a. a CRT monitor; b. all of the above; c. a scanner; d. a mouse; e. a QWERTY keyboard. Which of these is both an input and an output device? Choose one answer: a. A sensor...
""Healthcare companies, like ABC Healthcare, that operate as for-profit entities are facing a multitude of challenges. The regulatory environment is becoming more restrictive, viruses and worms are growing more pervasive and damaging, and ABC Heathcare s stakeholders are...
I can only afford 20 tops.
What is the main area in an IT department that operates the centralized computer equipment and administers it?
what conditions are required for 2nf violation to occur?
how silicon-based semiconductors revolutionized computing.
6.2.17 see attachment
6.1.6 see attachments
- 1. Identify and describe Trust/Security Domain boundaries that may be applicable to personal computer (workstation) security in a business context.
2. This is a C++ codelab question.
- The "origin" of the cartesian plane in math is the point where x and y are both zero. Given a variable, origin of type Point-- a structured type with two fields, x and y, both of type double, write one or two statements that make this variable's field's values consistent with the mathematical notion of "origin".
- Assume two variables p1 and p2 of type POINT, with two fields, x and y, both of type double, have been declared. Write a statement that reads values for p1 and p2 in that order. Assume that values for x always precede y.
- In mathematics, "quadrant I" of the cartesian plane is the part of the plane where x and y are both positive. Given a variable, p that is of type POINT-- a structured type with two fields, x and y, both of type double-- write and expression that is true if and only the point represented by p is in "quadrant I".
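One possible set of answers to the three Point exercises above, sketched together (the type and field names come from the prompts; everything else is our assumption):

```cpp
#include <iostream>

struct Point { double x, y; };   // the structured type the questions describe

int main() {
    Point origin, p1, p2, p;
    // 1. Make "origin" consistent with the mathematical origin.
    origin.x = 0.0;
    origin.y = 0.0;
    // 2. Read p1 then p2, with x always preceding y.
    std::cin >> p1.x >> p1.y >> p2.x >> p2.y;
    // 3. Expression that is true iff p lies in quadrant I.
    p = p1;
    bool inQuadrantI = (p.x > 0 && p.y > 0);
    std::cout << std::boolalpha << inQuadrantI << "\n";
}
```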
|
<urn:uuid:372c1239-750f-4e27-ae98-773625e6181e>
|
CC-MAIN-2013-20
|
http://www.coursehero.com/tutors/problems/Computer-Science/19961/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703326861/warc/CC-MAIN-20130516112206-00096-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.909317 | 608 | 3.53125 | 4 |
by Ed Yourdon
Computer systems contain both hardware and software. Hardware is any tangible item in a computer system, like the system unit, keyboard, or printer. Software, or a computer program, is the set of instructions that direct the computer to perform a task. Software falls into one of two categories: system software and application software. System software controls the operation of the computer hardware, whereas application software enables a user to perform tasks. Three major types of application software on the market today for personal computers are word processors, electronic spreadsheets, and database management systems (Little and Benson 10-42).
A word processing program allows a user to efficiently and economically create professional looking documents such as memoranda, letters, reports, and resumes. With a word processor, one can easily revise a document. To improve the accuracy of one’s writing, word processors can check the spelling and the grammar in a document. They also provide a thesaurus to enable a user to add variety and precision to his or her writing. Many word processing programs also provide desktop publishing features to create brochures, advertisements, and newsletters.
An electronic spreadsheet enables a user to organize data in a fashion similar to a paper spreadsheet. The difference is the user does not have to perform calculations manually; electronic spreadsheets can be instructed to perform any computation desired. The contents of an electronic spreadsheet can be easily modified by the user. Once the data is modified, all calculations in the spreadsheet are recomputed automatically. Many electronic spreadsheet packages also enable a user to graph the data in his or her spreadsheet (Wakefield 98-110).
A database management system (DBMS) is a software program that allows a user to efficiently store a large amount of data in a centralized location. Data is one of the most valuable resources of any organization. For this reason, users want data to be organized and readily accessible in a variety of formats. With a DBMS, a user can easily store, retrieve, modify, and analyze data, and create a variety of reports from it (Aldrin 25-37).
Many organizations today have all three of these types of application software packages installed on their personal computers. Word processors, electronic spreadsheets, and database management systems make users’ tasks more efficient. When users are more efficient, the company as a whole operates more economically and efficiently.
|
<urn:uuid:fc5eae7a-08fb-47be-99e9-6976a1f0921f>
|
CC-MAIN-2013-20
|
http://techgamesblog.com/application-software/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708835190/warc/CC-MAIN-20130516125355-00035-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.91077 | 728 | 3.390625 | 3 |
CS 211 — Data Structures
Welcome to the CS211 homepage. In this course, we are going to introduce you to the world of abstract data structures, which form the heart of our discipline. We will be learning about the classic data structures, from simple linear structures to various types of trees and graphs. We will look at how to use them, how to implement them, and how to chose between them.
This website will serve as the definitive guide to what is going on in the class. All lectures, assignments, notifications, and policies can be found here — check back often.
Course Handouts & Links
- Assignment one — Solitaire Encryption
- Assignment two — Stacks & Queues
- Assignment three — A little complexity — solution
- Assignment four — Sorting out sorting
- Assignment five — Add it up
- Assignment six — Keep your priorities straight
- Assignment seven — Huffman Tree
- Assignment eight — Huffman Tree Part II
- Assignment nine — Red Black Trees
- Assignment ten — A simple database
- Sorting Algorithms
|
<urn:uuid:1ba43ffb-2dcb-43f6-9c49-e03be57c8b04>
|
CC-MAIN-2013-20
|
http://web.cs.mtholyoke.edu/~candrews/cs211/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708690512/warc/CC-MAIN-20130516125130-00063-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.839121 | 220 | 3.03125 | 3 |
I will explain the specifications for input and output devices, and what input devices need in order for their data to be transferred to the relevant output device. Here are the various items I will be looking at:

* Central processing unit
* Operating system
* Applications software

Central Processing Unit (CPU):

In PCs and laptops there is a main processing chip called the processor, otherwise known as the central processing unit (CPU). This unit handles the instructions from the computer program and processes the data. Here are three items which would be found in the CPU:

* Control Unit
* Arithmetic/Logic Unit
* Main Storage

The control unit ensures that all the other components, including the input and output devices, carry out their functions efficiently and effectively. The control unit is electronically linked to all the components, so that it will be able to detect whether a required device or component is connected and ensure that any required device will be able to carry out instructions. The CPU needs to be able to arrange and display an error
|
<urn:uuid:bd5ce6c2-0fe0-425f-8252-4616be74dc56>
|
CC-MAIN-2013-20
|
http://www.markedbyteachers.com/as-and-a-level/computer-science/i-will-explain-what-the-specifications-are-for-an-input-and-output-device-i-will-explain-what-input-devices-need-to-be-to-be-transferred-to-the-relevant-output-device.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700626424/warc/CC-MAIN-20130516103706-00085-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.872562 | 490 | 3.328125 | 3 |
Linux Cluster Architecture, Adobe Reader
Cluster computers provide a low-cost alternative to multiprocessor systems for many applications. Building a cluster computer is within the reach of any computer user with solid C programming skills and a knowledge of operating systems, hardware, and networking. This book leads you through the design and assembly of such a system, and shows you how to measure and tune its overall performance.
A cluster computer is a multicomputer, a network of node computers running distributed software that makes them work together as a team. Distributed software turns a collection of networked computers into a distributed system. It presents the user with a single-system image and gives the system its personality. Software can turn a network of computers into a transaction processor, a supercomputer, or even a novel design of your own.
Some of the techniques used in this book's distributed algorithms might be new to many readers, so several of the chapters are dedicated to such topics. You will learn about the hardware needed to network several PCs, the operating system files that need to be changed to support that network, and the multitasking and the interprocess communications skills needed to put the network to good use.
Finally, there is a simple distributed transaction processing application in the book. Readers can experiment with it, customize it, or use it as a basis for something completely different.
259 pages; ISBN 9780768662412
|
<urn:uuid:e0073526-ad34-45ad-a2d2-82d2ab53ecff>
|
CC-MAIN-2013-20
|
http://www.ebooks.com/226886/linux-cluster-architecture-adobe-reader/vrenios-alex/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702718570/warc/CC-MAIN-20130516111158-00008-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.895566 | 409 | 2.984375 | 3 |
Great Ideas In Computer Science, Second Edition
In Great Ideas in Computer Science: A Gentle Introduction, Alan Biermann presents the "great ideas" of computer science that together comprise the heart of the field. He condenses a great deal of complex material into a manageable, accessible form. His treatment of programming, for example, presents only a few features of Pascal and restricts all programs to those constructions. Yet most of the important lessons in programming can be taught within these limitations. The student's knowledge of programming then provides the basis for understanding ideas in compilation, operating systems, complexity theory, noncomputability, and other topics. Whenever possible, the author uses common words instead of the specialized vocabulary that might confuse readers.
Readers of the book will learn to write a variety of programs in Pascal, design switching circuits, study a variety of Von Neumann and parallel architectures, hand simulate a computer, examine the mechanisms of an operating system, classify various computations as tractable or intractable, learn about noncomputability, and explore many of the important issues in artificial intelligence.
This second edition has new chapters on simulation, operating systems, and networks. In addition, the author has upgraded many of the original chapters based on student and instructor comments, with a view toward greater simplicity and readability.
About the Author
Alan W. Biermann is Professor of Computer Science at Duke University. He is also the author of the first two editions of Great Ideas in Computer Science (MIT Press, 1990, 1997).
|
<urn:uuid:eae2ad71-09f2-42ca-a956-c283c9cabb80>
|
CC-MAIN-2013-20
|
http://mitpress.mit.edu/books/great-ideas-computer-science
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698090094/warc/CC-MAIN-20130516095450-00017-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.906455 | 310 | 3.40625 | 3 |
The three elements of Social Systems Informatics are embodied in its name. Social refers to an emphasis on interactions with others, as opposed to individual-level behavior, within their physical, social, and virtual environments. Systems refers to distinct but interacting components of a social structure that are working together (or not). Informatics refers to the science of gathering, storing, and processing data. There are opportunities to collect vast amounts of data pertaining to social interactions within and across systems, such as internet communications or video and audio images, but we currently have limited computational methods that can be applied to study these interactions.
|
<urn:uuid:ad6218e7-fe6c-4551-bb42-8e9bc442bada>
|
CC-MAIN-2013-20
|
http://ccs.miami.edu/?page_id=512
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700563008/warc/CC-MAIN-20130516103603-00083-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.95111 | 122 | 2.953125 | 3 |
Koffman, Elliot B. / Wolfgang, Paul A. T.
Objects, Abstraction, Data Structures and Design
1. Edition November 2005
2005. 832 Pages, Softcover
ISBN 978-0-471-46755-7 - John Wiley & Sons
E-books are also available from all major e-book shops.
This book combines a strong emphasis on problem solving and software design with the study of data structures. After providing the specification and implementation of an abstract data type, the authors cover case studies that use the data structure to solve a significant problem. In the implementation of each data structure and in the solutions of the case studies, they reinforce the message "Think, then code" by performing a thorough analysis of the problem and then carefully designing a solution. Readers gain an understanding of why different data structures are needed, the applications they are suited for, and the advantages and disadvantages of their possible implementations.
From the contents
Chapter P. A C++ Primer.
Chapter 1. Introduction to Software Design.
Chapter 2. Program Correctness and Efficiency.
Chapter 3. Inheritance and Class Hierarchies.
Chapter 4. Sequential Containers.
Chapter 5. Stacks.
Chapter 6. Queues and Deques.
Chapter 7. Recursion.
Chapter 8. Trees.
Chapter 9. Sets and Maps.
Chapter 10. Sorting.
Chapter 11. Self-Balancing Search Trees.
Chapter 12. Graphs.
Appendix A: Advanced C++ Topics.
Appendix B: Overview of UML.
Appendix C: The CppUnit Test Framework.
|
<urn:uuid:5608cf79-62e8-4a62-9467-003e6af0e19d>
|
CC-MAIN-2013-20
|
http://www.wiley-vch.de/publish/en/books/bySubjectCS00/bySubSubjectCSZ0/0-471-46755-3/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699036375/warc/CC-MAIN-20130516101036-00009-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.737219 | 345 | 2.59375 | 3 |
Book Description: A modern introduction to computers and computing, covering both the "how" and "why" of computers. Presents traditional computer concepts in an applied and relevant format, and discusses the use of computers by future managers and computer professionals. Incorporates tutorials on DOS, spreadsheet programs, word processing, and database management. Contains numerous problems and a comprehensive appendix on programming in BASIC.
|
<urn:uuid:9c1704e2-b4b6-4b97-91db-ae2a93e902fc>
|
CC-MAIN-2013-20
|
http://www.campusbooks.com/books/childrens-books/computers/software/9780471518495_Robert-A-M-Stern-Nancy-Stern_Computing-With-EndUser-Applications-and-Bas.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00055-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.873604 | 79 | 2.796875 | 3 |
Marius Watz: Drawing machine
Every day millions of people worldwide take part in the greatest revolution in publishing since the invention of Gutenberg’s printing press. Not that any of them pay it much heed. The idea of hypertext was first articulated in 1965, when Ted Nelson proposed an associative way of describing relationships between separate pieces of information. The utopian vision was to connect all published documents in a giant network of information. Today the World Wide Web is well on its way to become such a network, and hypertext is so ubiquitous that hardly anyone knows what it is.
Connecting separate documents by creating hyperlinks between them is a revolution in terms of the possibilities it offers to both writers and readers to connect and navigate a large selection of texts. But it is when millions of texts are connected over the Internet that the real magic occurs. A new space of pure information is created, with a topology given by hyperlinks. This space is virtual, electronic and hypertextual, yet its geography nonetheless appears real to users.
Odin is a public electronic information service for the Norwegian Government and Ministries of state. All statements from the Government, news from the Ministries etc. are published here - over 50 000 documents at the present time. Odin is visited by thousands of users every month.
Odin is a natural choice for a test project for art in public digital spaces. The large number of documents and users makes Odin the digital equivalent of a major public building with an important public-access function. The challenge is to find a type of art project that will provide an extra dimension to this space without getting in the way of its main function, namely the distribution of information.
Like all web sites Odin is in a state of constant development. New documents are added, navigational structures are updated and even the visual presentation of documents change over time. A project for this space must be able to adapt to a dynamically changing environment.
Drawing machines as art for public
"Drawing machine 1-12" is an art project that develops over time, continuously changing over the two years that the project will be online on the Odin web site. It consists of 12 drawing machines in the form of software that run on the Odin servers. Each machine draws a single picture over the scope of two months, and after 24 months all the machines will have drawn one picture each.
I use the term “drawing machine" to denote a virtual machine with a set of rules determining how it moves and draws in a virtual space. In reality the drawing machine is a piece of software. In contrast to commercial software that is used in the production of art, where the software is a tool used by the artist to produce a finished work, the drawing machine creates its images without interaction with the artist. The task of the artist is to “construct" the machine so that it creates aesthetically satisfying images, but once the machine is set in motion the artist is reduced to spectator.
Each drawing machine is based on different principles of movement and drawing strategies, and moves over a 2-dimensional surface. The dimension of this surface is set conceptually to 2x2 meters so as to have a physical reference for size. The machines work only with a local intelligence and do not relate to the global composition of the image. In this way the global image is created through local movement, a bottom-up process where complexity is created from simple rules. The rules are clearly defined, but utilise randomness constantly so that the image is unique each time the machine is run.
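The bottom-up principle described here can be illustrated with a toy drawing machine (our sketch, in no way Watz's actual software): a walker with purely local rules whose global image emerges from many small random steps.

```cpp
#include <cstdio>
#include <random>

int main() {
    const int W = 40, H = 20;
    char canvas[H][W];
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) canvas[y][x] = '.';

    std::mt19937 rng(std::random_device{}());       // fresh image on every run
    std::uniform_int_distribution<int> dir(0, 3);
    int x = W / 2, y = H / 2;                       // start in the middle
    for (int step = 0; step < 400; ++step) {
        canvas[y][x] = '*';                         // local rule: mark, then
        switch (dir(rng)) {                         // take one random step
            case 0: if (x > 0) --x; break;
            case 1: if (x < W - 1) ++x; break;
            case 2: if (y > 0) --y; break;
            case 3: if (y < H - 1) ++y; break;
        }
    }
    for (int row = 0; row < H; ++row)
        std::printf("%.*s\n", W, canvas[row]);      // print the global image
}
```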
To exploit the aspect of a micro / macro duality between global and local composition, and to make sure that the image that is shown on Odin is noticeably different every day, two images are rendered for each day: One image of the micro level, showing the area on the surface where the drawing machine is currently drawing, and an image of the macro level showing the whole surface.
Users of Odin encounter the project as a visual element showing an excerpt from the daily micro-level image, placed in conjunction with the menu navigation on the page. By clicking on this image the user can see the complete image, as well as navigate the archive of images that have already been drawn. The user can also see animations of the drawing machines in Windows Media format, making it possible to watch an animated version of how the image has developed.
About art in public digital spaces
"Drawing machine 1-12" is the product of a pilot project where the National Foundation for Art in Public Buildings wanted to examine the possibility of placing art in public digital spaces, as well as what kind of projects would be suited to this kind of space. The process of developing the piece has been marked by this, and has gone through a number of changes before finding its final form.
The Norwegian painter Olav Christopher Jenssen was a conceptual partner early in the project. He contributed substantially to the idea of a virtual drawing machine, and his suggestion of using a surface with a real physical dimension was an important key to finding good solutions for drawing and composition.
An art project for a public space must always take into account the limitations of the space in which it exists. As a major web site Odin naturally has a number of security measures. Instead of generating HTML on the fly from a database, the pages are generated as static pages, limiting the use of dynamically changing images. In most cases a project for a public space is created for a space that is still being planned, so the project can influence the development of the space. Odin is an already existing space with established conventions; hence “Drawing machine 1-12" has been adapted to these conventions.
In working on the project I chose to create a piece of software that did not require network access or have other technical features that would compromise security on Odin. The choice of presentation on the actual web pages was made so as to integrate optimally with the existing visual design. Potential aspects of the piece, such as interactivity or solutions that would affect the navigation, structure or visual design were passed over in favor of a solution that is self-contained, time-based and well suited for presentation on Odin.
The consequence of these decisions is that the project does not utilise the potential of a web site as an interactive space. Nor is it as tightly integrated to the structure and visual presentation of Odin as one might wish. I see this as a positive challenge for future projects, where the interactive nature of the Web can be explored more fully.
Future projects could also explore the duality between physical and digital spaces by bridging them – a digital piece could have physical manifestations or events in physical space could influence a digital work that is displayed online. The possibilities are many, and art in public digital space shows much potential.
The artist wishes to thank:
Olav Christopher Jenssen
Everyone at Odin, Lava and the National Foundation for Art in Public Buildings who have helped with the realisation of the project
Due credit should be given to the vision of Stig Andersen, who in 1998 as director of the National Foundation for Art in Public Buildings asked if such a project could be possible
Marius Watz, juni 2003
|
<urn:uuid:a1521adc-b978-482f-b84a-1d7cef8afd2b>
|
CC-MAIN-2013-20
|
http://www.unlekker.net/dm1-12/index_e.php?textid=0
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701638778/warc/CC-MAIN-20130516105358-00080-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.942814 | 1,467 | 2.65625 | 3 |
Auto Multiple Choice (AMC) is a piece of software that helps you to create and manage multiple choice questionnaires, with automated marking. Tests can be written in plain text or LaTeX. Automated correction and grading is performed from scans of the answer sheets using optical mark recognition.
Shelk-test is a program for creating tests for students. It consists of just three modules: creation, testing, and reporting. Shelk-test can insert images into all types of questions, making it possible to create a question such as "What is shown in the picture?" All tests are stored centrally in a SQLite database, increasing reliability and letting you keep a single database file without having to install a database server.
FroZenLight interrelates line arts, mathematics, and cryptography. Circular shaped mirrors which are arranged in a grid-like manner reflect a light ray according to the reflection law of geometric optics. While random positions of the light source produce chaotic reflection patterns, it is possible to position the light source so that beautiful symmetric reflection patterns are created.
RobotMinds is a simulation of a tournament in which programmable robots compete. Each robot's objective is to find its way out of a maze to its home tile. The robots have sensors and can act on what they sense. Toxic tiles and radiation from other robots will destroy a robot that is exposed to them for too long. The robots can be programmed by way of four screens of checkboxes representing binary switches, so you can program a robot with no knowledge of any programming languages. You can lay walls or full maps to restrict movement.
Open Allure plays interactive text-to-speech scripts fetched from blogs, wikis, or local text files. As part of the interaction, it can call a Web browser to display Web pages, opening the possibility of text-to-speech voice-overs that span multiple Web sites (for providing tours, giving instructions, etc.). Voice quality and language depend on what is available from the OS via StaticSay on Windows, Say on Mac OS X, or eSpeak on Linux.
Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features tight integration with numpy, transparent use of a GPU, efficient symbolic differentiation, speed and stability optimizations, dynamic C code generation, and extensive unit-testing and self-verification. Theano has been powering large-scale computationally intensive scientific investigations since 2007. But it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).
The PARSEC (Preliminary Analysis of Revolutionary Space Exploration Concepts) CEE (Collaborative Engineering Environment) creates a single-user interface for engineers and scientists to work together to design launch vehicle and spacecraft concepts. The interface allows for seamless integration of design tools for any discipline as well as communication with other team members. Data storage and maintenance is handled automatically. The interface gives users the ability to run multiple design codes and iterative analyses. Branching and other logic operations are also supported. Some data reduction ability is provided as well.
Dr. Higgins will teach you and help you teach yourself any language. It works like a quiz, asking you the translations of words and keeping score of your correctness. It includes a number of quizzes for self-study in different languages (English, Spanish, Finnish, and a bit of Japanese). It's very easy to add your own lists, thanks to a simple file format.
|
<urn:uuid:fb8133cf-96ca-4d53-b441-4b7ce2ba615f>
|
CC-MAIN-2013-20
|
http://freecode.com/tags/mac-os-x?page=1&sort=created_at&with=838&without=1451
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706472050/warc/CC-MAIN-20130516121432-00018-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.903937 | 721 | 2.640625 | 3 |
A type of computing in which different components and objects comprising an application can be located on different computers connected to a network. So, for example, a word processing application might consist of an editor component on one computer, a spell-checker object on a second computer, and a thesaurus on a third computer. In some distributed computing systems, each of the three computers could even be running a different operating system.
One of the requirements of distributed computing is a set of standards that specify how objects communicate with one another. There are currently two chief distributed computing standards: CORBA and DCOM.
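As a hedged illustration of the idea (not CORBA or DCOM themselves), the sketch below uses Python's standard xmlrpc module to place a spell-checker-style component on one machine and call it from another; the host name, port and word list are placeholders.

# Component host: exposes a toy spell-checking service on the network.
from xmlrpc.server import SimpleXMLRPCServer

def check_word(word):
    # Stand-in for a real spell-checker component.
    return word.lower() in {"editor", "thesaurus", "distributed"}

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(check_word, "check_word")
server.serve_forever()

The word-processing machine then calls the remote component as though it were local:

from xmlrpc.client import ServerProxy

speller = ServerProxy("http://spellcheck-host.example.com:8000")
print(speller.check_word("Thesaurus"))  # True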
1992-96 Working Papers
The typesetting language TEX [Knuth (1984)] is now available on a range of computers from mainframes to micros. It has an unequalled ability to typeset mathematical text, with its many formatting features and fonts. TEX has facilities for drawing diagrams using the packages tpic and PiCTEX. This paper describes a TEX preprocessor written in Pascal which allows a programmer to embed diagrams in TEX documents. These diagrams may involve straight or curved lines and labelling text. The package is provided for people who either do not have access to tpic or PiCTEX, or who prefer to program in Pascal.
A computer program has been written which composes blues melodies to fit a given backing chord sequence. The program is comprised of an analysis stage followed by a synthesis stage. The analysis stage takes blues tunes and produces zero, first and second order Markov transition tables covering both pitches and rhythms. In order to capture the relationship between harmony and melody, a set of transition tables is produced for each chord in the analysed songs. The synthesis stage uses the output tables from analysis to generate new melodies; second order tables are used as much as possible, with fall back procedures, to first and zero order tables, to deal with zero frequency problems. Some constraints are encoded in the form of rules to control the placement of rhythmic patterns within measures, pitch values for long duration notes and pitch values for the start of new phrases. A listening experiment was conducted to determine how well the program captures the structure of blues melodies. Results showed that listeners were unable to reliably distinguish human from computer composed melodies.
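As a toy illustration of the analysis/synthesis split described above (a sketch of the general technique, not the authors' program), a first-order pitch transition table can be built and sampled in a few lines; the example melody and the zero-frequency fallback are simplified stand-ins.

import random
from collections import defaultdict

def build_table(melody):
    """Analysis: count which pitch follows which (first-order Markov)."""
    table = defaultdict(lambda: defaultdict(int))
    for a, b in zip(melody, melody[1:]):
        table[a][b] += 1
    return table

def generate(table, start, length, rng=random.Random(0)):
    """Synthesis: walk the table, falling back when a pitch has no followers."""
    note, out = start, [start]
    for _ in range(length - 1):
        followers = table.get(note)
        if not followers:                   # zero-frequency fallback
            note = rng.choice(list(table))
        else:
            pitches, weights = zip(*followers.items())
            note = rng.choices(pitches, weights=weights)[0]
        out.append(note)
    return out

blues = ["C", "Eb", "F", "F#", "G", "Bb", "C", "Bb", "G", "F", "Eb", "C"]
print(generate(build_table(blues), "C", 8))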
Existing computer supported co-operative work (CSCW) systems for group communication typically require some amount of keyboard input, and this may limit their usefulness. A voice input prototype system for asynchronous (time separated transactions) group communication (AGC) with simulated conversion to text was developed and an experiment constructed to investigate if advantages over conventional keyboard input computer conferencing were possible for the information exchange task. Increases in words used and facts disclosed were higher for voice input compared to text input, which implies that voice input capability could be advantageous for future asynchronous group communication systems supporting information exchange.
The information content of each successive note in a piece of music is not an intrinsic musical property but depends on the listener's own model of a genre of music. Human listeners' models can be elicited by having them guess successive notes and assign probabilities to their guesses by gambling. Computational models can be constructed by developing a structural framework for prediction, and "training" the system by having it assimilate a corpus of sample compositions and adjust its internal probability estimates accordingly. These two modeling techniques turn out to yield remarkably similar values for the information content, or "entropy," of the Bach chorale melodies.
While previous research has concentrated on the overall information content of whole pieces of music, the present study evaluates and compares the two kinds of model in fine detail. Their predictions for two particular chorale melodies are analyzed on a note-by-note basis, and the smoothed information profiles of the chorales are examined and compared. Apart from the intrinsic interest of comparing human with computational models of music, several conclusions are drawn for the improvement of computational models.
As graduate programs in Computer Science grow and mature and undergraduate populations stabilize, an increasing proportion of our resources is being devoted to the training of researchers in the field. Many inefficiencies are evident in our graduate programs. These include undesirably long average times to thesis completion, students' poor work habits and general lack of professionalism, and the unnecessary duplication of having supervisors introduce their students individually to the basics of research. Solving these problems requires specifically targeted education to get students started in their graduate research and introduce them to the skills and tools needed to complete it efficiently and effectively.
We have used two different approaches in our respective departments. One is a (half-) credit course on research skills; the other a one-week intensive non-credit “survival course” at the beginning of the year. The advantage of the former is the opportunity to cover material in depth and for students to practice their skills; the latter is much less demanding on students and is easier to fit into an existing graduate program.
Speech understanding systems (SUS's) came of age in late 1971 as a result of a five year development programme instigated by the Information Processing Technology Office of the Advanced Research Projects Agency (ARPA) of the Department of Defense in the United States. The aim of the programme was to research and develop practical man-machine communication systems. It has been argued since, that the main contribution of this project was not in the development of speech science, but in the development of artificial intelligence. That debate is beyond the scope of this paper, though no one would question the fact that the field to benefit most within artificial intelligence as a result of this programme is natural language understanding. More recent projects of a similar nature, such as projects in the United Kingdom's ALVEY programme and Europe's ESPRIT programme have added further developments to this important field.
This paper presents a review of some of the natural language processing techniques used within speech understanding systems. In particular, techniques for handling syntactic, semantic and pragmatic information are discussed. They are integrated into SUS's as knowledge sources.
The most common application of these systems is to provide an interface to a database. The system has to perform a dialogue with a user who is generally unknown to the system. Typical examples are train and aeroplane timetable enquiry systems, travel management systems and document retrieval systems.
Technology, in the form of personal computers, is making inroads into everyday life in every part of every nation. It is frequently assumed that this is 'a good thing'. However, there is a need for the people in each cultural group in each nation to appropriate technology for themselves. Indigenous people, such as the Maori of New Zealand/Aotearoa, are in danger of losing their language because technology has a European face. Yet despite the fact that the Maori are currently experiencing a cultural renaissance, there are no commercially available products that are specifically designed for Maori-speaking people.
The apparent divergence between the research paradigms of text and image compression has led us to consider the potential for applying methods developed for one domain to the other. This paper examines the idea of "lossy" text compression, which transmits an approximation to the input text rather than the text itself. In image coding, lossy techniques have proven to yield compression factors that are vastly superior to those of the best lossless schemes, and we show that this is also the case for text. Two different methods are described here, one inspired by the use of fractals in image compression. They can be combined into an extremely effective technique that provides much better compression than the present state of the art and yet preserves a reasonable degree of match between the original and received text. The major challenge for lossy text compression is identified as the reliable evaluation of the quality of this match.
Textual image compression is a method of both lossy and lossless image compression that is particularly effective for images containing repeated sub-images, notably pages of text (Mohiuddin et al., 1984; Witten et al.). The process comprises three main steps:
1. Extracting all the characters from an image;
2. Building a library that contains one representative for each character class;
3. Compressing the image with respect to the library.
The architecture for an optimistic, highly parallel, scalable, shared memory CPU - the WarpEngine - is described. The WarpEngine CPU allows for parallelism down to the level of single instructions and is tolerant of memory latency. Its design is based around time stamping executable instructions and all memory accesses. The TimeWarp algorithm [Jefferson 1985, 1989] is used for managing the time stamps and synchronisation. This algorithm is optimistic and requires that all computations can be rolled back. The basic functions required for implementing the control and memory system used by TimeWarp are described.
The WarpEngine memory model presented to the programmer, is a single linear address space which is modified by a single thread of execution. Thus, at the software level there is no need for locks or other explicit synchronising actions when accessing the memory. The actual physical implementation, however, is multiple CPUs with their own caches and local memory with each CPU simultaneously executing multiple threads of control.
Reads from memory are optimistic, that is, if there is a local copy of a memory location it is taken as the current value. However, sometimes there will be a write with an earlier time stamp in transit in the system. When it arrives it causes the original read and any dependent calculations to be re-executed.
The proposed instruction set is a simple load-store scheme with a small number of op-codes and fixed width instructions. To achieve latency tolerance, instructions wait until their arguments are available and then dispatch the result to (a small number of) other instructions. The basic unit of control is a block of (up to 16) instructions. Each block, when executed, is assigned a unique time stamp and all reads and writes from within that block use that time stamp. Blocks are dispatched into the future so that multiple blocks can be simultaneously active.
Many techniques have been developed for abstracting, or “learning,” rules and relationships from diverse data sets, in the hope that machines can help in the often tedious and error-prone process of acquiring knowledge from empirical data. While these techniques are plausible, theoretically well-founded, and perform well on more or less artificial test data sets, they stand or fall on their ability to make sense of real-world data. This paper describes a project that is applying a range of learning strategies to problems in primary industry, in particular agriculture and horticulture. We briefly survey some of the more readily applicable techniques that are emerging from the machine learning research community, describe a software workbench that allows users to experiment with a variety of techniques on real-world data sets, and detail the problems encountered and solutions developed in a case study of dairy herd management in which culling rules were inferred from a medium-sized database of herd information.
Survival of the species vs survival of the individual
R. H. Barbour, K. Hopper
This paper examines the relationships between human and computing entities. It develops the biological ethical imperative towards survival into a study of the forms inherent in human beings and implied in computer systems. The theory of paradoxes is used to show that a computer system cannot in general make a self-referential decision. Based upon this philosophical analysis it is argued that human and machine forms of survival are fundamentally different. Further research into the consequences of this fundamental difference is needed to ensure the diversity necessary for human survival.
Data transformation: a semantically-based approach to function discovery
Thong H. Phan, Ian H. Witten
This paper presents the method of data transformation for discovering numeric functions from their examples. Based on the idea of transformations between functions, this method can be viewed as a semantic counterpart to the more common approach of formula construction used in most previous discovery systems. Advantages of the new method include a flexible implementation through the design of transformation rules, and a sound basis for rigorous mathematical analysis to characterize what can be discovered. The method has been implemented in a discovery system called "LIMUS," which can identify a wide range of functions: rational functions, quadratic relations, and many transcendental functions, as well as those that can be transformed to rational functions by combinations of differentiation, logarithm and function inverse operations.
The architecture of an optimistic CPU: The WarpEngine
John G. Cleary, Murray Pearson, Husam Kinawi
The architecture for a shared memory CPU is described. The CPU allows for parallelism down to the level of single instructions and is tolerant of memory latency. All executable instructions and memory accesses are time stamped. The TimeWarp algorithm is used for managing synchronisation. This algorithm is optimistic and requires that all computations can be rolled back. The basic functions required for implementing the control and memory system used by TimeWarp are described. The memory model presented to the programmer is a single linear address space modified by a single thread of control. Thus, at the software level there is no need for explicit synchronising actions when accessing memory. The physical implementation, however, is multiple CPUs with their own caches and local memory with each CPU simultaneously executing multiple threads of control.
Providing integrated support for multiple development notations
John C. Grundy, John R. Venable
A new method for providing integrated support for multiple development notations (including analysis, design, and implementation) within Information Systems Engineering Environments (ISEEs) is described. This method supports both static integration of multiple notations and the implementation of dynamic support for them within an integrated ISEE. First, conceptual data models of different analysis and design notations are identified and modelled, which are then merged into an integrated conceptual data model. Second, mappings are derived from the integrated conceptual data model, which translate data changes in one notation to appropriate data changes in the other notations. Third, individual ISEEs for each notation are developed. Finally, the individual ISEEs are integrated via an integrated data dictionary based on the integrated conceptual data model and mappings. An environment supporting integrated tools for Object-Oriented Analysis and Extended Entity-Relationship diagrams is described, which has been built using this technique.
Proceedings of the First New Zealand Formal Program Development Colloquium
This volume gathers together papers presented at the first in what is planned to be a series of annual meetings which aim to bring together people within New Zealand who have an interest in the use of formal ideas to enhance program development.
Throughout the world work is going on under the headings of "formal methods", "programming foundations", "formal software engineering". All these names are meant to suggest the use of soundly-based, broadly mathematical ideas for improving the current methods used to develop software. There is every reason for New Zealand to be engaged in this sort of research and, of growing importance, its application.
Formal methods have had a large, and growing, influence on the software industry in Europe, and lately in the U.S.A. it is being seen as important. An article in September's "Scientific American" (leading with the Denver Airport debacle) gives an excellent overview of the way in which these ideas are seen as necessary for the future of the industry. Nearer to home and more immediate are current speculations about problems with the software running New Zealand's telephone system.
The papers in this collection give some idea of the sorts of areas which people are working on in the expectation that other people will be encouraged to start work or continue current work in this area. We also want the fact that this works is going on to be made known to the New Zealand computer science community at large.
We present an approach to the design of complex logic ICs, developed from four premises.
First, the responsibilities of a chip's major components, and the communication between them, should be separated from the detailed implementation of their functionality. Design of this abstract architecture should precede definition of the detailed functionality.
Secondly, graphic vocabularies are most natural for describing abstract architectures, by contrast with the conventional textual notations for describing functionality.
Thirdly, such information as can be expressed naturally and completely in the idiom of the abstract architecture should be automatically translated into more complex, lower-level vocabulary.
Fourthly, the notations can be integrated into a single, consistent design-capture and synthesis system.
PICSIL is a preliminary implementation of a design environment using this approach. It combines an editor and a synthesis driver, allowing a design's abstract architecture to be created using a graphical notation based on Data Flow Diagrams and state machines, and its functionality to be designed using a more conventional textual hardware description language. On request, it also translates a design into appropriate input for synthesis software, and controls the operation of that software, producing CIF files suitable for fabrication.
Thus computer systems become appropriate for ab initio design production rather than post facto design capture.
ATM has now been widely accepted as the leading contender for the implementation of broadband communications networks (Brinkmann, Lavrijsen, Louis, et al., 1995). ATM networks are no longer restricted to research laboratories, and commercial products such as switches and interfaces manufactured by well known computer and communications companies have started to appear in the market place. The main advantage seen in ATM over other broadband networking technologies such as Synchronous Transfer Mode (STM) is its ability to transmit a wide variety of traffic types, including voice, data and video, efficiently and seamlessly.
Data compression is an eminently pragmatic pursuit: by removing redundancy, storage can be utilised more efficiently. Identifying redundancy also serves a less prosaic purpose-it provides cues for detecting structure, and the recognition of structure coincides with one of the goals of artificial intelligence: to make sense of the world by algorithmic means. This paper describes an algorithm that excels at both data compression and structural inference. This algorithm is implemented in a system called SEQUITUR that efficiently deals with sequences containing millions of symbols.
This document reports on an investigation conducted between November, 1995 and March, 1996 into the use of machine learning on 14 sets of data supplied by agricultural researchers in New Zealand. Our purpose here is to collect together short reports on trials with these datasets using the WEKA machine learning workbench, so that some understanding of the applicability and potential application of machine learning to similar datasets may result.
We gratefully acknowledge the support of the New Zealand agricultural researchers who provided their datasets to us for analysis so that we could better understand the nature and analysis requirements of the research they are undertaking, and whether machine learning techniques could contribute to other views of the phenomena they are studying. The contribution of Colleen Burrows, Stephen Garner, Kirsten Thomson, Stuart Yeates and James Littin and other members of the Machine Learning Group in performing the analyses was essential to the completion of this work.
The approach of combining theories learned from multiple batches of data provide an alternative to the common practice of learning one theory from all the available data (i.e., the data combination approach). This paper empirically examines the base-line behaviour of the theory combination approach in classification tasks. We find that theory combination can lead to better performance even if the disjoint batches of data are drawn randomly from a larger sample, and relate the relative performance of the two approaches to the learning curve of the classifier used.
The practical implication of our results is that one should consider using theory combination rather than data combination, especially when multiple batches of data for the same task are readily available.
Another interesting result is that we empirically show that the near-asymptotic performance of a single theory, in some classification task, can be significantly improved by combining multiple theories (of the same algorithm) if the constituent theories are substantially different and there is some regularity in the theories to be exploited by the combination method used. Comparisons with known theoretical results are also provided.
One powerful technique for supporting creativity in design is analogy: drawing similarities between seemingly unrelated objects taken from different domain. A case study is presented in which fractal images serve as a source for novel crochet lace patterns. The human designer searches a potential design space by manipulating the parameters of fractal systems, and then translates portions of fractal forms to lacework. This approach to supporting innovation in design is compared with previous work based on formal modelling of the domain with generative grammars.
Many problems encountered when applying machine learning in practice involve predicting a “class” that takes on a continuous numeric value, yet few machine learning schemes are able to do this. This paper describes a “rational reconstruction” of M5, a method developed by Quinlan (1992) for inducing trees of regression models. In order to accommodate data typically encountered in practice it is necessary to deal effectively with enumerated attributes and with missing values, and techniques devised by Breiman et al. (1984) are adapted for this purpose. The resulting system seems to outperform M5, based on the scanty published data that is available.
Identifying hierarchical structure in sequences: a linear-time algorithm
Craig G. Nevill-Manning, Ian H. Witten
This paper describes an algorithm that infers a hierarchical structure from a sequence of discrete symbols by replacing phrases which appear more than once by a grammatical rule that generates the phrase, and continuing this process recursively. The result is a hierarchical representation of the original sequence. The algorithm works by maintaining two constraints: every digram in the grammar must be unique, and every rule must be used more than once. It breaks new ground by operating incrementally. Moreover, its simple structure permits a proof that it operates in space and time that is linear in the size of the input. Our implementation can process 10,000 symbols/second and has been applied to an extensive range of sequences encountered in practice.
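The digram-uniqueness idea can be illustrated with a deliberately simplified, non-incremental sketch: repeatedly replace the most frequent repeated digram with a new rule. The real SEQUITUR maintains its two constraints in a single linear-time pass, which this toy version does not attempt.

from collections import Counter

def most_common_digram(seq):
    counts = Counter(zip(seq, seq[1:]))
    digram, n = counts.most_common(1)[0]
    return digram if n > 1 else None

def infer_grammar(sequence):
    """Batch digram replacement: a simplified cousin of SEQUITUR."""
    rules, seq = {}, list(sequence)
    while (digram := most_common_digram(seq)) is not None:
        name = "R%d" % len(rules)
        rules[name] = list(digram)
        out, i = [], 0
        while i < len(seq):                 # replace non-overlapping matches
            if tuple(seq[i:i + 2]) == digram:
                out.append(name)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

print(infer_grammar("abcabcabc"))
# (['R2', 'R1'], {'R0': ['a', 'b'], 'R1': ['R0', 'c'], 'R2': ['R1', 'R1']})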
Dataset cataloging metadata for machine learning applications and research
Sally Jo Cunningham
As the field of machine learning (ML) matures, two types of data archives are developing: collections of benchmark data sets used to test the performance of new algorithms, and data stores to which machine learning/data mining algorithms are applied to create scientific or commercial applications. At present, the catalogs of these archives are ad hoc and not tailored to machine learning analysis. This paper considers the cataloging metadata required to support these two types of repositories, and discusses the organizational support necessary for archive catalog maintenance.
Timestamp representations for virtual sequences
John G. Cleary, J. A. David McWha, Murray Pearson
The problem of executing sequential programs optimistically using the Time Warp algorithm is considered. It is shown how to do this by first mapping the sequential execution to a control tree and then assigning timestamps to each node in the tree.
For such timestamps to be effective they must be finite; this implies that they must be periodically rescaled to allow old timestamps to be reused. A number of timestamp representations are described and compared on the basis of their complexity, the frequency and cost of rescaling, and the cost of performing basic operations, including comparison and creation of new timestamps.
Teaching students to critically evaluate the quality of Internet research resources
Sally Jo Cunningham
The Internet offers a host of high-quality research material in computer science-and, unfortunately, some very low quality resources as well. As part of learning the research process, students should be taught to critically evaluate the quality of all documents that they use. This paper discusses the application of document evaluation criteria to WWW resources, and describes activities for including quality evaluation in a course on research methods.
OzCHI'96 Workshop on the Next Generation of CSCW Systems
John Grundy - Editor
This is the Proceedings of the OZCHI'96 Workshop on the Next Generation of CSCW Systems. Thanks must go to Andy Cockburn for inspiring the name of the workshop and thus giving it a (general) theme! The idea for this workshop grew out of discussions with John Venable concerning the Next Generation of CASE Tools workshop which he'd attended in 1995 and 1996. With CSCW research becoming more prominent within the CHI community in Australasia, it seemed a good opportunity to get people together at OZCHI'96 who share this interest. Focusing the workshop on next-generation CSCW system issues produced paper submissions which explored very diverse areas of CSCW, but which all share a common thread of “Where do we go from here?”, and, perhaps even more importantly “Why should be doing this?”
Reconstructing Minard's graphic with the Relational Visualisation Notation
Matthew C. Humphrey
Richly expressive information visualisations are difficult to design and rarely found. Few software tools can generate multi-dimensional visualisations at all, let alone incorporate artistic detail. The Relational Visualisation Toolkit is a new system for specifying highly expressive graphical representations of data without traditional programming. We seek to discover the accessible power of this notation: both its graphical expressiveness and its ease of use. Towards this end we have used the system to design and reconstruct Minard's visualisation of Napoleon's Russian campaign of 1812. The resulting image is very similar to the original, and the design is straightforward to construct. Furthermore, the design is sufficiently general to be able to visualise Hitler's WWII defeat before Moscow.
Selecting multiway splits in decision trees
Eibe Frank, Ian H. Witten
Decision trees in which numeric attributes are split several ways are more comprehensible than the usual binary trees because attributes rarely appear more than once in any path from root to leaf. There are efficient algorithms for finding the optimal multiway split for a numeric attribute, given the number of intervals in which it is to be divided. The problem we tackle is how to choose this number in order to obtain small, accurate trees.
We view each multiway decision as a model and a decision tree as a recursive structure of such models. Standard methods of choosing between competing models include resampling techniques (such as cross-validation, holdout, or bootstrap) for estimating the classification error, and minimum description length techniques. However, the recursive situation differs from the usual one, and may call for new model selection methods.
This paper introduces a new criterion for model selection: a resampling estimate of the information gain. Empirical results are presented for building multiway decision trees using this new criterion, and compared with criteria adopted by previous authors. The new method generates multiway trees that are both smaller and more accurate than those produced previously, and their performance is comparable with standard binary decision trees.
Melody transcription for interactive applications
Rodger J. McNab, Lloyd A. Smith
A melody transcription system has been developed to support interactive music applications. The system accepts monophonic voice input ranging from F2 (87 Hz) to G5 (784 Hz) and tracks the frequency, displaying the result in common music notation. Notes are segmented using adaptive thresholds operating on the signal's amplitude; users are required to separate notes using a stop consonant. The frequency resolution of the system is ±4 cents. Frequencies are internally represented by their distance in cents above MIDI note 0 (8.176 Hz); this allows accurate musical pitch labeling when a note is slightly sharp or flat, and supports a simple method of dynamically adapting the system's tuning to the user's singing. The system was evaluated by transcribing 100 recorded melodies (10 tunes, each sung by 5 male and 5 female singers) comprising approximately 5000 notes. The test data was transcribed in 2.8% of recorded time. Transcription error was 11.4%, with incorrect note segmentation accounting for virtually all errors. Error rate was highly dependent on the singer: one group of four singers had error rates ranging from 3% to 5%, while error rates for the remaining six singers ranged from 11% to 23%.
This last ACM meeting we got to have some real talk with Dr. Baas as he explained to us, “What is Computer Science really?” The way he described it was algorithms! Although simple in principle, all of computer science involves the analysis and solution of problems through algorithms. Of course, he also talked about the limitations of computers. One example is the inability to detect whether a program will terminate or continue on infinitely (the famous halting problem). But to give somewhat of an idea of what Computer Science truly encompasses, we have the following list:
- Algorithms and Complexity
- Architecture and Organization
- Computational Science
- Discrete Structures
- Graphics and Visual Computing
- Human-Computer Interaction
- Information Assurance and Security
- Information Management
- Intelligent Systems
- Networking and Communications
- Operating Systems
- Platform-based Development
- Parallel and Distributed Computing
Yeah, there is a lot. But to help guide you on your way to learning more about computer science, here is a website that Dr. Baas highly recommended to us for future use.
Well it appears we have run out of space, but there is still so much more content that is available at the meetings themselves, so partake of the weekly communal or you shall find yourself a quadrate! Until next time, ACM out!
The advancement in science and technology is touching new horizons every day. From laptops to smartphones to computers, scientists and developers are putting in their best efforts to provide better facilities to mankind.
In today’s era, where all kinds of people use computing devices, system crashes are a major factor degrading the user experience. Scientists and researchers at University College London have been trying to find a solution to this problem for a long time, and that work has led to the creation of a self-healing computer that will never crash.
The Systemic Computer, developed by computer scientist Dr. Peter Bentley and UCL research engineer Christos Sakellariou, mimics the chaos of nature to repair itself. This computer is based on the concept of systemic computing, which is clearly reflected in its name. It completes its tasks not necessarily sequentially but using processes that are “distributed, decentralized and probabilistic” in nature. Presently, all computers work sequentially, executing one instruction before going on to the next: an instruction is fetched from memory, executed, and finally the results are placed back in memory. The systemic computer, by contrast, combines the data and the instructions on what to do with that data into systems.
This self-repairing machine consists of a pool of systems that interact in parallel; the result of a computation simply emerges from those interactions. The systems are executed at times chosen by a pseudorandom number generator, designed to mimic nature’s randomness. Multiple copies of the instructions are distributed across various systems, so that if one of the systems is corrupted the computer can access another clean copy to repair its code. When conventional systems crash, they cannot access even a bit of memory, but the systemic computer carries on regardless of crashes because each system has its own memory.
The systemic computer could give a remarkable performance in several fields: in military drones, to reprogram themselves and cope with damage sustained in combat, and in creating more realistic models of the human brain. The scientists are even working on teaching the systems to rewrite their own code in response to changes in the environment. In this way, the combination of this self-learning ability with the redundant and pseudorandom nature of the system will make it somewhat similar to the human mind.
Apart from these, systemic computer can also be used in military robotics, swarm robotics and as mission critical servers. It’s incredibly fast and stable in comparison to the systems of the present world.
The main concern now is that this is not the first time a crash-proof system has been introduced, and earlier systems failed for various reasons. So, let's see whether it works well only on paper or in practice too. If it does, then the next generation of kids will have to invent other creative excuses for failing to complete their work on time.
Relevance of this week's material
By building your own web-based information system you will discover a lot of things about computers, software and communications over the Internet. We have all seen Web pages, but you will experience how they are created and how they get transported onto a Web server where they can be viewed by everybody. The wonder of networking is that you can access other computers remotely, log in to them, query them and ask them for services. The additional material directly explains and demonstrates the software and the skills that you will need to complete your assessment exercises this week. You will:
- experience the WWW at a slightly more detailed level that will enable you to understand the underlying technologies and terms that you may only have heard of before now.
- experience a computer system that has a completely different operating system to the good ole' Windows or MacOS that you may be used to.
- experience the creation of a Web site and operation of a Web server
- experience a range of communication software and "protocols" that allow us to send information over the Internet
First let's catch up on the business world of the Systems Analyst
We will view a few of these slides to put you into the picture - there are more than we need for this introduction but feel free to read through them to get more depth. Here are the slides.
It's all about communication
How do we communicate? What are the rules?
- Face to face (multi-channel)
- Broadcast vs. conversation (one-to-many, one-to-one, many-to-one, many-to-many)
- In person but at a distance (sight only)
- Over a telephone (sound only)
- email (text only)
- Deaf people (other senses)
- Blind people (other senses)
How do computers communicate? What kind of human communication is it most like?
1. (computer science) rules determining the format and transmission of data [syn: communications protocol]
2. (people) forms of ceremony and etiquette observed by diplomats and heads of state
3. (organisations) code of correct conduct; "safety protocols"; "academic protocol"

Source: WordNet ® 2.0, © 2003 Princeton University
- A set of formal rules describing how to transmit data, especially across a network. Low level protocols define the electrical and physical standards to be observed, bit- and byte-ordering and the transmission and error detection and correction of the bit stream. High level protocols deal with the data formatting, including the syntax of messages, the terminal to computer dialogue, character sets, sequencing of messages, etc. Many protocols are defined by RFCs or by OSI.

Source: The Free On-line Dictionary of Computing, © 1993-2005 Denis Howe
What is the difference between the Internet and the Web?
The Internet is a planet wide collection of interconnected networks used for computer communication. Networks are made up of transmission media (cables, radio waves, fibreoptics, IR, etc), network communication devices (telephones, modems, routers, hubs, switches, etc) and computers. It can be divided into private (intranets, extranets) and public parts.
Every device that is connected to the Internet is uniquely identified (hardware identification) and has a unique address (IP address).
Communication over the Internet can be broken down into a hierarchy of perspectives. Each perspective or layer has a set of rules that govern its own communication and has a set of requirements and services.
- Human to human (web pages, emails, file transfers, chat, messaging, newsfeeds)
- Computer to computer (end-to-end communication)
- Communication device to communication device (point-to-point or link)
- Type of communication medium being used (microwave, radio, fibreoptic, wires, etc)
These perspectives are reflected in a network reference model called the TCP/IP network model.
Application layer
- Web browsers, email clients, file transfer clients, remote terminal clients, etc

Transport layer
- End-to-end transmission of messages
- Breaking messages into individually identified chunks that are manageable
- Message reconstruction

Network (Internet) layer
- Point-to-point transmission of chunks between network "nodes"
- Breaking messages into individually identified "packets" within an allowable size range
- Reconstructing chunks

Physical layer
- Bits and bytes converted to frequencies, voltages, currents, phases, pulses, etc
Each perspective relies on a range of communication and processing rules, called protocols to manage the effective and efficient transfer of information from one person/computer/device to another over the Internet.
The protocols that you are most likely to use every day relate to the different application programs that you use to transfer information back and forth. These are programs like browsers, chat and email clients. A very short selection appears in the list below:

- HTTP (HyperText Transfer Protocol): a set of rules for sending and receiving Web pages
- SMTP (Simple Mail Transfer Protocol): a set of rules for sending email messages from one email server to another
- POP (Post Office Protocol): a set of rules for transferring mail between an email client (Outlook, Pegasus, etc.) and an email server (the email equivalent of a Post Office)
- FTP (File Transfer Protocol): a set of rules for transferring files between computers
- Instant Messaging and Presence Protocols: a set of rules for sending instant messages between messaging clients and for signalling online presence
This site gives an exhaustive range of Internet related protocols.
So what is the difference between the Internet and the Web?
The Web or World Wide Web (WWW or W3) is the largest document repository in the world; it is a system for storing, transmitting, linking and viewing Web pages and operates under the HTTP protocol.
Internet Protocol Demonstrations
The Internet Protocol (IPv4) has an addressing system that assigns a unique IP address to each computer on the Internet. An IP address looks like 184.108.40.206 and each of the four numbers separated by dots can be between 0 and 255, giving 256 possible values each. That means that there are 256x256x256x256 = 4294967296 (about 4.3 billion) unique IP addresses available. A quick think will reveal that there are not enough IP addresses to go around each of the devices that are connected to the Internet in the world today. IPv6 allows a lot more addresses, and there are various ways that many computers inside corporate firewalls can share just a few IP addresses. IP addresses are pretty hard for people to remember, so we use handles called URLs or Uniform Resource Locators. To you and me that is a Web address like: www.gbcycles.co.uk or www.victoriassecret.com
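Both the address-space arithmetic and the DNS lookup behind every URL can be reproduced in a couple of lines of Python; the host name is just an example and the address returned will vary.

import socket

# DNS lookup: the same URL-to-IP translation that ping and the browser do.
print(socket.gethostbyname("www.victoriassecret.com"))  # address varies

# Four dotted numbers, each with 256 possible values (0..255):
print(256 ** 4)  # 4294967296 - about 4.3 billion IPv4 addresses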
1) IP: There is a utility called "ping" that allows us to "ping" a computer - to see if a computer connected to the Internet is currently "awake" or accessible. To use it, first ensure that Netcheck is on so that you can get information through the university firewall, then go to Start -> Programs -> Accessories -> Command Prompt. At the C:> prompt type the word ping followed by a web address that you know (or try one of the above), eg: ping www.victoriassecret.com
What is it doing? The first thing that happens is that the URL is transformed into its equivalent IP address by looking it up in a Domain Name Service (DNS). Next the Ping program sends 4 small packets of data to the remote host (web server in this case) requesting that they be sent back immediately. The round trip time is recorded and averaged. A ping packet only has a limited time to live and will time-out after a fixed number of milliseconds or after too many hops to intermediate hosts on the network. "Ping" basically lets you know that the computer is on and it is accessible via the network.
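A true ICMP ping needs raw sockets, and hence administrator rights, so as a stand-in the sketch below asks the same practical question at the application level: can a TCP connection be opened to the host? (The port and timeout are arbitrary choices.)

import socket

def reachable(host, port=80, timeout=2.0):
    """Rough, application-level substitute for ping: try a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(reachable("www.victoriassecret.com"))  # True if the web server answers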
2) TCP: There is a related utility called "tracert" that traces the route that packets take to get to the final destination. There are several different routes that a packet could take to get from one computer to the destination computer, with many intermediate steps in between. If a particular host is not responding, or if the transmission times are too long at a particular time, then packets may be rerouted through another host so that they can continue their journey. It is possible to run tracert twice and get two different results. At the C:> prompt type the word tracert followed by a web address that you know, eg: tracert www.victoriassecret.com
What is it doing? Again the URL is transformed into its equivalent IP address and then a program or device called a "router" determines the next host (which may be another router) to send the packet to. At each new host the packet is delivered to a router with its destination address and return address. Each host from start to destination sends back its IP address and some packet timing information. In this way one can tell the currently most efficient or effective route from one computer to another.
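Writing traceroute from scratch also needs raw sockets, but the operating system's own tool can be driven from a script; this sketch simply shells out to it (Windows ships tracert, while Unix-like systems usually need traceroute installed).

import platform
import subprocess

def trace(host):
    """Run the OS route-tracing tool and print the hops it reports."""
    tool = "tracert" if platform.system() == "Windows" else "traceroute"
    result = subprocess.run([tool, host], capture_output=True, text=True)
    print(result.stdout)

trace("www.victoriassecret.com")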
The Application Layer
Remote Access using Telnet or PuTTY
We all use Windows in the labs and on GU computers. We authenticate to Novell to prove that we are a registered student or staff member and then we are allowed to use that computer for as long as we need. During that time there is no one else logged into that computer, only you. This is what is referred to as a "single user system". We have sole access to the computing power and resources of that machine while we are logged in.
Unix and Linux and some other operating systems are "multi-user" systems. Accounts can be created for a number of users and they can simultaneously access the computer's processing capabilities and storage. Users are usually remote from the actual computer so they must have a remote means of interacting with the operating system. The system is also multitasking, meaning that several processes or programs can run at the same time by sharing the computer's processing time between tasks (and users). This process is called process "scheduling".
Telnet and PuTTY allow us to remotely access a multi-user machine using a text-based interface. That means that we must execute actions using a command line. So long as we stay within our designated "home" directory we can do just about anything with that computer.
FTP - File transfer to and from a remote computer
File transfer allows us to do just that: transfer files between two computers that we have access to. The usual configuration is a local and a remote computer. Often, a developer will be creating files on a local machine and then later "uploading" them to a remote computer that provides some service like publishing web sites. If editing an existing site for which there is no local copy then the files that make up the site may be "downloaded" to the local computer.
From the local FTP client perspective, uploading is referred to as "putting" or executing a PUT operation. Downloading from the remote to the local machine is called "getting" or executing a GET operation. There are a range of FTP operations that allow transfer between text-based interfaces and they include navigating between directories on local and remote computers and defining the type of data that is being transferred so that optimal settings may be used.
File transfer clients like WinSCP3 implement the FTP file transfer protocol as outlined in the TCP/IP suite.
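Python's standard ftplib speaks plain FTP, which is enough to show the two basic operations; note that dwarf expects the secure variant (SFTP, as used by WinSCP), which would need a third-party library such as paramiko, and the host, account and file names below are placeholders.

from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login(user="s123456", passwd="secret")
    ftp.cwd("public_html")                      # navigate on the remote side

    # PUT: upload a local file to the remote server.
    with open("index.html", "rb") as f:
        ftp.storbinary("STOR index.html", f)

    # GET: download a remote file to the local machine.
    with open("copy-of-index.html", "wb") as f:
        ftp.retrbinary("RETR index.html", f.write)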
Creating a basic Web page - HTML in the raw!
Online HTML computer labs and resources
Relationship between computers based on their roles
The key words in this discussion are "Servers", "Clients" and "Peers". On a network whether it is the world spanning Internet or just a small local area network (LAN) computers and their programs take on particular roles with relative levels of importance. A Peer-to-Peer network has computers that each assume the same level of importance: a Peer-to-Peer relationship. A Client-Server relationship is an unequal relationship where the server is far more powerful and has access to much faster communications to supply services to many usually lower-specced clients.
Peers: As the name suggests, computers that are peers in a network share the same level of importance and may be identical in function and utility. A Peer-To-Peer (P2P) network is made up of any number of computers that work together and share resources but none has any special role compared to the others. Imagine a handful of computers around an office or connected via the Internet that share hard disk space, a printer, and a set of communication programs to facilitate interaction.
Servers: This term has two different but related meanings. A server often refers to a type of computer that has hardware specifications that make it most efficient at network communication, hard drive access and has exceptional computational power. Server hardware is most often used to run software that provides a particular service over the Internet or the local intranet (even an extranet). Often a server provides its service to multiple requestors in a many to one relationship.
Software applications that may be considered a "service provider" or a "server" include things like a Web Server (HTTP), Email Server (POP, SMTP), Network News Server (NNTP), File Server (FTP) and other similar applications. Services can include things like access to databases, authenticating users, remote log ins and a range of other things. The key thing here is that a server accepts requests for the service that it specialises in and then performs the service. A Web Server, for example would accept a HTTP request for a particular web page and then send it to the requesting entity (client). In high traffic situations where there are many requests for service being executed in a short period of time the server hardware needs to be of highest specification. In extreme cases a single server/service may be executed by multiple computers at once providing what is termed a "server farm".
Because servers are the distributors and often storage points of important information they may be kept physically secured and are regularly maintained by people like Systems Programmers and Database Administrators.
Clients: Again, the term "client" can refer to computers at the hardware level or to the particular software that the computer runs. Client computers often take the form of workstations and their mode of operation is to communicate with servers to get the information and services that they need to complete user tasks. At GU all of the computers that reside in labs and on desktops in offices throughout the university are what would be considered clients. They have amongst their application programs various items of communication software that are "client" applications. You would not be surprised to find out that for each server application there is likely to be a client application. For example, a web client is a Web Browser (HTTP), an email client would be an application like MS Outlook (POP), a file transfer client would be an application like WinSCP (FTP) and a remote log-in client would be an application like PuTTY. All a client does is send requests for service to a server at the user's initiation and then receive and display the response to that request.
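The request/response pattern underneath all of these client-server pairs can be boiled down to a few lines of socket code; this toy "service" merely upper-cases whatever it is sent, and the port number is an arbitrary choice.

import socket

def serve(port=5000):
    """Server: wait for a request, perform the service, send the response."""
    with socket.socket() as srv:
        srv.bind(("", port))
        srv.listen()
        conn, _addr = srv.accept()         # handles one request, for brevity
        with conn:
            request = conn.recv(1024)
            conn.sendall(request.upper())  # the "service": shouting back

def ask(host="localhost", port=5000):
    """Client: send a request at the user's initiation, display the reply."""
    with socket.create_connection((host, port)) as c:
        c.sendall(b"hello server")
        print(c.recv(1024))                # b'HELLO SERVER'

Running serve() in one terminal and ask() in another plays out the whole exchange.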
Setting up a Web page on Dwarf
Dwarf is the student Web server and it is used so that students can create Web pages for viewing within the university. It is used for other things as well but that is what we are going to use it for. Dwarf runs a version of the Unix operating system so we are going to have to learn a little bit about Unix too.
Dwarf is physically located at the Nathan campus, which means that we have to remotely log in to it in order to perform any tasks. We can do this with a secure TTY application like PuTTY. A what? TTY is an abbreviation of the word "teletype" and means typing over a distance. Rather than learn a whole new language for using the Unix operating system on Dwarf, we will set up our websites using the secure FTP application called WinSCP3. Not only can we transfer our files to Dwarf, but we can also create directories and manage our security so that our web pages become accessible via a browser while on Dwarf.
The process for developing web pages is cyclic and progresses as follows:
1. Create HTML code locally (on a lab computer) using a text editor
2. Save it to the local disk (USB) or removable drive (*.html)
3. Test the webpage/website locally using a browser (refresh the browser)
4. Upload the finished webpage to your public_html directory on the remote Web server using secure FTP (WinSCP)
5. Make sure that all web pages, images and documents have permissions set to 644, and make sure that any subdirectories that you create have permissions of 705 or 755
6. Test the webpage on the remote server - dwarf.cit.griffith.edu.au/~s123456/ - using a browser (refresh the browser)
7. Edit the HTML code locally
8. Go to step 2
Browsers will save a copy of a recently visited file in a memory location called a cache. This is to save time if you are reaccessing a file that you have been to recently. A browser will look in the cache for a local copy first, then go to the website if it cannot find one. When repeatedly editing and testing a web page it is important to force the browser to go and get the most recent copy of the web page. This is done with the browser's "refresh" button.
Using secure FTP to transfer files from Windows to Dwarf
If the computer that you are using does not have a secure FTP (File Transfer Protocol) application then right click on this link and select "Save target as" to download a copy of WinSCP3.
Execute WinSCP3 from its icon and you will get a log-in dialog. Fill in the fields with the appropriate information and press the "Login" button to connect. The "Private Key File" field will gray out once you start to enter the password for your connection to Dwarf.
A window with two panes will open up. The left pane represents the file system on your PC and the right pane the file system on the remote server (dwarf). You can navigate by clicking on folder icons on either pane and move files between machines simply by dragging and dropping or by using the appropriate buttons at the bottom of the window. Similarly you can change drives on the local PC by using the drop down menu on the tool bar above the lefthand pane.
Basic (but really, really important) rules:
- Always save the file that you are editing (in Notepad) before you transfer the file to the server. If you don't save the file first then it is the previously saved version of the file that goes to the web site.
- Never edit your web page by accessing the code using the "view source" function on IE.
- Never "click" on files in the WinSCP windows expecting to access files on the remote computer - all you ever get is the local copy and really confused when it doesnt work properly.
- Never use Dreamweaver or Netscape Composer to create your web pages as they add code that we dont need for this exercise.
The Bottom Line - All you really need is WinSCP to manage your Web site
Many of the file management items that you would normally use a secure telnet application like PuTTY to complete can be done using the graphical interface of the WinSCP application. This is so much easier for Windows users - "Thank heavens" say a few hundred students. So how do you do the important stuff to get the website up and running?
Setting up your dwarf web area
- Run WinSCP3 and log on to your dwarf account
- Make sure that the left (local) window is showing the contents of the folder holding your current week's work on your USB drive or network drive – basically, where ever you have created your local files
- In the right window: Double click on the up arrow folder (top) on the dwarf (right) window to access the list of student home directories (this could take a few seconds)
- When the list appears in the right hand window, right click on your home directory ('s' followed by your student number) “s123456” and select “properties” from the drop down menu
- Make sure that your home directory has the permissions 0701 so that:
- Owner (you) has R W X permissions,
- Group has --- permissions
- Others (nobody = web server) has - - X
- Once that is done double click on your home directory to enter it and make sure that you have a “public_html” directory. If you don’t have one then create one using the “F7 Create directory” button or menu option and make sure that it is spelled exactly as above.
- When you do have a public_html directory, right click on it to make sure that it has permissions 0755.
- Owner (you) has R W X permissions,
- Group has R - X permissions
- Others (nobody = web server) has R - X
- Into the public_html directory on dwarf you should drag and drop all of the html, image and word documents that you have created on your USB or network drive – this should include your home page, which should be named “index.html”
- When all of the files have been transferred, set their permissions to 0644 (see the sketch after this list for what these octal values encode)
- Owner (you) has R W - permissions,
- Group has R - - permissions
- Others (nobody = web server) has R - -
- Fire up an IE session and point the browser at: http://dwarf.cit.griffith.edu.au/~s123456 so that you can see your index page; alternatively http://dwarf.cit.griffith.edu.au/~s123456/index.html will do the same, or http://dwarf.cit.griffith.edu.au/~s123456/myblog.html will bring up your blog page, and this will work with whatever html files you have.
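For reference, the octal values used above encode the owner/group/others permission bits. Here is a small Python sketch (a hypothetical helper, not part of the lab) that decodes a mode such as 0701, 0755 or 0644 into the familiar rwx string shown in WinSCP's properties dialog:

def rwx(mode):
    bits = "rwxrwxrwx"  # owner, group, others
    return "".join(b if mode & (1 << (8 - i)) else "-"
                   for i, b in enumerate(bits))

for label, mode in [("home directory", 0o701),
                    ("public_html", 0o755),
                    ("web pages", 0o644)]:
    print(label, oct(mode), rwx(mode))  # e.g. public_html 0o755 rwxr-xr-x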
Stick around for some simple examples of how to build web pages. You will be writing HTML in no time and having heaps of fun creating your websites. Just to get you started here is a basic outline for a web page. Get ready to take down some notes in the mass computer lab after the lecture/workshop.
<html>
<head>
<title>Text for title bar</title>
<!--No HTML code in the head of the document-->
</head>
<body>
<!--HTML comments don't appear in the browser-->
<!--In this section you can put tables, lists, images, links and everything that appears on your web page-->
</body>
</html>
Two Tier Web Architecture
This is most commonly known as a client-server architecture. Host computers such as dwarf provide web services to other computers through their Web server. On local computers such as those at home or in the labs you use a browser to make a request for a Web page stored on a remote computer. The server then responds to your request by sending a copy of the required file. The client level and the server level make up two tiers of the Web architecture. A third level or tier is added when the Web server requests data from a different server such as a database to make dynamic Web pages.
Each of the applications (IE, WinSCP, PuTTY, etc) uses a particular protocol or set of rules for the transmission of the data that it specialises in. When you use IE you are communicating with a remote Web server. When you use WinSCP you are communicating with a remote FTP server. Similar things happen for email (POP Mail server), instant messaging and news services.
1) The term "server" is most often used to refer to a computer (hardware device) that hosts the service. Server computers are different to the basic desktops and laptops that we see every day in that they have extremely fast hard drives and internal communications and have extremely efficient network connections to deal with multiple requests for service.
2) The term "server" is most correctly used to refer to a piece of software that runs on that computer. Web servers, email servers, FTP servers and all of the rest are programs that run continously waiting for and responding to requests for an information service. In actuality, server computers are powerful enough to run many different servers at the same time if required. This sort of makes sense as you need to access and FTP server to upload files to dwarf and a web server on the same machine to view the Web files that you have uploaded.
Local computers at home and in the labs take the role of "client" computers. In effect they run several different client programs to request email, web pages, FTP operations and many more.
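As a minimal illustration of the client side of this two-tier exchange, the following Python snippet issues an HTTP request and receives the server's response (the host name is just an example):

import urllib.request

# The browser does exactly this behind the scenes: ask the remote
# web server for a file, then receive a copy of it in the response.
with urllib.request.urlopen("http://example.com/index.html") as response:
    page = response.read()
    print(response.status, len(page), "bytes received")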
Source: http://www.ict.griffith.edu.au/teaching/1008ICT/mod2info.html
KPN Research, the Research Department of the largest Dutch Telecom Operator, is actively designing and testing out new broadband service concepts that make it easier for their customers to use the Internet.
Recently, KPN initiated a pilot project that would give users faster, easier access to web sites that they might not have known about. The project focused on the interface and services that a mobile webpad might have in the Living Room area of the home.
Their goal was to create a bookmark-based browser interface that could be tailored to the information and entertainment needs of every family member, yet was as easy to use as a television remote. It also needed to provide recommendations of interesting web sites based on the users' interests and surfing patterns.
The base technology for this product is a recommender system. Although quite a bit of literature is available about these systems, information about their strengths, weaknesses and actual field-tested potential is unavailable or very limited in scope.
The task of the recommendation engine server is to monitor user behaviour and generate recommendations of web pages for each user. This process consists of the following distinct parts: user profiling, content profiling and matching.
"Given the background of the programmers we might have used Java,
but we expected to process millions of URL's and we have enough
experience with Java to know that this would be too slow. C++ was
never an option given the short development time granted to us. Lisp
was the best solution."
-- Professor Jans Aasman
Manager, Homeservices Group
The user profiling process builds a user profile for every user of the system. This is constructed by combining explicit topic selection by the user and information gathered by analysing the user's surfing behaviour (by visiting or storing web pages belonging to certain topics, users implicitly show interests in these topics). The implicit data analysis is used to confirm the selections explicitly made by the user and to keep the profile up-to-date.
While user interests are described in terms of topics or categories, so are web pages.
Using advanced neural network document classification, web pages are assigned categories describing the page's topic or topics. Apart from the automatic document classification, the user gives input to the classification process as well. A user that places http://www.weather.com in the folder 'Weather' has automatically classified the page in the system as being about weather.
Matching user profiles and content profiles is relatively easy. Documents about classical music are recommended to classical music lovers while the latest news in tennis is only recommended to tennis fans.
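KPN's actual matching algorithm is not published here; purely as an illustration, matching topic-weighted user profiles against topic-weighted page profiles might use a cosine similarity along these lines (Python sketch, invented data):

import math

def cosine(profile, page):
    # Both arguments are dicts mapping topic -> weight.
    dot = sum(profile[t] * page.get(t, 0.0) for t in profile)
    norm = (math.sqrt(sum(v * v for v in profile.values())) *
            math.sqrt(sum(v * v for v in page.values())))
    return dot / norm if norm else 0.0

user = {"classical music": 0.9, "tennis": 0.4}
pages = {"http://www.weather.com": {"weather": 1.0},
         "http://opera.example": {"classical music": 0.8}}

# Recommend the pages whose topics best match the user's profile.
for url in sorted(pages, key=lambda u: -cosine(user, pages[u])):
    print(url, round(cosine(user, pages[url]), 2))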
KPN's web page recommender system was built in Common Lisp using Allegro CL, AllegroStore, AllegroServe and the Clementine neural network engine.
"We had about 6 months to build the recommendation engine with one AI-engineer and two programmers." says Professor Jans Aasman, the Manager of the Homeservices Group. "We also knew beforehand that we would have to experiment with several designs before we would settle for a final one."
"Given the background of the programmers we might have used Java, but we expected to process millions of URL's and we have enough experience with Java to know that this would be too slow. C++ was never an option given the short development time granted to us. Lisp was the best solution." Aasman adds.
KPN's system consists of both server-side and client-side software. Every user has a specialized client-side interface called "The Personal Browser", which allows for normal web surfing. The interface also contains a 'bookmark tree' of folders corresponding to the user's interests.
What makes these folders unique, is that they not only contain favorite websites that the user has bookmarked, but recommendations of new websites that the system has found as well. The user's client-side interface is a connection over the web with a recommendation engine server generating the recommendations based on each user's personal interests.
"Our solution is unique." says Alan Verberne, the project leader responsible for the recommendation engine. "First, we combined existing collaborative filtering techniques with advanced document classification software and detailed user monitoring. Secondly, we used hundreds of people to generate and validate an initial bookmark structure for a large number of categories and subcategories. Thirdly, the recommendation engine is integrated in a very user friendly personal browser that has been tested with real users in a number of pilots.
KPN used the AllegroStore object database to store the persistent objects that make up the application. The overall architecture consists of a complex bundle of persistent CLOS classes that reference each other in various ways. "We shudder at the thought of using a relational database for storing these objects," says Aasman.
AllegroServe was used to prefetch pages in the URL database and to easily program an editorial interface for the system. From this interface a human editor can alter various properties and parameters of the recommendation engine system. "During the development period we encountered some technical difficulties but as always, Franz was quick to provide help and offer solutions." Aasman adds.
Apart from KPN's own usage in a pilot, the software is currently being reviewed by a number of publishers who are interested in using it both for knowledge management purposes within large companies and for distributing their own content (including opt-in advertisements) to consumers.
Source: http://franz.com/success/customer_apps/knowledge_mgmt/kpn.lhtml
Our researchers are at the forefront of a new science that is finding ways in which computers can work intelligently in partnership with people. This could support the management of some of today's most challenging situations, such as the aftermath of major disasters.
"Emergency situations, such as earthquakes, floods and fires, are extremely chaotic, with new information coming in all the time and priorities constantly shifting," explains Professor Nick Jennings, who leads the University's Agents, Interactions and Complexity research group - the largest group of its kind in the world.
"Computers are much better than people at collecting and analysing large amounts of information," Nick continues. "In our previous research we have harnessed this to produce systems in which computers work together, share this information and reduce human error."
Nick is heading up the ORCHID project, which is taking the research a stage further. Putting humans back into the picture, ORCHID is looking at how people and computers can most effectively exchange information and work together.
The £10m project builds on the success of a five-year programme called ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks), which ended in 2010. ALADDIN's researchers designed a system of multiple agents working together to give an overall picture of an emergency situation as it unfolded.
The agents, in sensors, cameras and unmanned aerial vehicles, were programmed to collect and process data about the situation. Using techniques such as game theory, the agents negotiated with each other to arrive at a coordinated plan of action - for example sending the correct number of fire engines to the location where they were most needed.
ORCHID is a collaboration between the universities of Southampton, Oxford and Nottingham and industrial partners BAE Systems, Secure Meters UK Ltd and the Australian Centre of Field Robotics. It is funded by the Engineering and Physical Sciences Research Council.
Nick comments: "The breadth of our multidisciplinary approach, coupled with our focus on industrial applications, means that this research can be expected to be truly transformational."
Source: http://60.southampton.ac.uk/improving-coordination-in-a-crisis/37
Studying trade-off analysis in computer architecture and higher-level computer organization can be very hard due to the small amount of detail given to it in undergraduate courses. The key concepts and processes taking place can be very difficult to understand by using pen and paper alone. A graphical simulator can be utilized to represent the concepts in a very intuitive and interactive manner, thus revolutionizing and simplifying the way students and instructors learn and teach computer architecture. One of Project GRAMS' primary objectives is to provide methods to easily visualize the effects of instruction set reconfiguration in various datapath implementations. Another objective is to enable users to code and compile machine programs for testing a particular configuration's performance and flexibility. Lastly, the software also presents a contextual and graphical representation of data exchange processes during an instruction's execution. The project is written in JAVA and is designed to run on the various computer platforms available in the academe.
TENCON 2008 - 2008 IEEE Region 10 Conference
Date of Conference: 19-21 Nov. 2008
Source: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=4766609&contentType=Conference+Publications
Media Contact: Fred Strohl ([email protected])
Communications and External Relations
Three linked supercomputing centers earn gold medal for high performance
OAK RIDGE, Tenn., Feb. 12, 1997
The Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL) is part of a three-center consortium that is linking supercomputers from sites across the country to solve scientific problems that are too large to solve on a single computer.
Their success in this approach to problem solving netted them a gold medal in the category for fastest linked computers of the High Performance Computing Challenge.
The award was presented during the Supercomputing '96 Conference in Pittsburgh to ORNL, DOE's Sandia National Laboratories and the Pittsburgh Supercomputing Center. The competition, in which participants from around the globe demonstrate leading-edge projects, is held annually at the Supercomputing Conference.
Four massively parallel processing (MPP) computers, in various combinations, are linked to solve scientific problems that are too large to solve on a single computer. These include two Intel Paragons at ORNL, one Intel Paragon at Sandia and a Cray T3D at Pittsburgh. An MPP consists of hundreds to thousands of processing nodes, each of which is analogous to a powerful desktop computer. Nodes are connected to one another via a high-speed network within the machine. MPPs solve problems using a divide and conquer strategy in which each node works on a small part of the overall problem. Nodes exchange data over the network, when necessary.
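The Paragon and T3D code itself is not shown in this release, but the divide-and-conquer idea described above can be sketched in a few lines of modern Python, with multiprocessing workers standing in for the MPP nodes (illustrative only):

from multiprocessing import Pool

def node_work(chunk):
    # Each "node" computes its small part of the overall problem.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]     # divide among 8 "nodes"
    with Pool(8) as pool:
        partials = pool.map(node_work, chunks)  # nodes work in parallel
    print(sum(partials))                        # combine the partial results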
Each MPP consists of between 512 and 1,824 nodes. Eventually, 3,872 nodes will be able to be brought together to solve a single problem. What would take 10 years to solve on a personal computer could be solved in one day on the linked system.
While the linkage concept is simple, there are obstacles to overcome. A computer that is waiting for data from another machine cannot do its work. Robust, high-speed networks provided by DOE's ESnet and the National Science Foundation's vBNS are solving this problem in order for the machines to get the data they need quickly and continue computing.
The four computers are from two different manufacturers and run a total of three different operating systems. The programs must be ported to each machine separately and then made to run on them as if they were one machine. This is like having to translate a book for readers who speak three different languages and then lead a discussion about it. Parallel Virtual Machine (PVM) software, which was developed at ORNL, serves the function of a simultaneous translator. PVM lets a user connect different machines and presents the user with the image of the connected machines as a single MPP.
Initially, three problems are being solved using this system. The first is a model of the alloy nickel-copper, which exhibits magnetic behavior when it is composed of predominantly nickel, but does not when it is predominantly copper.
Using a computer program developed at ORNL, scientists are beginning to uncover the physical mechanism responsible for complex magnetic behavior first observed 25 years ago. This research into the fundamental nature of magnetic alloys paves the way for studies of a variety of magnetic materials. Application of this research could improve computer storage devices such as hard disk drives, magnetic motors used in the power generation industry and shadow masks to sharpen images on computer monitors and televisions.
The second problem involves predicting the response of a nuclear weapon to the effects of a hypothetical nearby explosion. The blast and fragmentation environment from the nearby blast would present the possibility of a sympathetic detonation of the weapon. The calculations will help assess the safety performance of the warhead in such a scenario without the need for an extensive and costly full-scale test program.
The final problem links atmosphere, ocean and sea ice computer models to study the Earth's climate system. Physically realistic climate models enable scientists to assess the consequences of natural and man-made environmental changes on the Earth's climate.
"This project has linked more than computers," explained Tim Sheehan, director of special projects at ORNL's Center for Computational Sciences. "Physicists, mathematicians, computer scientists, network engineers and applications programmers from all three centers have worked together on this project for more than one year."
Driven by the desire to solve massive scientific problems, they continue to work together to push the frontiers of science by exploiting the power of linked massively parallel supercomputers.
ORNL, one of the Department of Energy's multiprogram national research and development facilities, is managed by Lockheed Martin Energy Research Corp.
Source: http://www.ornl.gov/ornlhome/print/press_release_print.cfm?ReleaseNumber=mr19970212-01
For some time now, both small and large companies have been building robust applications for personal computers that continue to be ever more powerful and available at increasingly lower costs. While these applications are being used by millions of users each day, new forces are having a profound effect on the way software developers build applications today and on the platform in which they develop and deploy their applications.

The increased presence of Internet technologies is enabling global sharing of information, not only from small and large businesses but from individuals as well. The Internet has sparked a new creativity in many, resulting in many new businesses popping up overnight, running 24 hours a day, seven days a week. Competition and the increased pace of change are putting ever-increasing demands on the application platform, which must enable application developers to build and rapidly deploy highly adaptive applications in order to gain strategic advantage.

It is now possible to think of these new Internet applications needing to handle literally millions of users, a scale difficult to imagine just a few short years ago. As a result, applications need to deal with user volumes of this scale, be reliable enough to operate 24 hours a day, and be flexible enough to meet changing business needs. The application platform that underlies these types of applications must also provide a coherent application model along with a set of infrastructure and prebuilt services for enabling development and management of these new applications.
Windows DNA: Framework for a New Generation of Computing Solutions
Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It allows people to focus on business, not technology.

Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do. But those who do will have a distinct advantage over those who don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?
Source: http://www.seminarsonly.com/IT/Windows%20DNA.php
Guy E. Blelloch
Recently researchers have suggested several computational models in which one programs by specifying large networks of simple devices. Such models are interesting because they go to the roots of concurrency: the circuit level. A problem with the models is that it is unclear how to program large systems, and expensive to implement many features that are taken for granted in symbolic programming languages. This paper describes the Concurrent Inference System (CIS) and its implementation on a massively concurrent network model of computation. It shows how much of the functionality of current rule-based systems can be implemented in a straightforward manner within such models. Unlike conventional implementations of rule-based systems, in which the inference engine and rule sets are clearly divided at run time, CIS compiles the rules into a large static concurrent network of very simple devices. In this network the rules and inference engine are no longer distinct. The Thinking Machines Corporation Connection Machine, a 65,536-processor SIMD computer, is then used to run the network. On the current implementation, real-time user-system interaction is possible with up to 100,000 rules.
Source: http://aaai.org/Library/AAAI/1986/aaai86-123.php
A glossary of words used in ICT and Computing including many related to modern technologies.
Hardware: Part of a computer that you can touch; the physical parts of a computer, e.g. keyboard, monitor and mouse.

Sequence: The first of three important constructs in programming; the order in which different instructions are executed. For example 1, 2, 3 or 3, 1, 2.

Software: Sets of instructions grouped into programs that tell a computer what to do.
Source: http://moo.compu2learn.co.uk/mod/glossary/view.php?id=79
Video Lectures, Video Courses, Science Animations, Lecture Notes, Online Test, Lecture Presentations. Absolutely FREE.
25: A Miracle
Lecture duration: 53 min
This video lecture series on Higher Computing by Richard Buckland of the University of New South Wales, Australia is an introductory course for computer science. This course consists of three strands: programming, systems, and general computer science literacy. The programming strand is further divided into two parts. For the first half of the course we cover small scale programming; in the second half we look at how to effectively use teams to produce more substantial software. In the systems strand we will look at how computers work, concentrating on microprocessors, memory, and machine code. In the literacy strand we will look at topics drawn from computing history, algorithms, WWW programming, ethics and law, cryptography and security, and other topics of general interest.
Source: http://www.learnerstv.com/video/Free-video-Lecture-11460-Computers.htm
The computer's operating system is the lowest-level software running on your computer. It coordinates the use of the computer's hardware resources, such as its CPU, memory and I/O devices. Beyond this, it provides security, protecting users from each other and providing a firewall to protect access through the network. In this course, we study a variety of techniques used in operating systems to perform these services, including concurrency, CPU scheduling, memory management, file systems and security.
This course is programming intensive and assumes you have substantial experience using Java. Prerequisite: Computer Science 221.
Learning Outcomes: Students will learn alternative strategies for the tasks that an operating system performs, such as alternative approaches to CPU scheduling and memory management. Exercises will involve evaluating the behavior of running systems, studying well-known algorithms, and implementing pieces of an operating system.
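For instance, one of the scheduling strategies such a course compares, round-robin, can be simulated in a few lines. This sketch is purely illustrative and is not course material (the course itself uses Java; Python is used here for brevity):

from collections import deque

def round_robin(bursts, quantum=2):
    ready = deque(bursts.items())        # (process name, remaining CPU time)
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)               # give the process one quantum
        remaining -= quantum
        if remaining > 0:
            ready.append((name, remaining))  # unfinished: back of the queue
    return order

print(round_robin({"A": 5, "B": 3, "C": 1}))
# ['A', 'B', 'C', 'A', 'B', 'A']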
Source: http://www.mtholyoke.edu/~blerner/cs322/index.html
Collection: The Beauty and Joy of Computing
Lab Goals: Learn to use key listeners to move characters around on the screen, learn to use the movement and drawing commands in scratch to draw complex images and geometric shapes, learn to use variables that change dynamically within a program, and use layers of abstraction in programs to draw complex new images. Note: Click "Login as Guest" to access.
This item appears in:
Programming (Big Ideas)
Abstraction (Big Ideas)
Abstracting (Computational Thinking Practice)
Developing Computational Artifacts (Computational Thinking Practice)
Source: http://www.computingportal.org/node/7730
The Most Complex Machine: A Survey of Computers and Computing
An introductory computer science textbook for anyone who wants to understand how computers work and what computer science is about. It is supplemented by free software available for Macintosh and in a Java version that can be used over the Web, and lab worksheets. The preface, table of contents, and short chapter summaries are onsite, together with Java software and Macintosh software and labs (downloadable), and information about the Computer Science course Eck teaches in which he has used these materials.
Levels: High School (9-12), College
Resource Types: Courses, Books, General Software Miscellaneous, Web Interactive/Java
Math Topics: Computer Science
Source: http://mathforum.org/library/view/1388.html
Coordination refers to those features of parallel programs involving multiple processes, namely communication, synchronization, and scheduling. For simple programs with regular data access patterns, coordination structures can be determined at compile time and require little runtime support. For applications with irregular data access patterns, for example, loops that iterate over portions of an array or dynamic data structures, some coordination decisions are best left until runtime, necessitating more powerful runtime support and more sophisticated compile-time analysis. In this project, coordination is provided by Delirium, a coordination language for expressing scheduling and communication patterns, by adaptive runtime scheduling, and by Multipol, a distributed data structure library.
Some of the software developed in this project is being integrated with other parallel software efforts at Berkeley through the Castle Project.
Application studies are used extensively in this project to test our ideas. Some of the recent applications include: the Grobner basis problem, timing level circuit simulation, the phylogeny problem, magnet simulation, and cell simulation. Here is a short movie from the cell simulation, which shows platelets flowing through an artery.
Source: http://www.eecs.berkeley.edu/~yelick/coordination/
ROS: I mean, what exactly do you do?
PLAYER: We keep to our usual stuff, more or less, only inside out. We do on stage things that are supposed to happen off. Which is a kind of integrity, if you look on every exit as being an entrance somewhere else
Tom Stoppard Rosencrantz and Guildenstern are Dead
We have reached the end of this introduction to computing and program design. While there is more to learn about both subjects, this is a good point to stop, to summarize, and to look ahead.
From elementary school to high school we learn to compute with one form of data: numbers. Our first use of numbers is to count real things, say, three apples, five friends, twelve bagels. Later we use numbers without any appeal to concrete objects, but we have learned that numbers represent information in the real world.
Computing with software is algebra for all kinds of data, not just numbers. Nowadays, computer programs process representations of music, molecules, law cases, electrical diagrams, architectures of houses, and poems. Fortunately, we have learned to represent information with other forms of data than just numbers. Otherwise, computing and programming would become extremely tedious tasks.
Above all, we shouldn’t forget that computing means manipulating data through proper basic operations. Some operations create new values. Others extract values from values. Yet others modify values. Finally, there are also basic operations for determining to which class a piece of data belongs. Built-in operations and functions are of course just another class of data. Definition is value creation; application is a form of value extraction. (An object in a language such as Java is a function with many different bodies; each method represents a different way of extracting data from an object.)
When we define a function, we combine basic data operations. There are two fundamental mechanisms for combining functions: function composition and conditional expressions. The former means that the result of one function becomes the argument of another one. The latter represents a choice among several possibilities. When we eventually apply a function, we trigger a computation.
In this book we have studied the laws of basic operations and the laws of operation combination. Using these laws we can understand, in principle, how any function processes its input data and how it produces its results and effects. Because the computer is extremely fast and good at using these laws, it can perform such evaluations for more data and for larger programs than we can do with paper and pencil.
Programs consist of definitions and expressions. Large programs consist of hundreds and thousands of definitions and expressions. Programmers design functions, use other programmer’s functions, leave, start on the project. Without a strong discipline we cannot hope to produce software of high quality. The key to programming discipline is to understand the design of programs as a means to describe computations, which, in turn, is to manipulate data through combinations of basic operations.
For that reason, the design of every program starts with a project plan.
A project plan identifies what data we wish to produce from the data that the program will be given. In many cases, though, a program doesn’t process data in just one way but in many ways. For example, a program for managing bank accounts must handle deposits, withdrawals, interest calculations, tax form generation, and many other tasks. In other cases, a program may have to compute complex relationships. For example, a program for simulating a ping-pong game must compute the movement of the ball, bounces on the table, bounces from the paddle, paddle movements, etc. In either case, we need to describe what the various ways of processing data are and how they relate to each other. Then we rank them and start with the most important one. We develop a working product, make sure that it meets our specifications, and refine the product by adding more functions or taking care of more cases or both.
Designing a function requires a rigorous understanding of what it computes. Unless we can describe its purpose and its effect with concise statements, we can’t produce the function. In almost all cases, it helps to make up examples and work through the function’s computation by hand. For complicated functions or for functions that use generative recursion, we should include some examples with the purpose statements. The examples illustrate the purpose and effect statements for others who may have to read or modify the program.
Studying examples tends to suggest the basic design recipe. In most cases, the design of a function is structural, even if it uses an accumulator or structure mutation. In a few others, we must use generative recursion. For these cases, it is important to explain the method for generating new problems and to sketch why the computation terminates.
When the definition is complete, we must test the function. Testing discovers mistakes, which we are bound to make due to all kinds of reasons. The best testing process turns independently developed examples into test suites, that is, a bunch of expressions that apply the function to select input examples and compare its results and effects with expected results and effects (mostly) automatically. If a mismatch is discovered, the test suite reports a problem. The test suite should never be discarded, only commented out. Every time we modify the function, we must use the test suite to check that we didn’t introduce mistakes. If we changed the underlying process, we may have to adapt the test suite mutatis mutandis.
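The book develops this recipe in Racket; purely as an illustration, here is the same discipline transposed into Python, with the signature and purpose statement first and the worked examples kept as an executable test suite:

def max_temperature(readings):
    """[Number] -> Number
    Purpose: produce the largest temperature in a non-empty list of readings."""
    result = readings[0]
    for r in readings[1:]:
        if r > result:
            result = r
    return result

# Examples worked out by hand, turned into tests rather than discarded:
assert max_temperature([3]) == 3
assert max_temperature([1, 5, 2]) == 5
assert max_temperature([-7, -2, -9]) == -2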
No matter how hard we work, a function (or program) isn’t done the first time it works for our test suite. We must consider whether the development of the function revealed new interesting examples and turn such examples into additional tests. And we must edit the program. In particular, we must use abstraction properly to eliminate all patterns wherever possible.
If we respect these guidelines, we will produce decent software. It will work because we understand why and how it works. Others who must modify or enhance this software will understand it, because we include sufficient information on its development process. Still, to produce great software, we must practice following these guidelines and also learn a lot more about computing and programming than a first book can teach.
The knowledge and design skills from this book are a good foundation for learning more about programming, computing, and even practical work on software. First, the skills are good for learning the currently fashionable collection of object-oriented languages, especially Java. The two languages share a philosophy of programming. In both settings, computing means dealing with data, and programming means describing classes of values and functions on them. Unlike Racket, however, Java requires programmers to spell out the class descriptions in Java, not just in English, and to place function definitions with class descriptions. As a result, Java requires programmers to learn a lot of syntactic conventions and is unsuitable as a first language.
The two mechanisms of computing are rather different. Can one mechanism compute what the other one can compute and vice versa?
The laws we have used are mathematical and abstract. They do not take into account any real-world limitations. Does this mean that we can compute whatever we wish?
The (simulated) hardware shows that computers have limitations. How do these limitations affect what we can compute?
Finally, the design knowledge of this book is enough to build some real-world programs in Racket. DrRacket with its built-in Web browser and email capabilities is such a program. Building large real-world programs, however, requires some more knowledge about the functions that Racket uses to create GUIs, to connect computers on a network, to script things such as shells, web servers, networks, databases, etc. No matter what you do now, don’t forget that good programming makes your life easy and fun.
Remember the design recipe, wherever you go.
Source: http://www.ccs.neu.edu/home/matthias/HtDP2e/Draft/htdp2e-epilogue.html
Epistemology and Learning Group, MIT Media Lab
A programmable modeling environment for exploring the workings of decentralized systems. With StarLogo, you can model (and gain insights into) many real-life phenomena, such as bird flocks, traffic jams, ant colonies, and market economies. StarLogo is a version of Logo intended for ages 13 and up. A version for the Mac is downloadable here, with a PC version on the way. Product information, sample projects, information about the user community, and company main pages also available.
Levels: Middle School (6-8), High School (9-12), College
Resource Types: General Software Miscellaneous
Math Topics: Operations Research
Source: http://mathforum.org/library/view/5471.html
The ‘feel’ of an interactive system can be compared to the impressions generated by a piece of music. Both can only be experienced over a period of time. With either, the user must abstract the structure of the system from a sequence of details. Each may have a quality of ‘naturalness’ because successive actions follow a logically self-consistent pattern. A good composer can write a new pattern which will seem, after a few listenings, to be so natural the observer wonders why it was never done before.
Just as a composer follows a set of harmonic principles when he writes music, the system designer must follow some set of principles when he designs the sequence of give and take between man and machine. Hansen's (1971) principles — called user engineering principles — were employed while designing the Emily text editing system.
- First principle: Know the user – The system designer should try to build a profile of the intended user: their education, experience, interests, how much time they have, their manual dexterity, the special requirements of their problem, their reaction to the behaviour of the system, their patience.
- Minimise memorisation — Because a user forgets, the system must augment their memory.
- Selection not entry
- Names not numbers
- Predictable behaviour
- Access to system information
- Optimise operations — This stresses the physical appearance of the system — the modes and speeds of interaction and the sequence of user actions needed to invoke specific facilities. The guiding principle is that the system should be as unobtrusive as possible, a tool that is wielded almost without conscious effort. The user should be encouraged to think not in terms of the light pen and keyboard, but in terms of how he wants to change the displayed information.
- Rapid execution of common operations
- Display inertia
- Muscle memory
- Reorganise command parameters
- Engineer for errors — Modern computers can perform billions of operations without errors. Knowing this, system designers tend to forget that neither users nor system implementers achieve perfection. The system design must protect the user from both the system and themselves.
- Good error messages
- Engineer out the common errors
- Reversible actions
- Data structure integrity
- Hansen, W. J. (1971). User Engineering Principles for Interactive Systems. In Proceedings of the November 16-18, 1971, Fall Joint Computer Conference (AFIPS '71), 523-532. New York, NY, USA: ACM Press. doi:10.1145/1479064.1479159.
Source: http://www.simonwhatley.co.uk/hansens-user-engineering-principles-for-interactive-systems
People generally use the Web to access passive information. Educational materials on the Web are generally fixed presentations of predetermined content.
Intelligent books are different, because the information they present is only partly composed by the authors: most of an intelligent book is dynamically constructed and accumulated through conversations with its readers. More concretely, an intelligent book understands the principles and strategies of the subject matter, and uses those to answer questions, to fill in details, and to accumulate new examples as suggested by the conversations with its readers.
This project is investigating the general system infrastructure required to write and present intelligent books with specific case studies in material for teaching electronic circuits and discrete mathematics.
The current state of the project can be followed on the Intelligent Book website.
Source: http://www.cl.cam.ac.uk/research/rainbow/research/intelligent-book.html
Abstract data types and their implementations as data structures. Efficiency of algorithms employing these data structures; asymptotic analyses. Dictionaries: balanced search trees, hashing. Priority queues: heaps. Disjoint sets with union, find. Graph algorithms: shortest path, minimum spanning tree, topological sort, search. Sorting. Not available for credit for students who have completed CSE 373. Prerequisite: CSE 321.
This course will be about data structures, which are key to all efficient and effective programs one writes in practice. We will try to develop a clear theoretical understanding of the importance, strengths and weaknesses of various data structures such as stacks, graphs and dictionaries. There will also be a significant focus on implementations of these ideas to solve some interesting problems. The programming language used will be Java.
NOTE: The textbook was incorrectly listed as "Data Structures and Algorithms in C++" by Weiss. We will instead be using the JAVA version of this book. The University Bookstore has been notified of this.
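Purely as an illustration of one syllabus item (the course itself uses Java; this sketch uses Python's standard library for brevity), here is a heap-backed priority queue in action:

import heapq

jobs = [(3, "write report"), (1, "fix bug"), (2, "review code")]
heapq.heapify(jobs)                  # O(n) build of the heap
heapq.heappush(jobs, (0, "deploy"))  # O(log n) insert
while jobs:
    priority, task = heapq.heappop(jobs)  # O(log n) extract-min
    print(priority, task)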
Source: http://www.washington.edu/students/icd/S/cse/326ashish.html
Write a program which overloads a binary Minus (-) operator. The program will contain a class
Explain the purpose and working of the following program. Also, write com
Write a program which overloads a binary The program will contain a class Matrix
“List at least four applications of Flip Flop”.
1) Create a new
Justify your answer that nondeterministic PDA is more powerful than PDA as far as acceptance of langu
explain nodal analysis
explain mesh analysis
Q: Having a solid disaster recovery plan for business continuity canm
A typical page size is 4 Kbytes. How many virtual pages would this imply given the virtual
Consider the following Action/Goto table for certa
“Is team structure (Organizational structure) dependent on project requirements”
Software Process Models are not seen as a
Write a program in C++ which creates two classes named as
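Several of the truncated items above ask for an overloaded binary minus operator on a matrix class. The original assignments are in C++; purely as an illustration of the idea, here is a Python sketch with an invented class shape:

class Matrix:
    def __init__(self, rows):
        self.rows = rows  # list of lists of numbers

    def __sub__(self, other):
        # Overloaded binary minus: element-wise subtraction.
        return Matrix([[a - b for a, b in zip(r1, r2)]
                       for r1, r2 in zip(self.rows, other.rows)])

    def __repr__(self):
        return f"Matrix({self.rows})"

print(Matrix([[1, 2], [3, 4]]) - Matrix([[1, 1], [1, 1]]))
# Matrix([[0, 1], [2, 3]])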
Source: http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2007-june-19
A type of Web cam called a(n) ____________________ cam has the illusion of moving images because it sends a continual stream of still images.
After a programmer plans the logic of a program, she will next: (a) understand the problem; (b) translate the program; (c) test the program; (d) code the program
please answer the below questions
IT 244 Week 9 Day 7 Final project: Information Security Policy
Create a table with the following four column headings: Top-Level Objects, Communicates With, Incoming Messages, and Outgoing Messages. o Identity the top-level objects of the microwave.
Case study: The databases behind MySpace What kind of databases and database servers does MySpace use? Why is database technology so important for a business such as MySpace? How effectively does MySpace organize and store the data on its site? What data management problems have...
Palm OS provides no means of concurrent processing. Discuss three major complications that concurrent processing adds to an operating system
If you were asked to make a karyotype from chromosome spreads from HeLa cells, list the materials you would use and describe how you would create the karyotype.
Write a script that declares and sets a variable that's equal to the count of all rows in the Invoices table that have a balance due that's greater than or equal to $5000. Then, the script should display a message that looks like this: 3 invoices exceed $5000.
describe how the various types of firewalls interact
Ask a new Computer Science Question
Tips for asking Questions
- Provide any and all relevant background materials. Attach files if necessary to ensure your tutor has all necessary information to answer your question as completely as possible
- Set a compelling price: While our Tutors are eager to answer your questions, giving them a compelling price incentive speeds up the process by avoiding any unnecessary price negotiations
- 1. Identify and describe Trust/Security Domain boundaries that may be applicable to personal computer (workstation) security in a business context.
2. This is a C++ codelab question.
- The "origin" of the cartesian plane in math is the point where x and y are both zero. Given a variable, origin of type Point-- a structured type with two fields, x and y, both of type double, write one or two statements that make this variable's field's values consistent with the mathematical notion of "origin".
- Assume two variables p1 and p2 of type POINT, with two fields, x and y, both of type double, have been declared. Write a statement that reads values for p1 and p2 in that order. Assume that values for x always precede y.
- In mathematics, "quadrant I" of the cartesian plane is the part of the plane where x and y are both positive. Given a variable, p that is of type POINT-- a structured type with two fields, x and y, both of type double-- write and expression that is true if and only the point represented by p is in "quadrant I".
Source: http://www.coursehero.com/tutors/problems/Computer-Science/12311/
What is the HEC Digital Library project?
Does HEC provide scholarships for higher education within Pakistan?
What are the four management functions?
What are management activities?
What are the nine basic physical abilities?
What is the difference between verification and validation?
What are the goals of verification and validation?
What is the structure of a software test plan?
What is meant by Carrier and Information Signals?
What is modulation?
What is Amplitude Shift-Keying (ASK)?
What is Frequency Division Multiple Access (FDMA)?
What is GPRS?
What is 3G W-CDMA?
What are the limitations of 3G Wireless Systems?
What are the objectives of 4G Wireless Systems?
Can Microsoft Front Page be used to create style sheets?
Does Java language support unions?
For what types of operations is DMA useful?
How can I convert JSP output into MS Excel Spreadsheet?
How can I link an existing style sheet with a web document created in MS front page?
How can we achieve multiple inheritance in Java?
How can you connect two computers using Ethernet cards?
How can you define baud rate?
How to evaluate a programming language?
How do I/O-bound and CPU-bound programs differ?
How do you convert a binary number to hexadecimal?
What is the formula to calculate gearing ratio?
How to write EBNF description of a C language switch statement?
How is an interrupt executed?
How many exclusive-OR operations are used in the Data Encryption Standard(DES)cipher?
How many networks can a router connect?
How to handle form request using servlet?
How to solve First Readers Writers problem using semaphores?
How to write a bubble sort algorithm?
How to write algorithm for bisection method?
How to write algorithm for Newton Raphson method?
In Accounting, what are cash book and bank book? For what purpose do we use them?
In Accounting, what is balancing of books of accounts?
In Accounting, what is Trial Balance?
In Accounting terms, what are liabilities of the business?
What is the difference between diffusion and confusion?
In database management, what is concurrency control?
In financial management, what is portfolio diversification?
How to remove all special characters from a String in Java?
In Java, what is the range of integer, short and byte?
In operating system, how can hold and wait condition be prevented?
International Ranking of Harvard University?
International Ranking of Yale University?
Times Higher Ranking University of Cambridge?
Source: http://www.paked.net/question/index_1.htm
Limnor is the first generic-purpose no-coding programming system in the world. It can be used to create computer software without using computer languages.

Limnor opens the door of computer programming to a much broader population. With it, almost anyone can do computer programming. Programming is no more complicated than using daily office software. Sales persons, office workers, business persons, ..., can make their own vivid presentations, data managing applications, kiosk applications, ..., without hiring extra software engineers.

It is a new bridge between computers and users. Now non-technically oriented users can use computers in new ways and gain a greater control of their PCs.

It is a system made for unlimited expansions/customizations by any software developer, not just by the maker of this system. It is a new platform for professional software developers, using computer languages, to deliver their products to non-technically oriented users and let the users use their products in new ways.
Limnor supports fundamental programming capabilities and functional programming features such as:
- User defined mathematic expressions supporting common mathematic operations and 27 common functions and constants. Users may add any kinds of new math functions via DLLs.
- Bitwise operations.
- Loop (recursive execution)
- Branch (conditional execution)
- String operations
- User defined variables
- User defined logic expressions supporting AND, OR, NOT, >, >=, =, <, <=, <> (not equal), and grouping by "(" and ")"
- Standard user interface elements: buttons, lables, text boxes, list boxes, picture boxes, drop-down boxes, radio buttons, check boxes, main menus and context menus, group boxes, file browsers, file selectors, tree views, timers, toolbars, etc.
- Video, audio, Flash, Windows Media Player
- Hotspots which let you define any irregular shapes and areas as active places firing mouse events
- Videos can be played on any user interface element. For example, play video on a button, on a label, etc.
- Graphic drawings; send/receive emails
- Different degree of transparency and color for page and text
- Web browse supporting blocking of unwanted pages and domains/partial-domains in many ways to fit your needs
- File upload/download via FTP
- Client/server and desktop databases, database structure management, database query builder, data viewers, pivot tables, charts, data-binding, data transfer
- Support databases with OLE DB or ODBC drivers. Because almost all databases have such drivers, Limnor effectively supports almost all databases in the world
- Phone dialing, smartcard readers and coin validators.
- and more...
License: free to try. Platforms: Windows 98, Me, NT, 2000, XP. Requires 5 MB of free hard disk space.
One way to examine what may be happening in self-organizing complex systems is through the use of computer simulations. Two free software programs, StarLogo (“Starlogo”, 2004) and NetLogo (Wilensky, 1999, 2004), offer users opportunities to witness self-organization in action by modeling the dynamics of complex systems. The Logo language, which is the foundation of these modeling systems, was developed by Seymour Papert at MIT in order to teach children the basics of computer programming. As such, it is user-friendly and easy to learn. The novice can explore models that are included in the model libraries, manipulating the variables through sliders and simple commands. Those with greater interest or more experience can create models of their own. Because of their accessibility and ease of use, these software programs can be found in labs and classrooms all over the world.
The three main components of the modeling environment are turtles, patches, and the observer. The individual agents in the system are called turtles, although they can represent any kind of agent from a molecule to a person. The environment in which the turtles operate is divided into patches. Patch size and movement by turtles within and between patches are determined by the program designer. Patches are not necessarily passive but may be, and typically are, active components of the system. Commands may apply either to turtles or to patches. The third component, the observer, can issue commands that affect both patches and turtles. The observer also conducts maintenance and documentation of the turtle world.
Variables within a model may be set up as sliders, and in many models the sliders can be manipulated while the model is running. This feature allows the user to alter variables and search for excellent solutions within the constraints identified by the model designer. For example, a simple model of an ecosystem might include agents identified as predators, other agents called prey and patches with food for the prey in varying amounts. The interactions between the two different kinds of agents, as well as between the agents and the patches, can be defined by simple commands that identify when predators eat prey, when prey eat food, under what conditions new agents are "born" and "die," and so on. If such a model is designed with sliders to control the number of predators and prey, as well as the proportion of food available, the user can experiment to try to determine how a change in one part of the system affects the system as a whole and how a system might adapt in order to survive or thrive.
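To make the slider idea concrete, here is a minimal, self-contained sketch of such a predator-prey loop written in Python. It is an illustration only, not NetLogo code: every parameter name and value below is invented, and a real NetLogo model would express the same rules in the Logo language, with sliders bound to constants like these.

    import random

    # Slider-style parameters; values are illustrative, not from a real model.
    WORLD_PATCHES = 400     # toy grid, flattened to a list of patches
    FOOD_REGROWTH = 0.05    # per-tick chance that an empty patch regrows food
    PREY_GAIN = 4           # energy a prey gains from grazing
    PREDATOR_GAIN = 8       # energy a predator gains from eating a prey
    REPRODUCE_AT = 12       # energy level at which an agent splits in two

    def tick(prey, predators, food):
        """Advance the toy ecosystem one step; agents are just energy levels."""
        new_prey = []
        for energy in prey:
            patch = random.randrange(WORLD_PATCHES)    # wander to a random patch
            if food[patch]:                            # graze if food is there
                food[patch] = False
                energy += PREY_GAIN
            energy -= 1                                # metabolic cost per tick
            if energy >= REPRODUCE_AT:                 # split energy with a child
                new_prey += [energy // 2, energy // 2]
            elif energy > 0:                           # energy <= 0 means death
                new_prey.append(energy)
        new_predators = []
        for energy in predators:
            # Hunting success rises with prey density.
            if new_prey and random.random() < len(new_prey) / WORLD_PATCHES:
                new_prey.pop(random.randrange(len(new_prey)))
                energy += PREDATOR_GAIN
            energy -= 1
            if energy >= REPRODUCE_AT:
                new_predators += [energy // 2, energy // 2]
            elif energy > 0:
                new_predators.append(energy)
        for i in range(WORLD_PATCHES):                 # patches regrow food
            food[i] = food[i] or random.random() < FOOD_REGROWTH
        return new_prey, new_predators

    prey, predators = [5] * 100, [5] * 20
    food = [True] * WORLD_PATCHES
    for t in range(51):
        prey, predators = tick(prey, predators, food)
        if t % 10 == 0:
            print(f"tick {t}: prey={len(prey)}, predators={len(predators)}")

Re-running with a different FOOD_REGROWTH or gain constant plays the same role as dragging a slider while a NetLogo model runs.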
The beauty of these modeling tools with regard to building the scientific mind is that they provide the user with a dynamic visual and interactive medium through which to explore the concepts of complex systems. They are simple enough to be used by students in middle or high school, while at the same time they have the potential sophistication required of graduate level research. As such, the use of these free modeling tools opens up the world of complex systems to a broad audience, including those without advanced understanding of science and mathematics. The medium itself can describe and explain, through color, pattern and motion, concepts that previously might have been incomprehensible.
Desktop computers fail to support many key aspects of knowledge work. Describe these aspects and give scenarios that make your point.
Some of the intrinsic characteristics of desktop computers limit the usefulness a computer system can provide to a knowledge worker. For instance, desktop computers are typically configured and physically installed as a fixed workplace. While this configuration offers the knowledge worker some advantages, it also limits the portability of the system. Several people have noticed the need for a small and portable tool for knowledge workers, and as a result, systems based on the notebook metaphor have been designed [Kay and Goldberg 1977; Mel et al. 1988; Shipman et al. 1989]. Knowledge workers need portable systems because there are situations in which they must move to different places, such as data acquisition, meetings and field experiments. Let's take one example. A sociologist may need to go and live among a tribe in the middle of a jungle in order to study the tribe's social interactions. A desktop computer may be very difficult to transport, install and operate in such an environment. Moreover, the mere presence of such a conspicuous piece of equipment may disturb the social interaction within the tribe. In this case, equipment that is less conspicuous and easy to transport and operate may prove much more useful. Knowledge workers can benefit from a tool that enables them to capture information on the spot using the appropriate media.
Also, knowledge workers can benefit from a tool that allows them to acquire new information, manipulate it, and correlate and compare the new information with previously stored information in an interactive fashion. Desktop computers can collect data from other devices, such as video cameras and printed images, using appropriate peripherals. Nevertheless, desktop computers fail to meet this requirement in the sense that the knowledge worker cannot manipulate the data immediately, and the data acquisition and analysis therefore become a batch operation instead of an interactive one. An interactive style of acquiring and analyzing data may result in a faster and more accurate process, since on-line analysis may indicate possible adjustments and corrections to the data acquisition process. For instance, a biologist collecting data about the propagation of a new plant disease detected in a remote field would benefit from analyzing the data interactively, being able to detect possible errors in the measurements and correct them before the disease propagates and kills the whole field. In the case of a batch-style operation, the biologist may not detect the errors until the whole field is dead, losing the opportunity to analyze the disease.
Another issue raised with desktop computers is the eye fatigue they impose on knowledge workers, especially since knowledge workers tend to read large amounts of data and desktop computer screens are light emitters. Real-world examples are abundant, and anybody who has spent more than two hours in front of a computer screen has noticed the fatigue. This is even more obvious when a person needs to read a large document stored on a computer system: usually the person opts to print the document on paper instead of reading it directly from the screen.
Walsh University Undergraduate Catalog 2012-2013
CS 111 Introductory Programming 3 sem.hrs.
Brief introduction to hardware configuration of a desktop computer. Procedural programming in Java as a preparation for object-oriented programming. Data types: int, double, boolean, char, String (with standard String methods). Data-type conversions and casts. One-dimensional arrays. Constructs: if, while, for, static methods. Interaction with console and input/output from/to text files. Prerequisite: MATH 104. Offered every spring semester.
Fall 2011, Winter 2012 and Spring 2012 quarters
- Neal Nelson (computer science, mathematics), Sheryl Shulman (computer science), Richard Weiss (mathematics, computer science)
- Fields of Study
- computer science and mathematics
- Preparatory for studies or careers in
- computer science, education and mathematics.
The goal of this program is for students to learn the intellectual concepts and skills that are essential for advanced work in computer science. Students will have the opportunity to achieve a deeper understanding of increasingly complex computing systems by acquiring knowledge and skills in mathematical abstraction, problem solving, and the organization and analysis of hardware and software systems. The program covers material such as algorithms, data structures, computer organization and architecture, logic, discrete mathematics, and programming in the context of the liberal arts, compatible with the model curriculum developed by the Liberal Arts Computer Science Consortium (LACS).
In all quarters the program content will be organized around four interwoven themes. The computational organization theme covers concepts and structures of computing systems from digital logic to operating systems. The programming theme concentrates on learning how to design and code programs to solve problems. The mathematical theme helps develop mathematical reasoning, theoretical abstractions and problem solving skills needed for computer scientists. A technology and society theme explores social, historical or philosophical topics related to science and technology.
Operating systems are the fundamental layer of software that every computing device needs in order to run any other type of software. The increasing use of computing devices in all areas of life (leisure, work) leads to a variety of operating systems. Yet all operating systems share common principles, and these principles are important for computer science students in their understanding of programming languages and of software built on top of operating systems.
The Operating System Laboratory (OSLab) is an online course that teaches students the principles of operating systems using a constructivist approach and problem-oriented learning. OSLab focuses on hands-on training and will complement existing lectures. The course is modularly structured: each module covers one topic and is self-contained, so a tutor can select modules as needed and easily add new modules to the course.
During this project we intend to create 7 learning modules covering the topics of process scheduling, inter-process communication, memory management, file systems, distributed file systems, security as well as device drivers and input/output.
Computer Science, Telecommunications
Project Leader University of Berne Prof. Dr. Torsten Braun braun at iam.unibe.ch
Project Coordinator University of Berne Dr. Markus Wulff mwulff at iam.unibe.ch
William Hibbard, Curtis Rueden, Steve Emmerson, Tom Rink, David Glowacki, Tom Whittaker, Don Murray, David Fulker, John Anderson
The Web is the Internet's killer application because it enables the widespread sharing of information. Within 10 years, the numerical computing environment will enhance sharing among scientists by adding a new structure to the Web. It will consist of a persistent, active network of numerical data and computational, display, and user-interface components distributed across the Internet. As with the Web, users and even application programs will be largely unaware of physical computers but will instead have access to a worldwide shared network of logical components. Users will explore the network through browsers and add new components to it. For example, atmospheric chemists might use a browser to locate a weather simulation data set or even a running weather model, connect it as input to their chemistry models, then connect a display component to visualize each model's computations. Weather-modeling colleagues might then clone that display in their own browsers and connect user-interface components to collaborate on experiments with the coupled models.
Such an environment will challenge several long-held assumptions about the way programmers write numerical software.
We are developing the Visualization for Algorithm Development, or VisAD, library of Java components to overcome the limitations these assumptions impose on numerical visualization, defining four types of Java components:
Data. Implements an abstract data model that defines a schema grammar for organizing numerical and text values. It also defines associated metadata (such as units, coordinate systems, sampling geometries and topologies, missing data indicators, and error estimates). The schema grammar and metadata can express images, 3D grids, time series, map boundaries, simple real numbers, and practically any other numerical data. The data-component API hides interfaces to a variety of file and server data formats, movement of data between disk and memory, movement of data across the network, and partitions of large data components across processor clusters.
Display. Implements an abstract display model that enables applications to define data depictions descriptively (via mappings from primitive data values to primitive display values) rather than procedurally. The display-component API hides the graphics library (such as Java3D and Java2D) used to render depictions, as well as whether depictions are rendered in windows on workstation screens, in browsers, or in immersive virtual reality displays.
Computation. Uses code supplied by applications to compute values for data components or manipulate display components based on the values of other data components. The computational-component API hides the programming language used for application-supplied code.
User interface. Includes the familiar screen icons (such as buttons and sliders) linking user actions (such as mouse clicks) to values of data components and to library calls.
Networks of such components span multiple computers via Java Remote Method Invocation (RMI) distributed object technology. They can express various forms of distributed computing (such as client-server, cluster processing, and remote collaboration, as well as whatever programmers and users care to define). Display and computational components may be linked to data components, with their actions (updating data depictions or executing application-supplied code) triggered whenever linked data values change. A data component may be linked to multiple display components with a different depiction in each; multiple data components may be linked to a common display component for visual comparison. Display components may be linked to yet other display components, creating collaborative networks of displays that synchronize their appearance; any change by users or applications is reflected in all linked display components. User-interface components may be linked to data components, enabling users to manipulate data values. Display components can also be used as user-interface components, enabling users to manipulate data values by redrawing their depictions. All these connections may be either local or remote.
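The linking-and-triggering behavior just described is, at heart, an observer pattern over data components. The following Python sketch is conceptual only; VisAD's real components are Java classes and its API differs in detail. It shows how a change to one data component can ripple through linked display and computational components:

    class DataComponent:
        """Holds values and notifies linked components when they change."""
        def __init__(self, values=None):
            self._values = values
            self._links = []            # display or computational components

        def link(self, action):
            self._links.append(action)
            if self._values is not None:
                action(self._values)    # act on the current state right away

        def set_values(self, values):
            self._values = values
            for action in self._links:
                action(values)          # triggered action: redraw or recompute

    # A "display component" here is just a callback that redraws a depiction;
    # a "computational component" writes results into another data component.
    celsius = DataComponent([21.0, 22.5, 23.1])
    fahrenheit = DataComponent()

    celsius.link(lambda v: print("depiction (C):", v))
    celsius.link(lambda v: fahrenheit.set_values(
        [round(x * 9 / 5 + 32, 1) for x in v]))
    fahrenheit.link(lambda v: print("depiction (F):", v))

    celsius.set_values([25.0, 24.2])    # both depictions update automatically

In VisAD the same links may also cross machines via RMI, which is what lets remote collaboration fall out of the component model for free.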
Abstract Data and Display Models
Abstraction is the key to reusability. VisAD achieves abstraction for its data components through a schema grammar for expressing data organizations, and for its display components through expression of data depictions as mappings from primitive data values to primitive display values. Figure 1 is a VisAD user-interface component that enables users to define display mappings. The top MathType window includes an expression in the schema grammar for a time sequence of 2D Earth images, a data component to be displayed. This expression includes names for primitive numerical and text values, groupings of values into vectors, and functional dependencies among values. Below that, the Coordinate System references window shows that the data component includes an invertible transform between image coordinates and Earth coordinates. All the numerical values occurring in data components may include associated units and error estimates. There are also usually samplings for the domains of any functional dependencies. Unit conversions, coordinate transforms, resampling, and propagation of missing data and error estimates are all done implicitly as necessary in mathematical and display operations on data components.
In order to define data depictions, primitive data values (in the Map from window) are mapped to primitive display values (in the Map to window). The Current maps window shows the system's first guess at appropriate mappings for the schema in the MathType window. The user can clear these mappings and create new ones by alternately clicking on primitive data value names and display value names. The user interface component in Figure 1 defines mappings via library calls available to any application.
The data schema grammar enables data components to be reused for virtually any numerical and text data. Moreover, the associated metadata enables meaningful comparisons of data from diverse sources, including the spatial and temporal alignment in displays, and is important for increased data sharing on the Internet. The system includes a set of classes for interpreting data file and server formats as data components, implicitly transferring data to a memory cache as needed to execute data-component API calls. VisAD developers have applied these classes to more than 20 common numerical data formats.
The display mappings enable the reuse of display components for virtually any form of data depiction. The fact that the display mappings are a descriptive rather than a procedural definition of data depictions enables display components to also be used as user-interface components, with users modifying data values by redrawing data depictions. That is, procedures are difficult to invert, whereas descriptions apply just as well in both the data-to-depiction and the depiction-to-data directions.
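A rough Python sketch of why a descriptive definition inverts where a procedural one does not; the mapping names below are invented for illustration, and VisAD defines its own sets of primitive display values:

    # A depiction is described by pairing primitive data values with
    # primitive display values, not by drawing code.
    mappings = {
        "longitude": "XAxis",
        "latitude": "YAxis",
        "altitude": "ZAxis",
        "temperature": "RGB",      # color-code temperature
    }

    def to_display(sample):
        """Data-to-depiction direction: apply the mappings forward."""
        return {display: sample[data] for data, display in mappings.items()}

    def to_data(display_values):
        """Depiction-to-data direction: the same table, read backward."""
        inverse = {display: data for data, display in mappings.items()}
        return {inverse[d]: v for d, v in display_values.items()}

    sample = {"longitude": -89.4, "latitude": 43.1,
              "altitude": 260.0, "temperature": 18.5}
    drawn = to_display(sample)
    assert to_data(drawn) == sample    # a redraw maps cleanly back to data

Because the table can be read in either direction, a user dragging a depiction gives the system enough information to update the underlying data values.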
Developers can extend the Java classes implementing data components in order to define their own coordinate systems, sampling topologies, and interpolation algorithms. With Java platform independence, these mathematical algorithms can be transferred with data components among various machines; in this way, algorithms can function as data content. Developers can extend classes implementing display components in order to define their own rendering algorithms or even to use a different graphics library.
Visualization and Analysis
For new users, the VisAD library is a challenge due to its high level of abstraction. Their first step is usually to visualize their data in the VisAD SpreadSheet, which provides access to much of the library via a GUI. The user-interface component in Figure 1 is part of that GUI, helping users learn about the data schema grammar and the display mappings. They can experiment with data files and sets of display mappings, then see the resulting visualizations. The SpreadSheet GUI consists mainly of a rectangular array of cells (display-component windows), each possibly containing depictions of multiple data components. These components are generated by reading from files or servers or by simple formulas applied to data components in other cells.
When users are ready to program the library, the easiest way to start is by writing Python scripts. A script can be as simple as a single line that loads and displays a data file. VisAD supports Python via the Jython implementation, which provides access to Java objects from Python and thereby makes the entire VisAD library accessible from Python. Support is also available for mathematical operations on data components via Python infix expressions, and for specialized displays (such as histograms, scatter plots, contour plots, and image animations) that do not require users to understand the system's display mappings. Python scripts can invoke the plot function to depict data components in a SpreadSheet cell, allowing users to control display mappings via the user-interface component shown in Figure 1.
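A script in that style might look like the following sketch. The file name is hypothetical, and the import line and the load helper are assumptions based on common VisAD Python examples rather than details stated in this article:

    # Assumed standard header for VisAD Python (Jython) scripts.
    from visad.python.JPythonMethods import *

    data = load("sst_2004.nc")    # read a data file into a data component
    anomaly = data - 15.0         # infix math on a data component; units and
                                  # error estimates are handled implicitly
    plot(anomaly)                 # depict the result in a SpreadSheet cell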
The VisAD library is being used to write traditional visualization applications that assume specific data structures and depictions. These applications typically require a few hundred to a few thousand lines of Java. The library distribution includes approximately 12 such applications as examples for new users.
The Galaxy application (its GUI is in Figure 2) is fairly typical, enabling teams of astronomers to collaborate on experiments with physicist Robert Benjamin's simulation of the Milky Way galaxy and see how its H-alpha emission sky map and spectra would look from Earth. Simulation parameters are defined by a set of simple real-number data components that users adjust via the slider components shown on the left side of Figure 2. The simulation code, written in Fortran and encapsulated in a computational component, produces data components linked to display components generating the other windows in Figure 2. The upper-center window in the figure shows an isosurface of simulated warm gas density; the lower-center window shows the H-alpha sky map as seen from Earth. The red point and green line in the upper-center window depict a vector from Earth to some point outside the Milky Way galaxy. Users drag the red point to manipulate the vector; changes to the vector trigger a second computational component to produce density and spectra along the vector, as in the upper- and lower-right windows.
A user who begins running the first copy of the Galaxy application has all data, computational, display, and user-interface components. However, other users who begin running collaborative copies of the Galaxy application generate only new display and user-interface components linked via RMI to the data and computational components in the first copy of the application.
Exploiting Reusable Components
The VisAD library is being used to support applications (more sophisticated than the Galaxy application) that do not assume specific data structures and depictions. They are continuing projects requiring years of development. The SpreadSheet, the first such application, deals with any data structure, accesses data from remote servers, and is fully collaborative; the user-interface component in Figure 1 enables users to generate any depiction. Multiple users might link their SpreadSheets together, and actions initiated by any individual user are seen identically in the GUIs of all.
The Unidata Program Center, part of the University Corporation for Atmospheric Research in Boulder, CO, is using VisAD to develop the Integrated Data Viewer (IDV) as part of a National Science Foundation-supported mission to supply Earth science data and access software to U.S. universities. The IDV enables users to browse remote servers and combine their data in a common spatial-temporal frame of reference. Due to the diversity of environmental-observing instruments and simulations, Earth science data involves a variety of structures and properties supported by the IDV. In addition to being able to access standard Earth data servers, the IDV provides a Web-browsing user-interface component that recognizes links to numerical data files. Clicking on the links downloads the files into the spatial-temporal visualization window rather than into the browser window.
The Laboratory for Optical and Computational Instrumentation at the University of Wisconsin-Madison uses VisAD to develop the VisBio system for visualizing and analyzing large multidimensional microscopy data sets. Figure 3 shows a VisBio volume rendering of a 3D microscopy image of a live embryo of C. elegans (a species of nematode worm). In addition to various forms of image displays, the system defines custom cursors users drag to measure distances in images and movements in time sequences. A number of data schemas are appropriate for a variety of microscopy data sets, depending on whether they include depth (3D vs. 2D), time sequencing, multiple spectra, and multiple optical lifetimes. With up to six independent variables, microscopy data sets can be quite large. Thus VisBio employs progressive refinement rendering (low resolution while the scene is changing, high resolution when change stops) and complex memory management.
The Australian Bureau of Meteorology is using VisAD to develop the Australian Integrated Forecast System (AIFS) 2 system, consisting of a number of modules supporting forecaster tasks. Most of these tasks require overlays of data with diverse structures and properties. They also require user manipulation of data by dragging their depictions. Meanwhile, the U.S. National Center for Atmospheric Research in Boulder, CO, is using VisAD to develop its Visual MEteorology Tool (VMET) system for visual meteorology. Like the IDV and AIFS, VMET must display data with diverse structures and properties and produce a variety of data depictions.
VisAD's reusable components are also being used for experiments with visualization techniques. VisAD developers have extended classes implementing data components to support large data components partitioned across processor clusters. They've extended classes implementing display components to exploit parallel processing for visualizing these partitioned data components. And they've also extended classes implementing display components to depict data in the ImmersaDesk virtual reality system, as well as for progressive refinement rendering. The VisAD library has proven itself a useful tool for visualization research because it enables experiments at any level via class extensions. It also provides the necessary infrastructure programmers need to write practical applications that generate evaluations of new techniques by real users.
The VisAD library, including source code, documentation, and application examples, is freely available from http://www.ssec.wisc.edu/~billh/visad.html. Ugo Taddei of the Institute of Geography at the University of Jena in Germany has contributed a fine online general tutorial to go along with the specialized tutorials on the site. Approximately 15 programmers from a half dozen institutions have contributed code to the library. The much larger community of programmers who use the library are supported by an active mailing list used for online discussion and collaboration.
Figure. Visualizing Hurricane Charley. Using VisAD's data and display models, the Integrated Data Viewer merges disparate numerical model output and satellite and radar data into a depiction of the hurricane's approach toward Florida, August 13, 2004. Don Murray, Unidata Program Center, Boulder, CO.
Figure 1. Display mappings dialogue panel.
Figure 2. The collaborative Galaxy application, simulating the Milky Way galaxy as it would appear from Earth.
Figure 3. VisBio volume rendering of a live C. elegans embryo (imaging performed by William Mohler, University of Connecticut Health Center, Farmington, CT).
© ACM, 2005. This is the author's version of the work. It is posted here by permission of the ACM for your personal use. Not for redistribution. The definitive version was published in Communications of the ACM, 48, 3, March 2005.
June 17, 2011. Increasingly, the things people use on a daily basis can be connected to the Internet. An alarm clock not only rings, but can also switch on the coffee machine while turning on the light. But what is needed to ensure that the Internet of Things operates as efficiently as possible?
Thus far, the Internet has been an arena reserved for people. But now more and more physical objects are being connected to the Internet: we read emails on our mobile telephones, we have electricity meters that report readings automatically, and pulse monitors and running shoes that publish information about our daily jog directly on Facebook.
Tools for collaboration
The Internet of Things will introduce new smart objects to our homes. One challenge is to find effective solutions to enable different products to work together. Currently no standardised tools or distribution platforms exist in this area.
A group of Norwegian researchers have been addressing this issue. In the research project Infrastructure for Integrated Services (ISIS) they have created a platform for developing and distributing applications for the Internet of Things. The platform encompasses a programming tool for developers, called Arctis and the website ISIS Store for downloading applications. The project has received funding from the Research Council of Norway's Large-scale Programme VERDIKT.
Arctis was developed by researchers at the Norwegian University of Science and Technology (NTNU). One of them is postdoctoral researcher Frank Alexander Kraemer.
"In a 'smart' everyday life objects and applications often need to be connected to several different communication services, sensors and other components. At the same time they need to respond quickly to changes and the actions of users. This requires very good control over concurrence in the system, which can be difficult to achieve with normal programming," he explains.
Dr Kraemer believes that the tool will make it easier to create new applications, adapt them to existing applications and update software as necessary.
"Developing a simple application with Arctis can be as easy as fitting together two building blocks, but more advanced applications can also be created, depending on what you are looking for," Dr Kraemer continues.
Talking to each other
"It is the collaborative system ICE Composition Engine (ICE) that will govern the whole thing and allow the objects to talk to each other," explains Reidar Martin Svendsen, project manager at the Norwegian telecommunications company the Telenor Group.
ICE can both manage the communication between objects in your home and keep track of any updates. The system is installed on a modem, a decoder or an adapter in the home and provides the user with a local gateway which ensures that the Internet of Things will continue to work even when the user is offline.
Telenor is seeking to become an operator for the Internet of Things by acting as a link between developers and end-users. But if the company is to succeed, a sufficient number of developers will need to choose to use its tools.
"We have established our own App Store where talented developers can publish the new applications they create and end-users can buy and download the applications they need. Basically, you can choose software according to your own needs and preferences," says Mr Svendsen.
The downloaded applications can be combined as needed using a software programme called Puzzle. The Puzzle programme is a user interface to the ICE system.
For the project to flourish, people have to be willing to pay for the applications. There are already many similar applications available online free-of-charge through the data infrastructure platform Pachube, for example. Why are users going to pay for something they can download legally and at no cost?
"It is better if a well-known operator is responsible for critical systems such as house alarms. For these types of systems you should go via the App Store to a supplier you trust. You don't know anything about the intentions of those who put out programmes free-of-charge on the Internet. But if your system needs updating or you require a service, it is an advantage to be using a reputable, recognised operator," explains Mr Svendsen.
"On the whole it will be up to the developers to decide what to charge for. At the ISIS Store there are currently a number of applications available that can be downloaded free-of-charge," he continues.
Course Overview
An operating system is a layer of software that manages hardware resources and that provides user programs with a simple and consistent interface to the computer. In this course, we will examine services and abstractions commonly provided by operating systems, and we will study the underlying mechanisms used to implement them. Topics will include processes and threads, synchronization, CPU scheduling, deadlocks, memory management, segmentation and paging, storage and file systems, security, and virtualization.
The concepts presented in class will be explored through a series of intensive programming assignments. The assignments will make use of the C programming language, which is the universal language for implementing and accessing operating systems at the lowest level. The projects will give students ample practice in manipulating pointers, managing memory, invoking system services, and dealing with error conditions. Although the course will offer some technical guidance on these matters, students should expect to spend significant time debugging, consulting reference materials, and revising the projects until they work properly.
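As a quick illustration of one of the lecture topics, here is a short round-robin CPU scheduling sketch. The course projects are in C; this Python sketch only illustrates the idea, and the process names, burst times, and quantum are invented for the example:

    from collections import deque

    def round_robin(bursts, quantum):
        """Simulate round-robin scheduling; bursts maps name -> CPU time needed."""
        ready = deque(bursts.items())
        clock = 0
        finish = {}
        while ready:
            name, remaining = ready.popleft()
            ran = min(quantum, remaining)              # run one quantum at most
            clock += ran
            if remaining > ran:
                ready.append((name, remaining - ran))  # preempt and requeue
            else:
                finish[name] = clock                   # process completed
        return finish

    # Hypothetical workload: three processes with different CPU bursts.
    print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))
    # -> {'P2': 9, 'P1': 12, 'P3': 16}

Shrinking the quantum makes the schedule fairer at the cost of more context switches, one of the trade-offs examined in the scheduling unit.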
The goals for each student in this course are:
- To understand the abstractions and services provided by an operating system.
- To understand the mechanisms and algorithms used to implement these services.
- To get practical experiences using and implementing operating system services.
- Describe traditional operating system structures and algorithms.
- Demonstrate in detail how they apply to various programs and data.
- Evaluate the strengths and weaknesses of related structures and algorithms.
- Propose and evaluate a variety of improvements upon traditional methods.
- Implement basic methods in a working computing system.
- Operating System Concepts, A. Silberschatz, P. B. Galvin, G. Gagne, Wiley, 7th or 8th edition.
- The C Programming Language, B. W. Kernighan, D. M. Ritchie, Prentice Hall, 2nd edition.
- Instructor: Christian Poellabauer
- - Office hours: T 11-12, W 9-10
- - Office: 354 Fitzpatrick
- - Email: [email protected]
- TA: Pramita Mitra
- - Office hours: T 4-5, Th 11-12
- - Location: Engineering Cluster
- - Email: [email protected]
- Class location: DeBartolo 217
- Lecture times: MWF 10.40-11.30am
- - There will be no lecture on Friday March 5th!
- - The due date for the 3rd project has been pushed back to March 1st (5pm)!
- - The TA will hold a gdb tutorial on 1/21 at 11am in the Engin. lab.
Introduction: What Are the Extraordinary Ideas Computers Use Every Day?
How were the great ideas of computer science born? Here’s a selection:
This is a gift that I have … a foolish extravagant spirit, full of
forms, figures, shapes, objects, ideas, apprehensions, motions, revolutions
—WILLIAM SHAKESPEARE, Love’s Labour’s Lost
• In the 1930s, before the first digital computer has even been built, a British genius founds the field of computer science, then goes on to prove that certain problems cannot be solved by any computer to be built in the future, no matter how fast, powerful, or cleverly designed.
• In 1948, a scientist working at a telephone company publishes a paper that founds the field of information theory. His work will allow computers to transmit a message with perfect accuracy even when most of the data is corrupted by interference.
• In 1956, a group of academics attend a conference at Dartmouth with the explicit and audacious goal of founding the field of artificial intelligence. After many spectacular successes and numerous great disappointments, we are still waiting for a truly intelligent computer program to emerge.
• In 1969, a researcher at IBM discovers an elegant new way to structure the information in a database. The technique is now used to store and retrieve the information underlying most online transactions.
• In 1974, researchers in the British government's lab for secret communications discover a way for computers to communicate securely even when another computer can observe everything that passes between them. The researchers are bound by government secrecy—but fortunately, three American professors
Language translator
a facility that converts programs developed with higher-level languages (source code) into machine language (object code).
Latency
amount of time required to get data from one point to another.
Local area networks (LAN)
computers linked into a network in a small geographic area and widely used for sharing data, software, and hardware.
Logical design
identifying ways in which system components will work together to accomplish the desired result.
Low-level languages
designation for programming languages that are easy for a computer to use; examples are machine and assembly languages.
Originally Posted by j4v3d
So the examples in the programming books where you can download the source code and tinker around with it, is that any good? I guess it may help if i downloaded the source code for each chapter and then played around with it.
I don't believe so. It may work for helping you see one particular algorithm work, but that's the kind of thing that will go in your brain and right back out.
Need to go back to basics and start playing about with code and stick to it so it sticks to this stupid brain of mine!
If you're not doing programming for work, then you need to come up with something on your own. There are a million things, just pick something. Say you collect baseball cards (oops, let's say Cricket cards). Decide you want the ultimate baseball card collecting web site. It should have a way to upload the card image and all the info about the card, and store it in a database. It should have a security mechanism to allow administrative access. It should be available in multiple languages. It should work on mobile devices as well as large screens. It should have a mechanism to import and export content in XML, etc, etc. If you have projects like this that are of personal interest to you, then you will tend to have the ongoing interest in working on them and learning from them.
So that you can give yourself quick tutorials on basic computer concepts, the history of computing, and the parts of a computer, we've arranged our topics (definitions) in a sequence, with more basic building block topics placed at the beginning.
BASIC CONCEPTS ... computer - binary - digital - program - instruction - I/O - information - data - bit - byte - octet - nibble - hexadecimal - word - file - directory - hardware - software - interrupt - workstation - minicomputer - mainframe - supercomputer - client/server - wait state - real time - parallel processing
HISTORY ... abacus - algorithm - Boolean (George Boole) - Charles Babbage - Difference Engine - Analytical Engine - Claude Shannon - John von Neumann - Vannevar Bush - Grace Hopper - DARPA - ARPANET - Internet - Multics - 3270 - x86 - @ - Mosaic - Netscape - Microsoft Internet Explorer
PARTS OF A COMPUTER: LEVEL 1 ... computer - desktop computer - notebook computer - processor - microprocessor - memory - RAM - ROM - hard disk - diskette - display - I/O - keyboard - mouse - printer - CD-ROM - modem - video adapter - sound card - Accelerated Graphics Port - peripheral - real-time clock - ABCD data switch - handheld - Zip drive
PARTS OF A COMPUTER: LEVEL 2 ... central processing unit (CPU) - arithmetic-logic unit (ALU) - floating-point unit (FPU) - motherboard - daughterboard - chassis - bus - EISA - PCI - IDE - EIDE - surge suppressor - radiation shield
Birth of the World Wide Web
The Web reminds me of early days of the PC
industry. No one really knows anything. All experts have been wrong.
Wired, February 1996
HyperText is a way to link and access information of
various kinds as a web of nodes in which the user can browse at will.
It provides a single user-interface to large classes of information (reports, notes,
data-bases, computer documentation and on-line help).
We propose a simple scheme incorporating servers already available at CERN...
A program which provides access to the hypertext world we call a browser...
It would be inappropriate for us (rather than those responsible) to suggest specific
areas, but experiment online help, accelerator online help, assistance for computer center
operators, and the dissemination of information by central services such as the user
office and CN [Computing & Networks] and ECP [Electronics & Computing for Physics]
divisions are obvious candidates.
World Wide Web (or W3) intends to cater for these services across the HEP [High Energy
Physics] community.
Tim Berners-Lee, R. Cailliau. 12 November 1990, CERN
12 November, 1990
World Wide Web: Proposal for a HyperText Project
To: P.G. Innocenti/ECP, G. Kellner/ECP, D.O.
Cc: R. Brun/CN, K. Gieselmann/ECP, R. Jones/ECP, T. Osborne/CN, P. Palazzi/ECP, N. Pellow/CN, B. Pollermann/CN, E.M. Rimmer/ECP
From: T. Berners-Lee/CN, R. Cailliau/ECP
... document describes in more detail a Hypertext project.
... The project has two phases: firstly we make use of existing software and hardware as
well as implementing simple browsers for the user's workstations, based on an analysis of
the requirements for information access needs by experiments. Secondly, we extend the
application area by also allowing the users to add new material.
Phase one should take 3 months with the full manpower complement, phase two a further 3
months, but this phase is more open-ended, and a review of needs and wishes will be
incorporated into it.
The manpower required is 4 software engineers and a programmer (one of whom could be a
Fellow). Each person works on a specific part (e.g. specific platform support) ...
W W Why are they green?
"Because I see all "W"s as green."
Robert Cailliau: Recently I
discovered that I'm a synaesthetic. Well, I've known it for a long time, but I did not
realise that there was a name for it. I'm one of those people who combine two senses: for
me, letters have colours. Only about one in 25'000 have this condition, which is perfectly
harmless and actually quite useful. Whenever I think of words, they have colour patterns.
For example, the word "CERN" is yellow, green, red and brown, my internal
telephone number, "5005" is black, white, white, black. The effect sometimes
works like a spelling checker: I know I've got the right or the wrong number because the
colour pattern is what I remember or not...
And now wait for it folks: you have all seen the World-Wide Web logo of three superimposed
"W"s. Why are they green? Because I see all "W"s as green... It
would look horrible to me if they were any other colour.
So, it's not because it is a "green" technology, although I also like that...
So, here I am: twenty years of work at CERN: control engineering, user-interfaces, text
processing, administrative computing support,
hypertexts and finally the Web.
According to R. Cailliau, the chain of historically significant events ran as follows:
CERN: A Joint proposal for a hypertext system is
presented to the management.
Mike Sendall buys a NeXT cube for evaluation, and gives
it to Tim. Tim's prototype implementation on NeXTStep is made in the space of a few
months, thanks to the qualities of the NeXTStep software development system. This
prototype offers WYSIWYG browsing/authoring! Current Web browsers used in "surfing
the Internet" are mere passive windows, depriving the user of the possibility to
During some sessions in the CERN cafeteria, Tim and I
try to find a catching name for the system. I was determined that the name should not yet
again be taken from Greek mythology. Tim proposes "World-Wide Web". I like this
very much, except that it is difficult to pronounce in French...
The prototype is very impressive, but the
NeXTStep system is not widely spread. A simplified, stripped-down version (with no editing
facilities) that can be easily adapted to any computer is constructed: the Portable Browser.
SLAC, the Stanford Linear
Accelerator Center in California, becomes the first Web server in USA.
It serves the contents of an existing, large data base
of abstracts of physics papers.
Distribution of software over the Internet starts.
The Hypertext'91 conference (San Antonio) allows us a
"poster" presentation (but does not see any use of discussing large, networked
The portable browser is released by CERN as freeware.
Many HEP laboratories now join with servers: DESY
(Hamburg), NIKHEF (Amsterdam), FNAL (Chicago).
Interest in the Internet population picks up.
The Gopher system from the University of Minnesota, also
networked, simpler to install, but with no hypertext links, spreads rapidly.
We need to make a Web browser for the X system, but have
no in-house expertise. However, Viola (O'Reilly Assoc., California) and Midas (SLAC) are
WYSIWYG implementations that create great interest.
The world has 50 Web servers!
Some of the other viewpoints on the first 5 years of the WWW
... as Tim Berners-Lee and other Web developers enriched
the standard for structuring data, programmers around the world began to enrich the ...
One of these programmers was Marc Andreessen, who was working for the NCSA in Urbana-Champaign.
In January 1993, Andreessen released a version of his new, handsome, point-and-click
graphical browser for the Web, designed to run on Unix machines.
In August, Andreessen and his co-workers at the center released free versions for
Macintosh and Windows.
In December, a long story about the Web and Mosaic appeared in The New York Times... The (Second Phase of the)
Revolution Has Begun, by Gary Wolf, Wired 2.10
Meanwhile -- between these generations -- a lot of events of historical scale took place.
Eric W. Sink clarifies
some of them:
In the Web's first generation, Tim Berners-Lee
launched the Uniform Resource Locator (URL), Hypertext Transfer Protocol (HTTP), and HTML
standards with prototype Unix-based servers and browsers.
A few people noticed that the
Web might be better than Gopher.
In the second generation, Marc Andreessen and Eric Bina developed NCSA Mosaic at the
University of Illinois.
Several million then suddenly noticed that the
Web might be better than sex.
In the third generation, Andreessen and Bina left NCSA to found Netscape...
From the Ether: Microsoft and Netscape open some new fronts in escalating Web Wars, by Bob Metcalfe, InfoWorld, August 21, 1995, Vol. 17.
Life in the browser wars was a unique time period for me in my career...
I started work on Spyglass Mosaic on April 5th, 1994.
The demo for our first prospective customer was already on the calendar in May.
... Yes, we licensed the technology and trademarks from NCSA (at the University of Illinois),
but we never used any of the code.
We wrote our browser implementations completely from scratch, on Windows, MacOS, and Unix.
... Netscape didn't even exist yet, but things happened fast.
Just a few weeks after I started coding, Jim Clark rode into town and gathered a select group of programmers from NCSA.
Mosaic Communications Corporation was born. It was interesting to note that certain people on the
NCSA browser team were not invited to the special meeting.
I can still remember hearing about how ticked off they were to be excluded. Champaign-Urbana is a very small town.
Spyglass had the legal right to the "Mosaic" trademark. A few tantrums and lots of lawyering later,
MCC changed its name to Netscape.
We thought we had a nice head start on Netscape.
We had a really top-notch team and we moved the rest of our developers over to browser work quickly.
We were ready to compete with anybody. But Jim Clark was, after all, Jim Clark.
His SGI-ness knew how to work the advantages of being in Silicon Valley.
He provided his young company with lots of press coverage and very deep pockets.
We decided to approach this market with an OEM business model.
Instead of selling a browser to end users we developed core technology and sold it to corporations
who in turn provided it to their end users.
We considered ourselves to be the arms dealer for the browser wars.
Over 120 companies licensed Spyglass Mosaic so they could bundle it into their product.
Our stuff ended up in books, operating systems, ATM machines, set-top boxes, help systems, and kiosks.
It was an extremely profitable business. The company grew fast and ours was one of the first Internet IPOs.
Along the way, we got involved in the standards process.
In fact, I became the chair of the IETF HTML Working Group for the standardization of HTML 2.0.
I learned a lot through this experience.
In May 1994 I went to the first WWW conference in Geneva, where Tim Berners-Lee took me aside and shared his plans for
a World-Wide Web Consortium. It didn't take too long for the W3C to become the venue for HTML standards discussions.
Eventually this was A Good Thing. Both Netscape and Microsoft became active participants in the W3C HTML Working Group.
Any group which didn't have their involvement was doomed to irrelevance.
For much of 1994, it seemed like we were ahead of Netscape.
Shortly after we released our 2.0 version, I remember one of the Netscape developers griping about
how their schedule had been moved up by six months. We smiled because we knew we were the reason.
They had not been taking us seriously and they were being forced to do so.
But Netscape was running at a much faster pace.
They got ahead of us on features and they began to give their browser away at no cost to end users.
This made Netscape the standard by which all other browsers were judged.
If our browser didn't render something exactly like Netscape, it was considered a bug.
I hated fixing our browser to make it bug-compatible with Netscape even though we had already coded
it to "the standard". Life's not fair sometimes.
We won the Microsoft deal. I suppose only the higher echelons of Spyglass management really know
the gory details of this negotiation.
I was asked to be the primary technical contact for Microsoft and their effort to integrate our browser into Windows 95.
I went to Redmond and worked there for a couple of weeks as part of the "Chicago" team.
It was fun, but weird. They gave me my own office.
At dinner time, everyone went to the cafeteria for food and then went back to work.
On my first night, I went back to my hotel at 11:30pm. I was one of the first to leave.
Internet Explorer 2.0 was basically Spyglass Mosaic with not too many changes.
IE 3.0 was a major upgrade, but still largely based on our code.
IE 4.0 was closer to a rewrite, but our code was still lingering around --
we could tell by the presence of certain esoteric bugs that were specific to our layout engine.
Licensing our browser was a huge win for Spyglass.
And it was a huge loss. We got a loud wake-up call when we tried to schedule our second conference
for our OEM browser customers. Our customers told us they weren't coming because Microsoft was beating them up.
The message became clear: We sold our browser technology to 120 companies, but one of them slaughtered the other 119.
The time between IE 3 and IE 4 was a defining period for Spyglass.
It was clear that the browser war had become a two-player race.
- Even with our IPO stash, we didn't have the funding to keep up with Netscape.
- What was interesting was the day we learned that Netscape didn't have the funding to keep up with Microsoft.
For the development of IE 4.0, a new Program Manager appeared.
His name was Scott Isaacs and I started seeing him at the HTML standards group meetings.
At one of those meetings we sat down for a talk which was a major turning point for me and for Spyglass.
Scott told me that the IE team had over 1,000 people.
I was stunned. That was 50 times the size of the Spyglass browser team.
It was almost as many people as Netscape had in their whole company.
I could have written the rest of the history of web browsers on that day -- no other outcomes were possible ...
Memoirs From the Browser Wars by Eric W. Sink.
According to Gary Wolf, "Andreessen also
left the NCSA, departing in December 1993 with the intention of abandoning Mosaic
development altogether. He moved to California and took a position with a small software
company. But within a few months he had quit his new job and formed a partnership with SGI
founder Jim Clark.
"At the NCSA," Andreessen explains, "the deputy director suggested that we
should start a company, but we didn't know how. We had no clue. How do you start something
like that? How do you raise the money? Well, I came out here and met Jim, and all of a
sudden the answers started falling into place."
In March, Andreessen and Clark flew back to Illinois, rented a suite at the University
Inn, and invited about half a dozen of the NCSA's main Mosaic developers over for a chat.
Clark spent some time with each of them alone. By May, virtually the entire ex-NCSA
development group was working for Mosaic Communications (this was the original name of
Netscape Communications - G.R.G.).
Andreessen answers accusations that corporate Mosaic Communications "raided"
nonprofit NCSA by pointing out that with the explosion of commercial interest in Mosaic,
the developers were bound to be getting other offers to jump ship. "We originally
were going to fly them out to California individually over a period of several
weeks," Andreessen explains, "but Jim and I said, Waita second, it does not make
much sense to leave them available to be picked up by other companies. So we flew out to
Illinois at the spur of the moment."
Since Mosaic Communications now has possession of the core team of Mosaic developers from
NCSA, the company sees no reason to pay any licensing fees for NCSA Mosaic. Andreessen and
his team intend to rewrite the code, alter the name, and produce a browser that looks
similar and works better.
Clark and Andreessen have different goals. For Jim Clark, whose old company led the
revolution in high-end digital graphics, Mosaic Communications represents an opportunity
to transform a large sector of the computer industry a second time. For Andreessen, Mosaic
Communications offers a chance to keep him free from the grip of a company he sees as one
of the forces of darkness - Microsoft.
"If the company does well, I do pretty well," says Andreessen. "If the
company doesn't do well" - his voice takes on a note of mock despair - "I work
The chair of Microsoft is anathema to many young software developers, but to Andreessen he
is a particularly appropriate nemesis...
As I ( Gary Wolf) reviewed my notes from interviews with Andreessen, I was struck by the
thought that he may have conjured the Bill Gates nemesis out of the subtle miasma of his
own ambivalence. After all it is he, not the programmers in Redmond, Washington, who is
writing a proprietary Web browser. It is he, not Bill Gates, who is at the center of the
new, ambitious industry. It is he who is being forced by the traditional logic of the
software industry to operate with a caution that verges on secrecy, a caution that is
distinctly at odds with the open environment of the Web."
The (Second Phase of the) Revolution Has Begun,
By Gary Wolf, Wired 2.10
There are two ages of the Internet - before
Mosaic, and after. The combination of Tim Berners-Lee's Web protocols, which provided
connectivity, and Marc Andreessen's browser, which provided a great interface, proved
explosive. In twenty-four months, the Web has gone from being unknown to absolutely ...
A Brief History of Cyberspace, by Mark Pesce, ZDNet,
October 15, 1995
Bill Gates : "...an Internet
browser is a trivial piece of software. There are at least 30 companies that have written
very credible Internet browsers, so that's nothing... "
"The most important thing for the Web is
stay ahead of Microsoft."
Steve Jobs. Wired, February 1996,
Microsoft may still be No. 2 in the Internet
race, but it's rapidly closing the gap. What's more, Microsoft has forgotten more about PR
and marketing than Netscape ever learned.
The contrast between the two companies was highlighted the day after Clark induced mass
sedation when Microsoft's group vice president, Paul Maritz, wowed the crowd with the kind
of polished, four-star presentation that the Redmondians seem to be able to do with their ...
Just like his boss, Maritz promised a lot of stuff that's still not here. But he generated
excitement and energy and buzz. The upshot was to create the kind of halo effect that will
pay dividends when it comes time for developers and corporate shoppers to make their
buying and investment decisions. ....
Of Silicon Valley and Sominex, by Charles Cooper, PC
Week, June 5, 1996.
Is Microsoft Evil?
Magazine, June 26, 1996 © 1996 Microsoft
I don't think it's a matter of good and evil --
Microsoft is a competitor, and a smart one. Jim (Clark) and I both think it's important
to point out what Microsoft is doing in various areas, since they are very good at using
FUD [fear, uncertainty, doubt] to attempt to paralyze the market.
"God is on the side of
the big battalions." said Napoleon.
Very few times in warfare have smaller forces overtaken bigger forces...
by Netscape's Jim Barksdale, Wired 4.03, March 1996
December, 1995: i-Pearl Harbor
"Pearl Harbor Day." Time Magazine reported
it when Bill Gates declared war on December 7, 1995... Jeff Sutherland
February, 1996: 2-year Prediction
Steve Jobs: We have a
two-year window. If the Web doesn't reach ubiquity in the next two years, Microsoft will
own it. And that will be the end of it.
Wired, February 1996, p.162
June, 1996: How many ...?
Question: Netscape has certainly come on awfully strong.
Bill Gates: How many software developers do you think they have?
The world according to Gates, by Don Tennant, InfoWorld Electric, Jan 4, 1996.
[Chart: the turning point -- Web browser market share changed dramatically over a couple of years.
Data source: Intersé Corporation; series include Microsoft Internet Explorer.]
October, 1996: How much?
From: Bob Ney
Date: Tue, 8 Oct 1996 18:24:41 -0700
. . . . .
As an ISP, I want to give my customers a software package for their use. I contacted Netscape.
- They said they would let me customize and repackage their product, if I committed to buy
2500 the first year at $17 each.
I said OK, I can do that.
- Then they said, great, please send your check for 50% of the moneys due.
That's $21,250. As a small ISP I don't have that available without dipping into my ...
I was then contacted by Microsoft and told they would send me this really nice
customization kit, which will build a release for Win95, Win NT, Win3.1 and install
Explorer 3, Netmeeting, a commercial TCP dialer and stack. And it has an automated user
sign up server built into it.
It will build a CD Rom image, if I want to distribute that way.
It configures with a wizard in about 5 minutes.
It's seamless and a really good piece of software and installer.
I said that it sounded great, how much?
- No charge. Distribute it all you want to your customers.
Microsoft is such a monster company that they can drop multi millions into development of
a product package that they will give away.
Netscape on the other hand actually wants to make a bit of money on their product.
Thinking of myself first, I took the Microsoft software.
So will most other ISP's...
[Chart: Netscape Navigator market-share historical trend.]
2002: How long?
To be, or not to be: that is the question -- yet the Netscape browser still exists.
The market war between the two
leading browsers is over. Like it or not, Internet Explorer is now the fully dominant
one. Only about 2-3 percent of Web users -- mostly for reasons resembling religious
ones -- still use the Netscape browser. But as long as the Netscape browser still exists,
almost all front-end Web developers around the world are forced to spend about 10-15
percent of their paid time providing both of these browsers with compatible layout and
DHTML solutions. Just try to imagine the total price of all this essentially worthless
work on a worldwide scale.
They Shoot Horses, Don't They?
Years later, in 2007, Netscape announced that support for its Netscape Navigator
would be discontinued, suggesting its users migrate to Mozilla Firefox.
[Chart: the first 15 years of the Browser Wars as it looks from January, 2011.
Source: Data from Net Applications; chart by Stephen Shankland/CNET.]
This paper appeared originally in Proc. 23rd ACM-SIGCSE Technical Symposium on Computer Science Education, Kansas City, MO, March 1992. We have added some hyperlinks, a brief update at the end, and several new references.
PORTABILITY AS AN EDUCATION ISSUE
MACHINE DEPENDENCE AND INDEPENDENCE AS AN EDUCATION ISSUE
THE DINING PHILOSOPHERS
THE PORTABLE DINERS
ENCAPSULATING MACHINE DEPENDENCIES
This paper describes a course-related project in concurrent programming using the Ada language. Dijkstra's famous "dining philosophers" problem [Dijkstra 71] is used as a vehicle for developing a program rich enough in system construction problems to be realistic yet small enough to be manageable in a classroom situation. The program--Portable Diners--is also nicely animated and fun to observe in execution.
One of the most important advantages of Ada is its strong standard, which governs the entire language including modularization (package) and concurrency (task) constructs. This project demonstrates that sophisticated Ada programs, using a number of packages and tasks, can be written, without much difficulty, to be entirely portable.
The base version of Portable Diners has been compiled without error and executed successfully using nearly thirty different Ada compilers, from most major compiler vendors, running on computers ranging from the IBM PC and Apple Macintosh through several families of VMS and Unix workstations, to IBM and Cray mainframes.
An interesting aspect of the portability tests is the use of the Internet to carry them out. The Ada source modules for Portable Diners were posted to an Internet newsgroup (comp.lang.ada), then copied from the network by Ada users around the world and tested with the compilers they had available.
Many real-world programs depend to a certain extent on machine-dependent characteristics such as the capabilities of the display device. Only the machine-independent parts are therefore portable. Good system design dictates that machine-dependent information be separated and encapsulated.
To illustrate the importance of separating machine-dependent from machine-independent parts of a program, we have developed several versions of Portable Diners. These differ only in the style of animation, which is governed by the kind of graphics display.
Current animation styles include line-by-line output (completely portable), very simple window-oriented animation (requiring a 24 x 80 ANSI terminal), and IBM PC color (requiring a compiler-specific graphics library). Creating a new version requires modifying only a single package body embodying the display instructions.
The emergence of strong and enforced standards for production-oriented programming languages represents a maturation and professionalization of the software industry--an end to the "feature wars" and proprietary dialects that have characterized language development until recently (indeed, "feature wars" still rage in the Pascal industry).
The formal education of students in the computing disciplines should include some exposure to the benefits of language and system standards; it is an aspect of professionalism. A language standard, together with a validation process that assesses a compiler's conformance to that standard, makes it possible to develop programs that can be moved from today's hardware to tomorrow's without major coding changes. More immediately, a language standard encourages the existence of multiple compilers for a single computer system, each compiler having its strengths, e.g., in compilation vs. runtime efficiency, or user-friendliness vs. code optimization, but all compilers accepting the same language.
Ada is an especially good case in point, because its government sponsors mandated a standard [DoD 83, Nyberg 89] that governs the entire language, including features for modularization and configuration management (packages) and concurrent programming (tasks). The government also sponsors a validation process in which a compiler is tested, using a publicly available suite of several thousand programs, for conformance to the standard. Subsets and supersets are not allowed. It is not forbidden to market a non-conforming compiler--the government allowed its trademark to lapse in 1988--but only a conforming compiler can be advertised as "validated" and used for government work. To our knowledge, no non-conforming compilers are currently on the market.
The strong Ada standard and the validation process have spawned an industry of more than thirty compiler vendors, with perhaps a dozen major players. More than 300 compiler/computer pairs--a pair is a given vendor's product running on a given machine--are currently validated. A Sun-3 or VAX/VMS installation can choose from around ten distinct compilers each; five companies market MS-DOS-family compilers. An academic computer laboratory can therefore, without too much difficulty or financial hardship, create an environment with several compilers, in which students can be taught the benefits of portability. (A very useful counterexample is the difficulty experienced in moving a non-trivial Pascal program from, say, Borland's compiler to Microsoft's, or from MS-DOS to UNIX.)
A related issue is that of machine-dependence vs. machine-independence. Not every program can be written to be entirely machine-independent. A program may require access to specific "hard" memory locations, e.g., to communicate with specialized devices, or need to run on a display terminal with certain characteristics. In this case, good design principles dictate that machine dependencies be localized and encapsulated to the extent possible, with a clean interface to the machine-independent part of the program. In this way most of the program source code is machine-independent, and the machine-dependent code is easier to change because it is easier to find.
Students should be exposed to the issue of localizing machine dependencies, as part of their general study of abstraction mechanisms and good design.
We illustrate both portability and encapsulation of machine dependencies using an elaborate rendition of the Dining Philosophers. This famous metaphor for resource allocation and deadlocking problems was first stated in 1971 by Edsger Dijkstra [Dijkstra 71]. Five philosophers sit around a table; they spend their lives alternately thinking and eating. In the center of the round table is an infinite supply of Chinese food; before each philosopher is a plate; between each pair of plates is a single chopstick. To eat, a philosopher must obtain the chopsticks to his or her right and left. (Note: Dijkstra's original formulation involved spaghetti and forks; since most philosophers can eat spaghetti with a single fork, many writers now use the Chinese food metaphor instead.)
Figure 1 shows the situation in the dining room, with well-known modern philosophers at the table (apologies for the poor resemblance of the caricatures to their namesakes). Dijkstra's right chopstick is #1; the chopsticks are numbered clockwise around the table.
The diners must cooperate to remain alive. Each right stick is someone else's left one, so if each philosopher first acquires his or her right chopstick, holding it greedily until (s)he can acquire the other chopstick, all philosophers will starve. This is a classical circular-wait deadlock.
It is easy to see why discussion of this scene is an obligatory part of classes on operating systems and concurrent programming. Many non-deadlocking solutions exist; the one we use here is for one of the philosophers to be a non-conformist, grabbing his or her left chopstick first. In such a case, the circularity is broken. At least one philosopher can always eat, finish the meal, and yield up the sticks, thus no deadlock occurs.
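Before turning to the paper's Ada implementation, a language-neutral sketch may help. The following Java fragment is our own illustration, not part of the original paper or its Ada sources; the class and method names are invented. Philosopher 0 is the non-conformist who reverses the acquisition order, which prevents the circular hold-and-wait condition from ever closing.

    import java.util.concurrent.locks.ReentrantLock;

    // Sketch of the asymmetric solution: philosopher 0 grabs the LEFT
    // stick first, everyone else grabs the RIGHT stick first.
    public class Diners {
        static final int N = 5;
        static final ReentrantLock[] sticks = new ReentrantLock[N];

        public static void main(String[] args) {
            for (int i = 0; i < N; i++) sticks[i] = new ReentrantLock();
            for (int i = 0; i < N; i++) {
                final int id = i;
                new Thread(() -> dine(id)).start();
            }
        }

        static void dine(int id) {
            int right = id;                          // stick numbering as in Figure 1
            int left = (id + 1) % N;
            int first = (id == 0) ? left : right;    // the non-conformist reverses order
            int second = (id == 0) ? right : left;
            while (true) {
                sticks[first].lock();
                sticks[second].lock();
                System.out.println("Philosopher " + id + " eating");
                pause();                             // meal
                sticks[second].unlock();
                sticks[first].unlock();
                System.out.println("Philosopher " + id + " thinking");
                pause();                             // thinking interval
            }
        }

        static void pause() {                        // random interval, scaled down
            try {
                Thread.sleep(100 + (long) (Math.random() * 900));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

If every philosopher used the same order, each could hold one stick while waiting forever for the next; reversing the order for a single diner guarantees that at least one of them can always complete a meal.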
The classical Ada implementation of the diners is presented in the literature on Ada concurrent programming [Ben-Ari 90, Feldman 90, Gehani 91]. These examples illustrate Ada's task type construct for creating concurrent processes: Philosophers and chopsticks are represented as objects of their respective task types. Our implementation builds on the standard literature example, but in addition focuses attention on system design. The diners and chopsticks are really separate classes of objects, communicating via messages, which Ada calls rendezvous; each class should therefore be exported from its own package.
Many programs demonstrating the diners have allowed the philosopher processes to communicate with the world outside the room (via display statements). This is an incorrect implementation of the situation: The diners should concentrate only on eating and thinking. To allow an outside observer to follow the action, however, we compromise and permit philosophers to communicate their state via messages to a head waiter task. The head waiter serves as the interface between the dining room and the outside world; its job is not only to assign chopsticks but also to serve as a play-by-play announcer, interpreting the goings-on to the audience.
Figure 2 gives the system structure for Portable Diners. Each rectangular box represents a library package; the arrows show the import structure, e.g., Main imports Room, which in turn imports Philosophers, Chopsticks, and Text_IO (Ada's standard input/output library). Note that Philosophers and Room are mutually importing. This is allowable but a bit subtle to implement. A philosopher is assigned chopsticks by the head waiter.
A philosopher's life is ruled by the algorithm in Figure 3, which is adapted from the task body for the philosopher type. Each philosopher determines the length of the next meal or thinking session by drawing a random integer from 1 to 10; pseudo-random numbers are delivered by a function in the random numbers package. The pseudo-random sequence is seeded from the system time-of-day clock, so that action is unpredictable from run to run. A meal or thinking interval is simulated by a delay of the given number of seconds. Room imports Ada's Calendar package in order to time-stamp each output message with the number of seconds elapsed since the beginning of the run.
The main program's only function is to bring the head waiter to life, then wait until the program is terminated. The head waiter creates the dining room and brings the philosophers to life, one by one, deciding whether each philosopher will grab the left or right stick first.
Figure 4 shows a few lines of the scrolling output produced by the head waiter. Stroustrup is the non-conformist, having been instructed by the head waiter to choose his left chopstick (#1) before his right one (#5).
This implementation of Dining Philosophers is believed to be entirely portable and will produce similar output regardless of the compiler, computer, or display used; our portability tests are discussed below.
    Room.Head_Waiter.Report_State(Who_Am_I, Breathing);
    LOOP
       Room.Sticks(First_Grab).Pick_Up;
       Room.Head_Waiter.Report_State(Who_Am_I, Got_One_Stick, First_Grab);
       Room.Sticks(Second_Grab).Pick_Up;
       Room.Head_Waiter.Report_State(Who_Am_I, Got_Other_Stick, Second_Grab);
       Meal_Time := Random.Random_Int(10);
       Room.Head_Waiter.Report_State(Who_Am_I, Eating, Meal_Time);
       DELAY Duration(Meal_Time);
       Room.Head_Waiter.Report_State(Who_Am_I, Done_Eating);
       Room.Sticks(First_Grab).Put_Down;
       Room.Sticks(Second_Grab).Put_Down;
       Think_Time := Random.Random_Int(10);
       Room.Head_Waiter.Report_State(Who_Am_I, Thinking, Think_Time);
       DELAY Duration(Think_Time);
    END LOOP;
    T= 21  Eddy Dijkstra      Thinking 7 seconds.
    T= 21  Moti Ben-Ari       First chopstick 2
    T= 21  Bjarne Stroustrup  Second chopstick 5
    T= 21  Bjarne Stroustrup  Eating 6 seconds.
    T= 23  Barb Liskov        Yum-yum (burp)
    T= 23  Moti Ben-Ari       Second chopstick 3
    T= 23  Barb Liskov        Thinking 6 seconds.
    T= 23  Jean Ichbiah       First chopstick 4
    T= 23  Moti Ben-Ari       Eating 4 seconds.
    T= 27  Bjarne Stroustrup  Yum-yum (burp)
    T= 27  Bjarne Stroustrup  Thinking 1 seconds.
    T= 27  Jean Ichbiah       Second chopstick 5
    T= 27  Jean Ichbiah       Eating 5 seconds.
    T= 27  Moti Ben-Ari       Yum-yum (burp)
    T= 27  Moti Ben-Ari       Thinking 10 seconds.
    T= 28  Eddy Dijkstra      First chopstick 1
    T= 28  Eddy Dijkstra      Second chopstick 2
    T= 28  Eddy Dijkstra      Eating 9 seconds.
    T= 29  Barb Liskov        First chopstick 3
    T= 32  Jean Ichbiah       Yum-yum (burp)
    T= 32  Barb Liskov        Second chopstick 4
    T= 32  Jean Ichbiah       Thinking 1 seconds.
To illustrate the importance of separating machine-independent from machine-dependent parts of a program, we have developed several versions of the package body for Room. The window version uses two reusable Ada packages. One package is called Screen and controls an ANSI-compatible terminal display (such as a VT100 or an IBM-PC under ANSI.SYS control); the other, larger package is called Windows, which manages output-only, non-overlapping windows on an ANSI display.
Figure 5 shows the relevant part of the new system structure. These packages are written in pure, portable Ada, but programs using them display output correctly only on an appropriate terminal. Only the body of the Room package requires modification to produce this version; the rest of the system is entirely unchanged and does not even need to be recompiled. Given Ada's standard library management facility, and assuming that Screen and Windows are already compiled into the library system, the new version is produced simply by compiling the new body of Room and relinking the system.
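The paper does not reproduce the Screen package, but its essence is small: an ANSI-compatible display is driven entirely by in-band escape sequences. The Java fragment below is our illustration of the two sequences such a package fundamentally needs (clear screen, position cursor); it is an assumption-laden sketch, not the authors' Ada code.

    // Minimal ANSI/VT100 terminal control: ESC [ 2 J clears the screen,
    // ESC [ row ; col H moves the cursor (rows and columns are 1-based).
    public class Screen {
        private static final char ESC = 27;

        public static void clearScreen() {
            System.out.print(ESC + "[2J");
        }

        public static void moveCursor(int row, int col) {
            System.out.printf("%c[%d;%dH", ESC, row, col);
        }

        public static void main(String[] args) {
            clearScreen();
            moveCursor(5, 10);
            System.out.println("Dijkstra: Thinking");
        }
    }

A window package then reduces to bookkeeping: remembering each window's origin and size, and translating window-relative coordinates into cursor-positioning calls like moveCursor above.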
Figure 6 shows a snapshot of the screen output during a run of this version. The action is more heavily animated; each philosopher's state is displayed in a separate window, and the screen resembles the table in Figure 1.
We have developed a version of Room using the proprietary color graphics library supplied with a specific compiler for the IBM-PC; we are working on yet another version using the Ada binding to X-Windows. These versions are considerably more machine-dependent than the first two, yet the machine dependency is encapsulated in a single package body (Room) and creating the new versions requires only re-writing and re-compiling this package and re-linking the system. Exposing students to these different versions teaches an important lesson in system design: The philosophers concentrate on nourishing mind and body, remaining oblivious to the world outside their dining room.
An important goal of this project was to demonstrate that the resulting program is portable, that is, it will compile and execute correctly regardless of compiler or execution hardware. We had twelve Ada compilers, from six different vendors, readily available for six different systems. The line-by-line version eventually compiled and produced the desired output using all twelve systems. The window version compiled correctly on all systems and executed correctly on all but the Macintosh and IBM 4381; there is no ANSI-compatible display option on the latter two systems.
During the testing, only one significant change was necessary to get the program to execute correctly on all systems, namely the use of a compiler directive (pragma, in Ada terminology) to force a compiler-independent elaboration order on the two mutually-importing packages. The only unsuccessful test was carried out on a particular IBM-PC family compiler, which generated executable code that "hung" the computer. The test exposed a bug in the compiler's memory allocation scheme.
To broaden the scope of the portability tests, we posted a file containing the Portable Diners source code to the Ada newsgroup on the Internet, requesting that readers test the program on their favorite compilers and report back by electronic mail. Some thirty responses were received from academic and industrial sites in North America and Europe; adjusting for multiple respondents using the same compiler, fifteen more compiler/computer pairs could be added to the list of successful tests of either the window or line version.
The list of successful tests, by compiler vendor, is given below. The twelve companies in question represent most of the major Ada suppliers, especially suppliers of compilers to the academic world.
The correct behavior of Portable Diners under Macintosh, MS-DOS, VM, and VMS operating systems, not to mention many versions of UNIX, is possible only because the concurrent programming constructs were included in the programming language, instead of in a system-dependent library package.
Portable Diners is also a fairly small program. The line-oriented version, consisting of Main and the packages Room, Chopsticks, and Philosophers, is about 100 statements long, not including 40 statements of general-purpose packages for input/output instantiation and random-number generation, because these packages can be assumed to have been pre-compiled into the library.
The window-oriented version is only about 20 statements (40 lines) longer; the difference is in the more elaborate head waiter task. We do not count the packages Screen (25 statements) and Windows (150 statements), again because these are general-purpose packages assumed to be in the library already.
The entire system, including the reusable packages, is available from the author by Internet mail or on diskette. The system is included in the government-sponsored Ada Software Repository, and has already received wide Internet distribution. Several compiler vendors are considering it for inclusion in their demonstration libraries.
That such an interesting program can be written so portably and compactly is a commentary on the power of using reusable, pre-compiled components, and also on the benefit of including concurrent-programming constructs in the programming language. Exposure of students to programs like this is a valuable part of their educational experience.
Since this paper appeared in 1992, Ada 95 has come on the scene. An Ada 95 version of Portable Diners is being distributed as part of the demonstration library with the GNU Ada 95 compiler available by anonymous ftp. A description of the program is also published in Chapter 15 of [Feldman 96].
Much information on Ada and Ada 95 is available on the World Wide Web, starting from Ada Resources for Educators and Students
US 20010013061 A1
Using a music oriented Web site, a "student" user requests a tutorial or tour of a musical artist or genre on the World Wide Web portion of the Internet. The "expert" user peruses the "student" user's personal music library and creates a playlist for that library to assist in further understanding of the music by the "student" user. The playlist is transferred to a server which generates a command file. This command file is sent to the "student" user to control various multimedia components according to the "expert" user's selection. This tutorial may be accompanied by the "expert" user's personal commentary on his/her selections.
1. A system for accessing, over a wide area network, multimedia equipment for reproducing multimedia information recorded on data storage media, comprising:
means for generating a list of contents of the multimedia information, the list of contents being modified to include only user selected multimedia information;
means for converting the modified list of contents to at least one command for controlling the multimedia equipment; and
means for controlling the multimedia equipment based on said one command, wherein the user selected multimedia information is reproduced on the multimedia equipment based on the modified list of contents.
2. The system according to claim 1
3. The system according to claim 1
4. The system according to claim 1
5. The system according to claim 1
6. A system for sharing, over a wide area network, multimedia information recorded on a data storage medium in multimedia equipment, comprising:
means for reading the multimedia information from the data storage medium in the multimedia equipment of a first user;
means for transferring the read multimedia information to a second user over the wide area network; and
means for reproducing the transferred multimedia information in the multimedia equipment of the second user.
7. The system according to claim 6
8. A method for accessing, over a wide area network, multimedia equipment for reproducing multimedia information recorded on data storage media, said method comprising the steps of:
generating a list of contents of the multimedia information, the list of contents being modified to include only user selected multimedia information;
converting the modified list of contents to at least one command for controlling the multimedia equipment; and
controlling the multimedia equipment based on said one command, wherein the user selected multimedia information is reproduced on the multimedia equipment based on the modified list of contents.
9. The method according to claim 8
10. The method according to claim 8
11. The method according to claim 8
12. The method according to claim 8
The present invention is related to network communications and, in particular, to a method and system for allowing users to access and/or share personal media libraries, including multimedia collections of audio and video information, via a wide area network or a group of networks, i.e., the Internet, for example.
One cannot disagree that appreciation of music is enhanced through greater understanding of the performing artists, as well as of the music itself. In most cases, music experts offer invaluable information on a particular music piece, genre or artist, which is not widely known by the public. Based on the music collection owned by an average user, the experts may reveal to the user a different listening experience by arranging the pieces to play in a particular order and by providing a personal commentary accompanying this arrangement.
The average user, however, typically has no access to this tailor-made expert information. Namely, the user may own a number of Compact Disks (CD) with classical music, for example, and he or she listens to these CDs in random order. Although the pieces in the user personal library can be researched individually to determine what every one of them represents, the user typically cannot properly digest and synthesize such a piece-meal information to obtain a collection that transcends the user's random listening. Only with the music experts' help can the user achieve that ultimate listening experience by combining individual pieces from various CDs to form a special playlist: it is as if a unique CD or tape were produced for the user by an expert or group of experts. It is possible to obtain such a unique CD by spending a lot of effort in laboriously writing down the titles of each album and sending them to the experts. Or, the experts may be invited to the user's home for advice and coffee. Both alternatives do not appear to be viable or, at best, easily achievable.
It is an object of the present invention to provide access to the contents of multimedia information over the wide area network.
It is another object of the present invention to share the contents of multimedia information over the wide area network.
It is a further object of the present invention to transfer multimedia information over the wide area network.
It is still another object of the present invention to control by a first user a multimedia component in an audio/video/data system of a second user remotely located from the first user.
It is yet a further object of the present invention to reproduce information on a multimedia component in the audio/video/data system of the first user according to a playlist compiled by the second user.
These and other objects, features and advantages are accomplished by a method and system for accessing, over a wide area network, multimedia equipment for reproducing multimedia information recorded on data storage media. According to the present invention, a list of contents of the multimedia information is generated and modified to include only user selected multimedia information. The modified list of contents is converted to at least one command for controlling the multimedia equipment. The multimedia equipment is then controlled based on this command, wherein the user selected multimedia information is reproduced on the multimedia equipment based on the modified list of contents.
In accordance with one aspect of the present invention, the list of contents is generated by a first user and is transferred via the wide area network to a second user. The second user modifies the list of contents, wherein the modified list of contents is transferred via the wide area network to the first user for reproducing the multimedia information only as selected by the second user.
In accordance with another aspect of the present invention, the list of contents is generated and modified by the first user. The modified list of contents is then transferred via the wide area network to the second user for reproducing the multimedia information only as selected by the first user.
The above-mentioned as well as additional objects, features and advantages of the invention will become readily apparent from the following detailed description thereof which is to be read in conjunction with the accompanying drawings, in which:
FIG. 1 is a block diagram of the system for providing a remote access of multimedia information over the Internet;
FIG. 2 is a functional flowchart for providing remote access of multimedia information over the Internet;
FIG. 3 is a block diagram of the system for sharing multimedia information between two Internet users in accordance with another aspect of the present invention;
FIG. 4 is a functional flowchart for sharing the multimedia information between two Internet users;
FIG. 5 is a functional flowchart for transferring data between two users in accordance with yet another aspect of the present invention.
In all Figures, like reference numerals represent the same or identical components of the present invention.
As a general overview, the present invention allows the user of any video/audio/data equipment to receive an expert's advice on how to arrange the user personal multimedia library for reproduction of information in multimedia equipment in accordance with the expert's advice. This advice—in a form of an on-line tutorial accompanying the suggested order of the audio/video/data reproduction—is obtained without leaving the confines of the user's living room with the minimum of effort involved.
The invention will now be described in detail with reference to the accompanying drawings. FIG. 1 is a block diagram of the system for providing a remote access of multimedia information over the Internet. Shown in FIG. 1 is the Internet 10, which is a group of interconnected networks with various servers attached to those networks for providing information to users (clients) on the Internet, as well known to people skilled in the art of the network communications. Via the Internet, users around the world communicate with each other, access various information in databases, receive from those databases (download) information for personal use, etc. The World Wide Web (WWW) is probably the most interesting and widely used section of the Internet containing graphics images in addition to text.
As shown in FIG. 1, connected to the Internet 10 is representative client workstation 26 (hereinbelow referred to as Student). The reason for referring to this workstation as Student will become clear in connection with the explanation of the system operation hereinbelow. Student 26 includes audio/video (A/V) system 22 that may contain one or several interconnected multimedia reproduction devices, such as a CD player, a Video Tape Recorder (VTR), a Digital Video Disk (DVD) player, a Digital Audio Tape (DAT) player, etc.
Further included in Student 26 is a general purpose computer, such as a ubiquitous personal computer (PC), or intelligent audio/video (A/V) receiver 20. Either one of these devices is communicatively coupled to audio/video (A/V) system 22 for controlling the operation thereof.
PC/Intelligent A/V receiver 20 is attached to the Internet via Network Interface Card (NIC)/modem 18. That is, PC/Intelligent A/V receiver 20 establishes a node—via NIC/modem 18—on a particular network, which is a part of the Internet. The NIC serves as the interface for PC/Intelligent A/V receiver 20 by setting up a communications path with users of various networks (via the Internet) in conformance with the Internet protocol. Alternatively, the dial-up modem may be used for logging on to the network by following the proper communications protocol, as well known in the art.
At a geographical location that may be remotely located from Student 26, be it several miles or several thousand miles apart, another client workstation is located. This client workstation is referred to as Expert 24, as shown in FIG. 1. Again, the reason for this terminology will become obvious following the description of the system operation hereinbelow. Expert 24 has a general purpose computer (PC 12) and NIC/modem 14, that are similar to the PC and NIC/modem of the Student configuration. Similar to the above-described setup in Student 26, the Internet connection is achieved via PC 12 and NIC/modem 14.
Further shown in FIG. 1 is Music Web server 16. The server is typically a fast-processing computer (a mid-range, a mainframe, multiprocessors, etc.) having a fast access to a local or remote database. Music Web server 16 maintains a music site on the WWW accessible by such client stations as Student 26 and Expert 24, among others. As known in the art, a Web site may have a title page as well as several additional pages which are optional, along with Hypertext Transfer Protocol (HTTP) links to various other Web sites, for example. The music Web site maintained by Music Web server 16 provides the database collection of titles for CDs, video tapes, DVDs, etc. That is, the database stores titles of songs, movies, games, etc. recorded on various data storage media (analog or digital) and reproduced in audio/video/data system, such as A/V system 22, for example.
The system operation will now be described with reference to the sequencing flowchart of FIG. 2. Each step, as summarized in FIG. 2, will be explained in detail, whose understanding might be facilitated by referring to the block diagram of FIG. 1.
In step 200, Student 26 requests a tutorial from the Music Web site. In particular, let it be assumed that the user has in his CD changer (such as a 200 CD changer produced by Assignee of the present invention) of A/V system 22 multiple CDs with various recordings thereon. From his multiple CDs in the CD changer, the user would love to listen to a collection of jazz songs, as compiled by the on-line music expert. Using the personal computer and modem, he logs onto the Internet to obtain such a compilation. The Internet log-on connection may occur through proprietary content-providers, such as America OnLine® or CompuServe®, or through service providers without any proprietary content but serving as a gateway to the Internet, such as Erol's®, for example.
After being linked to the Internet, the user “surfs” to the Music Web site, either by entering the appropriate domain name (starting with HTTP) or by using any of the commercially available “Web” browsers. As known in the art, a “Web” browser provides Graphical User Interface (GUI) access to network servers. At the home page (or any other page) of the Music Web site, the user requests a “music tutorial” by pointing and clicking on that option. A mouse, for example, or any other conventional input device may be used for navigating through the Internet and the Web site. The “music tutorial” option, provided by the Music Web site, is displayed on the computer screen and is selected by the user. Hence, the user is referred to as “Student,” as shown in FIG. 1 and referred to throughout the description.
Next, Expert 24 obtains Student's media library contents in step 202. In this step, Music Web server 16 sends a command to the CD changer of A/V system 22 via the PC of PC/Intelligent A/V system 20. For control and file transfer between these devices, any of the file transfer protocols (known in the art as FTP) may be used, as long as the FTP is supported by the Internet standard. The command issued by Music Web server 16 requires the PC to read Table of Contents (TOC) of each disk in the CD changer. Namely, the PC reads the TOC of each disk and sends this data—using the FTP—back to Music Web server 16.
As known in the art, the TOC on each disk is a special recording area allocated for various “house-keeping” non-informational data about the disk, including, among other things, the number of tracks and the length of each track. The TOC may be easily analogized to a File Allocation Table (FAT), for example, recorded on computer floppy disks. As also known in the art, information on a CD may be identified by the TOC data. That is, the number of tracks and the length of each track recorded in the TOC area uniquely identify the title of the CD and the name of each track thereon: the TOC data for the Tupac Shakur CD is different than the TOC data for the Rachmaninoff CD.
As a result of this “fingerprint” data, the TOCs read from each disk in the CD changer can be matched against the corresponding title and name of the track stored in the database of Music Web server 16. It is understood, of course, that such information, namely, CD titles and names of the tracks corresponding to the TOC data, has been pre-loaded in a form of a look-up table, for example, into the database. In response to the read command, the PC obtains the TOCs from the CDs in the CD changer and transfers this data to Music Web server 16. Using the database, the TOC data from each CD is matched against its title and the name of each track on that CD. The Student's library file, comprising a list of the CD titles and track names that are currently in the CD changer, is thus generated by Music Web server 16.
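The patent does not specify how the TOC data is matched against the database, but a plausible scheme -- loosely modeled on the CDDB-style disc IDs in common use at the time -- hashes the track count and track offsets into a single lookup key. The Java sketch below is our illustration only; the method names and sample values are invented.

    import java.util.List;

    // Hypothetical TOC "fingerprint": fold the track count, each track's
    // start offset, and the lead-out offset into one key. A server-side
    // table mapping key -> (album title, track names) identifies the disc.
    public class TocFingerprint {

        static long discId(List<Integer> trackOffsets, int leadOut) {
            long h = trackOffsets.size();       // number of tracks
            for (int off : trackOffsets) {
                h = h * 31 + off;               // mix in each start offset
            }
            return h * 31 + leadOut;            // total disc length matters too
        }

        public static void main(String[] args) {
            // Offsets in CD frames (75 per second); values are invented.
            long key = discId(List.of(150, 18064, 42515, 60410), 158953);
            System.out.println("lookup key = " + key);
            // server side (sketch): titles = database.get(key)
        }
    }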
The generated Student's library file is then transferred to another user (or users), referred to as Expert 24, because a music connoisseur is staffing this computer station. There are many alternatives as to how the music connoisseur finds out that the list is waiting to be transferred. The most obvious method is for the music connoisseur to periodically log on to the Internet and access the Music Web site. Once he or she has access to the Music Web site, the music connoisseur selects the “file transfer” option on the home page. The Student's library file is then downloaded to Expert 24.
Another alternative is to notify the music connoisseur of the Student's library file by an audible tone or the like, similar to the e-mail notification as currently employed by many computer programs. Naturally, several other alternatives will become obvious to those skilled in the art following this disclosure of the present invention.
Regardless of how Expert 24 determines that Music Web server 16 generated the Student's library file with the request for the tutorial session, this library file is transferred, via the FTP, to PC 12 using NIC/modem 14 as the communications interface device.
Following the file transfer operation, in step 204, Expert 24 creates a playlist based on the Student's media library contents. Expert 24 views the library contents on the computer monitor, for example, and selects the CD titles or track names via the input device. Alternatively, the Student's media library contents can be printed out on a printer, if available, as desired by Expert 24. After reviewing the Student's library contents, Expert 24 arranges selected songs, video, or other information for reproduction in a particular order to expertly introduce Student 26 to classical music, for example. Using the above example, the music connoisseur selects jazz from the Student's library and arranges the CDs and/or individual songs on the CDs for reproduction in A/V system 22 in the particular order.
The selected songs or CD titles are saved in a file (as ASCII code, for example), containing a playlist in the requested genre intended for Student 26. The thus created playlist is then transferred from PC 12 to Music Web server 16 via NIC/modem 14.
In step 206, the playlist is translated into a command script file. That is, after receiving the playlist file, Music Web server 16 uses the Common Gateway Interface (CGI) program or other server program to form a command script file from the playlist. The command script file includes a series of commands for controlling A/V system 22 in compliance with a smart control protocol used in multimedia components. For example, the Assignee of the present invention has such a protocol referred to as S-Link™. This protocol provides the complete integration of multimedia components into a single coherent system: the components in this system are automatically configured (e.g., switch to a proper mode of operation) in accordance with the user action. For example, when the user inserts a tape into a VTR, the audio/video receiver changes to the VTR playback mode without any additional user involvement.
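The patent names no concrete script format, so the translation step can only be sketched. Assuming a simple line-oriented script in which each line names a component, a verb, and an argument, the server-side conversion might look like the following Java fragment; the command verbs are placeholders, not actual S-Link commands.

    import java.util.List;

    // Hypothetical playlist-to-script translation. Each playlist entry
    // (disc slot, track number) becomes a few lines the PC/receiver can
    // parse and forward to the CD changer.
    public class ScriptBuilder {

        record Entry(int disc, int track) {}

        static String toScript(List<Entry> playlist) {
            StringBuilder sb = new StringBuilder();
            for (Entry e : playlist) {
                sb.append("CD_CHANGER SELECT_DISC ").append(e.disc()).append('\n');
                sb.append("CD_CHANGER PLAY_TRACK ").append(e.track()).append('\n');
                sb.append("CD_CHANGER WAIT_END\n");   // block until track ends
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.print(toScript(List.of(new Entry(12, 3), new Entry(47, 1))));
        }
    }

On the client side, parsing such a script is the mirror image: read a line, split it into component, verb, and argument, and dispatch to the matching device driver.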
Next, in step 208, the command script file is transferred to Student 26. In particular, using the appropriate FTP, the command script file is sent to PC/Intelligent A/V receiver 20 via NIC/modem 18. PC/Intelligent A/V receiver 20 parses the command script file to obtain a series of commands for controlling A/V system 22.
Finally, in step 210, Student's A/V system 22 is controlled according to these commands. Namely, PC/Intelligent A/V receiver 20 executes the commands to play the CDs in the CD changer, for example, as selected by the music connoisseur. Using the control protocol and without any user involvement, appropriate components of Student A/V system 22 will be activated, and information will be reproduced from various types of data storage media, such as CDs, DVDs, tapes, etc. in response to the playlist compiled by Expert 24.
In another aspect of the present invention, peers may exchange playlists among themselves, as opposed to the music connoisseur sending a playlist to the student as described above. FIG. 3 shows a block diagram of the system for sharing multimedia information between two Internet users, for example. Since identical or similar elements in FIGS. 1 and 3 are designated with the same reference characters, description of those elements in FIG. 3 which were previously described with reference to FIG. 1 will be omitted to avoid redundancy.
FIG. 3 is similar to FIG. 1, except that in FIG. 3 both Internet users have an A/V system and a PC/Intelligent A/V receiver. In particular, User-B 30 of FIG. 3 has PC/Intelligent A/V receiver 20 connected to the Internet 10 via NIC/modem 18. PC/Intelligent A/V receiver 20 controls A/V system 22, as described above. Similar to this setup, User-A 28 has A/V system 22′, PC/Intelligent A/V receiver 20′ and NIC/modem 18′ for connection to the Internet 10. As previously explained, Music Web server 16 has the database of music titles, track names, etc. for matching with the TOC data.
In operation, as shown in FIG. 4, User-A 28 requests a playlist from User-B 30 in step 400. If User-B 30 desires to share the playlist in step 402, then he or she sends the playlist to Music Web server 16 via PC/Intelligent A/V receiver 20 and NIC/modem 18. Music Web server 16, using the appropriate server program, translates the playlist into a command script file in step 404. In step 406, the command script file is transferred to User-A 28 via the Internet and NIC/modem 18′. User-A's A/V system 22′ is controlled, in step 408, in accordance with the command script file. That is, information is reproduced in step 410, based on the User-B's playlist, from the various recording media, such as CDs, DVD, tapes, etc., under the control of PC/Intelligent A/V receiver 20′.
In yet another aspect of the present invention, actual recording information, not only the playlists, may be exchanged between two Internet users. As illustrated in the sequencing flowchart of FIG. 5 with reference to the system block diagram of FIG. 3, User-A 28 accesses the Music Web site run by Music Web server 16 and requests multimedia information, such as audio/video/data, from User-B 30 in step 500. If User-B 30 affirmatively responds to this request, PC/Intelligent A/V receiver reads, in step 502, the requested multimedia information from the appropriate recording media in A/V system 22. This information is transferred, via the Internet and under the control of Music Web server 16, to PC/Intelligent A/V receiver 20′ of User-A 28 in step 504. Subsequently, User-B's information, as controlled by PC/Intelligent A/V receiver 20′, is transferred to A/V system 22′ (i.e., any data storage media including disks, tapes, RAM memory, etc.) for reproduction, in step 506, on the appropriate system component.
Throughout the above description, reference was made to PC/Intelligent A/V receiver 20. Either the PC or A/V Intelligent receiver may be used in the present invention. That is, the PC may perform the function of logging on and connecting to the Internet, of accessing the Music Web site, and of controlling the audio/video/data equipment, as described above. Alternatively, the intelligent A/V receiver, controlled by a programmable controller, for example, can replace the PC by providing an access to the Music Web site only and by allowing the user to perform the selection operations as described above. In effect, the intelligent A/V receiver may operate as a dedicated Music Web site access device, in addition to its other functions, to replace the need for the PC.
In addition, personal commentary of the music connoisseur or peer may accompany the playlist to the student/peer. Namely, when the playlist from Expert 24 or User-B 30 is transferred to Music Web server 16, a text file containing the description of the selected information, an opinion on its content, etc. may be attached to the playlist file. This commentary file is created by entering the text into the PC, etc. using any conventional input device, such as the keyboard. The personal commentary then appears on the display screen of the monitor in Student 26 or in User-A 28 to accompany the reproduction of the CD information, for example. This personal commentary -- ranging from objective historical information to subjective opinions -- further facilitates the understanding of the audio/visual material received by the student/peer.
It is understood that while the Internet is used in the above description as the communications network, the example of using the Internet is illustrative only. Any wide area network, as known in the art, having at least two nodes and establishing a communications path between those nodes, that is between the music server and clients, can be used without detracting from the scope and spirit of the present invention.
Having described specific preferred embodiments of the invention with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope or the spirit of the invention as defined in the appended claims.
Feb. 25, 1998 Designers of complex structures -- from toasters to nuclear submarines -- often use computers to construct three-dimensional models electronically. But a snag can occur: The more detailed these models become, the longer it takes to put them in motion on screen.
A Johns Hopkins University computer scientist has developed software that addresses this problem by significantly speeding up the way a computer re-displays a three-dimensional model as it changes position. The program, devised by Subodh Kumar, assistant professor of computer science, also gives designers greater control over the level of detail that appears on screen.
Kumar recently posted a preliminary version of the software, called sLIB (short for "surface library"), on the World Wide Web for free downloading by designers who use the Irix operating system. (A Windows version is being developed.) The program is available at:
The secret to Kumar's software, he says, is in how it handles Non-Uniform Rational B-Spline representations, or NURBS, the mathematical shapes that computers can use to depict curved surfaces.
A computer can put NURBS together to form a three-dimensional representation of the complete object. Kumar's new software speeds up this process when an electronic designer is creating or refining a simple or complex NURBS model.
"This NURBS surface representation is in the computer's memory," explains Kumar. "It's data, just a sequence of bits and bytes that you can keep in a file and send to anybody. But how do you bring it back on screen and manipulate it in three dimensions?"
One common technique is to convert the original model into numerous tiny triangles that, when assembled on the computer screen, look very much like the original shape. Each time the designer clicks a mouse to look at the model from a different point of view, the triangles must be re-displayed in a new way. Kumar's software streamlines this task by generating far fewer triangles and taking several other technological shortcuts. These improvements, he says, "enable us to speed up the whole process of displaying the NURBS models by better than 100 to 200 times over the older techniques."
His software also lets a designer zoom in on a particular part of the model to continuously increase the level of detail visible at that location.
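Neither the article nor the press release gives the algorithm, but the core idea -- spend triangles only where they buy visible detail -- can be sketched. The Java fragment below is our illustration, not sLIB code: it chooses a tessellation density for a curved patch from the patch's projected size on screen, so nearby (or zoomed-in) patches get a finer mesh. All constants are arbitrary assumptions.

    // Illustrative level-of-detail rule: the larger a patch appears on
    // screen, the finer we tessellate it.
    public class PatchLod {

        // radius:   bounding-sphere radius of the patch (world units)
        // distance: eye-to-patch distance (world units)
        // scale:    viewport height in pixels divided by tan(fov / 2)
        static int segmentsPerSide(double radius, double distance, double scale) {
            double pixels = scale * radius / distance;  // approx on-screen size
            int segs = (int) Math.ceil(pixels / 8.0);   // ~8 pixels per segment
            return Math.max(1, Math.min(segs, 64));     // clamp to a sane range
        }

        public static void main(String[] args) {
            System.out.println(segmentsPerSide(1.0, 2.0, 800.0));  // near: fine mesh
            System.out.println(segmentsPerSide(1.0, 50.0, 800.0)); // far: coarse mesh
        }
    }

With n segments per side, a rectangular patch costs roughly 2 * n * n triangles, so halving a patch's on-screen size cuts its triangle budget by about a factor of four -- which is where the large re-display speedups come from.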
While Kumar refines sLIB, he is allowing users of computer graphics systems to download the preliminary version at no charge. "This provides us with a wide user base to test the software," he explains. "It's not just a simple surface-rendering system. It's a whole framework in which you can test your own ideas, plug in your own little piece and see how it behaves."
The Hopkins researcher hopes that his software will someday allow a designer to take visitors on a highly detailed "virtual tour" through the interior of a submarine that exists only inside a computer. The computer model could then guide construction of the real vessel. "My dream is to increase the level of detail you can see on screen infinitely and still continue to display it at interactive speed," he said. "It may sound impossible, but it's more possible than it seems."
Kumar's research has been funded by the National Science Foundation, the Office of Naval Research and the Department of Defense.
Related Web Sites:
Johns Hopkins University Department of Computer Science: http://www.cs.jhu.edu/
Kumar's home page: http://www.cs.jhu.edu/~subodh/
The above story is reprinted from materials provided by Johns Hopkins University.
|
<urn:uuid:e350d9d0-4c53-4027-ac14-59caac4c8d87>
|
CC-MAIN-2013-20
|
http://www.sciencedaily.com/releases/1998/02/980225155918.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00095-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.936299 | 777 | 3.515625 | 4 |
Objects, Abstraction, Data Structures and Design: Using Java 5.0
This revolutionary book intertwines problem solving and software engineering with the study of traditional data structures topics. The book emphasizes the use of objects and object-oriented design. Early chapters provide background coverage of software engineering. Then, in the chapters on data structures, these principles are applied. The authors encourage use of a five-step process for the solution of case studies: problem specification, analysis, design, implementation, and testing. As is done in industry, these steps are sometimes performed in an iterative fashion rather than in strict sequence. The Java Application Programming Interface (API) is used throughout the text. Wherever possible, the specification and interface for a data structure follow the Java Collections Framework.
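As a hypothetical illustration of that last point (this snippet is not taken from the book), code written against the framework's interfaces works unchanged whatever concrete structure sits behind them:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CollectionsDemo {
    public static void main(String[] args) {
        // Program to the interface (List), not the implementation (ArrayList),
        // as the Java Collections Framework encourages.
        List<String> steps = new ArrayList<String>();
        steps.add("specification");
        steps.add("analysis");
        steps.add("design");
        steps.add("implementation");
        steps.add("testing");
        Collections.sort(steps);           // framework-supplied algorithm
        for (String s : steps) {
            System.out.println(s);
        }
    }
}
```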
- Emphasizes the use of objects and object-oriented design
- Provides a primer on the Java language and offers background coverage of software engineering
- Encourages an iterative five-step process for the solution of case studies: problem specification, analysis, design, implementation, and testing
- The Java Application Programming Interface (API) is used throughout
Table of Contents
Chapter 1. Introduction to Software Design.
Chapter 2. Program Correctness and Efficiency.
Chapter 3. Inheritance and Class Hierarchies.
Chapter 4. Lists and the Collection Interface.
Chapter 5. Stacks.
Chapter 6. Queues.
Chapter 7. Recursion.
Chapter 8. Trees.
Chapter 9. Sets and Maps.
Chapter 10. Sorting.
Chapter 11. Self-Balancing Search Trees.
Chapter 12. Graphs.
Appendix A: Introduction to Java.
Appendix B: Overview of UML.
Appendix C: Event-Oriented Programming.
|
<urn:uuid:10de919f-5500-49d2-8cb2-51bf62e10445>
|
CC-MAIN-2013-20
|
http://www.cio.com.au/books/product/objects-abstraction-data-structures-and-design/0471692646/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705958528/warc/CC-MAIN-20130516120558-00025-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.83738 | 834 | 2.671875 | 3 |
IT 212 Computer Fundamentals and Logic
Discusses the basic features of a computer, binary numbers and ASCII codes, and the application of computer logic in the development of algorithms that solve problems in the business world. Also discussed: function-oriented versus object-oriented design.
IT 228 Computer Organization and Architecture
Discusses the organization and architecture of the computer, including the functioning of the cpu, RAM, ROM, Boolean logic, truth tables, and I/O. Issues related to interfacing the computer to a network are covered, as is the role played by the operating system in controlling the hardware.
IT 232 Introduction to Programming in JAVA
The use of Java in performing object-oriented programming (OOP) is discussed, with emphasis on coding algorithms that solve business problems. Also covered: features of the Java language, such as classes, objects, variables, control constructs, etc.
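A hypothetical fragment of the kind such a course covers, shown for flavor only (it is not part of the catalog):

```java
// A small illustrative class combining OOP features named above:
// a class, private state, methods, and a simple control construct.
public class Account {
    private double balance;                // state hidden behind methods

    public Account(double openingBalance) {
        balance = openingBalance;
    }

    public void deposit(double amount) {
        if (amount > 0) balance += amount; // simple control construct
    }

    public double getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account a = new Account(100.0);
        a.deposit(25.0);
        System.out.println("Balance: " + a.getBalance());
    }
}
```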
IT 312 Fundamentals of Networking
Presents a thorough discussion of computer networks and how they function under the direction of a network operating system (NOS). Also covered: the use and installation of NOSs, such as Windows 2000 Server and Linux, and protocols such as TCP/IP and network addressing.
IT 328 Principles of Internetworking
Covers the use of internetworking devices such as routers, gateways, switches, etc., in the construction of an internetwork. Also discussed: related issues such as network security and the methodology followed in developing a network for a small, midsize, or large business.
IT 348 Database
The types and uses of databases are covered. Students learn how to create a “realistic” relational database using software such as Microsoft SQL Server and Microsoft Access.
IT 408 Web Design and Development
IT 422 Client/Server Programming
Discusses Web site development with server side programming, using Active Server Pages or JAVA Server Pages. Also covered: the creation of static versus dynamic web pages.
IT 482 IT Project Development
A discussion of the lifecycle in the design, implementation, and maintenance of a significant IT project implemented in a business environment.
|
<urn:uuid:b8544c0e-6273-4e22-b970-8b5580819860>
|
CC-MAIN-2013-20
|
http://public.elmhurst.edu/it/196237461.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00095-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.877761 | 432 | 3.453125 | 3 |
This paper is concerned with fundamental data structures and their
algorithms. It involves a study of classical and recently discovered
methods, aimed at giving students an awareness of techniques for solving
a diverse range of problems using a computer. Analysis of important
performance characteristics, efficiency and scalability and discussion of
issues pertaining to applicability, adaptation and design will also be
addressed. This is an essential paper for students interested in the art and
science of computer programming.
After passing this paper, students will be able to recognise the general domain of a new problem, and which algorithm (or class of algorithms) to apply in creating a solution. They will know how to decompose an algorithm into its key parts, analyse how each part works and how it combines with the others to create an effective and efficient solution. Equally important, they will be able to recognise the limitations of an algorithm, or situations where particular algorithms fail to work well. Students will understand how to apply their knowledge of existing algorithms to the design and implementation of new ones, thus giving them the potential to make significant contributions to the discipline of computer science.
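By way of illustration (not part of the paper's materials), binary search is a typical case: it solves the sorted-membership problem class in O(log n) comparisons, and recognizing its limitation -- it silently fails on unsorted input -- is exactly the kind of analysis described above.

```java
// Illustrative example: binary search on a sorted array, O(log n).
// Key limitation: on unsorted input it returns wrong answers silently.
public class BinarySearchDemo {
    static int binarySearch(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;    // avoids overflow of (lo + hi) / 2
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;                           // key not present
    }

    public static void main(String[] args) {
        int[] data = { 2, 3, 5, 8, 13, 21 };
        System.out.println(binarySearch(data, 13));  // prints 4
    }
}
```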
COMP203 Programming with Data Structures or
COMP241 Software Engineering Development
Official Timetable Information
Lectures will focus on problem classes and their properties. Specific techniques for solving these problems are presented, predominantly in the form of data structures and their associated algorithms. Students will be called upon to implement computational solutions to selected problems with the aim of giving them practical experience with the details of various algorithms. Additional issues relating to the analysis of algorithms, including efficiency, scalability, adaptation and correctness are also covered, and may be included as part of the practical assessment.
Students are encouraged to sign up for one of the three 1-hour small group tutorial sessions scheduled for each week of the semester. These are informal, unstructured study sessions run by an experienced tutor. They are intended to give students the opportunity to ask questions, and to obtain additional help in understanding lecture material or overcoming problems relating to the assignments.
It is hoped that students will actively participate in the course by freely asking questions and proffering ideas.
Students should expect to spend approximately 16 hours per week on this class (in conformance with school guidelines for Part III courses in computer science).
There is no required textbook for this course. Links to reference material will be provided on the course website, and students are expected to supplement this material on their own (either using the library or other online material).
Harel, David, Algorithmics: the spirit of computing.
Cormen, Thomas H., Leiserson, Charles Eric, Rivest, Ronald L., Introduction to algorithms.
Goodrich, M.T. and Tamassia, R., Algorithm Design.
(see also: algorithmdesign.net)
The machines in computing laboratory R6 are available for students to use in this course. After hours access to G Block requires CARDAX, which can be obtained through the Department of Computer Science main office in G1.21.
Programming assignments must be written in Java. Suitable alternatives for a programming language may be used only with prior approval by the course lecturer. Students may choose to complete their assignments on their machines at home, but are cautioned that their source code solutions must compile and run under the Java and Linux environment of R6 without alteration.
Four assignments, each worth 15% of the final grade, will be issued at roughly equal intervals through the course. Specific due dates for individual assignments will be stated on the assignment specifications.
Assignments must be submitted by the due date. If you have not completed it, turn in what you have done so far. Individual extensions will not be given except for medical circumstances specifically affecting that item of assessment, documented by a medical certificate or counsellor's letter. In the unlikely event that technical problems arise which the instructor considers merit an extension, the due date for the assignment will be extended for the entire class.
A mid-term test will reinforce assessment of lecture material, and will also prepare students for the final exam. The mid-term test will be held in class during the first lecture after the Teaching Recess.
Internal assessment/final examination ratio 2:1
An overall mark of 50% is required for a pass, with a minimum of 35% in the final exam.
Assessment as a percentage of the final grade:
- Four assignments: 15% each, for a total of 60%
- One mid-term test
Class attendance is expected. The course text is not comprehensive, as additional material will be covered in class. You are responsible for all material covered in class.
Follow this link for Academic Integrity information.
Follow this link for information on Performance Impairment.
Student Concerns and Complaints
Follow this link for Student Concerns and Complaints information.
Application for Extension
Follow this link for information on applying for an Extension.
Review of Grade
Follow this link for information on applying for a Review of Grade.
Your attention is drawn to the following regulations and policies, which are published in the University Calendar:
|
<urn:uuid:49ec86d1-7583-43d2-bc58-2788a26ea717>
|
CC-MAIN-2013-20
|
http://www.cs.waikato.ac.nz/genquery.php?linklevel=4&linklist=CS&linkname=All_Papers-1&linktype=report&listby=Paper_Number&lwhere=unique_record_id=32&children=
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711406217/warc/CC-MAIN-20130516133646-00095-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.923746 | 1,056 | 2.828125 | 3 |
VISTA (Visualization Tool and Analyzer)
VISTA provides data retrieval, management, manipulation and
visualization. The philosophy is to access, manipulate and visualize
data with ease. A graphical user interface is provided for first
time and occasional users. A scripting language will be provided
for power users to automate batch production.
Data retrieval is accomplished using a two-tier client server
architecture. The data resides on a server and the bulk of the
application resides on the client. The server can serve data
locally and over the network.
Data management is accomplished using data references. A data
reference points to the location of a data set and describes its
characteristics. For instance, a time series is referred to by a server address,
filename, pathname, time window, and a time interval. Some data
references do not refer to actual data but to the set of data
references and the operations to be performed on them to construct
the data set. This provides transparency to the user. For the
user there is no difference between such virtual data sets and
the actual data sets.
Data references can be aggregated into a Group (Figure
2). The default view on a database file is a Group. Furthermore,
one or more Groups form a Session (Figure
1). A Session can be saved and loaded from a file once created.
The initial Session is created by opening a connection to a server
and directory. The directory of database files then becomes a
Session and each file becomes a Group containing data references.
Data manipulation is done by creating virtual data references
which contain the set of data references and the operations to
be performed. The actual operations on the data are performed
when the data for the reference is requested. Math operations
such as division, multiplication, addition and subtraction are
available between data sets. Period averages, moving averages,
and merges are further examples of data references built from
manipulations of other data sets.
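The deferred-evaluation idea can be sketched as follows; the class names are hypothetical and do not reflect VISTA's actual source:

```java
// Hypothetical sketch of a "virtual data reference": the operation is
// recorded when the reference is built, but the data are only computed
// when getData() is finally called.
interface DataReference {
    double[] getData();
}

final class TimeSeriesRef implements DataReference {
    private final double[] values;
    TimeSeriesRef(double[] values) { this.values = values; }
    public double[] getData() { return values; }       // an actual data set
}

final class SumRef implements DataReference {
    private final DataReference left, right;
    SumRef(DataReference left, DataReference right) {
        this.left = left; this.right = right;           // no work done yet
    }
    public double[] getData() {
        double[] a = left.getData(), b = right.getData();
        double[] sum = new double[a.length];
        for (int i = 0; i < a.length; i++) sum[i] = a[i] + b[i];
        return sum;                                     // computed on demand
    }
}
```

To a caller, a SumRef behaves exactly like a stored data set; the addition happens only when getData() is invoked, which is the transparency described above.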
Data visualization is done by two-dimensional plots (Figure
3). Examples of such plots are time series plots and scatter
plots. Zooming in and out and paging while zooming are some of
the tools that are available. Printing is available in gif and
postscript formats. A user has control of the attributes of each
element in the graph. The user can change the text, font, size,
color and background color of the title. Most of these attributes
can be saved to a file and applied to subsequent plots. Data
can also be displayed and manipulated in tabular format (Figure 4).
A graphical user interface is used to display a group of data
references. The GUI is a view onto the application and does not
contain information about the application other than the way
the application desires to be displayed. This separation lets
support of undo/redo commands and the recording of macros which
can be replayed on different sessions.
Scripting is an efficient way of accomplishing repetitive
tasks. Scripting would use the same application as the GUI and
could use some of the GUI components.
This application was done in Java. Java was chosen for ease
of development and wide industry support. This ensures long-term
support and multiplatform portability. Java is ideal for a client-server
architecture. One of the disadvantages of Java is its less efficient
use of memory and CPU resources. Just-In-Time Compilers and better
virtual machine implementations are bringing the efficiency of
Java closer to traditional languages such as C++ and Fortran.
The client side GUI is in Java and will run as-is on platforms
supporting Java. This effort was made to allow the client to
run embedded in a web browser. This will enable anyone on the
Internet with a web browser to use the latest version of the
client and manage and visualize the data in the form that they
desire. The server side is written using Java, FORTRAN and C
languages and as such will be made available and supported on
Solaris and Windows NT platforms. The database used to store
data is HEC-DSS, however all the details of database specific
access are isolated on the server side. This makes the client
unaware of the actual mechanisms of data storage. Object-oriented
analysis and design techniques with an evolutionary prototype
approach were used throughout this project.
The concept of client-server is new in the modeling world.
Many new concepts are being tried here for the first time. Other
than a few minor glitches, work has progressed to the implementation
and distribution of the first beta version of VISTA. A second
beta version with flag editing and writing data back to the server
will be made available in the early part of June.
Some ideas for the future are :
- Improving the 2D graphics by using the latest library from
- Improving the postscript printing to provide production quality
- Graph editing tools
- Scripting language for batch processing of data
- A schematic as an alternative view of a Group
- Report generation for the automatic formatted generation
- Animation facilities for easy set up of animation of time
- Online context sensitive help for the application
- Security and access control levels as fine as individual
A first beta version of VISTA was released to the Modeling
Section in April 1998. A second beta version of VISTA with more
features will be released in early June 1998. A first version
of VISTA with all the features will be released in 1999.
Goto: 1998 Annual
Goto: Annual Reports
|
<urn:uuid:c01ddb59-63ce-42d6-9597-064ad627edb7>
|
CC-MAIN-2013-20
|
http://modeling.water.ca.gov/delta/reports/annrpt/1998/chpt8.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382917/warc/CC-MAIN-20130516092622-00069-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.902141 | 1,166 | 2.8125 | 3 |
Some of the key features of this books are listed below:
- Code with comments are provided throughout the book to illustrate how the various features of the language are put together to accomplish specified tasks
- Case Studies at the end of the chapters illustrate real-life applications using C
- Programming Projects discussed in the appendix give insight on how to integrate the various features of C when handling large problems
- “Just Remember” sections at the end of each chapter list helpful hints and possible problem areas.
- Guidelines for developing efficient C programs are given in the last chapter, together with a list of some mistakes that a less experienced C programmer could make.
|
<urn:uuid:4aa84c3d-930f-4aa7-921a-af1fadcd5371>
|
CC-MAIN-2013-20
|
http://www.freeebooktemple.net/2011/11/programming-in-ansi-c-by-balaguruswamy.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706631378/warc/CC-MAIN-20130516121711-00056-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.913613 | 133 | 3.03125 | 3 |
The students will be acquainted with basic principles and methods in modern operating systems and how they are organized. This will show how a computer can optimize the use of its resources. This knowledge shall help the student in the evaluation, use and maintenance of operating systems.
System calls, processes and threads, how they can be synchronized and how they can communicate (a short synchronization sketch follows this topic list).
CPU scheduling algorithms.
Memory management: Virtual memory, swapping, paging and segmentation.
File systems: Implementation, backup, consistency and performance.
I/O systems: Polling, interrupts and DMA; interrupt handlers, drivers, the device-independent layer, disk systems and timers.
Deadlocks: Detection and recovery, prevention and avoidance.
The OS in a multimedia context.
Multiprocessor systems and virtualization.
Security: Cryptography, authentication, attacks from inside and outside, protection mechanisms, trusted systems.
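A minimal sketch of the first topic, assuming Java for illustration (the course description does not prescribe a language here): two threads update a shared counter, synchronized through a monitor lock.

```java
// Two threads synchronizing on a shared counter with a monitor lock.
public class Counter {
    private int value = 0;

    // Without "synchronized", concurrent increments could be lost.
    public synchronized void increment() {
        value++;
    }

    public synchronized int get() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        final Counter c = new Counter();
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) c.increment();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());   // reliably 200000 with synchronization
    }
}
```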
Lectures, group work, laboratory work and exercises.
Written exam, 4 hours
Alphabetical Scale, A(best) – F (fail)
Graded by course instructor(s).
Tanenbaum: Modern Operating Systems. 3rd edition. ISBN-10: 0-13-600663-9
|
<urn:uuid:8475a143-00d3-47ee-ac51-ff564aa92be3>
|
CC-MAIN-2013-20
|
http://english.hig.no/course_catalouge/student_handbook/2010_2011/courses/avdeling_for_informatikk_og_medieteknikk/imt2282_operating_systems
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706631378/warc/CC-MAIN-20130516121711-00013-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.781928 | 309 | 3.359375 | 3 |
Fundamentals of Computer SystemsFundamentals of Computer Systems is about two related areas of knowledge.
First is digital logic, which concerns the design of circuits to implement logic functions using standard components such as AND-gates, OR-gates, and inverters. The circuits might be used to control the flow of data within a computer, or the processing of the data (e.g., arithmetic operations), or to control the overall action of a computer. Students will learn how to specify logic functions precisely, to manipulate formal expressions, and to implement them efficiently. They will learn how to design the basic building blocks, including the control, of modern digital computers. Both combinational and sequential circuits will be covered.
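For a flavor of the combinational part -- a hypothetical illustration, not course material -- a one-bit full adder can be expressed directly with AND, OR, and XOR operators:

```java
// Illustrative sketch: a one-bit full adder built from logic operators.
//   sum      = a XOR b XOR carryIn
//   carryOut = (a AND b) OR (carryIn AND (a XOR b))
public class FullAdder {
    static boolean[] add(boolean a, boolean b, boolean carryIn) {
        boolean sum = a ^ b ^ carryIn;
        boolean carryOut = (a && b) || (carryIn && (a ^ b));
        return new boolean[] { sum, carryOut };
    }

    public static void main(String[] args) {
        boolean[] r = add(true, true, false);
        System.out.println("sum=" + r[0] + " carry=" + r[1]); // sum=false carry=true
    }
}
```

Chaining such adders bit by bit gives a ripple-carry adder, one of the basic building blocks the course refers to.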
The second part of the course involves the structure of digital computers. Focussing our attention on modern RISC architecture, we will discuss the functional blocks such as the arithmetic unit, register files, and memory. Single-cycle and multiple-cycle implementations will be presented and then students will be introduced to the concept of pipelining. They will learn the basics of caches and virtual memory. Machine language programming is a feature of the course. Main memory systems, currently DRAM, will be discussed as well as the operation of magnetic disk drives. Some aspects of I/O will also be introduced.
|
<urn:uuid:258d1145-47eb-481d-9d0b-1b3f814487b3>
|
CC-MAIN-2013-20
|
http://www.ee.columbia.edu/misc-pages/csee_w3827.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382261/warc/CC-MAIN-20130516092622-00018-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.918836 | 267 | 3.828125 | 4 |
Author : Harry R. Lewis & Larry Denenberg
Edition : FIRST EDITION
Publisher : Addison Wesley
ISBN - 10 Number : 067339736X
ISBN - 13 Number : 978-0673397362
BOOK Length : 509 pages
BOOK File Format : PDF
BOOK Language : English
BOOK Description :
Using only practically useful techniques, this book teaches methods for organizing, reorganizing, exploring, and retrieving data in digital computers, and the mathematical analysis of those techniques. The authors present analyses that are relatively brief and non-technical but illuminate the important performance characteristics of the algorithms. Data Structures and Their Algorithms covers algorithms, not the expression of algorithms in the syntax of particular programming languages. The authors have adopted a pseudocode notation that is readily understandable to programmers but has a simple syntax.
<urn:uuid:ddf64a31-74e2-4a68-8b64-2fbc1ee4b4b5>
|
CC-MAIN-2013-20
|
http://gate-study-material.blogspot.com/2011/12/data-structure-and-their-algorithms-by.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708664942/warc/CC-MAIN-20130516125104-00056-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.756245 | 216 | 2.75 | 3 |
Graphical Algorithm Modeling
One interesting innovation that is being employed in conjunction with CEP platforms is the ability to implement new algorithms graphically. Graphical programming has always been a challenging area. Using graphical development environments to develop new programs on top of traditional languages, it can take as much time and knowledge as simply typing in the text of the language syntax. However, graphical modeling tools have been very successfully used in conjunction with CEP platforms. Modeling state flow and rules in an event-based system is well suited to graphical abstractions (see Figure 4).
As well as graphically modeling the logic inside their algorithms, today's tools give the traders the ability to visualize, in real time, all runtime activity once their algorithm is running. Real-time "dashboards" can display representations of the changing real-time variables within the algorithms, with automatic alerts when complex conditions or exceptions are detected. Dashboard design studios and runtime rendering frameworks act as a complete design and deployment environment with a wide range of visual objects, including meters, scales, tables, grids, bar and pie charts, along with trend and x-y charts, all of which change dynamically as events occur in real time (Figure 1 shows an example of a deployed dashboard). Elements are accessible through a design palette from which the objects can be selected, placed on a visual canvas, and parameterized. This capability removes the reliance on the technical development team traditionally required for the creation and adaptation of trading strategies.
One question that is occupying the minds of many with an interest in algorithmic trading is: "Will this ultimately replace the trader?" The answer is no, for now. Algorithms have expanded the capabilities of the trader, making each trader much more productive. It still falls to humans to devise new algorithms by analyzing, with computer help, opportunities in the changing market.
Algorithmic trading technology will only begin to replace humans if algorithms are actually devised, developed, tuned, and managed by other algorithms. There are already some techniques being deployed to this end.
One approach is the automatic selection of an appropriate algorithm to use in a particular circumstance, based on events occurring in the market at that point.
Another approach is the use of "genetic" algorithms, whereby a large number (potentially thousands) of variants of an algorithm are created, each with slightly different operating parameters. Each variant can be fed with real market data, but rather than actually trading, can calculate the profit or loss it would be making if it were live in the market. Thus, the most profitable algorithm variants can be swapped live into the market on a continuing basis.
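The selection loop behind that idea can be sketched roughly as follows. This is a hypothetical illustration only; real CEP platforms express it over event streams rather than arrays:

```java
import java.util.List;

// Hypothetical sketch of "paper trading" selection: every variant sees the
// same market events, scores its would-be profit, and the best candidate
// becomes eligible to be swapped into live trading.
public class VariantSelector {
    interface Variant {
        void onMarketEvent(double price);   // update hypothetical position
        double hypotheticalPnL();           // profit/loss had it traded live
    }

    static Variant best(List<Variant> variants, double[] prices) {
        for (double p : prices)
            for (Variant v : variants)
                v.onMarketEvent(p);         // all variants see identical data
        Variant best = variants.get(0);
        for (Variant v : variants)
            if (v.hypotheticalPnL() > best.hypotheticalPnL())
                best = v;
        return best;                        // candidate for live promotion
    }
}
```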
In all of these approaches, Complex Event Processing offers a compelling platform for the creation and management of trading algorithms. The promise of CEP is in providing a powerful platform to enable even the nonprogrammer to encode an event-based algorithm. This year, we will see increased adoption of this approach.
Algorithmic trading is just the first of many exciting applications of CEP; in the financial markets, use in risk management and compliance are the obvious next steps. As we move into 2007, CEP will continue to revolutionize trading on the capital markets as we know it.
|
<urn:uuid:aa0b7aa3-f1da-4778-ac47-8eabe515e47a>
|
CC-MAIN-2013-20
|
http://www.drdobbs.com/parallel/algorithmic-trading/197801615?pgno=4
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383508/warc/CC-MAIN-20130516092623-00018-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.942398 | 645 | 2.53125 | 3 |
write a C# program that can add up to 10 vectors. Ask the user how many vectors they want to add at the beginning. then calculate the sum of the vectors. and the direction of the resultant. Make sure you allow entering the magnitude and direction in degrees. YOU CAN EARN AN EXTRA 5 POINTS BY...
Discuss file signatures and their use for forensic analysis
Discuss how you can check what kind of Internet site a suspect has visited using a Windows machine; the suspect may use some tools to hide her activities
1. CheckPoint: Interfaces and Communication Messages (Due Day 3) Understanding object-oriented methodologies is often difficult. You already understand that object-oriented analysis and design emulates the way human beings tend to think and conceptualize problems in the everyday world....
Network Security and National Security Policies (word count 100) Can Secret and Confidential information be discussed over a cell phone or land line? Why or Why not? What would be if any security violations?
Network Security Information Security Audits (word count 250) Which of the following would be part of an bi-annual corporate audit (see a-e) and what type of information would be gathered including which polices if any would apply? (see attachment) a)A review of background...
I submitted this question yesterday and gave ample time to answer it and you said you couldn't and now you are saying you need more time. I would like another expert to look at the question and stop giving me the run around.
What kind of databases and database servers does MySpace use? Why is database technology so important for a business such as MySpace? How effectively does MySpace organize and store the data on its site? What data management problems have arisen? How has MySpace solved or attempted to solve...
Week 8/Task 4 Readings: Kroenke Text Book (Experiencing MIS), Chapter 8 (pp. 210-2236) & Chapter Extension 15(pp. 547-557) Reread the MIS in Use 8 on page 205. In this exercise, you will compare and evaluate these two publications strategies for using e-commerce. 1. These two very...
using the periodic table, predict how many outer level electrons will be in elements 114, 116, and 118. Explain your answer.
- 1. Identify and describe Trust/Security Domain boundaries that may be applicable to personal computer (workstation) security in a business context.
2. This is a C++ codelab question.
- The "origin" of the cartesian plane in math is the point where x and y are both zero. Given a variable, origin of type Point-- a structured type with two fields, x and y, both of type double, write one or two statements that make this variable's field's values consistent with the mathematical notion of "origin".
- Assume two variables p1 and p2 of type POINT, with two fields, x and y, both of type double, have been declared. Write a statement that reads values for p1 and p2 in that order. Assume that values for x always precede y.
- In mathematics, "quadrant I" of the cartesian plane is the part of the plane where x and y are both positive. Given a variable, p that is of type POINT-- a structured type with two fields, x and y, both of type double-- write and expression that is true if and only the point represented by p is in "quadrant I".
|
<urn:uuid:d5dd161e-f3df-4708-91cc-492666514dd2>
|
CC-MAIN-2013-20
|
http://www.coursehero.com/tutors/problems/Computer-Science/10861/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00016-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.911489 | 802 | 2.8125 | 3 |
New York University
Computer Science Department
Courant Institute of Mathematical Sciences
Jini Connection Technology
Course Title: Application Servers Course Number: g22.3033-011
Instructor: Jean-Claude Franchitti Session: 5
Table of Contents:
Vision or Reality?
Jini technology promises to be a reality in the immediate future as an architecture to enable connections between devices any time, anywhere.
How Jini Technology Makes
Jini Technology provides mechanisms that group devices together into a service network, secured through the Java™ language.
Flexible devices in a Jini system, irrespective of size, manage themselves.
Computing has surpassed many bounds within the past decades to evolve from mainframes to smaller, yet much more powerful processors in this Web-dominated age.
Sun Community Source
The Sun Community Source License (SCSL) provides licensees with source code and three levels of participation.
What the Java community is
Already, several major manufacturers are applying Jini technology in their devices.
Jini technology is the architecture to streamline the future of computing.
Picture this: Three strangers from three companies--on the road to jointly bid a job. Among them they have a laptop, a project disk drive with data & applications, a PDA, a cell phone, and a clamshell pager--and their presentation is not ready. Their hotel suite has a small network: scattered net jacks, a couple of infrared & short-range RF transceivers, and though they never worked together before, they have snapped into an impromptu working community. The suite has an Internet gateway & low-resolution printer. The TV, the VCR, and the set-top box are connected to the same network. As they work they use all the services in the room, including those on each other's devices. Plug in the disk & PDA, turn on the laptop & TV--total setup time: 2 minutes.
While one of them edits the presentation, the others watch it unfold, mirrored on the TV. From time to time one of them takes the laptop to another hotel room to concentrate on a section while the others keep working, running simulations from the project disk on the gateway & viewing results on the TV. Sometimes one of them prints a slide or graph to proof. With the clamshell they fetch files from offices & email questions. Using the cell phone as a remote, they order room service through the TV & check their flights.
When they're done, they need a high-resolution set of overheads. From the pager they reserve the hotel's high-res color printer for 15 minutes--it's in a small room down the hall. Using the suite key programmed by the reservation service to open the room, one of them goes into the printer room, loads foils bought from online room service, and phones into the suite's network to start printing.
Vision or Reality?
This is a great vision, but aren't computers and software too brittle to pull it off anytime soon? No--it's about to happen. Jini connection technology makes computers and devices able to quickly form impromptu systems unified by a network. Such a system is a federation of devices, including computers, that are simply connected. Within a federation, devices are instant on--no one needs to install them. The network is resilient--you simply disconnect devices when you don't need them.
In our story the devices and how they communicated were central. The laptop was merely one player that came and went on a larger stage. The key was several smart and flexible independent devices simply connected. This created a work environment where the tools were ready to hand and largely invisible.
How Jini Connection Technology Makes This Work
Jini technology provides simple mechanisms which enable devices to plug together to form an impromptu community--a community put together without any planning, installation, or human intervention. Each device provides services that other devices in the community may use. These devices provide their own interfaces, which ensures reliability and compatibility.
In our story, the hotel suite provided a small network and a lookup service with which devices and services registered. When the project disk was plugged in, it went through an add-in protocol--called discovery and join--in which the disk first located the lookup service (discovery) where it then uploaded all its services' interfaces (join). The other devices--the PDA, the clamshell pager, the cell phone, and the laptop--all went through the same process.
To use a service, a person or a program locates it using the lookup service. The service's interface is copied from the lookup service to the requesting device where it will be used. The lookup service acts as a switchboard to connect a client looking for a service with that service. Once the connection is made, the lookup service is not involved in any of the resulting interactions between that client and that service.
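The switchboard role can be modeled in miniature. The sketch below is a toy, not the actual Jini API -- the real lookup service matches services by Java type and attributes, and hands back a proxy object that carries its own code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a lookup service: services join by name, clients look them up.
final class ToyLookupService {
    private final Map<String, List<Object>> registry =
            new HashMap<String, List<Object>>();

    // "Join": a device uploads a proxy for one of its services.
    public synchronized void join(String serviceType, Object proxy) {
        List<Object> list = registry.get(serviceType);
        if (list == null) {
            list = new ArrayList<Object>();
            registry.put(serviceType, list);
        }
        list.add(proxy);
    }

    // "Lookup": a client asks for any provider of a service type.
    public synchronized Object lookup(String serviceType) {
        List<Object> list = registry.get(serviceType);
        return (list == null || list.isEmpty()) ? null : list.get(0);
    }
}
```

Once lookup hands the client a proxy, client and service interact directly and the lookup service drops out, exactly as described above.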
It doesn't matter where a service is implemented--compatibility is ensured because each service provides everything needed to interact with it. There is no central repository of drivers, or anything else for that matter.
In our story, the presentation was mirrored on the TV; to do this, the person operating the laptop selected the TV screen display service and plugged it into the presentation software. To reserve the hi-res printer, a service was selected that was built on top of the printer service to control who, how, and when the printer is used.
The Java™ programming language is the key to making Jini technology work. Devices in a network employing Jini technology are tied together using Java Remote Method Invocation (RMI). By using the Java programming language, a Jini connection architecture is secure. The discovery and join protocols, as well as the lookup service, depend on the ability to move Java objects, including their code, between Java virtual machines.
Jini technology not only defines a set of protocols for discovery, join, and lookup, but also a leasing and transaction mechanism to provide resilience in a dynamic networked environment. The underlying technology and services architecture is powerful enough to build a fully distributed system on a network of workstations. And the Jini connection infrastructure is small enough that a community of devices enabled by Jini connection software can be built out of the simplest devices. For example, it is entirely feasible to build such a device community out of home entertainment devices or a few cellular telephones with no "computer" in sight.
Devices permeate our lives. Look around: TVs, VCRs, DVDs, cameras, phones, PDAs, radios, furnaces, disk drives, printers, air conditioners, CD players, pagers, and the list goes on. A device performs a simple task, and only that task. Today, devices are unaware of their surroundings--they are rigid and cannot adapt. When you buy a disk drive, you expend a lot of effort to install it or you need an expert to do it for you.
Now, devices of even the smallest size and most modest capabilities can affordably contain processors powerful enough for them to self-organize into communities that provide the benefits of multi-way interactions. A device can be flexible and negotiate the details of its interaction. We no longer need a computer to act as an intermediary between a cell phone and a printer. These devices can take care of themselves--they are flexible, they adapt.
A device that can take charge of its own interactions can self-configure, self-diagnose, and self-install. When computers were the size of large rooms, it made sense to have a staff of people to take care of them. As computers became smaller and shared by fewer people, each sys admin took responsibility for more computers. But now the cost of a computer is low, and Jini technology creates the possibility of impromptu device communities popping up in all kinds of places far from any sys admin. Self-managing devices reduce further the need for expert help, and this should lower the total cost of ownership for Jini connection technology-based systems.
How have we arrived at a place where connected devices are the locus for the next wave of computing?
The most significant reason is our better understanding of physics, chemistry, the physical bases for computation, and chip manufacturing process. Today, a significantly powerful computer can be built from one or two small chips and an entire computer system can built on one small board.
There were three dimensions of improvement: size, cost, and computational power. Since the 1960s, size and cost of computers have decreased dramatically while computational power has gone through the roof.
The mainframe of the 1960s was a collection of boxes in a large room--it cost millions of dollars and set the bar for computational power. Only a company could afford one.
The minicomputer became possible when the functionality of a mainframe could be put in a few boxes. It had the computational power of the previous mainframe generation, and could be bought by a single department. Most minicomputers were connected to interactive terminals--the beginnings of computer-based culture, a community.
When a computer the power of a mini shrank to a box that fit beside a desk, we got the workstation. A department could afford to buy one for a couple of professionals. A workstation had enough computational power to support sophisticated design, engineering, and scientific applications, and to provide the graphical support for them.
The personal computer was small enough to fit on a desk and powerful enough to support intuitive graphical user interfaces, individuals could afford them, and companies bought them for every employee.
Eventually processors became small enough and cheap enough to put one in a car in place of an ignition system, or in a TV instead of discrete electronics. Today's cars can have fifty or more processors, the home over a hundred.
The computational power dimension has another fallout. The overall trend toward smaller, faster, cheaper processors meant that fewer people had to share a CPU, but it also meant that people in the organization could become isolated. When a tool is shared, it creates a community; as the tool shrinks, fewer people use it together, and the community disperses. But, a community is hard to give up. Fortunately, computational power kept pace with the shrinking processor, and as the community served by a single computer system shrank, there was enough power to support communication between systems. Thus for example, workstations became successful once they could communicate and exchange data.
The final stretch of the computational power dimension is that now processors are powerful enough to support a high-level, object-oriented programming language in such a way to support moving objects between them. And such a processor is small enough and cheap enough to sit in the simplest devices.
Once there is sufficient computational power, the ability to connect and communicate is the dominant factor determining value. Today for most people, a computer runs only a few applications and mainly facilitates communication: email, the Web. Recall how fast Internet popularity soared first with email and more recently once the Web and browsers became prevalent.
Sun Community Source Licensing
When the Internet was developing, there were two essential activities: defining and perfecting the underlying protocols and infrastructure, and creating applications and services on top of that infrastructure. Internet infrastructure includes TCP/IP, HTTP, SMTP, and FTP--protocols and their implementations. On top of these were built email composers and readers, file fetching programs, Web browsers, and the Web itself. No single company or organization did all the work, and none could, if the venture was to be successful, because underlying it all is a standard protocol, and a protocol can be successful only if it is widely adopted.
For Jini connection technology to succeed, the underlying protocols and infrastructure must become pervasive, and to accomplish this requires a strong community of participants and partners.
The Sun Community Source License (SCSL) is a mechanism to build such a community around Jini technology. The SCSL opens the source code for the Jini technology infrastructure to the community of Jini technology licensees, who are free to use it, extend it, improve it, and repair it by following an open process that insures both fairness and stable evolution of the technology. Community members may add to this common body of source code while still maintaining, if they wish, proprietary implementations, though interfaces must be published so other community members can build their own implementations.
There are three levels of participation in the SCSL:
· Research Use: This enables researchers and students to use the source for any non-deployment purpose, and provides a way for organizations and individuals to examine and evaluate Jini connection technology.
· Internal Deployment: This enables organizations and individuals to deploy products based on Jini connection technology within their organization.
· Commercial Use: This is for commercial distribution and is based on a branding license model.
Fees are associated only with commercial use, and only in for-profit situations. The research use and internal deployment licenses are Web-based click-throughs, so that joining the community is simple and immediate, just as is Jini technology itself.
The SCSL anticipates an emergent, self-organizing community coalescing into interest groups surrounding different sorts of services, such as printing and digital cameras. Such interest groups would define, refine, and standardize interfaces for their category of service, providing useful community source and verification suites for new community members to get started.
What the Java Community is Doing
A network of devices employing Jini technology can be built from many diverse types of devices. Today there are initiatives underway by several major manufacturers to enable their devices with Jini technology. These devices include printers, storage devices such as disks, personal digital assistants, digital cameras, cell phones, residential gateways, digital video cassette recorders, TV sets, set-top boxes, DVD players, industrial controls, and every sort of imaginable consumer electronics device. There are several technologies in preparation for connecting devices within a particular domain, such as home entertainment or industrial control, and bridges are being planned to connect these other types of networks to communities utilizing the Jini connection technology.
The right architecture makes it all simple--and the results can startle. Jini connection architecture is only what is needed to gather a group of devices into an impromptu community that is simply connected: a simple protocol to discover a lookup service and join it, a lookup service which acts as a switchboard connecting clients to services, the Java programming language to provide the underlying object model, and RMI technology to provide federation and to move objects. By looking to simple devices as the archetype of how to design devices and by creating a way to simply connect them, we may at last begin to see advanced computer technology simplify our lives.
|
<urn:uuid:cda22059-bd9c-471a-a8cb-c3d715469758>
|
CC-MAIN-2013-20
|
http://www.nyu.edu/classes/jcf/g22.3033-011_fa01/handouts/g22_3033_011_h5b.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00067-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.95084 | 3,030 | 2.828125 | 3 |
August 12, 2009
Lightning strikes, floods, and other natural and manmade disasters can mean life or death for people, and they also can devastate computer systems at times when they're most needed.
Professor H.J. Siegel
H.J. Siegel, Tony Maciejewski and Arnold Rosenberg, engineering professors at Colorado State University, have received more than $1 million from the National Science Foundation to design techniques for building robust and dependable computing and communications systems capable of withstanding major, unexpected disruptions. The CSU team includes graduate and undergraduate students.
The grant money is made possible through the American Recovery and Reinvestment Act of 2009.
“Information systems are often a heterogeneous mix of machines and networks that experience degraded performance due to such problems as machine failures, changes in workload or other uncertainties,” said Siegel, Abell Distinguished Professor of Electrical and Computer Engineering and director of the university’s Information Science & Technology Center, or ISTeC. “The goal is to bring together researchers and practitioners to collectively investigate the problem of robust computing systems.”
“Uncertainty is the enemy of a robust computer system, but this grant will help us minimize damaging failures and work to build computer systems that perform well through crises,” said Tony Maciejewski, head of the Electrical and Computer Engineering department in Colorado State’s College of Engineering. “As computer systems become more integrated with everyday life, it’s really important that they continue to perform critical functions even when there’s an unpredicted circumstance.”
Professor Tony Maciejewski
Also collaborating on the grant that is led by CSU are DigitalGlobe, which supplies images to Google Maps and Microsoft Virtual Earth, the National Center for Atmospheric Research, which studies prediction of severe and catastrophic weather, and the University of Colorado at Boulder.
The team will design models and mathematical and algorithmic tools to derive robust resource management schemes as well as to quantify the probability of system failures.
“The robustness concepts being developed have broad applicability, and will significantly contribute to meeting national needs to build and maintain robust information technology infrastructures,” said team member Jay Smith at DigitalGlobe.
Siegel and Maciejewski serve as co-directors of the CSU Center for Robustness in Computing Systems, which has been funded by the Colorado Commission on Higher Education Technology Advancement Group, DARPA, and an earlier NSF grant. Siegel's research focuses on distributed computing and communication systems, heterogeneous computing, parallel processing, computer architectures and algorithms, and interconnection networks. Maciejewski’s research and teaching interests center on the design and analysis of robust systems, including fault-tolerant robotic systems for operation in hazardous or remote environments.
Contact: Emily Wilmsen
Phone: (970) 491-2336
|
<urn:uuid:b78810ef-6279-4917-80c6-20773d0b9b86>
|
CC-MAIN-2013-20
|
http://www.today.colostate.edu/story.aspx?id=1949
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699068791/warc/CC-MAIN-20130516101108-00033-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.925562 | 602 | 2.78125 | 3 |
The Challenge of Computing on a Planetary Scale: Inside Google’s Faculty Summit
(Page 2 of 2)
spoken search query, regardless of the accent of the user. While this sounds simple, it is a grand challenge problem in artificial intelligence. Our voice systems, which are available on Android, iPhone and other mobile devices, are trained on over 230 billion spoken utterances and possess a one million word vocabulary—and we are working to make them even better. Interestingly, we have a new challenge: Quite often, our systems are even more accurate than the humans that rate them for accuracy, making it challenging to evaluate our own quality!
In the domain of ultra-large software systems, John Wilkes, a distinguished engineer from Mountain View, spoke to the audience about a new system we are building that will automatically manage the seemingly endless number of computers in Google’s worldwide data centers.
Some of these computers are working round the clock answering user queries, processing e-mail, or otherwise attending to tasks that require instantaneous responses. Other computers are working on long jobs, for example, actually learning to do language translation from vast corpora in English, French, German, Chinese, etc. The key, John described, is to make sure that we can easily specify the requirements of each job that needs to run, and then have an uber-manager, “the cluster management system,” automatically allocate those jobs to the right sets of computers in a way that maximizes performance while minimizing costs. Our current cluster management system is seven years old, and we discussed its success and challenges, as well as our hopes for the new system we are building to replace it.
While you wouldn’t think that professors of computer science would be interested in shopping or commerce, Andrew Moore, a former professor at Carnegie Mellon and now the Director of Google’s Pittsburgh office, described the deep research questions in areas such as shoe shopping. How can we implement a system to analyze the image of a shoe—its color, shape and pattern? How can we show a pair of shoes that someone might purchase based on this image analysis? How can we simultaneously provide accurate results of shoes a shopper would most likely purchase, without showing shoes they would not like, and provide the serendipitous connections that a shopper would experience in a real store? Optimization, computer vision algorithms, auction theory and more all play a role, and are the subject of active research not only at Google, but across the computer science community.
The field of computer science is, in many ways, a large expanding sphere that grows into ever more domains of applicability. The greatest recent successes have been at the boundary of computer science and virtually every other discipline. So, we covered quite a diversity of topics in New York. There is more on our research blog and also on the Official Google Blog, where you can even see a poem by NYU Professor Ken Perlin in iambic pentameter, musing about the future of mobile devices.
|
<urn:uuid:0498611c-c381-413e-bde8-ea824166576a>
|
CC-MAIN-2013-20
|
http://www.xconomy.com/new-york/2011/07/25/the-challenge-of-computing-on-a-planetary-scale-inside-googles-faculty-summit/2/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708789647/warc/CC-MAIN-20130516125309-00072-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.946036 | 613 | 2.5625 | 3 |
Cluster computers provide a low-cost alternative to multiprocessor systems for many applications. Building a cluster computer is within the reach of any computer user with solid C programming skills and a knowledge of operating systems, hardware, and networking. This book leads you through the design and assembly of such a system, and shows you how to measure and tune its overall performance.
A cluster computer is a multicomputer, a network of node computers running distributed software that makes them work together as a team. Distributed software turns a collection of networked computers into a distributed system. It presents the user with a single-system image and gives the system its personality. Software can turn a network of computers into a transaction processor, a supercomputer, or even a novel design of your own.
Some of the techniques used in this book's distributed algorithms might be new to many readers, so several of the chapters are dedicated to such topics. You will learn about the hardware needed to network several PCs, the operating system files that need to be changed to support that network, and the multitasking and the interprocess communications skills needed to put the network to good use.
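The flavor of such interprocess communication can be sketched briefly. The book itself works in C, but the shape of the idea is language-independent; the following hypothetical Java fragment shows one node sending a task message to another over a socket:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical sketch of node-to-node messaging in a cluster, collapsed
// into one process for demonstration: a "worker" listens, a "master"
// connects and sends a task message.
public class NodeEcho {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(5000);   // worker node listens
        Socket master = new Socket("localhost", 5000);  // master node connects
        new PrintWriter(master.getOutputStream(), true).println("run job 42");

        Socket peer = server.accept();                  // worker takes the call
        BufferedReader in = new BufferedReader(
                new InputStreamReader(peer.getInputStream()));
        System.out.println("worker received: " + in.readLine());
        peer.close(); master.close(); server.close();
    }
}
```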
Finally, there is a simple distributed transaction processing application in the book. Readers can experiment with it, customize it, or use it as a basis for something completely different.
|
<urn:uuid:17a225d1-9979-4441-8b71-b6229c35dd27>
|
CC-MAIN-2013-20
|
http://www.ske-art.com/qspur/0672323680
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708145189/warc/CC-MAIN-20130516124225-00004-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.897127 | 400 | 2.609375 | 3 |
August 01, 2011
Computer systems are being tasked with addressing a proliferation of graph-based, data-intensive problems in areas ranging from medical informatics to social networks. As a result, there has been an ongoing emphasis on research that addresses these types of problems.
A four-year National Science Foundation project is taking aim at developing a new computer system that will focus on solving complex graph-based problems that will push supercomputing into the exascale era.
At the root of the project is Jeanine Cook, an associate professor at New Mexico State University's department of Electrical and Computer Engineering and director of the university's Advanced Computer Architecture Performance and Simulation Laboratory.
Cook specializes in micro-architecture simulation, performance modeling and analysis, workload characterization and power optimization. In short, as Cook describes, she creates “software models of computer processor components and their behavior to use these models to predict and analyze performance of future designs.”
Her team has developed a model that could improve the way current systems work with large unstructured datasets using applications running on Sandia systems.
It was her work while on sabbatical with Sandia's Algorithms and Architectures group in 2009 that led to the $2.7 million NSF collaborative project. Cook developed processor and simulation tools and statistical performance models that identified performance bottlenecks in Sandia applications.
As Cook explained during a recent interview:
“Our system will be created specifically for solving [graph-based] problems. Intuitively, I believe that it will be an improvement. These are the most difficult types of problems to solve, mainly because the amount of data they require is huge and is not organized in a way that current computers can use efficiently.”
Full story at Las Cruces-Sun News
|
<urn:uuid:f0082209-ab2e-41f6-9c8c-60ea6898c693>
|
CC-MAIN-2013-20
|
http://www.hpcwire.com/hpcwire/2011-08-01/research_targets_graph-based_computing_problems.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701153213/warc/CC-MAIN-20130516104553-00067-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.921582 | 879 | 2.8125 | 3 |
An Overview of the DADO Parallel Computer
Mark D. Lerner; Gerald Q. Maguire; Salvatore Stolfo
- Technical reports
- Computer Science
- Permanent URL:
- Columbia University Computer Science Technical Reports
- Part Number:
- Department of Computer Science, Columbia University
- Publisher Location:
- New York
- DADO is a special purpose parallel computer designed for the rapid execution of artificial intelligence expert systems. This article discusses the DADO hardware and software systems with emphasis on the question of granularity. DADO is designed as a fine-grain machine constructed from many thousands of processing elements (PEs) interconnected in a complete binary tree. Two prototype systems, DADO1 and DADO2, are detailed. Each PE of these prototypes consists of a commercially available microprocessor chip, memory chips, and an additional semicustom I/O processor designed at Columbia University. The software includes a kernel and parallel languages. Under development are several artificial intelligence systems, including a production system interpreter, a logic programming language, and an expert system building tool.
- Computer science
|
<urn:uuid:bb079990-9f42-4437-a3a6-2a3bb6bc5983>
|
CC-MAIN-2013-20
|
http://academiccommons.columbia.edu/catalog/ac%3A145007
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705939136/warc/CC-MAIN-20130516120539-00065-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.856054 | 261 | 2.859375 | 3 |
This innovative text presents computer programming as a unified discipline in a way that is both practical and scientifically sound. The book focuses on techniques of lasting value and explains them precisely in terms of a simple abstract machine. The book presents all major programming paradigms in a uniform framework that shows their deep relationships and how and where to use them together.

After an introduction to programming concepts, the book presents both well-known and lesser-known computation models ("programming paradigms"). Each model has its own set of techniques and each is included on the basis of its usefulness in practice. The general models include declarative programming, declarative concurrency, message-passing concurrency, explicit state, object-oriented programming, shared-state concurrency, and relational programming. Specialized models include graphical user interface programming, distributed programming, and constraint programming.

Each model is based on its kernel language -- a simple core language that consists of a small number of programmer-significant elements. The kernel languages are introduced progressively, adding concepts one by one, thus showing the deep relationships between different models. The kernel languages are defined precisely in terms of a simple abstract machine. Because a wide variety of languages and programming paradigms can be modeled by a small set of closely related kernel languages, this approach allows programmer and student to grasp the underlying unity of programming. The book has many program fragments and exercises, all of which can be run on the Mozart Programming System, an Open Source software package that features an interactive incremental development environment.
|
<urn:uuid:c4f9d605-17f7-44f5-b75c-07d5e9c0ce26>
|
CC-MAIN-2013-20
|
http://www.amazon.ca/Concepts-Techniques-Models-Computer-Programming/dp/0262220695
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.931477 | 306 | 3.15625 | 3 |
A computer is a programmable machine designed to operate automatically, carrying out sequences of arithmetic or logical operations. The computing world is a source of technology and information for IT professionals.

Using the web and the internet, the widest sources of IT, we reach the entire world as a "global village". The interface between the computer and the human operator is known as the "user interface". Users rely on it to work with the technology and obtain meaningful results from the data they input.

A computer's processing unit executes a series of instructions that make it read, manipulate and store data. Computers range widely in size, from big-screen desktops to small palm-sized laptops. Every new discovery brings newer and more advanced technology to claim its place in the existing market.
Computer training for Microsoft Access, in London and UK wide. Learn how to use Microsoft Access with our Microsoft qualified trainers. We offer three one day MS Access training courses: Introduction, Intermediate and Advanced.http://www.microsofttraining.net/info-94-microsoft-access-course.html - Microsoft Access course >>
|
<urn:uuid:30f9056b-3707-40e1-a744-2de15031b1eb>
|
CC-MAIN-2013-20
|
http://www.linkingdir.com/computers/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701562534/warc/CC-MAIN-20130516105242-00066-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.877524 | 278 | 2.640625 | 3 |
People in information technology (IT) have always had an inferiority complex. Over the past few decades, the IT industry has projected itself as producing well-engineered and thoroughly tested products. Partridge doesn’t believe a word of it. In spite of the rigorous application of project management techniques, IT projects continue to run over budget and over schedule and generally fail to meet their design goals. Throughout the book, Partridge gives details of both conventional engineering and IT disasters--there is no shortage of examples to choose from. The IT profession consistently blames failure on a lack of adequate management, but Partridge proposes the inability of humans to manage technical complexity as being at the heart of failures in large IT systems.
The book is divided into four parts. Part 1 provides an introduction to computer technology aimed at the non-computer literate. The basic craft of programming is explained, along with some of the difficulties programmers have in representing a real, analog world in machines in which only binary integers exist. Even experienced IT professionals will find the high overview of programming presented in these ten chapters to be a sobering reminder of the associated shortcomings and pitfalls involved.
Part 2 examines systemic problems that are a consequence of the problems inherent in programming. The consequences of the unmanageable complexity of IT systems in the world are discussed, as is the largely invisible dependency of our society on computer systems.
Part 3 gives us some hope for the future by looking at a number of techniques that may address the fundamental problems with IT system development. Chapters cover the promise of expert systems, the programming support environment, program visualization, and more radical approaches, such as the use of neural networks and statistical probability tools.
The two chapters of Part 4 provide a summary of Partridge’s arguments. Chapter 24, in particular, provides a bullet-point summary of each chapter. Indeed, if you wish to quickly understand what this book is about, then just read this one chapter and use it to point you to relevant chapters for further detail. Each of the four parts has a good introduction. The table of contents is thorough, there is a good index and glossary, and each chapter has relevant endnotes.
IT systems are pervasive; almost every aspect of modern life relies on a computer program. Some are innocuous, but a failure in some could place your life at risk (for example, aircraft flight systems). Partridge’s book alerts readers to the true lack of reliability of computer programs and the risk to which citizens in our modern world are exposed. Most of us have no choice but to accept the risk.
Other works in the area of IT systems management and software development, from classics such as Brooks’ to more recent works such as Schiesser’s , assert a lack of adequate management as the main reason for IT project and systems failures and suggest that enhancing management controls will solve the problem. Partridge counters that the inherent complexity of IT systems and the fallible nature of human programmers mean that faults cannot be eliminated, only minimized. This is a book that should be compulsory reading for computer programmers and IT project managers.
|
<urn:uuid:e0cfd9d1-449e-464c-a415-831795b89479>
|
CC-MAIN-2013-20
|
http://computingreviews.com/review/review_review.cfm?review_id=139680&listname=highlight
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00013-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.942171 | 631 | 2.8125 | 3 |
data processing:
a. a sequence of operations performed on data, esp by a computer, in order to extract information, reorder files, etc
b. (as modifier): a data-processing centre
Either the preparation of data for processing by a computer, or the storage and processing of raw data by the computer itself.
|
<urn:uuid:a2bc0b0a-cc3f-4a8a-9d91-f07bea95b1a5>
|
CC-MAIN-2013-20
|
http://dictionary.reference.com/browse/data-processor
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710115542/warc/CC-MAIN-20130516131515-00080-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.890278 | 118 | 2.78125 | 3 |
Garbage: Unwanted or meaningless information in memory, on disk or on a tape.
Gateway: An electronic door between one computer network and another. A device or set of devices that connects two or more networks, enabling data transfer between them. When the networks are similar, a gateway routes packets or messages. When the networks differ, a gateway also performs extensive protocol conversion.
GIF: Graphic Interchange Format. CompuServe's non-platform-specific format for low-resolution, compressed graphics interchange.
Gopher: A client program available via the Internet that allows users to review and retrieve information on other host systems via easy-to-use menus.
Graphics: A computer-generated picture produced on a computer screen or paper, ranging from simple line or bar graphs to colorful and detailed images.
Groupware: Software that serves the group and makes the group as a whole more productive and efficient in group tasks. Example: Group Scheduling.
GUI: Graphical User Interface. Defines a format for scroll bars, buttons, menus, etc., and how they respond to the user.
Handshake: A procedure performed by modems, terminals, and computers to verify that communication has been correctly established.
Hang: When a computer freezes, so that it does not respond to keyboard commands, it is said to "hang" or to have "hung."
Hard copy: A printed copy of machine output in a visually readable form.
Hard disk: A data-recording system using solid disks of magnetic material turning at high speeds.
Hardware: Physical computer equipment such as electrical, electronic, magnetic and mechanical devices.
Hardwired: Circuits that are permanently interconnected to perform a specific function, as distinct from circuits addressed by software in a program and, therefore, capable of performing a variety of functions, albeit more slowly. Also used to describe a non-switched connection between devices.
Header: The portion of a message, preceding the actual data, containing source and destination address and error-checking fields.
Help: Users in need of help can often issue a command such as "?" to access on-line help and tutorial systems.
Host: A computer that is made available for use by multiple people simultaneously.
Host computer: In the context of networks, a computer that directly provides service to a user. In contrast to a network server, which provides services to a user through an intermediary host computer.
HTML: Hypertext Markup Language. A convention of codes used to access documents over the World-Wide Web. Without HTML codes, a document would be unreadable by a Web browser.
HTTP: HyperText Transfer Protocol. Extremely fast protocol used for network file transfers in the WWW environment.
Hub: A device that is a center of network activity because it connects multiple networks together.
Hyperlink: A pointer that when chosen displays the item to which it points. It typically takes the form of a button or highlighted text that points to related text, picture, video, or audio. Hyperlinks allow non-linear exploration of media that contain them.
Hypermedia: Media (such as text, graphics, video, audio) that contains hyperlinks.
Hypertext: A document which has been marked up to allow a user to select words or pictures within the document, click on them, and connect to further information. The basis of the World-Wide Web.
|
<urn:uuid:514ab1e5-c61b-4c5a-bf97-cb55b9d9485c>
|
CC-MAIN-2013-20
|
http://www.esrl.lib.md.us/community/techterm_gh.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383218/warc/CC-MAIN-20130516092623-00085-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.878569 | 666 | 3.453125 | 3 |
1. Explain the operation of a computer system, including hardware and software components.
2. Explain the historical background and social impacts associated with computerization.
3. Use the elementary features of three popular software packages (WordPerfect, Lotus/123, and dBase) on an IBM-clone microcomputer.
4. Write a program "stack" in HyperCard on the Mac.
5. Use e-mail to send and receive messages.
6. Use Netscape to explore the "Web".
|
<urn:uuid:6645f7be-9555-4a8d-bb1b-4a7e1b222ba1>
|
CC-MAIN-2013-20
|
http://www1.assumption.edu/users/Katcher/page2
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706298270/warc/CC-MAIN-20130516121138-00072-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.811099 | 105 | 3.234375 | 3 |
The Architecture of Computer Hardware and System Software: An Information Technology Approach, 4th Edition
April 2009, ©2009
- Provides students with an understanding of underlying, non-changing basics of computers so that they can make knowledgeable decisions about systems.
- Carefully and patiently introduces students to new technological concepts, so that they are not overwhelmed by challenging materials, but instead build a deep understanding of what makes computer systems tick.
- Examples cover a broad spectrum of hardware and software systems, from personal computer to mainframe.
- The author's "light touch" includes a breezy, readable writing style and subject-specific cartoons that introduce each chapter's material.
- As in the prior edition, discussions of hardware and system software are integrated into a single volume where symbioses between them are explored. Examples include: virtual storage, Java bytecodes, distributed processing, and virtual machines.
- Designed to serve as a reference book for the remainder of the student's career.
|
<urn:uuid:09ae6a1c-d346-4c3e-9492-a360336fb4f2>
|
CC-MAIN-2013-20
|
http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470530367,descCd-collegeFeatures.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709947846/warc/CC-MAIN-20130516131227-00010-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.885223 | 223 | 2.96875 | 3 |
Operating System Concepts 7E
Small footprint operating systems, such as those driving the handheld devices that the baby dinosaurs are using on the cover, are just one of the cutting-edge applications you'll find in Silberschatz, Galvin, and Gagne's Operating System Concepts, Seventh Edition.
By staying current, remaining relevant, and adapting to emerging course needs, this market-leading text has continued to define the operating systems course. This Seventh Edition not only presents the latest and most relevant systems, it also digs deeper to uncover those fundamental concepts that have remained constant throughout the evolution of today's operating systems. With this strong conceptual foundation in place, students can more easily understand the details related to specific systems.
* Increased coverage of user perspective in Chapter 1.
* Increased coverage of OS design throughout.
* A new chapter on real-time and embedded systems (Chapter 19).
* A new chapter on multimedia (Chapter 20).
* Additional coverage of security and protection.
* Additional coverage of distributed programming.
* New exercises at the end of each chapter.
* New programming exercises and projects at the end of each chapter.
* New student-focused pedagogy and a new two-color design to enhance the learning process.
In addition to his academic and industrial positions, Professor Silberschatz served as a member of the Biodiversity and Ecosystems Panel on President Clinton's Committee of Advisors on Science and Technology, as an advisor for the National Science Foundation, and as a consultant for several private industry companies.
Professor Silberschatz is an ACM Fellow and an IEEE Fellow. He received the 2002 IEEE Taylor L. Booth Education Award the 1998 ACM Karl V. Karlstrom Outstanding Educator Award, the 1997 ACM SIGMOD Contribution Award, and the IEEE Computer Society Outstanding Paper award for the article "Capability Manager", which appeared in the IEEE Transactions on Software Engineering. His writings have appeared in numerous ACM and IEEE publications and other professional conferences and journals. He is a coauthor of the textbook Database System Concepts.
Greg Gagne is chair of the Division of Computer Science and Mathematics at Westminster College in Salt Lake City where he has been teaching since 1990. In addition to teaching operating systems, he also teaches computer networks, distributed systems, object-oriented programming, and data structures. He also provides workshops to computer science educators and industry professionals. Professor Gagne's current research interests include next-generation operating systems and distributed computing.
Peter Baer Galvin is the chief technologist for Corporate Technologies (www.cptech.com). Before that, Peter was the systems manager for Brown University's Computer Science Department. He is also contributing editor for SysAdmin magazine. Mr. Galvin has written articles for Byte and other magazines, and previously wrote the security column and systems administration column for ITWORLD. As a consultant and trainer, Peter has given talks and taught tutorials on security and system administration worldwide.
Table of Contents
Chapter 1. Introduction.
Chapter 2. Operating-System Structures.
PART TWO: PROCESS MANAGEMENT.
Chapter 3. Processes.
Chapter 4. Threads.
Chapter 5. CPU Scheduling.
Chapter 6. Process Synchronization.
Chapter 7. Deadlocks.
PART THREE: MEMORY MANAGEMENT.
Chapter 8. Main Memory.
Chapter 9. Virtual Memory.
PART FOUR: STORAGE MANAGEMENT.
Chapter 10. File-System Interface.
Chapter 11. File-System Implementation.
Chapter 12. Mass-Storage Structure.
Chapter 13. I/O Systems.
PART FIVE: PROTECTION AND SECURITY.
Chapter 14. Protection.
Chapter 15. Security.
PART SIX: DISTRIBUTED SYSTEMS.
Chapter 16. Distributed System Structures.
Chapter 17. Distributed File System.
Chapter 18. Distributed Coordination.
PART SEVEN: SPECIAL PURPOSE SYSTEMS.
Chapter 19. Real-Time Systems.
Chapter 20. Multimedia Systems.
PART EIGHT: CASE STUDIES.
Chapter 21. The Linux Systems.
Chapter 22. Windows XP.
Chapter 23. Influential Operating Systems.
Appendix A: UNIX BSD (contents online).
Appendix B: The Mach System (contents online).
Appendix C:Windows 2000 (contents online).
|
<urn:uuid:6a61a58a-6af4-4550-851e-0fd462b634e0>
|
CC-MAIN-2013-20
|
http://www.cio.com.au/books/product/operating-system-concepts-7e/0471694665/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701614932/warc/CC-MAIN-20130516105334-00022-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.864656 | 1,213 | 2.6875 | 3 |
The project requires skills in parallel computing, Unix and C, as well as existing debuggers like gdb.
Some more details can be obtained from [http://www.dgs.monash.edu.au/research/guard/].
Typogenetics, which is short for Typographical Genetics, was developed by D.R. Hofstadter to capture some of the concepts of molecular genetics in a typographical system. The system involves manipulating strings, called strands, consisting of four characters A, C, G, and T. These strands are operated on by enzymes which are sequences of basic operations. In turn, the enzymes are created from strands.
So given a strand we can create an enzyme which can then be applied to the original strand to produce a new generation of strands. These new strands can then be used to create new enzymes which can then be applied to the current generation of strands to produce the next generation, and so on. If at any stage we have two copies of the original strand in one generation, we say that the original strand is self-reproducing. If, on the other hand, we are at some stage left with no strands, we say the original strand is self-destroying. A toy sketch of this generate-and-apply loop is given below.
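As a rough illustration only -- Hofstadter's actual system has fifteen amino-acid operations selected by base duplets, plus a tertiary-structure binding rule, none of which are reproduced here -- the following Python sketch uses an invented four-entry operation table to show the shape of the generation loop:

```python
# Toy Typogenetics: the duplet-to-operation table below is an assumption
# made for this sketch, not Hofstadter's real table.
OPS = {"AC": "copy", "GT": "delete", "CA": "insert_A", "TG": "stop"}

def strand_to_enzyme(strand):
    duplets = [strand[i:i + 2] for i in range(0, len(strand) - 1, 2)]
    return [OPS[d] for d in duplets if d in OPS]

def apply_enzyme(enzyme, strand):
    s, pos = list(strand), 0
    for op in enzyme:
        if not s or pos >= len(s):
            break
        if op == "delete":
            del s[pos]
        elif op == "insert_A":
            s.insert(pos, "A"); pos += 1
        elif op == "copy":
            s.insert(pos, s[pos]); pos += 2   # duplicate and skip both copies
        elif op == "stop":
            break
    return "".join(s)

def next_generation(strands):
    out = []
    for s in strands:
        produced = apply_enzyme(strand_to_enzyme(s), s)
        if produced:                          # empty strands die out
            out.append(produced)
    return out
```

Running next_generation repeatedly from a single starting strand makes the two definitions above operational: the strand is self-reproducing if some generation contains two copies of it, and self-destroying if some generation is empty.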
Interesting questions are:
The aim of this project is to investigate the field of Typogenetics, and to answer some of these questions. In particular it is envisage that the project will involve the development of a system that allows a user to:
Hunt the Wumpus was an early computer game created by Gregory Yob. Since then it has been used in AI as a testbed environment for intelligent agents. The aim of this project is to use this game as a testbed for developing techniques that learn users' profiles. The reason this game has been chosen is that it is simplified version of a domain we are planning to investigate in our work on user profiles. This project will involve creating agents that can operate in an existing Wumpus World simulation written in C++, and implementing a system that performs user profiling. These agents will need to be able to
There is a research project to develop a data mining software library and graphical user interface. A mixture of Java and C is being used. Graphical components will be written in Java (Sun Microsystems). Numerical routines will be written in Java or C as appropriate and this may include adapting existing code. There may be opportunities for honours projects for good programmers in this area. The ability to work with others is important.
The project is to write a translator from Java to C. It would then be possible to get the Software Engineering advantages of writing in Java with the speed of C. There would be losses in some areas, e.g. array bounds checking and security. The aim is to retain as much of the structure of the Java source program in the translated C, not just to treat C as a `machine code'. There may well be java2c translators in existence but this would not necessarily abort the project; a quick search failed to find any public translators.
Previous experience and demonstrated ability in interactive computer graphics is a necessary prerequisite for this project. The aim is to provide a 3-D world, distributed over many servers and clients, and to investigate and solve the problems of maintaining consistent views and time-lines for the users. Jamie Cameron's 1995 "indoors" world may provide some inspiration [http://www.cs.monash.edu.au/hons/projects_1995/Jamie.Cameron/index.html] In 1997 Eugene Ware produced an "outdoors" world programmed in Java and VRML (not available in '95). The aim is to develop this world further.
The aim of the project is to build a graphical environment in which objects such as graphs and networks can be interactively edited and manipulated. Information about the object, such as vertex degrees, cycle lengths, diameters etc. is to be displayed, and this displayed information must be updated as changes are made. The system must enable the user to define their own type of combinatorial object, and to specify what sort of information is to be displayed and maintained about the objects. Object-oriented design will be important. Programming will be in Java.
This project will involve developing a multi-threaded servlet in Java and a client applet with a GUI. The servlet is required to generate and assess multiple choice questions using a questions database. The questionnaire is then forwarded on to the client applet in a form determined by servlet settings and user privileges. The applet sends user responses back to the servlet for evaluation. This system allows for multiple simultaneous use of the questions server whilst maintaining data security and verifying user identity.
Although the project will involve considerable Java programming (around an existing framework) the research will be mainly concerned with electronic assessment methods that seek to address inherent problems associated with multiple choice tests.
The assessment methods should foster the students' cognitive abilities and measure their true state of knowledge. One the assessment methods of the multiple choice responses will incorporate a scheme which allows students to express their level of confidence in their answers and discriminates between students' knowledge states. This method is aimed at encouraging honesty.
To enhance the feedback to students and academics the scoring system not only calculates a mark but also calculates a self estimation factor, and a trust factor. The self estimation is the overall mark that the student expects to achieve based on the given responses. If the student's confidence levels are appropriate and justified, the self estimation will closely match the actual overall mark. The trust factor is a measure of the trustworthiness of the given information. A trust factor of 100% indicates that optimal results can be obtained by completely adopting the student's answers. Conversely, a trust factor of 0% indicates that no information about the actual correct answers can be gained from the student.
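The exact formulas are not specified here, but a logarithmic scoring rule is one standard way to make honest confidence the optimal policy. A minimal sketch, in which both the per-question score and the trust-factor proxy are illustrative assumptions rather than the project's actual definitions:

```python
import math

def question_score(correct: bool, confidence: float) -> float:
    # Log score: stating your true belief maximises the expected reward.
    p = confidence if correct else 1.0 - confidence
    return 1.0 + math.log2(max(p, 1e-9))

def self_estimation(confidences) -> float:
    # The overall mark a student should expect if their stated
    # confidences are honest: the expectation under their own beliefs.
    def expected(p):
        q = 1.0 - p
        return (p * (1 + math.log2(max(p, 1e-9)))
                + q * (1 + math.log2(max(q, 1e-9))))
    return sum(expected(p) for p in confidences)

def trust_factor(results) -> float:
    # Crude proxy (an assumption): how far the answers beat guessing,
    # scaled so 0.0 means "no usable information" and 1.0 "fully trust".
    accuracy = sum(1 for correct in results if correct) / len(results)
    return max(0.0, 2.0 * accuracy - 1.0)
```

With honest confidences, self_estimation tracks the actual total of question_score over the test, which is exactly the behaviour described above.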
The application will be Web based with protection against unauthorised access through server and Java implemented security.
Many of the programming mistakes made by novice programmers result from their inability to correctly translate an algorithm to code. This project aims to produce software which assists them in overcoming such problems by automatically translating their erroneous code back to algorithmic language (with annotations indicating possible errors).
In taking on this project all of the following would be of definite advantage:
Hypercard, Supercard, and Metacard are all hypermedia, stack-based, WYSIWYG, persistent, event-driven, rapid software development systems. Each provides extensive customizability through a proprietary scripting language.
The aim of this project is to create a free implementation of a similar application (LlamaCard), using Perl as the scripting language and various standard Perl libraries to help implement the GUI, so that the entire package is platform-independent.
Experience with one of the above applications and with Perl programming would be highly advantageous.
DEMUN (Distance Education Mark-up Notation) is a tag-based text annotation system for building complex, highly interlinked, interactive WWW-based course materials from simple ASCII text files.
The aim of this project is to build a DEMUN-to-HTML converter that is capable of automatically and incrementally generating:
Implementation would probably be in the Perl language, so experience in Perl would be a significant advantage.
This project builds on some existing work done in C++ where a different and (hopefully) better approach was tried. This project performs an AI search to piece together data which happens to come from the Human Genome project. There is a need to view results of the search, so a GUI using either Java or tcl/tk will probably also be developed. (No knowledge of biology/biochemistry is required!)
Joyce-Linda is a language that forces the use of parallel processes. The Linda communication model is provides a convenient mechanism for many sorts of parallel programs. This project involves changing the Joyce-Linda compiler and runtime support code to run with different packages, including running across a network.
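For readers unfamiliar with the Linda model, the sketch below shows its three core operations over a shared tuple space -- out (deposit), rd (read) and in (read and remove) -- in Python; the real Joyce-Linda runtime is compiled and distributed, so this is only a shape-of-the-API illustration:

```python
import threading

class TupleSpace:
    """Minimal Linda-style tuple space. None in a template is a wildcard;
    in_() and rd() block until a matching tuple exists."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _match(self, template):
        for t in self._tuples:
            if len(t) == len(template) and all(
                    p is None or p == v for p, v in zip(template, t)):
                return t
        return None

    def in_(self, template, remove=True):
        with self._cv:
            while (t := self._match(template)) is None:
                self._cv.wait()
            if remove:
                self._tuples.remove(t)
            return t

    def rd(self, template):
        return self.in_(template, remove=False)
```

A producer thread can call space.out(("job", 42)) while workers block on space.in_(("job", None)); carrying the same operations over sockets instead of a shared list is essentially the "across a network" part of the project.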
This project involves the design & construction of an animation tool for the production of visual effects utilizing large numbers of tiny objects known as 'particles'. Particles have been used in the past to model clouds, steam, fog, fire, sparks and similar real world phenomena. In this project, not only are particles subject to global external forces such as gravity and the effects of wind, the particles may interact with one another in a manner akin to chemical interactions.
For example, a number of blue particles may hang freely and unchanging in space. When a red particle is introduced to the system, the blue particles are pulled towards it. When they come into contact with the red particle they too become red and are thrown violently away from the other red particles in the vicinity, producing a dazzling blue implosion followed by an explosion of red streaks.
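That red/blue rule is easy to prototype. The constants and force laws below are arbitrary choices made for the sketch, not part of the project specification:

```python
# Toy particle interaction: blues are attracted to reds; a blue that
# touches a red becomes red, and reds repel one another.
ATTRACT, REPEL, CONTACT, DT = 0.5, 5.0, 0.1, 0.02

def step(particles):              # each particle: dict x, y, vx, vy, colour
    for p in particles:
        fx = fy = 0.0
        for q in particles:
            if q is p:
                continue
            dx, dy = q["x"] - p["x"], q["y"] - p["y"]
            d = max((dx * dx + dy * dy) ** 0.5, 1e-6)
            if p["colour"] == "blue" and q["colour"] == "red":
                if d < CONTACT:
                    p["colour"] = "turning"        # convert on contact
                else:
                    fx += ATTRACT * dx / d         # pull towards the red
                    fy += ATTRACT * dy / d
            elif p["colour"] == "red" and q["colour"] == "red":
                fx -= REPEL * dx / (d * d)         # inverse-square repulsion
                fy -= REPEL * dy / (d * d)
        p["vx"] += fx * DT
        p["vy"] += fy * DT
    for p in particles:                            # commit after the pass
        if p["colour"] == "turning":
            p["colour"] = "red"
        p["x"] += p["vx"] * DT
        p["y"] += p["vy"] * DT
```

The O(n^2) force pass is the obvious performance bottleneck, which is why spatial subdivision is usually the first refinement in a real particle tool.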
The student should have a thorough understanding of C or C++ and should have completed 3rd year graphics. It is suggested that the student should also participate in the Graphics and Artificial Life course at HONS level (csc415)
SUGGESTED INTRODUCTORY READING:
Much work has been done at Monash in the areas of unsupervised and supervised learning by Wallace, Dowe and others for uncorrelated data. Some work has also been done at Monash using these techniques for correlated data. This project will head in that direction.
A reasonably strong mathematical background will be required. Programming will almost certainly be in C. This project is closely related to a project done by Russell Edwards in 1997. It has the potential to go in either parallel or tangential directions. There is no need to have done any 3rd year subject on AI. However, if doing this project, you are strongly encouraged to take my 4th Year Hons. 2nd semester subject CSC423 Learning and Prediction, which is the only Hons. subject teaching any amount of MML.
Finite state machines can be used to model grammars or syntax. Some bodies of data can reasonably be assumed to have come from some underlying, but unknown, grammar (or finite state machine). When the data is of great interest to us, we will be interested in inferring the finite state automaton from which the data came. This project will use the Minimum Message Length (MML) principle and will be quite mathematical in nature. It will build upon work done at Monash by Wallace and Georgeff (1983) and more recently by some of their collaborators.
Artificial sample data will initially come from generating data from some model and then seeing how well the program can discover it. Real sample data to be analysed will come from DNA, proteins and speech patterns. There is no need to have done any 3rd year subject on AI.
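Generating the artificial sample data is itself a useful warm-up. The sketch below builds a random probabilistic FSA and samples strings from it; the two-symbol alphabet and the uniform construction are assumptions for illustration:

```python
import random

def random_pfsa(n_states, alphabet="ab", seed=0):
    # Each (state, symbol) pair gets one successor; each state gets a
    # random distribution over emitting a symbol or stopping.
    rng = random.Random(seed)
    trans = {(s, c): rng.randrange(n_states)
             for s in range(n_states) for c in alphabet}
    emit = {}
    for s in range(n_states):
        ws = [rng.random() for _ in alphabet] + [rng.random()]  # last = stop
        total = sum(ws)
        emit[s] = [w / total for w in ws]
    return trans, emit

def generate(trans, emit, alphabet="ab", rng=random):
    s, out = 0, []
    while True:
        r, acc, choice = rng.random(), 0.0, None
        for c, p in zip(list(alphabet) + [None], emit[s]):
            acc += p
            choice = c
            if r <= acc:
                break
        if choice is None:          # the stop event
            return "".join(out)
        out.append(choice)
        s = trans[(s, choice)]
```

The inference task is then the inverse problem: given many such strings, recover a machine whose two-part MML encoding (machine plus data) is as short as possible.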
A strong mathematical background will be required. Programming will almost certainly be in C. If doing this project, you are strongly encouraged to take my 4th Year Hons. 2nd semester subject CSC423 Learning and Prediction, which is the only Hons. subject teaching any amount of MML.
Attaching to predictions an indication of how certain the predictor is, and rewarding such predictions properly, are important issues in many fields. This project focuses on football tipping because it is topical, accessible and may be useful in teaching. The project is partly software engineering, and partly implementation of ideas concerning prediction and inference.
For the last three years, the Department of Computer Science has run a football tipping competition in which participants must nominate, for each game, not only which team they think will win, but a probability that that team will win. Tips are scored according to a simple formula, and the theory is linked to information theory and gambling theory. This year the competition was extended to allow participants to nominate a mean and standard deviation for the margin of each game. Again, there is a soundly based way to score such tips. The competition is currently run using software written in C++ (with a curses interface) by John Hurst. The software is written as a literate program (nutweb), and managed by a version control system (RCS), currently at www.csse.monash.edu.au/~footy/ .
The aim of the project is to implement new probabilistic football tipping ideas in software, and to extend the software so that the competition can be run over the World Wide Web.
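For concreteness, a hedged sketch of both scoring ideas is given below. The 1 + log2(p) reward is the usual information-theoretic form for win-probability tips; the Gaussian margin score, and in particular the fixed reference predictor it is normalised against, are assumptions for illustration and may differ from the competition's actual formula:

```python
import math

def win_prob_score(p_winner: float) -> float:
    # Bits gained relative to an uninformed 50/50 tipster.
    return 1.0 + math.log2(p_winner)

def margin_score(mu, sigma, actual_margin, ref_sigma=40.0):
    # Gaussian log-density of the actual margin, normalised against an
    # assumed reference predictor N(0, ref_sigma), expressed in bits.
    def logpdf(x, m, s):
        return -0.5 * math.log(2 * math.pi * s * s) - (x - m) ** 2 / (2 * s * s)
    return (logpdf(actual_margin, mu, sigma)
            - logpdf(actual_margin, 0.0, ref_sigma)) / math.log(2)
```

Under either rule, overstating confidence is punished in expectation, which is the property that links the competition to information theory and gambling theory.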
In more detail, the main tasks of the project are to:
Neural networks (NNs) are a technique in machine learning and "data mining" which are often useful but which have some notoriety for over-fitting the training data and then predicting comparatively poorly, for needing a lot of human tuning and for being "black box" predictors which obscure any semblance of an underlying model. This project is concerned with using Minimum Message Length (MML) to correct these problems for relatively simple neural networks. MML is robust to over-fitting noise, and fits explicitly parameterised models which predict well. The project will entail using MML to model the logistic or sigmoid function in NNs under the guidance of the supervisor, and to use MML to balance the cost of the number of hidden layer nodes and the inter-connection weights with the goodness of fit to the data. Some acquaintance with NNs and a reasonably strong mathematical or statistical background are essential.
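The balancing act described above can be made concrete as a two-part message length. The sketch below is an illustrative approximation only -- the per-weight prior, the precision delta and the flat per-node architecture cost are all assumptions, not the full Wallace-Freeman calculation the project would use:

```python
import math

def weight_bits(w, sigma=1.0, delta=0.1):
    # Bits to state one weight to precision delta under a N(0, sigma) prior.
    pdf = (math.exp(-w * w / (2 * sigma * sigma))
           / math.sqrt(2 * math.pi * sigma * sigma))
    return -math.log2(pdf * delta)

def message_length(num_hidden, weights, xs, ys, forward):
    model_bits = 4.0 * num_hidden                  # assumed architecture cost
    model_bits += sum(weight_bits(w) for w in weights)
    data_bits = 0.0
    for x, y in zip(xs, ys):
        p = min(max(forward(x), 1e-9), 1 - 1e-9)   # predicted P(y = 1)
        data_bits += -math.log2(p if y == 1 else 1 - p)
    return model_bits + data_bits                  # minimise this
```

Adding a hidden node only pays if the resulting drop in data_bits exceeds its cost in model_bits, which is precisely the over-fitting control the paragraph above describes.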
According to both the principles of MML and of Kolmogorov complexity, the most likely theory to infer from a body of data is that which gives rise to the shortest two-part compression of the data. However, on occasions, this will not give rise to a simple explicit model of the variable we wish to predict explained as a function of the other, explanatory, variables. One of many cases in point is when the variable of most interest to us is unobserved but known to come from a distribution between 0 and 1; but we do observe a second variable which functionally depends upon it. We have to balance our prior beliefs regarding the likely values of the variable of interest against the values which would have been more likely to cause the observed value of the second variable.
D L Dowe and C S Wallace (1998). Kolmogorov complexity, minimum message length and inverse learning, abstract, page 144, 14th Australian Statistical Conference (ASC-14), Gold Coast, Qld, 6 - 10 July 1998.
Wallace, C.S. and D.L. Dowe (1999). Minimum Message Length and Kolmogorov Complexity, accepted, in press, to appear, Computer Journal.
Reading : D L Dowe and A R Hajek (1998). A non-behavioural, computational extension to the Turing Test, pp101-106, Proceedings of the International Conference on Computational Intelligence & Multimedia Applications (ICCIMA'98), Gippsland, Australia, February 1998.
relevant web site.
Reading : 1) C. S. Wallace, A long-period pseudo-random generator, Tech Rept #89/123, Dept of Computer Science, Monash University, February 1989.
2) Find a random number generator by doing the following:
1. Go to here
2. Click on "C Programming" on the left of the screen
3. Click on "Code Snippets"
4. Under the heading "Portable functions and headers" click on "Random number functions"
5. Click on "Rand1.C"
The Efficient Markets Hypothesis (EMH) asserts, even in its weakest most innocuous forms, that in a market situation with no insider trading and all having access to the same public knowledge, supposedly no trading strategy can be expected in the long-term to outperform the market average. The cause of this misconception is due at least in part to the intractability of finding patterns in data that might appear to be random both on the surface and even after some analysis. Dowe and Korb (1996) have argued why markets must almost always be inefficient, and Dowe, Korb and Stillwell (in preparation) are demonstrating with empirical real-world data that practice indeed seems to follow theory. In this project, we instead create artificial, simulated markets with market agents trading with some strategies and price being driven by a combination of some underlying function and the trading strategies of the participant agents. The student will use Minimum Message Length (MML) or related machine learning techniques to discover inefficiencies in such artificial markets and also to discover trading strategies which beat the market average in the long run.
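A toy version of such an artificial market is easy to set up; everything below (the hidden fundamental, the two trader types, the noise level) is an arbitrary assumption chosen to create detectable structure:

```python
import math
import random

def simulate_market(steps=1000, seed=1):
    rng = random.Random(seed)
    price, history = 100.0, []
    for t in range(steps):
        value = 100.0 + 10.0 * math.sin(t / 50.0)   # hidden fundamental
        momentum = history[-1] - history[-2] if len(history) > 1 else 0.0
        demand = 0.2 * (value - price)              # value traders
        demand += 0.5 * momentum                    # trend followers
        price += demand + rng.gauss(0.0, 0.5)       # noise traders
        history.append(price)
    return history
```

Because the trend followers inject autocorrelation, the series is inefficient by construction; the student's task is then to show that an MML learner run on the price history alone discovers that structure and turns it into a strategy that beats the market average.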
Graphs and networks are useful models of many different kinds of information: electronic circuits, software engineering diagrams, communications networks, the WWW etc. Many applications require such graphs to be laid out in space in some way, perhaps for display as a diagram or for implementing as a circuit in several layers in VLSI. In some such cases, it is important to represent the graph in 3 dimensions, and also that any angles where edges meet or bend be right angles. Such a drawing is called a 3D orthogonal graph drawing.
For an example of such a drawing, see http://www.cs.newcastle.edu.au/~richard/phd/k7-wood.gif. (The graph drawing underlying this picture was found by local PhD student David Wood, and the picture itself was made by Richard Webber (PhD student, University of Newcastle).)
The aim of the project is to develop a tool which allows 3D graph drawings to be manipulated interactively. The tool will use the C++/Java constraint solving toolkit QOCA to ensure that the constraints of orthogonality etc are properly handled. A graphical editor for 3D orthogonal graph drawings will need to be written, probably in VRML. (Prior experience in VRML is not necessary.)
The tool should provide an interesting case study in the application of QOCA, and should be useful in current research on 3-dimensional orthogonal graph drawing. (Indeed, its desirability became apparent in research by David Wood, supervised by Graham Farr.)
Strict Minimum Message Length (SMML) is a criterion (due to Wallace) by which models of data can be assessed. It can be shown to have many desirable properties, and is important in the theory of machine learning. It is, however, very difficult to compute, except in the simplest cases. In the binomial case, we have an exact and efficient algorithm, but the trinomial case is significantly harder, and may well be NP-hard. The aim of this project will be to implement and study some algorithms for cases such as the binomial, trinomial and normal, and thus hopefully shed light on the theory. A strong mathematical background is required.
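To give the flavour of the binomial case, the following sketch computes an SMML-style grouping of the possible success counts by dynamic programming over interval partitions (in the spirit of the exact algorithm referred to above; treat the details, such as the uniform prior and the interval restriction, as assumptions of the sketch):

```python
import math

def smml_binomial(N):
    """Group the counts {0..N} (successes in N >= 1 trials, uniform prior
    on p) into intervals minimising expected two-part length, in nats."""
    r = 1.0 / (N + 1)                       # marginal probability of each count

    def logf(s, p):                         # log binomial likelihood
        if p <= 0.0:
            return 0.0 if s == 0 else -math.inf
        if p >= 1.0:
            return 0.0 if s == N else -math.inf
        return (math.lgamma(N + 1) - math.lgamma(s + 1) - math.lgamma(N - s + 1)
                + s * math.log(p) + (N - s) * math.log(1 - p))

    def cost(i, j):                         # expected length of group i..j
        q = (j - i + 1) * r                 # the group's coding probability
        p = sum(range(i, j + 1)) / ((j - i + 1) * N)   # group estimate
        return sum(r * (-math.log(q) - logf(s, p)) for s in range(i, j + 1))

    best = [0.0] + [math.inf] * (N + 1)     # best[j]: optimum over counts 0..j-1
    cut = [0] * (N + 2)
    for j in range(1, N + 2):
        for i in range(1, j + 1):
            c = best[i - 1] + cost(i - 1, j - 1)
            if c < best[j]:
                best[j], cut[j] = c, i - 1
    groups, j = [], N + 1                   # recover the optimal partition
    while j > 0:
        groups.append((cut[j], j - 1))
        j = cut[j]
    return best[N + 1], groups[::-1]
```

The trinomial analogue is exactly where this interval trick stops working, since the codebook regions live on a two-dimensional simplex -- hence the suspicion of NP-hardness mentioned above.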
For many purposes, it is convenient to regard DNA as a long sequence of symbols, where each symbol is one of C, G, A, T. It is currently not possible for scientists to determine the entire sequence for complex life forms. Small fragments of the sequence can be found, but then there is the problem of piecing the fragments together in the correct order to determine the entire sequence.
This project looks at a class of algorithms for this problem based on a divide-and-conquer approach. Algorithms will be implemented and studied experimentally.
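As a baseline against which the divide-and-conquer variants can be measured, the classic greedy strategy merges whichever pair of fragments overlaps most; a small sketch (assuming error-free fragments, which real data will not be):

```python
def overlap(a, b):
    # Length of the longest suffix of a that is a prefix of b.
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_assemble(fragments):
    # Drop fragments wholly contained in another, then repeatedly merge
    # the pair with the largest overlap until one sequence remains.
    frags = [f for f in fragments
             if not any(f != g and f in g for g in fragments)]
    while len(frags) > 1:
        best_k, bi, bj = -1, 0, 1
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = overlap(a, b)
                    if k > best_k:
                        best_k, bi, bj = k, i, j
        merged = frags[bi] + frags[bj][best_k:]
        frags = [f for n, f in enumerate(frags) if n not in (bi, bj)]
        frags.append(merged)
    return frags[0]
```

A divide-and-conquer version would instead assemble the two halves of the fragment set recursively and then merge the two partial sequences, which is the class of algorithms the project studies.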
Some background in discrete mathematics would help.
Programming will be in C++ and will build on the Leda package for combinatorial computing.
The project is part of work I am doing with Prof. Peter Eades (University of Newcastle).
The sort of games considered here are board games like Chess. A long term aim is to be able to take as input a record of the chess games of (e.g.) Garry Kasparov (World Champion), and infer something (but not everything!) about his chess playing strategy. This may be an ambitious goal, but we propose to move toward it in achievable steps. Initially, a program capable of inferring very simple aspects of strategy would be developed, and tested using records of games played by appropriately simple computer players. We have developed some basic methods for doing the inference, and expect to improve on them. Several types of inference are possible; among these, we intend to apply the principles of Minimum Message Length (MML) inference. This project builds on a 1997 project by Tony Jansen.
Programming will almost certainly be in C. If doing this project, you are strongly encouraged to take the 4th Year Hons. 2nd semester subject CSC423 Learning and Prediction, which is the only Hons. subject teaching any amount of MML. (See http://www.csse.monash.edu.au/~dld/chess.html.)
(It is not clear whether these projects will run. See Graham Farr if interested.)
Image tracking is a computationally expensive process. In recent times there have been a number of attempts to implement image tracking in hardware. Some recent results are very encouraging, leading us to believe that an implementation in FPGA using VHDL and the Mentor Graphics ECAD software will be quite successful. The hardware component of this project would involve the acquisition of an off-the-shelf FPGA board to which the memory (to hold the images) has to be interfaced.
SGML (or "Standardised General Markup Language") is a standard for defined document class and structure, and there have been a number of tools to automate the processing of such documents (see for example: http://www.sgmltools.org).
Unfortunately, SGML is not suitable for delivering documents to the web, and a new standard, XML (or "eXtensible Markup Language") has arisen. XML is a subset of SGML, and is designed specifically to facilitate the delivery of documents on the web. For example, automatic conversion of XML documents to HTML is possible, as is conversion to LaTeX/TeX, groff, rtf or even just plain ASCII text!
These all depend upon two basic tools, modelled fairly closely on compiler technology: a parser that determines that a document is "well-formed" and "valid"; and a back-end that translates to the document form required. Just as with compiler technology, much research has been invested into making these two processes table driven, and this project is about investigating these two aspects.
DTD, or "Document Type Definition" files describe the structure of documents, and how they are to be parsed. DSSSL, or "Document Style Semantics and Specification Language" files are about how the documents are rendered into presentation form (TeX, HTML, etc.). In addition, there are other standards such as XLL ("XLink Language") that provide models for document analysis, search and retrieval operations to the web.
The project is to investigate the use of XML as a document paradigm, to define appropriate DTD and DSSSL prototypes, and to write code to produce working models of the tools described above. Real life examples will be drawn from the university environment. A knowledge of HTML (and to a lesser extent, LaTeX, TeX, and rtf) formats would be helpful, although not essential.
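The table-driven back-end idea can be illustrated in a few lines: a mapping from element types to output markup drives a recursive walk of the parsed document. The element names and the HTML rules below are invented for the example; a real DSSSL style sheet is far richer:

```python
import xml.etree.ElementTree as ET

RULES = {  # hypothetical element-to-HTML table
    "doc": ("<html><body>", "</body></html>"),
    "title": ("<h1>", "</h1>"),
    "para": ("<p>", "</p>"),
    "emph": ("<em>", "</em>"),
}

def to_html(elem):
    open_tag, close_tag = RULES.get(elem.tag, ("", ""))
    out = open_tag + (elem.text or "")
    for child in elem:
        out += to_html(child) + (child.tail or "")
    return out + close_tag

print(to_html(ET.fromstring(
    "<doc><title>XML demo</title><para>Hello <emph>world</emph>.</para></doc>")))
```

Swapping RULES for a LaTeX or groff table changes the target format without touching the traversal, which is the essence of the table-driven approach the project is to investigate.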
The most expensive phase in the software life cycle is program maintenance, during which programs typically get modified so frequently that this phase may account for up to 70\% of their total development cost. A major factor in this cost is the need for software engineers to re-engineer some (rarely all) of the program code to either fix bugs, or to develop new functionality. In both of these cases, having access to the design decisions made during initial and subsequent program development can be invaluable in terms of understanding the code.
Literate Programming offers a mechanism to maintain such information. Originally proposed by Donald Knuth in 1984, the idea has seen a resurgence of activity with the advent of "second generation" (language independent) and "third generation" (WEB based) literate programming tools. The use of advanced macro processing features can also ease the program maintenance task, through appropriate revision control and platform independence mechanisms.
This project is about exploring these issues, and developing a state-of-the-art literate programming tool.
This project involves extending and improving BPP. Possible tasks include:
Robots are generally programmed by leading them through the required motion with a teach pendant. Off-line programming, which is less common but available on two of our industrial robots, permits the bulk of a program to be written without access to robot motion. Until now, viewing the operation of a robot has required presence at the robotic cell; by using image data via the internet it is possible to observe and control robot tasks from any connected terminal.

Users will require images of the robot work cell for controlling moves and monitoring robot execution, and an interface to the robot server for loading, compiling and control of the robot. The telerobotic system (Figure 1) shows how a request from an operator to the HTTPD server launches a CGI script that communicates with the robot and image servers to perform the requested move and obtain new pictures. The robot, robot controller and robot server are existing equipment.
User interaction is proposed by using Common Gateway Interface (CGI) to control physical devices by reading input from an HTML form, performing the required operation, and feeding back the result on a returned HTML page. Java offers possibilities for increasing the sophistication of the interface.
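A minimal sketch of that CGI path in Python is shown below. The host name, port, wire protocol and the snapshot URL are all placeholders invented for the example:

```python
#!/usr/bin/env python3
# Read a move request from an HTML form, forward it to the robot
# server, and return a page showing the result and a fresh image.
import cgi
import socket

form = cgi.FieldStorage()
joint = form.getfirst("joint", "1")
angle = form.getfirst("angle", "0")

with socket.create_connection(("robot-server.example", 9000), timeout=5) as s:
    s.sendall(f"MOVE {joint} {angle}\n".encode())   # assumed protocol
    reply = s.recv(1024).decode().strip()

print("Content-Type: text/html\n")
print(f"<html><body><p>Robot replied: {reply}</p>"
      f'<img src="/cgi-bin/snapshot.cgi" alt="work cell"></body></html>')
```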
The first phase will be development of the image server, including a three-camera system connected to it for three-dimensional viewing of one robot cell. This work will present images to an HTML page for teleautonomous control with discrete commands.

The second phase involves interface controls for trajectory control and motion kinematics, and an off-line programming facility on the HTTP server. The server must support multiple observers and off-line programmers while only one user is active at a time.
Currently, when performing analytical flutter clearances for the RAAF, a technique is used which, whilst giving an accurate estimate for the flutter speed, can result in non-physical solutions remote from a flutter point. It would be desirable to implement a technique which gives realistic solutions throughout the range of desired airspeeds for two reasons: i) greater faith in the results in the mind of the customer; and ii) such physically realistic solutions can be used in flight-flutter trials to monitor the performance of the mathematical model well before a potential flutter region is approached.
A method of solution which achieves the above aims does exist, but, for a number of reasons, it is computationally intensive and, therefore, time consuming. One of the main difficulties with this method is that sufficiently small increments in airspeed must be chosen such that the results at previous airspeeds can be used to predict a possible solution from which to start the iterations for the next airspeed. The question is then, what is a sufficiently small increment? In early implementations of the technique, a uniform, very small increment was used for all of the roots to be solved. What is really sufficiently small, for any given mode, however, depends very much upon the behaviour of that mode in the region of interest. In a previous Honours project, a student developed some algorithms to vary the airspeed increments for any given mode as the airspeed is increased. This honours project will build on that work by:
A technique has been developed by which creating the mathematical model is treated as a highly complex, non-linear optimisation process. All of the physical parameters required to define the model can be considered as unknown. Depending upon the model, the number of properties to be determined can vary from a few to hundreds. The best approach which has been found to date to tackle such a problem is by use of Genetic Algorithms (GAs).
Genetic Algorithms have been used with considerable success at AMRL on the problem of deriving optimal structural dynamic models for aircraft structures. Such a problem involves optimising of order 100 parameters (ie. carrying out an optimisation in 100 dimensional space). Experimental data is collected during the shaking of an aircraft in a ground vibration test (GVT) and then a GA is used to create a model which gives good correlation with these data. Inherent to such a process is the issue of model complexity (ie. how many parameters can be determined given a set of experimental data); this is also addressed in the model optimisation process, but outside of the GA.
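For readers new to GAs, the skeleton below shows the kind of loop involved -- truncation selection, uniform crossover and Gaussian mutation. All parameter values are illustrative, not the AMRL settings, and in practice the misfit function would measure disagreement between the model's predicted modes and the GVT data:

```python
import random

def genetic_optimise(misfit, dim, pop_size=50, generations=200,
                     mut_rate=0.05, mut_scale=0.1):
    """Minimal real-valued GA: misfit(candidate) -> float, minimised."""
    pop = [[random.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=misfit)
        elite = ranked[:pop_size // 5]        # keep the best fifth
        children = list(elite)                # elitism
        while len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [x if random.random() < 0.5 else y
                     for x, y in zip(a, b)]   # uniform crossover
            child = [g + random.gauss(0.0, mut_scale)
                     if random.random() < mut_rate else g
                     for g in child]          # Gaussian mutation
            children.append(child)
        pop = children
    return min(pop, key=misfit)
```

Each generation's misfit evaluations are independent of one another, which is what makes the algorithm such a natural candidate for farming out across a PVM, as proposed below.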
A means of distributing such tasks around a network seems to have been developed at the Oak Ridge National Laboratory in the US. This system is called a "Parallel Virtual Machine" (PVM) ( http://www.epm.ornl.gov/pvm/pvm_home.html). The proposed project would involve investigating how such a PVM could be set up for a collection of PCs using Windows NT and Windows 95/98 and hopefully demonstrating a simple parallelised process on two or more such machines. Then, investigating how the GAs described above could optimally be distributed amongst a range of machines over a network which will have differing processor speeds and differing amounts of spare processing capacity. Also, a means by which the system would be sufficiently robust to handle machines coming on and off line during processing would be required.
How much of the above proposal would be achieved over the course of a final year project will clearly be dependent on the problems which are encountered during the course of the project.
This project would involve several extensions to the basic existing hardware and software implementation.
This project would suit a BCSE student who has completed CSC2091/3091 Artificial Intelligence and who can program in C. In addition, a non-engineering student could also do this project, focusing on the software aspects.
Andrew Paplinski will be on sabbatical in 2nd semester so will not be offering a project in 1999.
If we select a pixel at random from a picture, it is highly likely that the pixel will be in the same segment as its nearest neighbouring pixels. The next most likely outcome is that a pixel and its nearest neighbours are likely to be in no more than two segments. That is, the block straddles the boundary between two regions. If a pixel and all its neighbours are in the same segment, then we may use all the pixel values in subsequent processing of the current pixel. If we determine that some neighbouring pixel values are not in the same segment as the current pixel, then we should exclude those pixels from processing of the current pixel.
The aim of this project is to investigate ways of looking at small blocks of pixels and deciding whether the pixels in the block all belong to the same segment or whether they belong to at least two different segments. Minimal Message Length (MML) Inductive Inference gives us a way of choosing between two possible models or explanations for a body of data. In this case, MML techniques will be used to decide whether a block of pixels can be best described by saying that all pixels in the block belong to the same segment or they belong to at least two different segments.
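As a rough, hand-rolled illustration of the idea (not the MML formulation the project would actually develop), the sketch below scores a block of grey-level pixels under a one-segment and a two-segment model and keeps whichever description is shorter. The fixed parameter costs of 8 and 4 bits are crude placeholders for the real message-length terms.

```python
import math

def model_cost(pixels):
    # Approximate bits needed to state a group's mean plus its residuals.
    if not pixels:
        return 0.0
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels) + 1.0
    return 8.0 + 0.5 * len(pixels) * math.log2(var)  # 8 bits: parameter cost

def local_segmentation(block):
    # Return 1 or 2 depending on which segment count gives the shorter message.
    one_segment = model_cost(block)
    best_two = float("inf")
    for t in sorted(set(block))[1:]:  # candidate thresholds between pixel values
        low = [p for p in block if p < t]
        high = [p for p in block if p >= t]
        best_two = min(best_two, model_cost(low) + model_cost(high) + 4.0)
    return 1 if one_segment <= best_two else 2

print(local_segmentation([10, 11, 9, 10, 240, 238, 241, 239]))  # 2: straddles an edge
print(local_segmentation([10, 11, 9, 10, 12, 10, 9, 11]))       # 1: uniform block
```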
The process of classifying blocks of pixels as consisting of only one segment or at least two segments can be termed local segmentation. Local segmentation is of fundamental importance in an image coding technique called Block Truncation Coding. In addition, local segmentation can be used in algorithms for edge detection, segmentation and for noise removal.
Binary images are pictures whose pixels are either black or white. Binary images are often used to represent faxes or pages of documents. The fax Group 3 and Group 4 compression algorithms were designed to compress binary images which consist of pages of text. These algorithms are quick and achieve reasonable results on pages of text, but they perform poorly on other kinds of binary images.
The algorithms in the JBIG standard were designed to adapt to the characteristics of the binary image and perform very well across a wide range of binary images. The JBIG algorithms perform better on images of text than the Group 3 and Group 4 algorithms and can still achieve good compression on other kinds of binary images, e.g. scanned pictures and dithered pictures. The JBIG algorithms are more expensive to implement and they make use of a form of arithmetic coding which has been patented by IBM.
The aim of this project is to develop a way of compressing binary images of text that will deliver as much compression as the Group 3 and Group 4 algorithms, will adapt to the characteristics of the binary image and will deliver better throughput rates than the JBIG algorithm. Ideas to be investigated include: transforming the binary image, use of run-length coding, use of adaptive modeling, Rice coding and chain coding.
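Of the ideas listed, run-length coding is the simplest to sketch. A minimal illustration for one row of binary pixels follows; a real fax-style coder would then entropy-code the run lengths, which this sketch omits.

```python
def rle_encode_row(row):
    # Run-length encode one row of a binary image (a list of 0/1 pixels).
    runs, current, length = [], row[0], 0
    for pixel in row:
        if pixel == current:
            length += 1
        else:
            runs.append(length)
            current, length = pixel, 1
    runs.append(length)
    if row[0] == 1:
        runs.insert(0, 0)  # convention: first run is white, so emit a 0-length run
    return runs

print(rle_encode_row([0, 0, 0, 1, 1, 0, 0, 0, 0, 1]))  # [3, 2, 4, 1]
```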
Block Truncation Coding (BTC) is a lossy method for compressing image data. It can also be applied to the lossy compression of sound data. BTC methods are relatively inexpensive to implement and BTC-coded images can be decoded very efficiently but they require relatively high bit rates to achieve a particular level of image quality.
The aim of this project is to investigate the use of prediction and interpolation in BTC coding. In addition, the coding may either use a combination of Rice coding and run length coding or it may use arithmetic coding. The aim is to see how well images can be encoded at the kinds of bit rates that are used in digital television. In video sequence coding, images might be encoded as the difference from a previous image or an image might be coded without referring to any other images. It would be interesting to see how well a BTC-based coding scheme can perform on both kinds of images.
The aim of this project is to use Minimal Message Length (MML) techniques to come up with a good objective criterion for comparing segmentations of the same image. The approach involves constructing two-part messages for each segmentation. The first part of the message allows us to recover the segmentation of the image. Once the segmentation has been recovered, it is used in conjunction with the information in the second part of the message to recover all the information in the original image. The MML criterion suggests that the two-part message with the shortest overall message length is the one with the best segmentation.
The project involves looking at ways of encoding a segmentation and then using the segment information to encode the image. One approach is to associate a segment number with each pixel and to form an image of the segment numbers. This image we can call the segment map. Segment maps can be regarded as being generalised forms of binary images and effective techniques for the lossless coding of binary images may be generalised and applied to segment maps.
This project investigates genetic and other algorithms for the optimisation of fuzzy controllers in terms of membership functions and linguistic rules.
Existing processor architectures are mostly designed and optimised for crisp logic processing. This project seeks to establish and simulate alternative processor architectures dedicated to fuzzy rule processing. It involves analytical study, VHDL modeling, simulation and, if possible, FPGA implementation.
Thresholding is one of the most popular approaches to image segmentation. In this approach, all the pixels having a certain range of some image property, say, intensity, are considered to belong to one group. Connected regions in these groups lead to the desired segmentation. In the past, the probabilistic entropy measure of Shannon, defined in the context of Information Theory, has been successfully applied in determining the gray level threshold between the "object" and the "background" regions of an image, assuming separate probability distributions for the object pixels and the background pixels. Image segmentation methods also exist in which non-probabilistic entropic criteria, devised in the fuzzy set-theoretic framework, have been used.
An Honours project carried out in the year 1997 involved the study of two probabilistic distance criteria, namely, the Bhattacharyya distance and Kullback-Leibler Divergence function, as image thresholding criteria. The purpose of the proposed project will be to make further investigations in the area of probabilistic distance-based image segmentation methods and their applications for both monochrome and colour images.
The aim of the project will be to make a critical study of a number of existing non-hierarchical cluster analysis methods, such as c-means, fuzzy c-means, and isodata, with a view to their application in image segmentation. Study of cluster validity will form an essential part of the project. Applications will include both monochrome and colour images.
Content-based image retrieval is a well-known method of image retrieval. The objective of this project is to devise an algorithm for retrieval of images from a large image data base based on colour quantisation and regional quantisation.
Texture plays an important role in both human interpretation of visual scenes and computer analysis of images. Textural cues are of particular relevance in two different, but related, image analysis problems, namely, the problems of (i) segmentation and (ii) classification of images. The proposed project will deal with both of these problems. It will involve two phases. In the first phase some existing texture analysis methods will be investigated from the point of view of their theoretical soundness as textural measures as well as their practical applicability. In the second phase attempts will be made to derive a new methodology with particular attention to its computational efficiency.
The objective of document image analysis is to recognize text, graphics, and pictures in printed documents and to extract the intended information from them. There are two broad categories of document image analysis, namely, textual processing and graphical processing. Textual processing includes skew determination (any tilt at which the document may have been scanned), finding columns, paragraphs, text lines and words, and performing optical character recognition. Graphical processing deals with lines and symbols. The scope of the proposed project will be the aspect of text processing and it will concentrate on the development of a system for page layout analysis.
Experience shows that when humans recognize patterns, the recognition is based on only a few important features (or attributes) characterizing the pattern classes. In contrast, in statistical pattern recognition, patterns are often represented by a large number of numerical features. Although there is no conceptual justification for reducing the number of features to a small number, in practical problem solving this becomes a necessary step due to the well-known 'curse of dimensionality', i.e., the effect of the dimensionality of the feature vector on the complexity of the pattern classifier. The aim of the proposed project is to develop an interactive feature selection paradigm well-suited to multiclass (rather than 2-class) pattern recognition problems.
The project will comprise four components, namely, (i) theoretical comparison of existing feature selection criteria, (ii) development of software tools for the existing criteria, followed by an experimental investigation of them, (iii) analysis of the above results leading, hopefully, to a feature selection paradigm, and (iv) development of an interactive software tool for the above paradigm. The software developed will be 'interactive' in the sense that, depending on the classification accuracy achieved at a certain stage, the user will have the option of changing the dimensionality value. Options will also be available to supply interactively a range of values of the dimensionality. The software tools will include procedures for displaying the distribution of pattern samples in different feature spaces obtained by different feature selection methods.
a) an investigation of speech coding techniques at low bit rates;
b) an implementation and evaluation of a 16 or an 8 kbps speech codec in the form of either software or hardware.
This project, which is a continuation of a project undertaken in 1998, considers the problem of automatically generating summaries from documents on the WWW. The student will use simple statistical methods to determine "important information" from the articles. This information will be collated into summaries using simple linguistic techniques. This year's project uses a corpus of documents obtained from Telstra, which are more homogeneous than those used in last year's work. This feature is expected to have some impact on the summarization techniques being applied.
A student undertaking this project should have some grounding in statistics.
This project continues similar projects undertaken in previous years. It deals with the development of complex agents which interact with a user. These agents should combine planning and problem-solving capabilities as well as emotion-related features. The focus of the project is to study the interaction between an agent's emotional state and its resources (e.g., planning ability, memory) with the goal of carrying out a dialogue (in limited form) with the user.
A student undertaking this project should have previously taken CSC3309 (Artificial Intelligence).
This project is a continuation of a 1998 project. It consists of designing and implementing a computerized tutor based on an animation process which illustrates the workings of SPIM, a MIPS simulator which is used for teaching computer architecture at first and second year level. The animation, which was implemented in 1998, highlights the different components of the architecture and the data flow for a small set of MIPS instructions. The animator may be accessed at this web site. The computerized tutor will determine parts of the subject in which the student's knowledge is inadequate. This will be done based on traces of the data path clicked by the student. The tutor will then decide on appropriate help actions, e.g., presenting explanations of architecture components or giving hints about how to proceed. Some of the data collection software required to support the tutor was implemented over the summer of 1999.
There is a possibility of some funding with these projects. Students interested should speak to Mark Jessell, and also to Ann Nicholson, the honours coordinator, before nominating these projects in their preference list.
Mark Jessell, Australian Geodynamics Cooperative Research Centre, Dept of Earth Sciences, Monash University, Clayton, VIC, 3168, Australia. [email protected]. Tel (61)(3) 9905 4902, Fax (61)(3) 9905 4903. Home-Page.
|
<urn:uuid:30f18c33-797d-4946-98f1-06b19c500147>
|
CC-MAIN-2013-20
|
http://www.csse.monash.edu.au/~annn/hons/projects1999.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.931216 | 8,618 | 2.609375 | 3 |
Web edition: August 18, 2008
Libraries and other archives of physical culture have been struggling for decades to preserve diverse media — from paper to eight-track tape recordings — for future generations. Scientists are falling behind the curve in protecting digital data, threatening the ability to mine new findings from existing data or validate research analyses. Johns Hopkins University cosmologist Alex Szalay and Jim Gray of Microsoft, who was lost at sea in 2007, spent much of the past decade discussing challenges posed by data files that will soon approach the petabyte (10^15, or quadrillion, byte) scale. Szalay commented on those challenges in an interview.
Scientific data approximately double every year, due to the availability of successive new generations of inexpensive sensors and exponentially faster computing. It’s essentially an “industrial revolution” in the collecting of digital data for science.
But every year it takes longer to analyze a week’s worth of data because even though the computing speed and data collecting roughly doubles annually, the ability to perform software analyses doesn’t. So analyses bog down.
It also becomes increasingly harder to extract knowledge. At some point you need new indexes to help you search through these accumulating mountains of data, performing parallel data searches and analyses.
Like a factory with automation, we need to process and calibrate data, transform them, reorganize them, analyze them and then publish our findings. To cope, we need laboratory information-management systems for these data and to automate more, creating work-flow tools to manage our pipelines of incoming data.
In many fields, data are growing so fast that there is no time to push them into some central repository. Increasingly, then, data will be distributed in a pretty anarchic system. We’ll have to have librarians organize these data, or our data systems will have to do it themselves.
And because there can be too much data to move around, we need to take our analyses to the data.
We can put digital data onto a protected system and then interconnect it via computer networks to a space in which users can operate remotely from anywhere in the world. Users get read-only privileges, so they cannot make any changes to the main database.
For the Sloan Digital Sky Survey data, we have been giving an account to anyone with an e-mail address. People with accounts can extract, customize and modify the data they use, but they have to store it in their own data space. We give them each a few gigabytes.
We currently have 1,600 users that are using [Sloan data] on a daily basis. Those data become a new tool. Instead of pointing telescopes at the sky, users can “point” at the data collected from some portion of the sky and analyze what they “see” in this virtual universe.
This is leading to a new type of eScience, where people work with data, not physical tools. Once huge data sets are created, you can expect that people will find ways to mine them in ways we never could have imagined.
But key to its success is the need for a new paradigm in publishing, where people team up to publish raw data. Perhaps in an overlay journal or as supplements to research papers. Users would be able to tag the data with annotations, giving these data added value....
The Sloan Digital Sky Survey was to be the most detailed map of the northern sky. We thought it would take five years. It took 16. Now we have to figure out how to publish the final data — around 100 terabytes [0.1 petabyte].
The archiving of the data is in progress. There's going to be paper and digital archives, managed by the
Today, you can scan one gigabyte of data or download it with a good computer system in a minute. But with current technologies, storing a petabyte would require about 1,500 hard disks, each holding 750 gigabytes. That means it would take almost three years to copy a petabyte database — and cost about $1 million.
We generally try to geoplex, which means keeping multiple copies at remote geographic locations. That way, if there is a fire here or a meltdown there, backup copies are unlikely to be affected. We’re also trying to store data on different media. Eventually, I think we’ll probably load data on DVDs or something, which can go into cold storage. We’ll still have to recopy them periodically if we want digital data to survive a century or more.
This is something that we have not had to deal with so far. But it’s coming — the need to consider and plan for curation as data are collected. And it’s something that the National Science Foundation is looking at: standards for long-term digital-data curation.
|
<urn:uuid:78f0799b-dddd-41a0-973a-49add98f7d33>
|
CC-MAIN-2013-20
|
http://www.sciencenews.org/view/generic/id/35263/description/Preserving_digital_data_for_the_future_of_eScience
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704517601/warc/CC-MAIN-20130516114157-00084-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.936448 | 1,006 | 2.859375 | 3 |
Data Structures using C Papers is a computer application paper that makes use of programming languages like C and C++ to create data structures. It starts with analysis of algorithm and goes on to cover elementary principles of C and basic data structures including dynamics of arrays and link representation. Stacks and queues formed by abstract data types, trees and graphs are also taught extensively. The methods of searching and sorting and file structures and complexities take up a crucial part of the syllabus. A little detail into the main syllabus reveal that the elementary topics assemble all the facets of structure operations, data organization, array as parameters and matrices, prefix and postfix expressions, binary trees, sequential searching, hashing, collision resolution strategies, insertion and internal sorting and complexity of search algorithms. The terminologies, definitions, case studies, numerical explorations and comparisons are the key instruments used in explaining the concepts and programs to the apprentices.
|
<urn:uuid:cf61609f-20f3-4132-9946-5e20cdf04f96>
|
CC-MAIN-2013-20
|
http://www.thequestionpapers.com/technology-2nd-sem/biju-patnaik-university-of-technology-2nd-sem-data-structures-using-c-exam/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705020058/warc/CC-MAIN-20130516115020-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.888703 | 181 | 3.40625 | 3 |
Building a Workforce for the Information Economy
The IT Sector: Context and Character
1.1 THE TRAJECTORY OF INFORMATION TECHNOLOGY
Few foresaw the rate of progress in information technology (IT) and the IT-producing industries over the last few decades—a brief period in which computing went from infancy to ubiquity. Digital technologies have become plentiful, inexpensive, and powerful. Through successive waves, computing advanced from stand-alone systems to batch processing, from batch processing to time-sharing, from time-sharing to personal computers, and now from personal computers to information appliances connected to the Internet. Each of these transitions enabled computing to reach an ever-widening circle of users. Microprocessors are now in machines everywhere, from supercomputers to servers, to very powerful desktop and portable computers, to consumer devices and specialized equipment of all kinds. They are embedded in automobiles, aircraft, and telephones, controlling such functions as antilock brakes, automated landing systems, and cellular call processing.
While small or everyday systems capture the popular imagination, large systems power many sophisticated applications. When characterizing IT systems, large can refer to the kind of problem to be solved, and so-called high-performance systems handle complex applications with large numbers of computations or store huge amounts of information. Large can also refer to the number of connections among devices and smaller systems, and thanks to the Internet, computer-based networking is increasingly large-scale, integrating products and applications from different
|
<urn:uuid:a15a8edf-8003-4c11-ba18-eb9f9f80ea84>
|
CC-MAIN-2013-20
|
http://www.nap.edu/openbook.php?record_id=9830&page=23
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706477730/warc/CC-MAIN-20130516121437-00077-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.918808 | 343 | 2.5625 | 3 |
Automatic Programming is defined as the synthesis of a program from a specification. If automatic programming is to be useful, the specification must be smaller and easier to write than the program would be if written in a conventional programming language.
Our approach to automatic programming is based on reuse of generic algorithms through views. A generic algorithm performs some task, such as sorting a linked list of records, based on abstract descriptions of the data on which the program operates. A view describes how actual application data corresponds to the abstract data as used in the generic algorithm. Given a view, a generic algorithm can be specialized by a compilation process to produce a version of the algorithm that performs the algorithm directly on the application data.
Graphical user interfaces make it easy for the user to create views of the application data. Given a view, any of the library algorithms defined for that view can be specialized to work with the application data. Specialized programs can be produced in multiple languages (Lisp, C, C++, Java or Pascal) from a single copy of the generic algorithms.
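As a loose analogy in Python, rather than the Lisp-based system described here, a view can be modelled as an object that tells a generic algorithm how to read the application's data. The employee records and salary field below are invented for illustration.

```python
def generic_min(items, view):
    # Generic algorithm: touches application data only through the view.
    best = items[0]
    for item in items[1:]:
        if view.key(item) < view.key(best):
            best = item
    return best

class SalaryView:
    # View: maps concrete employee records onto the abstract 'ordered item'.
    @staticmethod
    def key(record):
        return record["salary"]

employees = [{"name": "Ann", "salary": 5200}, {"name": "Bob", "salary": 3100}]
print(generic_min(employees, SalaryView))  # {'name': 'Bob', 'salary': 3100}
```

Specialization, as described above, would compile the view's accessors directly into the algorithm instead of dispatching through an object at run time.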
A related system allows a program to be specified graphically by connecting diagrams that represent data, physical laws, and mathematical models.
On-line demonstrations of these programs are available below.
CS 394P: Automatic Programming
|
<urn:uuid:129e1f27-e98f-4086-b2d9-8259f59509d0>
|
CC-MAIN-2013-20
|
http://www.cs.utexas.edu/~novak/autop.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703057881/warc/CC-MAIN-20130516111737-00081-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.888768 | 261 | 3.5 | 4 |
The National Science Foundation has awarded University computer scientist Ian Foster a $12 million grant to develop a system that will seamlessly link together computers across the world. The system, called a grid, will allow scientists to harness the resources of thousands of computers at once, in essence creating supercomputers distributed across continents.
Foster, along with Carl Kesselman of the University of Southern California, will lead the project called National Middleware Initiative (NMI). NMI will draw together computer scientists to develop software that will erase the gaps between computers it is installed on.
This software, called middleware, links computers together into a continuous network across which resources ranging from storage space to processing power can be instantly transferred. Middleware acts as a uniform protocol between different systems and computer languages, removing barriers that would otherwise make communication difficult.
"It's Napster for scientists," said Foster, referring to the popular music-swapping website. On a grid system, scientists will be able to swap data and any other resources stored on their computers. This ability to access many computers at once can be used to solve problems that would otherwise be unmanageable. Problems that have remained unsolved for lack of processor time or memory become much more accessible.
The SETI@home program, for instance, a screensaver that hunts through radio telescope data for signs of intelligent life, has successfully used this principle to gain thousands of computer years of processing time from computers that would otherwise have been idle. It has become, in essence, the world’s fastest computer.
A grid system, which will allow users to consciously devote their computers to such problems, would be even more powerful. "Communities of scientists and engineers can share resources as they pursue some common goal," Foster said.
Already, many groups of scientists have been working to build grids linking their computers. Foster has been involved in projects for earthquake researchers who need the power of a grid to run their variable-intensive simulations. Physicists are building grids to help deal with the enormous amount of data gathered by particle accelerators, in which each collision may produce millions of particles. Climatologists envision a grid linking weather sensors and computers to help track the progress of global warming. Indeed, almost any project that involves large numbers of either scientists or data points can benefit from a grid system.
"People increasingly use large collaborative efforts," Foster said. "[Interest in grid networks] is becoming inevitable."
Foster, author of a 1997 book which coined the term "grid technologies," has been involved in the effort to develop working grids since interest in them was first piqued in 1995. At that time, an experiment called I-Way linked 17 networks across the country into a very powerful demonstration grid for about a week.
Since then, efforts to solve the problems involved in creating stable grids have occupied groups around the world. The NMI will coordinate those groups and work to develop common standards and software for grids. "We will take all this technology, integrate it with a bunch of others, and really get it out to the community," Foster said.
Among those technologies is that developed by Foster’s own Globus project, which is devoted to researching and developing the middleware needed to make grids run. The project’s Globus toolkit seems likely to become the standard for grid programs.
Developing more and better middleware programs is the focus of the NMI. Foster and Kesselman will lead three years of research and development that will integrate the findings of the many groups working on grid research. They plan to release their middleware for free distribution. "We haven't asserted any control over the intellectual property," Foster said.
There are challenges ahead. A grid must strike a delicate balance between security and freedom of computer sharing, allowing users to safeguard their data even as they allow access to it. Grids must also be resilient enough to withstand a few faulty computers within them. They must be self-aware enough to direct users to the resources they need, which may be on computers all over the world. And all this complex software has to be easy to use. "The software we develop addresses technical issues in a very large, dynamic, heterogeneous environment," Foster said.
The grid technology being developed by NMI will most likely be used largely by scientists at first, but Foster suspects business uses will follow. Companies could use grids to increase their processing power or store large amounts of data. IBM has already expressed a strong interest in the technology. "None of this is happening in a vacuum. There's tremendous industrial activity," Foster said. "We see a convergence of grid and commercial technologies."
|
<urn:uuid:43cf7e5e-4daf-4e16-814d-6df56e270439>
|
CC-MAIN-2013-20
|
http://chicagomaroon.com/2001/10/02/nsf-grants-12-million-to-u-of-c-prof-for-grid-software/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705543116/warc/CC-MAIN-20130516115903-00058-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.959218 | 973 | 3.5625 | 4 |
Hosted virtual desktops are common in datacenters where virtual machines are implemented on a number of servers. This is a modern trend in computing that substitutes the traditional way that involves the operation of a single desktop by a single user. The data involved is usually stored in a central location allowing users to access it from any location via mobile devices such as thin clients, laptops, smartphones, and tablets among others. The stored data is accessed over WANs, LANs and the internet.
The systems are built on a robust foundation known as virtual desktop infrastructure (VDI). A special program is installed and run on the desktop and hosts a number of remote users. A user has to be connected to the internet in order to access the server by using a remote display protocol. The sessions are maintained through connection brokering services. The administration of the system is made much easier by centralizing all the facilities.
There are a number of benefits associated with this computing technology. It introduces high levels of data security in the organization. It eliminates the need to store critical data files on local physical systems that are prone to theft and failure, preventing instances of data loss. Backups are done on a daily basis, and firewalls are used to prevent unauthorized access and virus attacks.
One of the main roles of management is to minimize the cost of operating the business. This computing approach introduces the concept of resource sharing. This saves a lot of money since computing facilities tend to be more expensive particularly where each employee has a personal facility. The amount of electrical power consumed is also cut down.
The systems are built with a high level of flexibility to allow for customization to suit special needs. They support a wide range of applications which can be installed and configured as per the needs of each client. Integrating these systems into existing working environments does not present any difficulties. The resources needed for migration are low.
A majority of businesses have been able to increase their productivity through this technique. The data is accessible remotely from any region and via any mobile device connected to the internet. Employees are able to perform their duties from any location they are comfortable with. Maintenance and upgrades of the system are done by the providers, saving the company a lot of money.
The hardware and software components of these systems are not fixed, and it is possible to separate them. This makes the systems scalable, so they can be expanded to accommodate additional functions of the business. They are designed for high availability by being implemented on three- and four-tier platforms. This creates many network paths, making them redundant.
To survive the stiff competition in the business world, entrepreneurs search for ways to improve their service delivery processes. The benefits of hosted virtual desktops have attracted numerous business owners. They offer greater reliability, access, security and availability of data. The main challenges encountered are the complexity and the high costs of implementing them. Large-scale firms, with their capital and professional workforces, have benefited greatly from this computing approach.
You can visit the website www.bluedognetwork.com for more helpful information about General Information About Hosted Virtual Desktops
|
<urn:uuid:ab34be17-f4ec-4fbd-931d-b056941316ee>
|
CC-MAIN-2013-20
|
http://www.triplegforce.com/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710963930/warc/CC-MAIN-20130516132923-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.951714 | 631 | 2.8125 | 3 |
Computer Science 220. Computer Organization and Assembly Language
A detailed look at the internal organization and logic of computers.
The programming portion of the course considers a common assembly language and how such instructions are translated to the binary instructions of a traditional 32-bit machine language. Addressing modes and stack behavior related to subroutine calls are discussed in detail.
The computer organization portion of the course discusses gates, storage circuits, the arithmetic and logic unit, fetch/execute cycles and data paths. Microcoding is discussed in detail. The question of performance, in relation to a computer’s architecture and the choices made by programmers, is a major theme throughout the course.
|
<urn:uuid:2d0d1ee2-91c6-4592-9087-6f5f8675efc3>
|
CC-MAIN-2013-20
|
http://wheatoncollege.edu/catalog/comp_220/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702127714/warc/CC-MAIN-20130516110207-00092-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.911643 | 135 | 3.3125 | 3 |
The Internet continues to grow exponentially on a daily basis. This growing demand gives rise to the need to think carefully about how web scale applications are written so that they will work properly under challenging workloads. Fortunately, computing hardware is growing increasingly faster, and at a fraction of the price. Memory is abundant and CPUs are 64-bit. Additionally, both CPUs and bandwidth are inexpensive. While storage is more economical than ever, I/O capacity and high-speed network interconnects are not inexpensive.
As a direct result of these changes, the concepts of traditional software development are no longer applicable or appropriate. Database development must shift its approach given these changes in resource cost and availability. These changes have brought with them the generation of large amounts of data, along with groundbreaking new methods for building interactive software solutions and data analysis systems. The playing field has changed tremendously and requires a shift in gears when it comes to thinking web-scale.
Five Major Concepts
There are five major concepts that should be considered when coding. Following an overview of these concepts will be useful suggestions to assist with mastering these concepts.
Concept #1: Maintain a Low Requirement of Resources
The need to keep resource requirements low is frequently overlooked today, largely due to fast CPUs, plentiful memory and speedy networks. Making software procedures more efficient is sometimes the simplest way to decrease scalability issues. Clearly, the quicker a user gets on and off a CPU, the less data transmitted and memory occupied. Following that thought, the application will also run faster. In order to decrease the processing time per unit of work, the following four resources must be considered: CPU time, memory, network bandwidth, and disk I/O.
Concept #2: Run in Parallel to Lessen Bottlenecks
The fastest way to diminish bottlenecks is to break workloads into undersized pieces and then process those pieces in parallel on separate storage devices, networks, CPU cores or servers. Additionally, be sure to keep coordinated synchronization of the processing to a minimum because locks will kill currency.
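A minimal sketch of the idea in Python, assuming the work units are fully independent so no locks or shared state are needed:

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    return sum(x * x for x in chunk)  # stand-in for real per-chunk work

def parallel_total(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]  # small independent pieces
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    print(parallel_total(range(1_000_000)))
```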
Concept #3: Make Data Decentralized
Data has been traditionally stored centrally. This model requires the database to be scaled vertically, a process that becomes increasingly rigorous and costly for maintaining and expanding. Additionally, central databases can become I/O bottlenecks very rapidly while updating status changes within a central data design can be challenging. Keep in mind that distributing data over numerous servers, though, also distributes the write load. A distributed data store system, such as Cassandra, can assist with this task, handling the partitioning of data while making it very easy to add servers as capacity needs increase.
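A toy illustration of spreading the write load by key; the server names are hypothetical. A production store such as Cassandra uses consistent hashing, so that adding a node moves only a fraction of the keys, unlike the naive modulo below, which reshuffles almost everything.

```python
import hashlib

SERVERS = ["db-0", "db-1", "db-2", "db-3"]  # hypothetical partition nodes

def server_for(key):
    # Route each key deterministically so writes spread across all servers.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(server_for("user:42"), server_for("user:43"))
```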
Concept #4: Eventual Consistency
If slightly stale data can be used in your application, then an eventually consistent data storage system can be employed. These systems run asynchronous processes to update remote replicas of the stored data. Immediately after an update, though, users might see a stale version of the data. Carefully determine which data need ACID (Atomicity, Consistency, Isolation, Durability) properties. If BASE (Basically Available, Soft-State, Eventually Consistent) is adequate, then superior scalability and availability can be achieved with a distributed data storage system that employs asynchronous data replication.
Concept #5: Scale Horizontally, Rather than Vertically
Simply put, vertical scaling is the addition of resources (more memory, faster network interfaces) to a given computer system. Although this process can be simple and reasonably priced, the vertical limits can be reached very rapidly. So how do you increase if you’re already running the largest, fastest system you can afford? The solution is horizontal scaling. To do so, you need to add more servers and partition the work among them. If your system scales well horizontally, your needs may be perfectly met with multiple, slower systems. However, writing your software without a horizontal scaling capability can cause an impasse. Rather than getting snared into scaling vertically with no other options, horizontal scaling capabilities can lead to further scaling with significant cost savings.
Nine Suggestions: How to Write Code that Scales
Suggestion #1: Define Worst Case Scenario Plan
What’s your worst-case scenario? Define and quantify what an acceptable result would be in a worst-case usage scenario. It may look something like, “Support 20,000 simultaneous connections with < 2 second response time.” Keep this number in mind at every stage of software design and implementation. You may even need to remind everyone on a regular basis because it’s easy to get sidetracked by the feature list and forget the architectural performance and scalability goals. If it’s in front of you at all times and you write a test plan before even writing a single line of code, you will be able to meet your defined scenario.
Suggestion #2: Ensure Caching Mechanisms
Determine which data is most often accessed, and have it cached into memory for the provision of repeated, high-speed access. In most instances, distributed memcached clusters are better than host-based caches.
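A sketch of the resulting cache-aside pattern, with an in-process dictionary standing in for a distributed memcached cluster and a made-up database loader:

```python
cache = {}  # stand-in for a memcached client

def load_profile_from_db(user_id):
    return {"id": user_id, "name": f"user-{user_id}"}  # pretend slow query

def get_profile(user_id):
    # Serve repeated reads from memory; fall back to the database on a miss.
    if user_id not in cache:
        cache[user_id] = load_profile_from_db(user_id)
    return cache[user_id]

get_profile(7)   # miss: hits the database
get_profile(7)   # hit: served from cache
```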
Suggestion #3: Network Data Compression
Compressing and decompressing the data exchanged between clients and servers is often overlooked. Doing so is a great way to assist with the responsiveness of applications. By reducing the data transfer times, this in turn increases the capacity that can be handled per unit of time. The time cost incurred by this process is typically trivial when weighed against the benefit gained from the increase in speed. The overall efficiency using compressed transmissions, rather than uncompressed, is usually greater.
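In Python, for instance, compressing a response body before it goes on the wire is a single call (the JSON payload here is invented):

```python
import gzip, json

payload = json.dumps({"rows": list(range(2000))}).encode()
compressed = gzip.compress(payload)
print(len(payload), "->", len(compressed))      # far fewer bytes to transmit
assert gzip.decompress(compressed) == payload   # client reverses it losslessly
```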
Suggestion #4: Disk Data Compression
Storage may be inexpensive, but I/O is not. You can readily and efficiently amplify the I/O throughput by compressing the stored data.
Suggestion #5: Sensory driven admission control system
When employing work queues, it’s worthwhile to think about a sensory driven admission control system. A common error is not placing limits on simultaneous usage over the network. For example, let’s consider a system with a satisfactory response time X. Within that response time is a maximum capacity of doing 50 things concurrently while producing 50 units of output. If you increase tasks to 51 things, output might decrease to 30. Taking it a step further, if you give the system 52 things to do, your output might sharply decrease to 20.
Illustrated by our example, pushing a system beyond its limits can cause it to spin its wheels without getting additional work completed. A recommendation is to queue all work, refusing work when the queue gets too long. Many of us are reluctant to have visible limits on system usage, but controlling the rate at which you accept work when operating near reasonable limits is much more preferable. Your client will receive a busy signal if you reject work; this is an improvement from getting “hung up” on! By using a sensible admission control system with an appropriate procedure, you can refuse traffic when work queues get too long. If your system scales well horizontally, though, it could be useful to use an API call to create more Cloud Servers to assist with servicing a growing work queue. This would potentially allow you to scale resources in order to track demand – avoiding entirely the need to refuse work. Keep in mind that this flexible provision is not a replacement for admission control.
At some point, demand may still exceed available capacity, and you need a backup plan to handle the excess work. Still not convinced? Contemplate what might happen if your system had an infinite loop and the server mistakenly acted as a client to itself. Having admission control in place could interrupt the loop before the whole system crashed.
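A minimal admission-control sketch using a bounded queue; the capacity of 50 echoes the example above and would be tuned from measurements in practice.

```python
import queue

work_queue = queue.Queue(maxsize=50)  # illustrative limit

def admit(job):
    # Reject new work rather than letting the queue grow without bound.
    try:
        work_queue.put_nowait(job)
        return True          # accepted
    except queue.Full:
        return False         # send the client a 'busy' response instead

for i in range(60):
    if not admit(i):
        print("rejected job", i)  # jobs 50-59 are turned away
```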
Suggestion #6: Seek
Seeking is the cause of the majority of I/O bottlenecks. Steer clear of operations that will cause your disk(s) to seek when reading or writing data to a disk storage system. Whenever feasible, be sure to swap random I/O patterns with sequential ones.
Suggestion #7: Low Overhead per Connection
If you require significant amounts of memory per connection, not much work can be performed at the same time. For instance, 10GB of memory with a 200MB requirement per connection can only support about 50 connections at a time. However, switching to a 12-thread worker pool of 200MB each and decreasing the memory per connection to 2MB might permit roughly 7,000+ concurrent connections, with work completed at a comparable (and often faster) rate than when only 50 concurrent connections were supported. Using asynchronous I/O facilities such as select() and epoll() for connection handling can drive even more efficiencies.
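The select()/epoll() approach is what Python's asyncio wraps. A sketch of a server whose per-connection cost is a lightweight task rather than a multi-megabyte thread (the port is arbitrary, and the run line is commented out so the sketch stays inert):

```python
import asyncio

async def handle(reader, writer):
    # Each connection is a small coroutine, not a heavyweight thread.
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

# asyncio.run(main())
```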
Suggestion #8: Only Save the Essential State
Suggestion #9: Avoid Use of Parsed Text
Try to limit the use of parsed text. This applies if components of your application communicate over the network, or perform large amounts of communication between client and server. It is tempting to use XML or JSON data formats because they interoperate across different architectures in your network communication. Unfortunately, the server-side resources used to parse the text data are CPU intensive and can significantly slow down processing times. Lightweight is key, and a simple binary protocol can eliminate the need for text parsing.
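A small comparison of a fixed binary record against its JSON equivalent, using invented fields; the server unpacks the binary form with no text parsing at all:

```python
import struct, json

RECORD = struct.Struct("!Iqf")  # 4-byte id, 8-byte timestamp, 4-byte reading

packed = RECORD.pack(42, 1700000000, 3.14)
as_json = json.dumps({"id": 42, "ts": 1700000000, "val": 3.14}).encode()
print(len(packed), "binary bytes vs", len(as_json), "JSON bytes")

record_id, ts, val = RECORD.unpack(packed)  # constant-size, no parser needed
```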
Servicing tens of millions of users makes simple system decisions difficult. However, designing systems that are horizontally scalable may result in a much more complex design. Before committing to scalability, consider what will be the true cost of efficiency. There are times that running a larger, quicker system is the real solution. Compression, encryption, thread pools and a host of other solutions are not tasks for a novice, and may be excessive for the demands of most applications.
Know your needs and keep the five concepts in mind when developing applications from the outset; this will ensure you implement what makes sense in your application. Depending on your requirements, you may only need to implement a few of the concepts to create an extremely scalable system.
|
<urn:uuid:af2ebee1-7f2e-4735-a1ea-acd2c283ed8c>
|
CC-MAIN-2013-20
|
http://www.rackspace.com/knowledge_center/article/how-to-write-code-that-scales
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703317384/warc/CC-MAIN-20130516112157-00033-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.922742 | 2,090 | 2.609375 | 3 |
An architecture for distributing computing across the network, using high performance servers and desktop clients. Lotus Notes and relational database management systems use the architecture to separate front-end user processes from back-end database services.
computing technique for processing data between a "client" computer and a file "server." (See Client, and Server).
One of the popular forms of distributed computing, in which information is shared between client and server machines.
The division of an application into two parts; a front end client and a back end server. It allows multiple front ends running on a PC or Unix workstation (client) to access the same SQL based server database at the same time over the LAN. The aim is to off-load as much processing as possible to the intelligent desktop leaving only the shared information and the software for managing it at the central server. An application that is running in such a fashion with client and server linked by a LAN is termed a bifurcated application.
Term used to describe distributed computing (processing) network systems in which transaction responsibilities are divided into two parts: client (front end) and server (back end). Both terms (client and server) can be applied to software programs or actual computing devices. Also called distributed computing (processing).
A computing system in which two types of computers (client machines and server machines) perform different specialized functions.
Architectural model that functionally divides the execution of a unit of work between activities initiated by an end user or program (client) and those maintaining data (servers). Originally thought to make mainframes obsolete.
A computing environment in which processing capabilities are distributed throughout a network such that a client computer requests processing or some other type of service from a server computer.
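In concrete terms, the division these definitions describe can be as small as a single socket pair. A minimal Python sketch, with the loopback address and query string invented (the two functions would run in separate processes):

```python
import socket

def run_server():
    # Back end: waits for requests and returns shared data.
    with socket.create_server(("127.0.0.1", 9000)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"result for: " + conn.recv(1024))

def run_client():
    # Front end: sends a request and displays the reply.
    with socket.create_connection(("127.0.0.1", 9000)) as c:
        c.sendall(b"SELECT balance FROM accounts")
        print(c.recv(1024))
```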
|
<urn:uuid:8b6eafbe-891b-4277-beb3-e7d0f75703e2>
|
CC-MAIN-2013-20
|
http://www.metaglossary.com/meanings/678678/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00087-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.919449 | 350 | 3.375 | 3 |
Computer science or computing science (abbreviated CS or CompSci) is the scientific and practical approach to computation and its applications. A computer scientist specializes in the theory of computation and the design of computational systems.
Computing using unconventional methods found in nature has become an important branch of computer science, one which might help scientists construct more robust and reliable devices. For instance, the ability of biological systems to assemble and grow on their own enables much higher interconnection densities, while swarm-intelligence algorithms mimic ant colonies that find optimal paths to food sources. [...]
No, not the sports car, nor the predatory feline, but Oak Ridge National Lab's Jaguar, a supercomputer of immense computing capabilities set to top the ranks of the fastest computers in the world, for the second time, after a GPU (graphics processing unit) upgrade. Capable of simulating physical systems with heretofore unfeasible speed and accuracy, from [...]
|
<urn:uuid:83583051-67e0-47ad-ab54-77bc0e99940a>
|
CC-MAIN-2013-20
|
http://www.zmescience.com/tag/computer-science/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697232084/warc/CC-MAIN-20130516094032-00038-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.897427 | 189 | 2.875 | 3 |
In this program, how do these for loops work?
Given the set of functional dependencies {a->bcd, cd->e, e->cd, d->ah, abh->bd, dh->bc}, find a non-redundant cover.
essentials of circuit analysis
It's not more than 3 questions; it's only 2 questions. Please check again. The rest is just giving you information and details so you can have an idea. Question #1: Create a two-page paper reflecting on what strategies you will implement in terms of your career development. Question #2: ...
It's only 2 questions; please check again. The rest is just giving you information and details so you can have an idea. Create a two-page paper reflecting on: Question #1: what strategies you will implement in terms of your career development. Question #2: How these strategies...
What are the Application-Level Requirements List in UOP IT210 Programming with Algorithms and Logic?
Identify at least two data structures that are used to organize a typical file cabinet. Why do you feel it is necessary to emulate these types of data structures in a computer program? For what kind of work project would you want to use this type of program?
R.13. Describe how Web caching can reduce the delay in receiving a requested object. Will Web caching reduce the delay for all objects requested by a user or for only some of the objects? Why?
Use branch-and-bound to solve the following IP problem. This problem must be solved in graphical approach- show all the work. Max Z = 5X1 + 2X2 S.T. 3X1 + X2 <= 12 X1 + X2 <=5 X1 >= 0, X2>=0 X1 and X2 are integer
Function with Input/Output Parameters: Write a void function called Result that has only two type int input/output parameters. The function will return the sum of the two original parameters through the first parameter. It will also return the difference of the two original parameters through...
- 1. Identify and describe Trust/Security Domain boundaries that may be applicable to personal computer (workstation) security in a business context.
2. This is a C++ codelab question.
- The "origin" of the cartesian plane in math is the point where x and y are both zero. Given a variable, origin of type Point-- a structured type with two fields, x and y, both of type double, write one or two statements that make this variable's field's values consistent with the mathematical notion of "origin".
- Assume two variables p1 and p2 of type POINT, with two fields, x and y, both of type double, have been declared. Write a statement that reads values for p1 and p2 in that order. Assume that values for x always precede y.
- In mathematics, "quadrant I" of the cartesian plane is the part of the plane where x and y are both positive. Given a variable, p that is of type POINT-- a structured type with two fields, x and y, both of type double-- write and expression that is true if and only the point represented by p is in "quadrant I".
|
<urn:uuid:8e0ece29-52f3-4498-a969-34cf4b765238>
|
CC-MAIN-2013-20
|
http://www.coursehero.com/tutors/problems/Computer-Science/12581/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703748374/warc/CC-MAIN-20130516112908-00007-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.89296 | 747 | 2.921875 | 3 |