Thursday, 14 November 2013

DVR (digital video recording)

A digital video recorder (DVR) records digital television broadcasts and stores them on a disk (see hard disk and cd-rom and dvd-rom). DVRs first appeared as commercial products in 1999 with ReplayTV and TiVo, the latter becoming the most successful player in the field.

A DVR works with digital signals and discs rather than tape used by the video cassette recorders (VCRs) that had become popular starting in the 1980s. The digital recorder has several advantages over tape:

•  much larger capacity, limited only by hard drive size

•  instant (random) access to any recorded programming without having to go forward or backward through a tape

•  the ability to “time shift” within a live broadcast, including pausing and instant replay

•  the ability to skip over commercials

•  digital special effects

DVR and Integrated Entertainment

Besides what it can do with the program itself, the other big advantage of DVR technology stems from the fact that it produces digital data in a standard format (usually an MPEG file) that is fully compatible with PCs and other computing devices. Indeed, by installing one or more TV tuners or “cable cards” (for access to digital cable signals) in a PC, one need only add suitable software to turn a Windows, Macintosh, or Linux PC into a versatile DVR. Alternatively, many cable and satellite TV services are offering set-top boxes with built-in DVRs.

Services such as TiVo also provide access to an online program schedule (for a monthly charge). This works with features that allow the user to scan for and review program listings and to arrange, for example, to record all new episodes of a weekly series as they arrive. DVRs with dual tuners allow for recording two live programs simultaneously, or recording one while watching another.

DVR technology is also now being used for closed-circuit television (CCTV) surveillance systems, due to superior storage and playback capabilities. Similar technology is also found in digital video cameras (camcorders).

DVRs are part of a landscape where entertainment that used to be confined to television broadcast, cable, or satellite systems can now be received digitally over the Internet. Since DVRs produce digital output, recorded programs can be easily shared over the Internet, such as by posting on the popular YouTube site, possibly leading to loss of revenue for the original providers (see intellectual property and computing). In response, HBO and other providers have argued for requiring that DVRs recognize content that is flagged as “copy never” and refuse to copy such programs.

Another problem for providers is the growing number of DVR users who have the ability to easily skip over commercials. Attempts are being made to make commercials shorter and more entertaining, or to rely more on product placement within the programming itself.

DSL (digital subscriber line)

DSL (digital subscriber line) is one of the two most prevalent forms of high-speed wired access to the Internet (see broadband and cable modem). DSL can operate over regular phone lines (sometimes called POTS or “plain old telephone service”). DSL takes advantage of the fact that existing phone lines can carry frequencies far beyond the narrow band used for voice telephony. When installing DSL, the phone company must evaluate the quality of existing lines to determine how many frequency bands are usable, and thus how much data can be transmitted. Further, because the higher the frequency, the shorter the distance the signal can travel, the available bandwidth drops as one gets farther from the central office or a local DSL Access Multiplexer (DSLAM).

Typical DSL services can range in speed from 128 kbps to 3 Mbps. Many providers offer higher speeds at additional cost. Speeds quoted are generally maximums; actual speed may be less due to poor line quality or greater distance from the central office.

The most common form of DSL is ADSL (asymmetric DSL), which has much higher download speeds than upload speeds. This is generally not a problem, since most users consume much more content than they generate. The lower frequencies are generally reserved for regular voice and fax service. A single DSL modem can serve multiple users in a local network by being connected to a router.

As more people move from land-line phone service to cellular, there has been greater demand for so-called naked DSL, that is, DSL without traditional phone service. DSL can also be provided over optical fiber (see fiber optics).

Note that an older and lower-bandwidth version of the technology called ISDN (Integrated Services Digital Network) is still in use, but has largely been superseded by DSL/ADSL.

Alternatives to DSL

Cable is still more popular than DSL, though the latter has closed the gap somewhat. The fact that the two services can both provide fast Internet access (mostly) through existing infrastructure has created considerable competition. Thus a cable provider can now offer telephone service via the Internet (see voip) at the same time a phone provider using DSL can offer movies and television programming streamed over the network. The fact that in many locations DSL and cable providers are in competition can result in lower rates or more attractive “bundles” of services for consumers.

On average, cable modem speeds are somewhat faster than DSL; however, cable speeds can degrade as more users are added to a circuit. Although both services have had their share of glitches, they now both tend to be quite reliable.

Dreyfus, Hubert

(1929–  ) American
Philosopher, Cognitive Psychologist

As the possibilities for computers going beyond “number crunching” to sophisticated information processing became clear starting in the 1950s, the quest to achieve artificial intelligence (AI) was eagerly embraced by a number of innovative researchers. For example, Allen Newell, Herbert Simon, and Cliff Shaw at the RAND Corporation attempted to write programs that could “understand” and intelligently manipulate symbols rather than just literal numbers or characters. Similarly, MIT’s Marvin Minsky (see Minsky, Marvin) was attempting to build a robot that could not only perceive its environment, but in some sense understand and manipulate it. (See artificial intelligence and robotics.)

Into this milieu came Hubert Dreyfus, who had earned his Ph.D. in philosophy at Harvard. Dreyfus had specialized in the philosophy of perception (how meaning can be derived from a person’s environment) and phenomenology (the understanding of processes). When Dreyfus began to teach a survey course on these areas of philosophy, some of his students asked him what he thought of the artificial intelligence researchers who were taking an experimental and engineering approach to the same topics the philosophers had discussed abstractly.

Philosophy had attempted to explain the process of perception and understanding (see also cognitive science). One tradition, the rationalism represented by such thinkers as Descartes, Kant, and Husserl, took the approach of formalism and attempted to elucidate rules governing the process. They argued that in effect the human mind was a machine (albeit a wonderfully complex and versatile one). The opposing tradition, represented by the phenomenologists Wittgenstein, Heidegger, and Merleau-Ponty, took a holistic approach in which physical states, emotions, and experience were inextricably intertwined in creating the world that people perceive and relate to.

If computers, which at that time had only the most rudimentary “senses” and no emotions, could perceive and understand in the way humans did, then the rules-based approach of the rationalist philosophers would be vindicated. But when Dreyfus examined the AI efforts, he wrote a paper titled “Alchemy and Artificial Intelligence.” His comparison of AI to alchemy was provocative in that it suggested that, like the alchemists, the modern AI researchers had met with only limited success in manipulating their materials (such as by teaching computers to perform such intellectual tasks as playing checkers and even proving mathematical theorems). However, Dreyfus concluded that the kind of flexible, intuitive, and ultimately robust intelligence that characterizes the human mind couldn’t be matched by any programmed system. Each time AI researchers demonstrated the performance of some complex task, Dreyfus examined the performance and concluded that it lacked the essential characteristics of human intelligence. Dreyfus expanded his paper into the book What Computers Can’t Do. Meanwhile, critics complained that Dreyfus was moving the goal posts after each play, on the assumption that “if a computer did it, it must not be true intelligence.”

Two decades later, Dreyfus reaffirmed his conclusions in What Computers Still Can’t Do, while acknowledging that the AI field had become considerably more sophisticated in creating systems with emergent behavior (such as neural networks).


Currently a professor in the Graduate School of Philosophy at the University of California, Berkeley, Dreyfus continues his work in pure philosophy (including a commentary on phenomenologist philosopher Martin Heidegger’s Being and Time) while still keeping an eye on the computer world in his latest publication, On the Internet.

DOM (Document Object Model)

The Document Object Model (DOM) is a way to represent a Web document (see html and xml) as an object that can be manipulated using code in a scripting language (see JavaScript). The DOM was created by the World Wide Web Consortium (W3C) as a way to standardize methods of manipulating Web pages at a time when different browsers used different access models. The full specification is divided into four levels (0 through 3). By 2005, most DOM specifications were supported by the major Web browsers.

Using DOM, a programmer can navigate through the hierarchical structure of a document, following links or “descending” into forms and user-interface objects. With DOM one can also add HTML or XML elements, as well as load, save, or format documents.

Code can also be written to respond to a number of “events,” including user keyboard or mouse activity and interactions with specific user-interface elements and HTML forms. For example, the “mouseover” event will be triggered when the user moves the mouse cursor over a defined region. The code can then perform an action such as popping up a box with explanatory text. The “submit” event will be triggered when the user has finished filling in a form and clicked the button to send it to the Web site. When an event occurs, the event object is used to pass detailed information about it to the program, such as which key or button was pressed, the location of the mouse pointer, and so on.
Although learning the DOM methods and how to use them takes some time, and familiarity with JavaScript is helpful, the syntax for accessing DOM methods should be familiar to anyone who has used an object-oriented programming language. Here are some simple sample statements.

Get the element with the specified ID:

document.getElementById(id)

Get all elements with the specified tag name:

document.getElementsByTagName(tagname)

Get the specified attribute (property) of an element:

myElement.getAttribute(attributeName)

Create an element with the specified tag and reference it through a variable:

var myElementNode = document.createElement(tagname)

Evaluation

Although dynamic HTML (DHTML) also has an object model that can be used to access and manipulate individual elements, DOM is more comprehensive because it provides access to the document as a whole and the ability to navigate through its structure.

By providing a uniform way to manipulate documents, DOM makes it easier to write tools to process them in a series of steps. For example, database programs and XML parsers can produce DOM document “trees” as output, and an XSLT (XSL Transformations) processor can then be used to format the final output.
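Because the DOM is language neutral, the same operations shown above for JavaScript are also available in other bindings, such as Java’s standard org.w3c.dom and javax.xml.parsers packages. The following sketch (the two-book catalog is invented sample data) parses a small XML string into a DOM tree, walks it with getElementsByTagName and getAttribute, and adds a new element with createElement:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class DomDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<catalog><book id=\"b1\"><title>DOM Basics</title></book>"
                   + "<book id=\"b2\"><title>XML at Work</title></book></catalog>";

        // Parse the document into an in-memory DOM tree.
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(xml)));

        // Navigate the tree: visit every <book> element and read its data.
        NodeList books = doc.getElementsByTagName("book");
        for (int i = 0; i < books.getLength(); i++) {
            Element book = (Element) books.item(i);
            System.out.println(book.getAttribute("id") + ": "
                + book.getElementsByTagName("title").item(0).getTextContent());
        }

        // Add a new element, much as a script might do in a browser.
        Element extra = doc.createElement("book");
        extra.setAttribute("id", "b3");
        doc.getDocumentElement().appendChild(extra);
    }
}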

For working with XML, another popular alternative is the Simple API for XML (SAX). The SAX model is quite different from DOM in that the former “sees” a document as a stream of events (such as element nodes) and the parser is programmed to call methods as events are encountered. DOM, on the other hand, is not a stream but a tree that can be entered arbitrarily and traversed in any direction. SAX streams, however, do not require that the entire document be held in memory, and processing can sometimes be faster.
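For contrast, here is a minimal SAX sketch using the same standard Java packages (again with a tiny invented document). Rather than building a tree, the parser streams through the input and calls the handler’s methods as each element is encountered:

import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class SaxDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<catalog><book id=\"b1\"/><book id=\"b2\"/></catalog>";

        // The handler's callbacks fire as elements stream past; no tree is kept.
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attrs) {
                if (qName.equals("book")) {
                    System.out.println("Saw book " + attrs.getValue("id"));
                }
            }
        };

        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new InputSource(new StringReader(xml)), handler);
    }
}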

document model

Most early developers and users of desktop computing systems thought in terms of application programs rather than focusing on the documents or other products being created with them. From the application point of view, files are opened or created, content (text or graphics) is created, and the file is then saved. There is no connection between the files except in the mind of the user. The dominant word processors of the 1980s (such as WordStar and WordPerfect) were designed as replacements for the typewriter and emphasized the efficient creation of text (see word processing). Users who wanted to work with other types of information had to run completely separate applications, such as dBase for databases or Lotus 1-2-3 for spreadsheets. Working with graphics images (to the extent it was possible with early PCs) required still other programs.

This “application-centric” way of thinking suited program developers at a time when most computer systems (such as those running MS-DOS) could run only one program at a time. But increasing processor power, memory, and graphics display capabilities during the late 1980s made it possible to create an operating system such as Microsoft Windows that could display text fonts and formatting, graphics, and other content in the same window, and run several different program windows at the same time (see multitasking). In turn, this made it possible to present a model that was more in keeping with the way people had worked in the precomputer era.
In the new “document model,” instead of thinking in terms of individual application programs working with files, users could think in terms of creating documents. A document (such as a brochure or report) could contain formatted text, graphics, and data brought in from database or spreadsheet programs. This meant that in the course of working with a document users would actually be invoking the services of several programs: a word processor, graphics editor, database, spreadsheet, and perhaps others. To the user, however, the focus would be on a screen “desktop” on which would be arranged documents (or projects), not on the process of running individual programs and loading files.

Implementing the Document Model

There are two basic approaches to maintaining documents. One is to create large programs that provide all of the features needed, including word processing, graphics, and data management (see application suite). While such tight integration can (ideally at least) create a seamless working environment with a consistent user interface, it lacks flexibility. If a user needs capabilities not included in the suite (such as, perhaps, the ability to create an HTML version of the document for the Web), one of two cumbersome procedures would have to be followed. Either the operating system’s “cut and paste” facilities might be used to copy data from another application into the document (possibly with formatting or other information lost in the process), or the document could be saved in a file format that could be read by the program that was to provide the additional functionality (again with the possibility of losing something in the translation).

Linking and Embedding

A more sophisticated approach is to create a protocol that applications could use to call upon one another’s services. Windows provides this through COM (Component Object Model), building on a technology formerly called OLE (Object Linking and Embedding). Using this facility, someone working on a document in Microsoft Word could “embed” another object such as an Excel spreadsheet or an Access database into the current document (which becomes the container). When the user double-clicks on the embedded object, the appropriate application is launched automatically, and the user sees the screen menus and controls from that application instead of those in Word. (One can also think of Word in this example as the client and Excel or Access as the server; see client-server computing.) All work done with the embedded object is automatically updated by the server application, and everything is stored in the same document file. Alternatively, an application may be linked rather than embedded. In that case, the container document simply contains a pointer to the file in the other application. Whenever that file is changed, all documents that are linked to it are updated. Object embedding thus preserves a document-centric approach but works with any applications that support that facility, regardless of vendor. The Macintosh operating system offers a similar facility. Apple and IBM attempted unsuccessfully to create a competing standard called OpenDoc. This should not be confused with the more recent OpenDocument standard used by the popular open-source suite OpenOffice.org. Meanwhile Microsoft’s COM, gradually introduced during the late 1990s, has been largely superseded by .NET (see Microsoft .NET). This reflects a shift in emphasis from a document model (within a single computer) to a more comprehensive “network object model.”

Document and object models are also increasingly important for working on the Web. This can be seen in the increasing use of XML documents and the Document Object Model (see xml and dom). This involves the use of a consistent programming interface (see api) by which many applications can create or process XML documents for data communication or display.


documentation, user

As computing moved into the mainstream of offices and schools beginning in the 1980s and accelerating through the 1990s, the need to train millions of new computer users spawned the technical publishing industry. In addition to the manual that accompanied the software, third-party publishers produced full-length books for beginners and advanced users as well as “dictionaries” and reference manuals (see also technical writing). A popular program such as WordPerfect or (today) Adobe Photoshop can easily fill several shelves in the computer section of a large bookstore.

A number of publishers targeted particular audiences and adopted distinctive styles. Perhaps the best known is the IDG “Dummies” series, which eventually diversified its offerings from computer-related titles to everything from home remodeling to investing. Berkeley, California, publisher Peachpit Press created particularly accessible introductions for Windows and Macintosh users. At the other end of the spectrum, publishers Sams, Osborne, Waite Group, and Coriolis targeted the developer and “power user” community, and the eclectic, erudite volumes from O’Reilly grace the bookshelves of many UNIX users.

Online Documentation

During the 1980s, the lack of a multitasking, window-based operating system limited the ability of programs to offer built-in (or “online”) documentation. Traditionally, users could press the F1 key to see a screen listing key commands and other rudimentary help. However, both the Macintosh and Windows-based systems of the 1990s included the ability to incorporate a standardized, hypertext-based help system in any program. Users could now search for help on various topics and scroll through it while keeping their main document in view. Another facility, the “wizard,” offered the ability to guide users step by step through a procedure.

The growth of the Web has provided a new avenue for online help. Today many programs link users to the vendor’s Web site for additional help. Even help files stored on the user’s own hard drive are increasingly formatted in HTML for display through a Web browser. Additional sources of help for some programs include training videos and animated presentations using programs such as PowerPoint.

By the late 1990s, printed user manuals were becoming a less common component in software packages. (Instead, the manual was often provided as a file in the Adobe Acrobat format, which reproduces the exact appearance of printed material on the screen.) The computer trade book industry has also declined somewhat, but the bookstore still offers plenty of alternatives for users who are more comfortable with printed documentation.

documentation of program code

Computer system documentation can be divided into two main categories based upon the intended audience. Manuals and training materials for users focus on explaining how to use the program’s features to meet the user’s needs (see documentation, user). This entry, however, focuses on the creation of documentation for programmers and others involved in software development and maintenance (see also technical writing).

Software documentation can consist of comments describing the operation of a line or section of code. Early programming, with its reliance on punched cards, had only minimal facilities for incorporating comments. (Some of the proponents of COBOL thought that the language’s English-like syntax would make additional documentation unnecessary. Like the similar claim that trained programmers would no longer be needed, the reality proved otherwise.)

After the switch from punchcard input to the use of keyboards, adding comments became easier. For example, a comment in C looks like this:

printf("Hello, world\n");   /* Display the traditional message */

while C++ uses comments in this form:

cout << "Hello, World";   // This is also a comment

Each language provides a particular symbol or set of symbols for separating comments from executable code. The compiler ignores comments when compiling the program.

While proper commenting can help people understand a program’s functions, the coding style should also be one that promotes clarity. This includes the use of descriptive and consistent names for variables and functions. This can also be influenced by the conventions of the operating system: For example, Windows has many special data structures that should be used consistently.
In addition to the commented source code, external documentation is usually provided. Design documents can range from simple flowcharts or outlines to detailed specifications of the program’s purpose, structure, and operations. Rather than being considered an afterthought, documentation has been increasingly integrated into the practice of software engineering and the software development process. This practice became more prevalent during the 1960s and 1970s when it became clear that programs were not only becoming larger and more complex, but also that significant programs such as business accounting and inventory applications were likely to have to be maintained or revised for perhaps decades to come. (The lack of adequate documentation of date-related code in programs of this vintage became an acute problem in the late 1990s. See y2k problem.)

Documentation Tools

As programmers began to look toward developing their craft into a more comprehensive discipline, advocates of structured programming placed an increased emphasis not only on proper commenting of code but on the development of tools that could automatically create certain kinds of documentation from the source code. For example, there are utilities for C, C++, and Java (javadoc) that will extract information about class declarations or interfaces and format them into tables. Most software development environments now include features that cross-reference “symbols” (named variables and other objects). The combination of comments and automatically generated documentation can help with maintaining the program as well as being helpful for creating developer and user manuals.
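As a sketch of the structured comments such tools consume, the hypothetical class below uses javadoc’s /** ... */ syntax with @param and @return tags; running the javadoc tool over the source would generate HTML reference pages describing the class and its method:

/**
 * Computes simple order totals for a billing module.
 * (Hypothetical example class, for illustration only.)
 */
public class OrderCalculator {

    /**
     * Returns the total price including sales tax.
     *
     * @param subtotal the pre-tax amount in dollars
     * @param taxRate  the tax rate as a fraction (for example, 0.08 for 8 percent)
     * @return the subtotal plus tax
     */
    public double totalWithTax(double subtotal, double taxRate) {
        return subtotal * (1.0 + taxRate);
    }
}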

While programmers retain considerable responsibility for coding standards and documentation, larger programming staffs typically have specialists who devote their full time to maintaining documentation. This includes the logging of all program change requests and the resulting new distributions or “patches,” the record of testing and retesting of program functions, the maintenance of a “version history,” and coordinating with technical writers in the production of revised manuals.

DNS (domain name system)

The operation of the Internet requires that each participating computer have a unique address to which data packets can be routed (see Internet and tcp/ip). The Domain Name System (DNS) provides human-readable alphabetic equivalents to the numeric IP addresses, giving the now-familiar Web addresses (URLs), e-mail addresses, and so on.
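As a small illustration of this mapping (a sketch only; the host name shown is the standard example domain), Java’s built-in InetAddress class performs the same DNS lookup that happens behind the scenes whenever a URL or e-mail address is used:

import java.net.InetAddress;

public class DnsLookup {
    public static void main(String[] args) throws Exception {
        // Ask DNS for the numeric address(es) behind a human-readable name.
        InetAddress[] addresses = InetAddress.getAllByName("www.example.com");
        for (InetAddress addr : addresses) {
            System.out.println(addr.getHostName() + " -> " + addr.getHostAddress());
        }
    }
}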

The system uses a set of “top-level” domains to categorize these names. One set of domains is based on the nature of the sites involved, including .com (commercial, corporate), .edu (educational institutions), .gov (government), .mil (military), .org (nonprofit organizations), .int (international organizations), and .net (network service providers).
The other set of top-level domains is based on the geographical location of the site, for example, .au (Australia), .fr (France), and .ca (Canada). (While the United States has the .us domain, it is generally omitted in practice, because the Internet was developed in the United States.)

INTERNET COUNTRY CODES     
(partial list)   
  
AD Andorra   
AE United Arab Emirates   
AF Afghanistan   
AG Antigua and Barbuda   
AI Anguilla   
AL Albania   
AM Armenia   
AN Netherlands Antilles   
AO Angola   
AQ Antarctica   
AR Argentina   
AS American Samoa   
AT Austria   
AU Australia   
AW Aruba   
AZ Azerbaijan   
BA Bosnia and Herzegovina   
BB Barbados   
BD Bangladesh   
BE Belgium   
BF Burkina Faso   
BG Bulgaria   
BH Bahrain   
BI Burundi   
BJ Benin   
BM Bermuda   
BN Brunei Darussalam   
BO Bolivia   
BR Brazil   
BS Bahamas   
BT Bhutan   
BV Bouvet Island   
BW Botswana   
BY Belarus   
BZ Belize   
CA Canada   
CC Cocos (Keeling) Islands   
CF Central African Republic   
CG Congo   
CH Switzerland   
CI Côte d’Ivoire (Ivory Coast)   
CK Cook Islands   
CL Chile   
CM Cameroon   
CN China   
CO Colombia   
CR Costa Rica  
CS Czechoslovakia (former)
CU Cuba
CV Cape Verde   
CX Christmas Island    
CY Cyprus   
CZ Czech Republic   
DE Germany   
DJ Djibouti   
DK Denmark   
DM Dominica   
DO Dominican Republic    
DZ Algeria   
EC Ecuador    
EE Estonia   
EG Egypt   
EH Western Sahara   
ER Eritrea   
ES Spain  
ET Ethiopia    
FI Finland   
FJ Fiji   
FK Falkland Islands (Malvinas)    
FM Micronesia   
FO Faroe Islands    
FR France   
FX France, Metropolitan   
GA Gabon   
GB Great Britain (UK)   
GD Grenada   
GE Georgia   
GF French Guiana    
GH Ghana    
GI Gibraltar
GL Greenland   
GM Gambia
GN Guinea 
GP Guadeloupe    
GQ Equatorial Guinea    
GR Greece   
GS S. Georgia and S. Sandwich Isls   
GT Guatemala    
GU Guam  
GW Guinea-Bissau   
GY Guyana  
HK Hong Kong   
HM Heard and McDonald Islands   
HN Honduras   
HR Croatia (Hrvatska)   
HT Haiti   
HU Hungary    
ID Indonesia    
IE Ireland  
IL Israel   
IN India   
IO British Indian Ocean Territory   
IQ Iraq  
IR Iran    
IS Iceland   
IT Italy   
JM Jamaica   
JO Jordan   
JP Japan   
KE Kenya   
KG Kyrgyzstan   
KH Cambodia   
KI Kiribati   
KM Comoros   
KN Saint Kitts and Nevis   
KP Korea (North)   
KR Korea (South)   
KW Kuwait   
KY Cayman Islands   
KZ Kazakhstan   
LA Laos   
LB Lebanon   
LC Saint Lucia   
LI Liechtenstein   
LK Sri Lanka   
LR Liberia   
LS Lesotho   
LT Lithuania   
LU Luxembourg   
LV Latvia   
LY Libya   
MA Morocco   
MC Monaco   
MD Moldova   
MG Madagascar   
MH Marshall Islands   
MK Macedonia   
ML Mali    
MM Myanmar    
MN Mongolia   
MO Macau   
MP Northern Mariana Islands   
MQ Martinique   
MR Mauritania   
MS Montserrat   
MT Malta   
MU Mauritius   
MV Maldives   
MW Malawi   
MX Mexico   
MY Malaysia   
MZ Mozambique   
NA Namibia   
NC New Caledonia   
NE Niger   
NF Norfolk Island   
NG Nigeria   
NI Nicaragua   
NL Netherlands
NO Norway
NP Nepal
NR Nauru
NT Neutral Zone
NU Niue
NZ New Zealand (Aotearoa)
OM Oman
PA Panama
PE Peru
PF French Polynesia
PG Papua New Guinea
PH Philippines
PK Pakistan
PL Poland
PM St. Pierre and Miquelon
PN Pitcairn
PR Puerto Rico
PT Portugal
PW Palau
PY Paraguay
QA Qatar
RE Reunion
RO Romania
RU Russian Federation
RW Rwanda
SA Saudi Arabia
SB Solomon Islands
SC Seychelles
SD Sudan
SE Sweden
SG Singapore
SH St. Helena
SI Slovenia
SJ Svalbard and Jan Mayen Islands
SK Slovak Republic
SL Sierra Leone
SM San Marino
SN Senegal
SO Somalia
SR Suriname
ST Sao Tome and Principe
SU USSR (former)
SV El Salvador
SY Syria
SZ Swaziland
TC Turks and Caicos Islands
TD Chad
TF French Southern Territories
TG Togo
TH Thailand
TJ Tajikistan
TK Tokelau
TM Turkmenistan
TN Tunisia
TO Tonga
TP East Timor
TR Turkey
TT Trinidad and Tobago   
TV Tuvalu   
TW Taiwan   
TZ Tanzania   
UA Ukraine   
UG Uganda   
UK United Kingdom   
UM US Minor Outlying Islands   
US United States   
UY Uruguay   
UZ Uzbekistan   
VA Vatican City State (Holy See)   
VC Saint Vincent and the Grenadines   
VE Venezuela   
VG Virgin Islands (British)   
VI Virgin Islands (U.S.)   
VN Viet Nam   
VU Vanuatu   
WF Wallis and Futuna Islands   
WS Samoa   
YE Yemen   
YT Mayotte   
YU Yugoslavia   
ZA South Africa   
ZM Zambia   
ZR Zaire   
ZW Zimbabwe  

distributed computing

This concept involves the creation of a software system that runs programs and stores data across a number of different computers, an idea pervasive today. A simple form is the central computer (such as in a bank or credit card company) with which thousands of terminals communicate to submit transactions. While this system is in some sense distributed, it is not really decentralized. Most of the work is done by the central computer, which is not dependent on the terminals for its own functioning. However, responsibilities can be more evenly apportioned between computers (see client-server computing).

Today the World Wide Web is in a sense the world’s largest distributed computing system. Millions of documents stored on hundreds of thousands of servers can be accessed by millions of users’ Web browsers running on a variety of personal computers. While there are rules for specifying addresses and creating and routing data packets (see Internet and tcp/ip), no one agency or computer complex controls access to information or communication (such as e-mail).

Elements of a Distributed Computing System

The term distributed computer system today generally refers to a more specific and coherent system, such as a database where data objects (such as records or views) can reside on any computer within the system. Distributed computer systems generally have the following characteristics:

•  The system consists of a number of computers (sometimes called nodes). The computers need not necessarily use the same type of hardware, though they generally use the same (or similar) operating systems.

•  Data consists of logical objects (such as database records) that can be stored on disks connected to any computer in the system. The ability to move data around allows the system to reduce bottlenecks in data flow or optimize speed by storing the most frequently used data in places from which it can be retrieved the most quickly.

•  A system of unique names specifies the location of each object. A familiar example is the DNS (Domain Name System) that directs requests to Web pages.

•  Typically, there are many processes running concurrently (at the same time). Like data objects, processes can be allocated to particular processors to balance the load. Processes can be further broken down into threads (see concurrent programming). Thus, the system can adjust to changing conditions (for example, processing larger numbers of incoming transactions during the day versus performing batches of “housekeeping” tasks at night).

•  A remote procedure call facility enables processes on one computer to communicate with processes running on a different computer.

•  Inter-process communication protocols specify the processing of “messages” that processes use to report status or ask for resources. Message-passing can be asynchronous (not time-dependent, and analogous to mailing letters) or synchronous (with interactive responses, as in a conversation).

•  The capabilities of each object (and thus the messages it can respond to or send) are defined in terms of an interface and an implementation. The interface is like the declaration in a conventional program: It defines the types of data that can be received and the types of data that will be returned to the calling process. The implementation is the code that specifies how the actual processing will be done. The hiding of implementation details within the object is characteristic of object-oriented programming (see class).

•  A distributed computing environment includes facilities for managing objects dynamically. This includes lower-level functions such as copying, deleting, or moving objects and systemwide capabilities to distribute objects in such a way as to spread the load on the system’s processors more evenly, to make backup copies of objects (replication), and to reclaim and reorganize resources (such as memory or disk space) that are no longer allocated to objects.

Three widely used systems for distributed computing are Microsoft’s DCOM (Distributed Component Object Model), OMG’s Common Object Request Broker Architecture (see Microsoft .net and corba), and Sun’s Java/Remote Method Invocation (Java/RMI). While these implementations are quite different in details, they provide most of the elements and facilities summarized above.
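As a sketch of how the interface/implementation split looks in practice, the fragment below uses Java RMI (the QuoteService name, the "quotes" registry entry, and the placeholder return value are all invented for illustration). The interface declares the operations a caller may invoke; the implementation runs on the server, and a client obtains a stub from the registry and calls the remote method as if it were local:

import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface: only these operations are visible to callers.
interface QuoteService extends Remote {
    double getQuote(String symbol) throws RemoteException;
}

// The server-side implementation of those operations.
class QuoteServiceImpl extends UnicastRemoteObject implements QuoteService {
    QuoteServiceImpl() throws RemoteException { super(); }

    public double getQuote(String symbol) throws RemoteException {
        return 42.0;  // placeholder value for illustration
    }
}

public class RmiSketch {
    public static void main(String[] args) throws Exception {
        // Server side: start a registry, then publish the object under a name.
        LocateRegistry.createRegistry(1099);
        Naming.rebind("rmi://localhost/quotes", new QuoteServiceImpl());

        // Client side (normally on another machine): look up the stub and call it.
        QuoteService quotes = (QuoteService) Naming.lookup("rmi://localhost/quotes");
        System.out.println("Quote: " + quotes.getQuote("XYZ"));
    }
}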

Applications

Distributed computing is particularly suited to applications that require extensive computing resources and that may need to be scaled (smoothly enlarged) to accommodate increasing needs (see grid computing). Examples might include large databases, intensive scientific computing, and cryptography. A particularly interesting example is SETI@home, which invites computer users to install a special screen saver that runs a distributed process during the computer’s idle time. The process analyzes radio telescope data for correlations that might indicate receipt of signals from an extraterrestrial intelligence (see cooperative processing).

Besides being able to marshal very large amounts of computing power, distributed systems offer improved fault tolerance. Because the system is decentralized, if a particular computer fails, its processes can be replaced by ones running on other machines. Replication (copying) of data across a widely dispersed network can also provide improved data recovery in the event of a disaster.



distance education

Distance education (also called distance learning or virtual learning) is the use of electronic information and communication technology to link teachers and students without their being together in a physical classroom.

Distance education in the form of correspondence schools or classes actually began as early as the mid-19th century with the teaching of the Pitman shorthand writing method. Later, correspondence classes became part of Chautauqua, a movement to educate the rural and urban working classes, taking advantage of the growing reach of mail service through Rural Free Delivery. In correspondence schools, each lesson is typically mailed to the student, who completes the required work and returns it for grading. A certificate is awarded upon completion of course requirements. A few universities (such as the University of Wisconsin) also began to offer correspondence programs.

By the middle of the 20th century, radio and then television were being used to bring lectures to students. This increased the immediacy and spontaneity of teaching. The availability of videotape in the 1970s allowed leading teachers to create customized courses geared to different audiences. However, the ability of students to interact with teachers remained limited.

In the 1960s computers also began to be used for education. One of the earliest and most innovative programs was PLATO (Programmed Logic for Automatic Teaching Operations), which began at the University of Illinois but was later expanded to hundreds of networked terminals. PLATO in many ways pioneered the combining of text, graphics, and sound (what would later be called multimedia). PLATO also provided for early forms of both e-mail and computer bulletin boards.

Meanwhile, with the development of ARPANET and eventually the Internet, a new platform became available for delivering instruction. By the mid-1990s, courses were being delivered via the Internet (see World Wide Web).

Modern Distance Education

As broadband Internet access becomes the norm, more Internet-based learning environments are taking advantage of video conferencing technology, allowing teachers and students to interact face to face. This helps answer a common objection by critics that distance education cannot replicate the personal and social dimensions of face-to-face education. Another way this objection is sometimes addressed by universities is by having a period of physical residency (perhaps a few weeks) as part of the semester.

New platforms for distance education continue to emerge. Class content, including lectures, has been formatted for delivery to mobile devices such as iPods (see pda and smartphone). Another intriguing idea is to establish the classroom within an existing virtual world, such as the popular game Second Life (see online games). Here students and teachers can meet “face to face” through their virtual embodiments (avatars). It seems only a matter of time before entire universities will exist in such burgeoning alternative worlds.


disaster planning and recovery

Most businesses, government offices, or other organizations are heavily dependent on having continuous access to their data and the hardware, network, and software necessary to work with it. Activities such as procurement (see supply chain management), inventory, order fulfillment, and customer lists are vital to day-to-day operations. Any disaster that might disrupt these activities, whether natural (such as an earthquake or severe weather) or human-made (see computer virus and cyberterrorism), must be planned for. Such planning is often called “business continuity planning.”

The most basic way to protect against data loss is to maintain regular backups (see backup and archive systems). On-site backups can protect against hardware failure, and can consist of separate storage devices (see networked storage) or the use of redundant storage within the main system itself (see raid). However, for protection against fire or other larger-scale disaster, it is also necessary to have regular off-site backups, whether using a dedicated facility or an online backup service.
To protect against power failure or interruption, one or more uninterruptible power supplies (UPS) can be used, and possibly a backup generator to deal with longer-term outages. All equipment should also have surge protection to avoid damage from power fluctuations.

Of course anything that can minimize the chance of a disaster happening or the extent of its effects should also be part of disaster planning. This can include structural reinforcement, physical security, firewalls and antivirus software, and fire alarms and suppression systems.

Disaster Planning

Despite the best precautions, disasters will continue to happen. Organizations whose continued existence depends on their data and systems need to plan systematically how they are going to respond to foreseeable risks, and how they are going to recover and resume operations. Planning for disasters involves the following general steps:

•  specify the potential costs and other impacts of loss of data or access

•  use that data to prioritize business functions or units

•  assess how well facilities are currently being protected

•  determine what additional hardware or services (such as additional file servers, attached storage, or remote backup) should be installed

•  develop a comprehensive recovery plan that specifies procedures for dealing with various types of disasters and extents of damage, including immediate response, recovery or restoration of data, and resumption of normal services

•  develop plans for communicating with customers, authorities, and the general public in the event of a disaster

•  specify the responsibilities of key personnel and provide training in all procedures

•  arrange ahead of time for sources of supplies, additional support staff, and so on

•  establish regular tests or drills to verify the effectiveness of the plan and to maintain the necessary skills

Recent natural disasters as well as the 9/11 terrorist attacks have spurred many organizations to begin or enhance their disaster planning and recovery procedures.


disabled persons and computing

The impact of the personal computer upon persons having disabilities involving sight, hearing, or movement has been significant but mixed. Computers can help disabled people communicate and interact with their environment, better enabling them to work and live in the mainstream of society. At the same time, changes in computer technology can, if not ameliorated, exclude some disabled persons from fuller participation in a society where computer access and skills are increasingly taken for granted.

Computers as Enablers

Computers can be very helpful to disabled persons. With the use of text-to-speech software, blind people can have online documents read to them. (With the aid of a scanner, printed materials can also be input and read aloud.) Persons with low vision can benefit from software that can present text in large fonts or magnify the contents of the screen. Text can also be printed (embossed) in Braille. Deaf or hearing-impaired persons can now use e-mail or instant messaging software for much of their communication needs, largely replacing the older and more cumbersome teletype (TTY and TDD) systems. As people who have seen presentations by physicist Stephen Hawking know, even quadriplegics who have only the use of head or finger movements can input text and have it spoken by a voice synthesizer. Further, advances in coupling eye movements (and even brain wave patterns) to computer systems and robotic extensions offer hope that even profoundly disabled persons will be able to be more self-sufficient.

Challenges

Unfortunately, changes in computer technology can also cause problems for disabled persons. The most pervasive problem arose when text-based operating systems such as MS-DOS were replaced by systems such as Microsoft Windows and the Macintosh that are based on graphic icons and the manipulation of objects on the screen. While text commands and output on the older systems could be easily turned into speech for the visually impaired, everything, even text, is actually graphics on a Windows system. While it is possible to have software “hook into” the operating system to read text within Windows out loud, it is much more difficult to provide an alternative way for a blind person to find, click on, drag, or otherwise manipulate screen objects. Thus far, while Microsoft and other operating system developers have built some “accessibility” features such as screen magnification into recent versions of their products, there is no systematic, integrated facility that would give a blind person the same level of access as a sighted person.
The growth of the World Wide Web also poses problems for the visually impaired, since many Web pages rely on graphical buttons for navigation. Software plug-ins can provide audio cues to help with screen navigation. While Web browsers usually have some flexibility in setting the size of displayed fonts, some newer features (such as cascading style sheets) can remove control over font size from the user.
Because most computer systems today use graphical user interfaces, the failure to provide effective access may be depriving blind and visually impaired persons of employment opportunities. Meanwhile, the computer industry, educational institutions, and workplaces face potential challenges under the Americans with Disabilities Act (ADA), which requires that public and workplace facilities be made accessible to the disabled. Some funding through the Technology-Related Assistance Act has been provided to states for promoting the use of adaptive technology to improve accessibility.

Monday, 11 November 2013

Dijkstra, Edsger W.

(1930–2002) Dutch
Computer Scientist

Edsger W. Dijkstra was born in Rotterdam, Netherlands, in 1930 into a scientific family (his mother was a mathematician and his father was a chemist). He received an intensive and diverse intellectual training, studying Greek, Latin, several modern languages, biology, mathematics, and chemistry. While majoring in physics at the University of Leiden in 1951, he attended a summer school at Cambridge that kindled what soon became a major interest in programming. He continued this pursuit at the Mathematical Center in Amsterdam in 1952 while finishing studies for his physics degree. At the time there were no degrees in computer science; indeed, programming did not yet exist as an academic discipline. Like most other computers of the time, the Mathematical Center’s ARMAC was custom-built. With no high-level languages yet in use, programming required intimate familiarity with the machine’s architecture and low-level instructions. Dijkstra soon found that he thrived in such an environment.

By 1956, Dijkstra had discovered an algorithm for finding the shortest path between two points. He applied the algorithm to the practical problem of designing electrical circuits that used as little wire as possible, and generalized it into a procedure for traversing treelike data structures.
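A compact sketch of the shortest-path idea, in the now-standard priority-queue form rather than Dijkstra's original presentation (the small weighted graph is invented sample data): repeatedly settle the closest unvisited node and relax the edges leading out of it.

import java.util.Arrays;
import java.util.PriorityQueue;

public class ShortestPath {
    // Returns dist[], where dist[v] is the shortest distance from source to v.
    static int[] dijkstra(int[][][] adj, int source) {
        int n = adj.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;

        // Queue of {node, distance} pairs, ordered by distance.
        PriorityQueue<int[]> pq = new PriorityQueue<int[]>(
            (a, b) -> Integer.compare(a[1], b[1]));
        pq.add(new int[] {source, 0});

        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int u = cur[0], d = cur[1];
            if (d > dist[u]) continue;             // stale entry; u already settled
            for (int[] edge : adj[u]) {            // edge = {neighbor, weight}
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {       // found a shorter path to v
                    dist[v] = dist[u] + w;
                    pq.add(new int[] {v, dist[v]});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Invented graph: adj[u] lists {neighbor, weight} pairs for node u.
        int[][][] adj = {
            {{1, 4}, {2, 1}},   // node 0
            {{3, 1}},           // node 1
            {{1, 2}, {3, 5}},   // node 2
            {}                  // node 3
        };
        // Prints [0, 3, 1, 4]; e.g., the best route 0 -> 2 -> 1 -> 3 has length 4.
        System.out.println(Arrays.toString(dijkstra(adj, 0)));
    }
}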

During the 1960s, Dijkstra began to explore the problem of communication and resource-sharing within computers. He developed the idea of a semaphore. Like the railroad signaling device that allows only one train at a time to pass through a single section of track, the programming semaphore provides mutual exclusion, ensuring that two processes don’t try to access the same memory or other resource at the same time.
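In modern form, Dijkstra's semaphore survives directly in standard libraries such as java.util.concurrent. In this brief sketch (the shared counter is invented for illustration), a semaphore with a single permit guarantees that only one thread at a time executes the critical section, so the two threads cannot corrupt the count:

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // One permit means mutual exclusion: only one holder at a time.
    private static final Semaphore mutex = new Semaphore(1);
    private static int sharedCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100000; i++) {
                try {
                    mutex.acquire();              // Dijkstra's P operation: wait for the permit
                    try {
                        sharedCounter++;          // critical section
                    } finally {
                        mutex.release();          // V operation: hand the permit back
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(sharedCounter);        // always 200000 with the semaphore in place
    }
}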

Another problem Dijkstra tackled involved the sequencing of several processes that are accessing the same resources. He found ways to avoid a deadlock situation where one process had part of what it needed but was stuck because the process holding the other needed resource was in turn waiting for the first process to finish. His algorithms for allowing multiple processes (or processors) to take turns gaining access to memory or other resources would become fundamental for the design of new computing architectures.

During the 1970s, Dijkstra immigrated to the United States, where he became a research fellow at Burroughs, one of the major manufacturers of mainframe computers. During this time he helped launch the “structured programming” movement. His famous letter “Go To Statement Considered Harmful” criticized the use of that unconditional “jump” instruction because it made programs hard to read and verify. The newer structured languages such as Pascal and C affirmed Dijkstra’s belief in avoiding or discouraging such haphazard program flow (see structured programming).

Dijkstra spent the last decades of his career as a professor at the University of Texas at Austin, where he held the Schlumberger Centennial Chair in Computer Sciences. Dijkstra had some unusual quirks for a computer scientist. His papers were handwritten with a fountain pen, and he did not even own a personal computer until late in life.

In 1972 Dijkstra won the Association for Computing Machinery’s Turing Award. After his death on August 6, 2002, in Nuenen, The Netherlands, the ACM renamed its award for papers in distributed computing as the Dijkstra Prize. Perhaps Dijkstra’s greatest testament, however, is found in the millions of lines of computer code that are better organized and easier to maintain because of the widespread adoption of structured programming.