
Saturday, 28 September 2013

array

An array stores a group of similar data items in consecutive order. Each item is an element of the array, and it can be retrieved using a subscript that specifies the item’s location relative to the first item. Thus in the C language, the statement

int Scores[10];

sets up an array called Scores, consisting of 10 integer values. The statement

Scores [5] = 93;

   stores the value 93 in array element number 5. One subtlety, however, is that in languages such as C, the first element of the array is [0], so [5] represents not the fifth but the sixth element in Scores. (Many versions of BASIC allow setting either 0 or 1 as the first element of arrays.)

   An array element can also be referenced through a pointer that holds its address:

int * ptr;
ptr = &Scores [0];

(See pointers and indirection.)

   Arrays are useful because they allow a program to work easily with a group of data items without having to use separately named variables. Typically, a program uses a loop to traverse an array, performing the same operation on each element in order (see loop). For example, to print the current contents of the Scores array, a C program could do the following:

int index;
for (index = 0; index < 10; index++)
    printf ("Scores [%d] = %d\n", index,
        Scores [index]);

This program might print a table like this:

Scores [0] = 22
Scores [1] = 28
Scores [2] = 36

and so on. Using a pointer, a similar loop would increment the pointer to step to each element in turn.
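For instance, a minimal sketch of such a pointer-based loop (assuming the Scores array declared earlier and the stdio.h header) might look like this:

int *ptr = &Scores [0];      /* point at the first element */
int i;
for (i = 0; i < 10; i++) {
    printf ("Scores [%d] = %d\n", i, *ptr);
    ptr++;                   /* step to the next element */
}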

  An array with a single subscript is said to have one dimension. Such arrays are often used for simple data lists, strings of characters, or vectors. Most languages also support multidimensional arrays. For example, a two-dimensional array can represent X and Y coordinates, as on a screen display. Thus the number 16 stored at Colors[10][40] might represent the color of the point at X=10, Y=40 on a 640 by 480 display. A matrix is also a two-dimensional array, and languages such as APL provide built-in support for mathematical operations on such arrays. A two-dimensional array could likewise hold, say, four test scores for each of a group of people.
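As a minimal C sketch of such a two-dimensional array (the Colors array here is purely illustrative, not tied to any particular graphics system):

int Colors [640][480];    /* one color value for each (X, Y) point on the screen */

Colors [10][40] = 16;     /* the color of the point at X=10, Y=40 */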

   Some languages such as FORTRAN 90 allow for defining "slices" of an array. For example, in a 3 × 3 matrix, the expression MAT(2:3, 1:3) references two 1 × 3 "slices" of the matrix array. Pascal allows defining a subrange, or portion of the subscripts of an array.

Associative Arrays
It can be useful to explicitly associate pairs of data items within an array. In an associative array each data element has an associated element called a key. Rather than using subscripts, data elements are retrieved by passing the key to a hashing routine (see hashing). In the Perl language, for example, an array of student names and scores might be set up like this:

%Scores = ("Henderson" => 86, "Johnson" => 87, "Jackson" => 92);

The score for Johnson could later be retrieved using the reference:

$Scores{"Johnson"}

Associative arrays are handy in that they facilitate look-up tables or can serve as small databases. However, expanding the array beyond its initial allocation requires rehashing all the existing elements.

Programming Issues
To avoid error, any reference to an array must be within its declared bounds. For example, in the Scores array declared earlier, Scores[9] is the last element, and a reference to Scores[10] would be out of bounds. Attempting to reference an out-of-bounds value gives an error message in some languages such as Pascal, but in others such as standard C and C++, it simply retrieves whatever happens to be in that location in memory.
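One common defensive sketch in C (NUM_SCORES and index are illustrative names, not part of the earlier example) is to test a subscript before using it:

#define NUM_SCORES 10

if (index >= 0 && index < NUM_SCORES)
    printf ("Scores [%d] = %d\n", index, Scores [index]);
else
    printf ("Subscript %d is out of bounds\n", index);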

   Another issue involves the allocation of memory for the array. In a static array, such as that used in FORTRAN 77, the necessary storage is allocated before the program runs, and the amount of memory cannot be changed. Static arrays use memory efficiently and reduce overhead, but are inflexible, since the programmer has to declare an array based on the largest number of data items the program might be called upon to handle. A dynamic array, however, can use a flexible structure to allocate memory (see heap). The program can change the size of the array at any time while it is running. C and C++ programs can create dynamic arrays and allocate memory using special functions (malloc and free in C) or operators (new and delete in C++).
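As a rough sketch (the function and variable names here are illustrative), a C program might allocate and release such a dynamic array like this:

#include <stdlib.h>

/* Allocate, use, and free a dynamically sized array of scores. */
void demo (int num_scores)
{
    int *scores = malloc (num_scores * sizeof (int));
    if (scores == NULL)
        return;              /* allocation failed */
    scores [0] = 93;         /* used just like an ordinary array */
    free (scores);           /* return the memory to the heap */
}

A C++ equivalent would use new int[num_scores] and delete[] instead.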

   In the early days of microcomputer programming, arrays tended to be used as an all-purpose data structure for storing information read from files. Today, since there are more structured and flexible ways to store and retrieve such data, arrays are now mainly used for small sets of data (such as look-up tables).

arithmetic logic unit  (ALU)

The arithmetic logic unit is the part of a computer system that actually performs calculations and logical comparisons on data. It is part of the central processing unit (CPU), and in practice there may be separate and multiple arithmetic and logic units (see CPU).

   The ALU works by first retrieving a code that represents the operation to be performed (such as ADD). The code also specifies the location from which the data is to be retrieved and to which the results of the operation are to be stored. (For example, addition of the data from memory to a number already stored in a special accumulator register within the CPU, with the result to be stored back into the accumulator.) The operation code can also include a specification of the format of the data to be used (such as fixed or floating-point numbers)—the operation and format are often combined into the same code.

   In addition to arithmetic operations, the ALU can also carry out logical comparisons, such as bitwise operations that compare corresponding bits in two data words, corresponding to Boolean operators such as AND, OR, and XOR (see bitwise operations and Boolean operators).
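For example, in C the bitwise operators & (AND), | (OR), and ^ (XOR) map directly onto these ALU operations; a small sketch with made-up values:

unsigned char a = 0x5C;              /* 0101 1100 */
unsigned char b = 0x2F;              /* 0010 1111 */

unsigned char and_result = a & b;    /* 0000 1100 = 0x0C */
unsigned char or_result  = a | b;    /* 0111 1111 = 0x7F */
unsigned char xor_result = a ^ b;    /* 0111 0011 = 0x73 */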

   The data or operand specified in the operation code is retrieved as words of memory that represent numeric data, or indirectly, character data (see memory, numeric data, and characters and strings). Once the operation is performed, the result is stored (typically in a register in the CPU). Special codes are also stored in registers to indicate characteristics of the result (such as whether it is positive, negative, or zero). Other special conditions called exceptions indicate a problem with the processing. Common exceptions include overflow, where the result fills more bits than are available in the register, loss of precision (because there isn't room to store the necessary number of decimal places), or an attempt to divide by zero. Exceptions are typically indicated by setting a flag in the machine status register (see flag).

The Big Picture
Detailed knowledge of the structure and operation of the ALU is not needed by most programmers. Programmers who need to directly control the manipulation of data in the ALU and CPU write programs in assembly language (see assembler) that specify the sequence of operations to be performed. Generally only the lowest-level operations involving the physical interface to hardware devices require this level of detail (see device driver). Modern compilers can produce optimized machine code that is almost as efficient as directly coded assembler. However, understanding the architecture of the ALU and CPU for a particular chip can help predict its advantages or disadvantages for various kinds of operations.

application suite

An application suite is a set of programs designed to be used together and marketed as a single package. For example, a typical office suite might include word processing, spreadsheet, database, personal information manager, and e-mail programs.

   While an operating system such as Microsoft Windows provides basic capabilities to move text and graphics from one application to another (such as by cutting and pasting), an application suite such as Microsoft Office makes it easier to, for example, launch a Web browser from a link within a word processing document or embed a spreadsheet in the document. In addition to this "interoperability," an application suite generally offers a consistent set of commands and features across the different applications, speeding up the learning process. The use of the applications in one package from one vendor simplifies technical support and upgrading. (The development of comparable applications suites for Linux is likely to increase that operating system's acceptance on the desktop.)

   Applications suites have some potential disadvantages as compared to buying a separate program for each application. The user is not necessarily getting the best program in each application area, and he or she is also forced to pay for functionality that may not be needed or desired. Due to their size and complexity, software suites may not run well on older computers. Despite these problems, software suites sell very well and are ubiquitous in today’s office.

   (For a growing challenge to the traditional standalone software suite, see application service provider.)

application software

Application software consists of programs that enable computers to perform useful tasks, as opposed to programs that are concerned with the operation of the computer itself (see operating system and systems programming). To most users, applications programs are the computer: They determine how the user will accomplish tasks.


Developing and Distributing Applications
Applications can be divided into three categories based on how they are developed and distributed. Commercial applications such as word processors, spreadsheets, and general-purpose Database Management Systems (DBMS) are developed by companies specializing in such software and distributed to a variety of businesses and individual users (see word processing, spreadsheet, and database management system). Niche or specialized applications (such as hospital billing systems) are designed for and marketed to a particular industry (see medical applications of computers). These programs tend to be much more expensive and usually include extensive technical support. Finally, in-house applications are developed by programmers within a business or other institution for their own use. Examples might include employee training aids or a Web-based product catalog (although such applications could also be developed using commercial software such as multimedia or database development tools).

   While each application area has its own needs and priorities, the discipline of software development (see software engineering and programming environment) is generally applicable to all major products. Software developers try to improve speed of development as well as program reliability by using software development tools that simplify the writing and testing of computer code, as well as the manipulation of graphics, sound, and other resources used by the program. An applications developer must also have a good understanding of the features and limitations of the relevant operating system. The developer of commercial software must work closely with the marketing department to work out issues of feature selection, timing of releases, and anticipation of trends in software use (see marketing of software).

application service provider  (ASP)

Traditionally, software applications such as office suites are sold as packages that are installed and reside on the user’s computer. Starting in the mid-1990s, however, the idea of offering users access to software from a central repository attracted considerable interest. An application service provider (ASP) essentially rents access to software.

   Renting software rather than purchasing it outright has several advantages. Since the software resides on the provider’s server, there is no need to update numerous desktop installations every time a new version of the software (or a “patch” to fix some problem) is released. The need to ship physical CDs or DVDs is also eliminated, as is the risk of software piracy (unauthorized copying). Users may be able to more efficiently budget their software expenses, since they will not have to come up with large periodic expenses for upgrades. The software provider, in turn, also receives a steady income stream rather than “surges” around the time of each new software release.

   For traditional software manufacturers, the main concern is determining whether the revenue obtained by providing its software as a service (directly or through a third party) is greater than what would have been obtained by selling the software to the same market. (It is also possible to take a hybrid approach, where software is still sold, but users are offered additional features online. Microsoft has experimented with this approach with its Microsoft Office Live and other products.)

   Renting software also has potential disadvantages. The user is dependent on the reliability of the provider’s servers and networking facilities. If the provider’s service is down, then the user’s work flow and even access to critical data may be interrupted. Further, sensitive data that resides on a provider’s system may be at risk from hackers or industrial spies. Finally, the user may not have as much control over the deployment and integration of software as would be provided by outright purchase.

   The ASP market was a hot topic in the late 1990s, and some pundits predicted that the ASP model would eventually supplant the traditional retail channel for mainstream software. This did not happen, and more than a thousand ASPs were among the casualties of the "dot-com crash" of the early 2000s. However, ASP activity has been steadier if less spectacular in niche markets, where it offers more economical access to expensive specialized software for applications such as customer relationship management, supply chain management, and e-commerce related services (for example, Salesforce.com). The growing importance of such "software as a service" business models can be seen in recent offerings from traditional software companies such as SAS. By 2004, worldwide spending for "on demand" software had exceeded $4 billion, and Gartner Research has predicted that in the second half of the decade about a third of all software will be obtained as a service rather than purchased.

Web-Based Applications and Free Software
By that time a new type of application service provider had become increasingly important. Rather than seeking to gain revenue by selling online access to software, this new kind of ASP provides the software for free. A striking example is Google Pack, a free software suite offered by the search giant (see Google). Google Pack includes a variety of applications, including a photo organizer and search and mapping tools developed by Google, as well as third-party programs such as the Mozilla Firefox Web browser, RealPlayer media player, the Skype Internet phone service (see VoIP), and antivirus and antispyware programs. The software is integrated into the user's Windows desktop, providing fast index and retrieval of files from the hard drive. (Critics have raised concerns about the potential violation of privacy or misuse of data, especially with regard to a "share across computers" feature that stores data about user files on Google's servers.) America Online has also begun to provide free access to software that was formerly available only to paid subscribers.

   This use of free software as a way to attract users to advertising-based sites and services could pose a major threat to companies such as Microsoft that rely on software as their main source of revenue. In 2006 Google unveiled Google Docs & Spreadsheets, a program that allows users to create and share word-processing documents and spreadsheets over the Web. Such offerings, together with free open-source software such as OpenOffice.org, may force traditional software companies to find a new model for their own offerings.

   Microsoft in turn has launched Office Live, a service designed to provide small offices with a Web presence and productivity tools. The free "basic" level of the service is advertising supported, and expanded versions are available for a modest monthly fee. The program also has features that are integrated with Office 2007, thus suggesting an attempt to use free or low-cost online services to add value to the existing stand-alone product line.

   By 2008 the term cloud computing had become a popular way to describe software provided from a central Internet site that could be accessed by the user through any form of computer and connection. An advantage touted for this approach is that the user need not be concerned with where data is stored or the need to make backups, which are handled seamlessly.

application program interface  (API)

In order for an application program to function, it must interact with the computer system in a variety of ways, such as reading information from disk files, sending data to the printer, and displaying text and graphics on the monitor screen (see user interface). The program may need to find out whether a device is available or whether it can have access to an additional portion of memory. In order to provide these and many other services, an operating system such as Microsoft Windows includes an extensive application program interface (API). The API basically consists of a variety of functions or procedures that an application program can call upon, as well as data structures, constants, and various definitions needed to describe system resources.

   Applications programs use the API by including calls to routines in a program library (see library, program and procedures and functions). In Windows, "dynamic link libraries" (DLLs) are used. For example, this simple function call puts a message box on the screen:

MessageBox (0, "Program Initialization Failed!", "Error!", MB_ICONEXCLAMATION | MB_OK | MB_SYSTEMMODAL);

   In practice, the API for a major operating system such as Windows contains hundreds of functions, data structures, and definitions. In order to simplify learning to access the necessary functions and to promote the writing of readable code, compiler developers such as Microsoft and Borland have devised frameworks of C++ classes that package related functions together. For example, in the Microsoft Foundation Classes (MFC), a program generally begins by deriving a class representing the application's basic characteristics from the MFC class CWinApp. When the program wants to display a window, it derives it from the CWnd class, which has the functions common to all windows, dialog boxes, and controls. From CWnd is derived the specialized class for each type of window: for example, CFrameWnd implements a typical main application window, while CDialog would be used for a dialog box. Thus in a framework such as MFC or Borland's OWL, the object-oriented concept of encapsulation is used to bundle together objects and their functions, while the concept of inheritance is used to relate the generic object (such as a window) to specialized versions that have added functionality (see object-oriented programming, encapsulation, and inheritance).

   In recent years Microsoft has greatly extended the reach of its Windows API by providing many higher level functions (including user interface items, network communications, and data access) previously requiring separate software components or program libraries (see Microsoft .NET).

   Programmers using languages such as Visual Basic can take advantage of a further level of abstraction. Here the various kinds of windows, dialogs, and other controls are provided as building blocks that the developer can insert into a form designed on the screen, and then settings can be made and code written as appropriate to control the behavior of the objects when the program runs. While the programmer will not have as much direct control or flexibility, avoiding the need to master the API means that useful programs can be written more quickly.

applet

An applet is a small program that uses the resources of a larger program and usually provides customization or additional features. The term first appeared in the early 1990s in connection with Apple's AppleScript scripting language for the Macintosh operating system. Today Java applets represent the most widespread use of this idea in Web development (see Java).

   Java applets are compiled to an intermediate representation called bytecode, and generally are run in a Web browser (see Web browser). Applets thus represent one of several alternatives for interacting with users of Web pages beyond what can be accomplished using simple text markup (see HTML; for other approaches see JavaScript, PHP, scripting languages, and Ajax).

   An applet can be invoked by inserting a reference to its program code in the text of the Web page, using the HTML applet element or the now-preferred object element. Although the distinction between applets and scripting code (such as in PHP) is somewhat vague, applets usually run in their own window or otherwise provide their own interface, while scripting code is generally used to tailor the behavior of separately created objects. Applets are also rather like plug-ins, but the latter are generally used to provide a particular capability (such as the ability to read or play a particular kind of media file), and have a standardized facility for their installation and management (see plug-in).

   Some common uses for applets include animations of scientific or programming concepts for Web pages supporting class curricula and for games designed to be played using Web browsers. Animation tools such as Flash and Shockwave are often used for creating graphic applets.

   To prevent badly or maliciously written applets from affecting user files, applets such as Java applets are generally run within a restricted or “sandbox” environment where, for example, they are not allowed to write or change files on disk.

Apple Corporation

Since the beginning of personal computing, Apple has had an impact out of proportion to its relatively modest market share. In a world generally dominated by IBM PC-compatible machines and the Microsoft DOS and Windows operating systems, Apple's distinctive Macintosh computers and more recent media products have carved out their own market spaces.

   Headquartered in Cupertino, California, Apple was cofounded in 1976 by Steve Jobs, Steve Wozniak, and Ronald Wayne (the latter sold his interest shortly after incorporation). (See Jobs, Steve, and Wozniak, Steven.) Their first product, the Apple I computer, was demonstrated to fellow microcomputer enthusiasts at the Homebrew Computer Club. Although it aroused considerable interest, the hand-built Apple I was sold without a power supply, keyboard, case, or display. (Today it is an increasingly valuable "antique.")

   Apple’s true entry into the personal computing market came in 1977 with the Apple II. Although it was more expensive than its main rivals from Radio Shack and Commodore, the Apple II was sleek, well constructed, and featured built-in color graphics. The motherboard included several slots into which add-on boards (such as for printer interfaces) could be inserted. Besides being attractive to hobbyists, however, the Apple II began to be taken seriously as a business machine when the first popular spreadsheet program, VisiCalc, was written for it.

   By 1981 more than 2 million Apple IIs (in several variations) had been sold, but IBM then came out with the IBM PC. The IBM machine had more memory and a somewhat more powerful processor, but its real advantage was the access IBM had to the purchasing managers of corporate America. The IBM PC and "clone" machines from other companies such as Compaq quickly displaced Apple as market leader.

The Macintosh
By the early 1980s Steve Jobs had turned his attention to designing a radically new personal computer. Using technology that Jobs had observed at the Xerox Palo Alto Research Center (PARC), the new machine would have a fully graphical interface with icons and menus and the ability to select items with a mouse. The first such machine, the Apple Lisa, came out in 1983. The machine cost almost $10,000, however, and proved a commercial failure.

   In 1984, however, Apple launched a much less expensive version (see Macintosh). Viewers of the 1984 Super Bowl saw a remarkable Apple commercial in which a female figure runs through a group of corporate drones (representing IBM) and smashes a screen. The "Mac" sold reasonably well, particularly as it was given more processing power and memory and was accompanied by new software that could take advantage of its capabilities. In particular, the Mac came to dominate the desktop publishing market, thanks to Adobe's PageMaker program.

   In the 1990s Apple diversified the Macintosh line with a portable version (the PowerBook) that largely set the standard for the modern laptop computer. By then Apple had acquired a reputation for stylish design and superior ease of use. However, the development of the rather similar Windows operating system by Microsoft (see Microsoft Windows) as well as constantly dropping prices for IBM-compatible hardware put increasing pressure on Apple and kept its market share limited. (Apple's legal challenge to Microsoft alleging misappropriation of intellectual property proved to be a protracted and costly failure.)

   Apple's many Macintosh variants of the later 1990s proved confusing to consumers, and sales appeared to bog down. The company was accused of trying to rely on an increasingly nonexistent advantage, keeping prices high, and failing to innovate.

   However, in 1997 Steve Jobs, who had been forced out of the company in an earlier dispute, returned to the company and brought with him some new ideas. In hardware there was the iMac, a sleek all-in-one system with an unmistakable appearance that restored Apple to profitability in 1998. On the software side, Apple introduced new video-editing software for home users and a thoroughly redesigned UNIX-based operating system (see OS X). In general, the new incarnation of the Macintosh was promoted as the ideal companion for a media-hungry generation.

Consumer Electronics
Apple's biggest splash in the new century, however, came not in personal computing, but in the consumer electronics sector. Introduced in 2001, the Apple iPod has been phenomenally successful, with 100 million units sold by 2006. The portable music player can hold thousands of songs and easily fit into a pocket (see also music and video players, digital). Further, it was accompanied by an easy-to-use interface and an online music store (iTunes). (By early 2006, more than a billion songs had been purchased and downloaded from the service.) Although other types of portable MP3 players exist, it is the iPod that defined the genre (see also podcasting). Later versions of the iPod include the ability to play videos.

   In 2005 Apple announced news that startled and perhaps dismayed many long-time users. The company announced that future Macintoshes would use the same Intel chips employed by Windows-based ("Wintel") machines like the IBM PC and its descendants. The more powerful machines would use dual processors (Intel Core Duo). Further, in 2006 Apple released Boot Camp, a software package that allows Intel-based Macs to run Windows XP. Jobs's new strategy seems to be to combine what he believed to be a superior operating system and industrial design with industry-standard processors, offering the best user experience and a very competitive cost. Apple's earnings continued strong into the second half of 2006.

   In early 2007 Jobs electrified the crowd at the Macworld Expo by announcing that Apple was going to "reinvent the phone." The product, called iPhone, is essentially a combination of a video iPod and a full-featured Internet-enabled cell phone (see smartphone). Marketed by Apple and AT&T (with the latter providing the phone service), the iPhone costs about twice as much as an iPod but includes a higher-resolution 3.5-in. (diagonal) screen and a 2 megapixel digital camera. The phone can connect to other devices (see Bluetooth) and access Internet services such as Google Maps. The user controls the device with a new interface called Multi-touch.

   Apple also introduced another new media product, the Apple TV (formerly the iTV), allowing music, photos, and video to be streamed wirelessly from a computer to an existing TV set. Apple reaffirmed its media-centered plans by announcing that the company’s name would be changed from Apple Computer Corporation to simply Apple Corporation.

   In the last quarter of 2006 Apple earned a record-breaking $1 billion in profit, bolstered mainly by very strong sales of iPods and continuing good sales of Macintosh computers.

   Apple had strong Macintosh sales performance in the latter part of 2007. The company has suggested that its popular iPods and iPhones may be leading consumers to consider buying a Mac for their next personal computer.

  Meanwhile, however, Apple has had to deal with questions about its backdating of stock options, a practice by which about 200 companies have, in effect, enabled executives to purchase their stock at an artificially low price. An internal investigation cleared Jobs of culpability, and in April 2007 the Securities and Exchange Commission announced that it would not take action against the company.

Friday, 27 September 2013

APL  (a programming language)

This programming language was developed by Harvard (later IBM) researcher Kenneth E. Iverson in the early 1960s as a way to express mathematical functions clearly and consistently for computer use. The power of the language to compactly express mathematical functions attracted a growing number of users, and APL soon became a full general-purpose computing language.

   Like many versions of BASIC, APL is an interpreted language, meaning that the programmer's input is evaluated "on the fly," allowing for interactive response (see interpreter). Unlike BASIC or FORTRAN, however, APL has direct and powerful support for all the important mathematical functions involving arrays or matrices (see array).

   APL has over 100 built-in operators, called "primitives." With just one or two operators the programmer can perform complex tasks such as extracting numeric or trigonometric functions, sorting numbers, or rearranging arrays and matrices. (Indeed, APL's greatest power is in its ability to manipulate matrices directly without resorting to explicit loops or the calling of external library functions.)

   To give a very simple example, the following line of APL code:
                                    X[⍋X]
sorts the array X. In most programming languages this would have to be done by coding a sorting algorithm in a dozen or so lines of code using nested loops and temporary variables.
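For comparison, a rough C sketch of the same task (assuming X is an array of n integers, both declared elsewhere) requires explicit loops and a temporary variable; a simple insertion sort might look like this:

int i, j, temp;
for (i = 1; i < n; i++) {
    temp = X [i];                            /* value to insert */
    for (j = i - 1; j >= 0 && X [j] > temp; j--)
        X [j + 1] = X [j];                   /* shift larger values up */
    X [j + 1] = temp;                        /* drop the saved value into place */
}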

   However, APL has also been found by many programmers to have significant drawbacks. Because the language uses Greek letters to stand for many operators, it requires the use of a special type font that was generally not available on non-IBM systems. A dialect called J has been devised to use only standard ASCII characters, as well as both simplifying and expanding the language. Many programmers find mathematical expressions in APL to be cryptic, making programs hard to maintain or revise. Nevertheless, APL Special Interest Groups in the major computing societies testify to continuing interest in the language.

anonymity and the Internet

Anonymity, or the ability to communicate without disclosing a verifiable identity, is a consequence of the way most Internet-based e-mail, chat, or news services were designed (see e-mail, chat, texting and instant messaging, and netnews and newsgroups). This does not mean that messages do not have names attached. Rather, the names can be arbitrarily chosen or pseudonymous, whether reflecting development of an online persona or the desire to avoid having to take responsibility for unwanted communications (see spam).

Advantages
If a person uses a fixed Internet address (see TCP/IP), it may be possible to eventually discover the person's location and even identity. However, messages can be sent through anonymous remailing services where the originating address is removed. Web browsing can also be done "at arm's length" through a proxy server. Such means of anonymity can arguably serve important values, such as allowing persons living under repressive governments (or who belong to minority groups) to express themselves more freely precisely because they cannot be identified. However, such techniques require some sophistication on the part of the user. With ordinary users using their service provider accounts directly, governments (notably China) have simply demanded that the user's identity be turned over when a crime is alleged.

   Pseudonymity (the ability to choose names separate from one's primary identity) in such venues as chat rooms or online games can also allow people to experiment with different identities or roles, perhaps getting a taste of how members of a different gender or ethnic group are perceived (see identity in the online world).

   Anonymity can also help protect privacy, especially in commercial transactions. For example, purchasing something with cash normally requires no disclosure of the purchaser's identity, address, or other personal information. Various systems can use secure encryption to create a cash equivalent in the online world that assures the merchant of valid payment without disclosing unnecessary information about the purchaser (see digital cash). There are also facilities that allow for essentially anonymous Web browsing, preventing the aggregation or tracking of information (see cookies).

Problems
The principal problem with anonymity is that it can allow the user to engage in socially undesirable or even criminal activity with less fear of being held accountable. The combination of anonymity (or the use of a pseudonym) and the lack of physical presence seems to embolden some people to engage in insult or “flaming,” where they might be inhibited in an ordinary social setting. A few services (notably The WELL) insist that the real identity of all participants be available even if postings use a pseudonym.

   Spam or deceptive e-mail (see phishing and spoofing) takes advantage both of anonymity (making it hard for authorities to trace) and pseudonymity (the ability to disguise the site by mimicking a legitimate business). Anonymity makes downloading or sharing files easier (see file sharing and P2P networks), but also makes it harder for owners of videos, music, or other content to pursue copyright violations. Because of the prevalence of fraud and other criminal activity on the Internet, there have been calls to restrict the ability of online users to remain anonymous, and some nations such as South Korea have enacted legislation to that effect. However, civil libertarians and privacy advocates believe that the impact on freedom and privacy outweighs any benefits for security and law enforcement.

   The database of Web-site registrants (called Whois) provides contact information intended to ensure that someone will be responsible for a given site and be willing to cooperate to fix technical or administrative problems. At present, Whois information is publicly available. However, the Internet Corporation for Assigned Names and Numbers (ICANN) is considering making the contact information available only to persons who can show a legitimate need.

animation, computer

Ever since the first hand-drawn cartoon features entertained moviegoers in the 1930s, animation has been an important part of the popular culture. Traditional animation uses a series of hand-drawn frames that, when shown in rapid succession, create the illusion of lifelike movement.

Computer Animation Techniques
The simplest form of computer animation (illustrated in games such as Pong) involves drawing an object, then erasing it and redrawing it in a different location. A somewhat more sophisticated approach can create motion in a scene by displaying a series of pre-drawn images called sprites—for example, there could be a series of sprites showing a sword-wielding troll in different positions.

   Since there are only a few intermediate images, the use of sprites doesn't convey truly lifelike motion. Modern computer animation instead uses a computerized version of the traditional drawn animation technique. The drawings are "keyframes" that capture significant movements by the characters. The keyframes are later filled in with transitional frames in a process called tweening. Since it is possible to create algorithms that describe the optimal in-between frames, the advent of sufficiently powerful computers has made computer animation both possible and desirable. Today computer animation is used not only for cartoons but also for video games and movies. The most striking use of this technique is morphing, where the creation of plausible intermediate images between two strikingly different faces creates the illusion of one face being transformed into the other.
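As a greatly simplified sketch of tweening, the transitional value of a single coordinate between two keyframes can be computed by linear interpolation (production systems typically use easing curves or splines rather than a straight line):

/* Compute an in-between value for one coordinate.
   t runs from 0.0 (at the first keyframe) to 1.0 (at the second). */
float tween (float key_start, float key_end, float t)
{
    return key_start + (key_end - key_start) * t;
}

Calling tween (100.0, 180.0, 0.25) would place the object a quarter of the way through the transition, at 120.0.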

   Algorithms that can realistically animate people, animals, and other complex objects require the ability to create a model that includes the parts of the object that can move separately (such as a person’s arms and legs). Because the movement of one part of the model often affects the positions of other parts, a treelike structure is often used to describe these relationships. (For example, an elbow moves an arm, the arm in turn moves the hand, which in turn moves the fingers). Alternatively, live actors performing a repertoire of actions or poses can be digitized using wearable sensors and then combined to portray situations, such as in a video game.

   Less complex objects (such as clouds or rainfall) can be treated in a simpler way, as a collection of “particles” that move together following basic laws of motion and gravity. Of course when different models come into contact (for example, a person walking in the rain), the interaction between the two must also be taken into consideration.
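A hedged sketch in C of one such particle update (the structure and parameter names are illustrative only):

struct particle {
    float x, y;      /* position */
    float vx, vy;    /* velocity */
};

/* Advance one particle by one time step dt under gravity g. */
void update_particle (struct particle *p, float dt, float g)
{
    p->vy += g * dt;         /* gravity changes the vertical velocity */
    p->x  += p->vx * dt;     /* the velocity then moves the particle */
    p->y  += p->vy * dt;
}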

   While realism is always desirable, there is inevitably a tradeoff between realism and the computing resources available. Computationally intensive physics models might portray a very realistic spray of water using a high-end graphics workstation, but simplified models have to be used for a program that runs on a game console or desktop PC. The key variables are the frame rate (higher is smoother) and the display resolution. The amount of available video memory is also a consideration: many desktop PCs sold today have 256 MB or more of video memory.

Applications
Computer animation is used extensively in many feature films, such as for creating realistic dinosaurs (Jurassic Park) or buglike aliens (Starship Troopers). Computer games combine animation techniques with other techniques (see computer graphics) to provide smooth action within a vivid 3D landscape. Simpler forms of animation are now a staple of Web site design, often written in Java or with the aid of animation scripting programs such as Adobe Flash.

   The intensive effort that goes into contemporary computer animation suggests that the ability to fascinate the human eye that allowed Walt Disney to build an empire is just as compelling today.

Andreessen, Marc

(1971– )
American
Entrepreneur, Programmer

Marc Andreessen brought the World Wide Web and its wealth of information, graphics, and services to the desktop, setting the stage for the first "e-commerce" revolution of the later 1990s. As cofounder of Netscape, Andreessen also created the first big "dot-com," or company doing business on the Internet.

   Born on July 9, 1971, in New Lisbon, Wisconsin, Andreessen grew up as part of a generation that would become familiar with personal computers, computer games, and graphics. By seventh grade Andreessen had his own PC and was programming furiously. He then studied computer science at the University of Illinois at Urbana-Champaign, where his focus on computing was complemented by a wide-ranging interest in music, history, literature, and business.

   By the early 1990s the World Wide Web (see World Wide Web and Berners-Lee, Tim) was poised to change the way information and services were delivered to users. However, early Web pages generally consisted only of linked pages of text, without point-and-click navigation or the graphics and interactive features that adorn Web pages today.

   Andreessen learned about the World Wide Web shortly after Berners-Lee introduced it in 1991. Andreessen thought it had great potential, but also believed that there needed to be better ways for ordinary people to access the new medium. In 1993, Andreessen, together with colleague Eric Bina and other helpers at the National Center for Supercomputing Applications (NCSA), set to work on what became known as the Mosaic Web browser. Since their work was paid for by the government, Mosaic was offered free to users over the Internet. Mosaic could show pictures as well as text, and users could follow Web links simply by clicking on them with the mouse. The user-friendly program became immensely popular, with more than 10 million users by 1995.

   After earning a B.S. in computer science, Andreessen left the Mosaic project, having battled with its managers over the future of Web-browsing software. He then met Jim Clark, an older entrepreneur who had been CEO of Silicon Graphics. They founded Netscape Corporation in 1994, using $4 million seed capital provided by Clark.

   Andreessen recruited many of his former colleagues at NCSA to help him write a new Web browser, which became known as Netscape Navigator. Navigator was faster and more graphically attractive than Mosaic. Most important, Netscape added a secure encrypted facility that people could use to send their credit card numbers to online merchants. This was part of a two-pronged strategy: First, attract the lion's share of Web users to the new browser, and then sell businesses the software they would need to create effective Web pages for selling products and services to users.

   By the end of 1994 Navigator had gained 70 percent of the Web browser market. Time magazine named the browser one of the 10 best products of the year, and Netscape was soon selling custom software to companies that wanted a presence on the Web. The e-commerce boom of the later 1990s had begun, and Marc Andreessen was one of its brightest stars. When Netscape offered its stock to the public in summer 1995, the company gained a total worth of $2.3 billion, more than that of many traditional blue-chip industrial companies. Andreessen's own shares were worth $55 million.

Battle with Microsoft
Microsoft (see Microsoft and Gates, Bill) had been slow to recognize the growing importance of the Web, but by the mid-1990s Gates had decided that the software giant had to have a comprehensive "Internet strategy." In particular, the company had to win control of the browser market so users would not turn to "platform independent" software that could deliver not only information but applications, without requiring the use of Windows at all.

   Microsoft responded by creating its own Web browser, called Internet Explorer. Although technical reviewers generally considered the Microsoft product to be inferior to Netscape, it gradually improved. Most significantly, Microsoft included Explorer with its new Windows 95 operating system. This "bundling" meant that PC makers and consumers had little interest in paying for Navigator when they already had a "free" browser from Microsoft. In response to this move, Netscape and other Microsoft competitors helped promote the antitrust case against Microsoft that would result in 2001 in some of the company's practices being declared an unlawful use of monopoly power.

   Andreessen tried to respond to Microsoft by focusing on the added value of his software for Web servers while making Navigator "open source," meaning that anyone was allowed to access and modify the program's code (see open source). He hoped that a vigorous community of programmers might help keep Navigator technically superior to Internet Explorer. However, Netscape's revenues began to decline steadily. In 1999 America Online (AOL) bought the company, seeking to add its technical assets and Netcenter online portal to its own offerings (see America Online).

   After a brief stint with AOL as its "principal technical visionary," Andreessen decided to start his own company, called LoudCloud. The company provided Web-site development, management, and custom software (including e-commerce "shopping basket" systems) for corporations that had large, complex Web sites. However, the company was not successful; Andreessen sold its Web-site-management component to Texas-based Electronic Data Systems (EDS) while retaining its software division under the new name Opsware. In 2007 Andreessen scored another coup, selling Opsware to Hewlett-Packard (HP) for $1.6 billion.

   In 2007 Andreessen launched Ning, a company that offers users the ability to add blogs, discussion forums, and other features to their Web sites, though it faces established competitors such as MySpace (see also social networking). In July 2008 Andreessen joined the board of Facebook.

   While the future of his recent ventures remains uncertain, Marc Andreessen's place as one of the key pioneers of the Web and e-commerce revolution is assured. His inventiveness, technical insight, and business acumen made him a model for a new generation of Internet entrepreneurs. Andreessen was named one of the Top 50 People under the Age of 40 by Time magazine (1994) and has received the Computerworld/Smithsonian Award for Leadership (1995) and the W. Wallace McDowell Award of the IEEE Computer Society (1997).

analog computer

Most natural phenomena are analog rather than digital in nature (see analog and digital). But just as mathematical laws can describe relationships in nature, these relationships in turn can be used to construct a model in which natural forces generate mathematical solutions. This is the key insight that leads to the analog computer.

   The simplest analog computers use physical components that model geometric ratios. The earliest known analog computing device is the Antikythera mechanism. Constructed by an unknown scientist on the island of Rhodes around 87 B.C., this device used a precisely crafted differential gear mechanism to mechanically calculate the interval between new moons (the synodic month). (Interestingly, the differential gear would not be rediscovered until 1877.)

   Another analog computer, the slide rule, became the constant companion of scientists, engineers, and students until it was replaced by electronic calculators in the 1970s. Invented in simple form in the 17th century, the slide rule’s movable parts are marked in logarithmic proportions, allowing for quick multiplication, division, the extraction of square roots, and sometimes the calculation of trigonometric functions.

   The next insight involved building analog devices that set up dynamic relationships between mechanical movements. In the late 19th century two British scientists, James Thomson and his brother Sir William Thomson (later Lord Kelvin), developed the mechanical integrator, a device that could solve differential equations. An important new principle used in this device is the closed feedback loop, where the output of the integrator is fed back as a new set of inputs. This allowed for the gradual summation or integration of an equation's variables. In 1931, Vannevar Bush completed a more complex machine that he called a "differential analyzer." Consisting of six mechanical integrators using specially shaped wheels, disks, and servomechanisms, the differential analyzer could solve equations in up to six independent variables. As the usefulness and applicability of the device became known, it was quickly replicated in various forms in scientific, engineering, and military institutions.

   These early forms of analog computer are based on fixed geometrical ratios. However, most phenomena that scientists and engineers are concerned with, such as aerodynamics, fluid dynamics, or the flow of electrons in a circuit, involve a mathematical relationship between forces where the output changes smoothly as the inputs are changed. The “dynamic” analog computer of the mid-20th century took advantage of such force relationships to construct devices where input forces represent variables in the equation, and nature itself “solves” the equation by producing a resulting output force.

   In the 1930s, the growing use of electronic circuits encouraged the use of the flow of electrons rather than mechanical force as a source for analog computation. The key circuit is called an operational amplifier. It generates a highly amplified output signal of opposite polarity to the input, over a wide range of frequencies. By using components such as potentiometers and feedback capacitors, an analog computer can be programmed to set up a circuit in which the laws of electronics manipulate the input voltages in the same way the equation to be solved manipulates its variables. The results of the calculation are then read as a series of voltage values in the final output.

   Starting in the 1950s, a number of companies marketed large electronic analog computers that contained many separate computing units that could be harnessed together to provide "real time" calculations in which the results could be generated at the same rate as the actual phenomena being simulated. In the early 1960s, NASA set up training simulations for astronauts using analog real-time simulations that were still beyond the capability of digital computers.

   Gradually, however, the use of faster processors and larger amounts of memory enabled the digital computer to surpass its analog counterpart even in the scientific programming and simulations arena. In the 1970s, some hybrid machines combined the easy programmability of a digital "front end" with analog computation, but by the end of that decade the digital computer had rendered analog computers obsolete.

analog and digital

The word analog (derived from Greek words meaning "by ratio") denotes a phenomenon that is continuously variable, such as a sound wave. The word digital, on the other hand, implies a discrete, exactly countable value that can be represented as a series of digits (numbers). Sound recording provides familiar examples of both approaches. Recording a phonograph record involves electromechanically transferring a physical signal (the sound wave) into an "analogous" physical representation (the continuously varying peaks and dips in the record's surface). Recording a CD, on the other hand, involves sampling (measuring) the sound level at thousands of discrete instances and storing the results in a physical representation of a numeric format that can in turn be used to drive the playback device.

   Virtually all modern computers depend on the manipulation of discrete signals in one of two states denoted by the numbers 1 and 0. Whether the 1 indicates the presence of an electrical charge, a voltage level, a magnetic state, a pulse of light, or some other phenomenon, at a given point there is either “something” (1) or “nothing” (0). This is the most natural way to represent a series of such states.

   Digital representation has several advantages over analog. Since computer circuits based on binary logic can be driven to perform calculations electronically at ever-increasing speeds, even problems where an analog computer better modeled nature can now be done more efficiently with digital machines (see analog computer). Data stored in digitized form is not subject to the gradual wear or distortion of the medium that plagues analog representations such as the phonograph record. Perhaps most important, because digital representations are at base simply numbers, an infinite variety of digital representations can be stored in files and manipulated, regardless of whether they started as pictures, music, or text (see digital convergence).

Converting between Analog and Digital Representations
Because digital devices (particularly computers) are the mechanism of choice for working with representations of text, graphics, and sound, a variety of devices are used to digitize analog inputs so the data can be stored and manipulated. Conceptually, each digitizing device can be thought of as having three parts: a component that scans the input and generates an analog signal, a circuit that converts the analog signal from the input to a digital format, and a component that stores the resulting digital data for later use. For example, in the ubiquitous flatbed scanner a moving head reads varying light levels on the paper and converts them to a varying level of current (see scanner). This analog signal is in turn converted into a digital reading by an analog-to-digital converter, which creates numeric information that represents discrete spots (pixels) representing either levels of gray or of particular colors. This information is then written to disk using the formats supported by the operating system and the software that will manipulate them.
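As a minimal sketch of the digitizing step (assuming the analog reading has already been scaled to a level between 0.0 and 1.0), the converter's job amounts to quantizing that level into one of a fixed number of steps, here 256 gray levels:

/* Convert a normalized analog level (0.0 to 1.0) into an
   8-bit sample (0 to 255), as an analog-to-digital converter does. */
unsigned char quantize (float level)
{
    int sample = (int)(level * 255.0f + 0.5f);   /* round to the nearest step */
    if (sample < 0)   sample = 0;                /* clamp out-of-range inputs */
    if (sample > 255) sample = 255;
    return (unsigned char) sample;
}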

America Online  (AOL)

For millions of PC users in the 1990s, “going online” meant connecting to America Online. However, this once dominant service provider has had difficulty adapting to the changing world of the Internet.

   By the mid-1980s a growing number of PC users were starting to go online, mainly dialing up small bulletin board services. Generally these were run by individuals from their homes, offering a forum for discussion and a way for users to upload and download games and other free software and shareware (see bulletin board systems). However, some entrepreneurs saw the possibility of creating a commercial information service that would be interesting and useful enough that users would pay a monthly subscription fee for access. Perhaps the first such enterprise to be successful was Quantum Computer Services, founded by Jim Kimsey in 1985 and soon joined by another young entrepreneur, Steve Case. Their strategy was to team up with personal computer makers such as Commodore, Apple, and IBM to provide special online services for their users.

   In 1989 Quantum Link changed its name to America Online (AOL). In 1991 Steve Case became CEO, taking over from the retiring Kimsey. Case's approach to marketing AOL was to aim the service at novice PC users who had trouble mastering arcane DOS (disk operating system) commands and interacting with text-based bulletin boards and primitive terminal programs. As an alternative, AOL provided a complete software package that managed the user's connection, presented "friendly" graphics, and offered point-and-click access to features.

   Chat rooms and discussion boards were also expanded and offered in a variety of formats for casual and more formal use. Gaming, too, was a major emphasis of the early AOL, with some of the first online multiplayer fantasy role-playing games such as a version of Dungeons and Dragons called Neverwinter Nights (see online games). A third popular application has been instant messaging (IM), including a feature that allowed users to set up "buddy lists" of their friends and keep track of when they were online (see also texting and instant messaging).

Internet Challenge
By 1996 the World Wide Web was becoming popular (see World Wide Web). Rather than signing up with a proprietary service such as AOL, users could simply get an account with a lower-cost direct-connection service (see Internet service provider) and then use a Web browser such as Netscape to access information and services. AOL was slow in adapting to the growing use of the Internet. At first, the service provided only limited access to the Web (and only through its proprietary software). Gradually, however, AOL offered a more seamless Web experience, allowing users to run their own browsers and other software together with the proprietary interface. Also, responding to competition, AOL replaced its hourly rates with a flat monthly fee ($19.95 at first).

   Overall, AOL increasingly struggled with trying to fulfill two distinct roles: Internet access provider and content provider. By the late 1990s AOL’s monthly rates were higher than those of “no frills” access providers such as NetZero. AOL tried to compensate for this by offering integration of services (such as e-mail, chat, and instant messaging) and news and other content not available on the open Internet.

   AOL also tried to shore up its user base with aggressive marketing to users who wanted to go online but were not sure how to do so. Especially during the late 1990s, AOL was able to swell its user rolls to nearly 30 million, largely by distributing millions of free CDs (such as in magazine inserts) that included a setup program and up to a month of free service. But while it was easy to get started with AOL, some users began to complain that the service would keep billing them even after they had repeatedly attempted to cancel it. Meanwhile, AOL users got little respect from the more sophisticated inhabitants of cyberspace, who often complained that the clueless “newbies” were cluttering newsgroups and chat rooms.

   In 2000 AOL and Time Warner merged. At the time, the deal was hailed as one of the greatest mergers in corporate history, bringing together one of the foremost Internet companies with one of the biggest traditional media companies. The hope was that the new $350 billion company would be able to leverage its huge subscriber base and rich media resources to dominate the online world.

From Service to Content Provider
By the 2000s, however, an increasing number of people were switching from dial-up to high-speed broadband Internet access (see broadband) rather than subscribing to services such as AOL simply to get online. This trend and the overall decline in the Internet economy early in the decade (the “dot-bust”) contributed to a record loss of $99 billion for the combined company in 2002. In a shakeup, Time Warner dropped “AOL” from its name, and Steve Case was replaced as executive chairman. The company increasingly began to shift its focus to providing content and services that would attract people who were already online, with revenue coming from advertising instead of subscriptions.

   In October 2006 the AOL division of Time Warner (which by then had dropped the full name America Online) announced that it would provide a new interface and software optimized for broadband users. AOL’s OpenRide desktop presents users with multiple windows for e-mail, instant messaging, Web browsing, and media (video and music), with other free services available as well. These offerings are designed to compete in a marketplace where the company faces stiff competition from other major Internet presences that have been using the advertising-based model for years (see Yahoo! and Google).

Amdahl, Gene Myron
(1922– )
American
Inventor, Entrepreneur

Gene Amdahl played a major role in designing and developing the mainframe computer that dominated data processing through the 1970s (see mainframe). Amdahl was born on November 16, 1922, in Flandreau, South Dakota. After having his education interrupted by World War II, Amdahl received a B.S. from South Dakota State University in 1948 and a Ph.D. in physics from the University of Wisconsin in 1952.

   As a graduate student Amdahl had realized that further progress in physics and other sciences required better, faster tools for computing. At the time there were only a few computers, and the best approach to getting access to significant computing power seemed to be to design one’s own machine. Amdahl designed a computer called the WISC (Wisconsin Integrally Synchronized Computer). This computer used a sophisticated procedure to break calculations into parts that could be carried out on separate processors, making it one of the earliest examples of the parallel computing techniques found in today’s computer architectures.

Designer for IBM
In 1952 Amdahl went to work for IBM, which had committed itself to dominating the new data processing industry. Amdahl worked with the team that eventually designed the IBM 704. The 704 improved upon the 701, the company’s first successful mainframe, by adding many new internal programming instructions, including the ability to perform floating-point calculations (involving numbers that have decimal points). The machine also included a fast, high-capacity magnetic core memory that let the machine retrieve data more quickly during calculations. In November 1953 Amdahl became the chief project engineer for the 704 and then helped design the IBM 709, which was intended especially for scientific applications.

   When IBM proposed extending the technology by building a powerful new scientific computer called STRETCH, Amdahl eagerly applied to head the new project. However, he ended up on the losing side of a corporate power struggle and did not receive the post. He left IBM at the end of 1955.

   In 1960 Amdahl rejoined IBM, where he was soon involved in several design projects. The one with the most lasting importance was the IBM System/360, which would become the most ubiquitous and successful mainframe computer of all time. In this project Amdahl further refined his ideas about making a computer’s central processing unit more efficient. He designed logic circuits that enabled the processor to analyze the instructions waiting to be executed (the “pipeline”) and determine which instructions could be executed immediately and which would have to wait for the results of other instructions. He also used a cache, or special memory area, in which the instructions that would be needed next could be stored ahead of time so they could be retrieved immediately when needed. Today’s desktop PCs use these same ideas to get the most out of their chips’ capabilities.

   Amdahl also made important contributions to the further development of parallel processing. He formulated what became known as Amdahl’s law, which says that the advantage gained from using more processors gradually declines as more processors are added, and that the overall improvement is limited by how much of the calculation can be broken down into parts that can be run in parallel. As a result, some kinds of programs can run much faster with several processors being used simultaneously, while other programs may show little improvement.
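   In its usual modern form, Amdahl’s law states that if a fraction p of a program’s work can be run in parallel and the rest must run serially, the best possible speedup on n processors is 1 / ((1 - p) + p/n). The short C sketch below simply evaluates that formula; the 90 percent figure and the processor counts are illustrative assumptions, not values from the text:

#include <stdio.h>

/* Amdahl's law: with a parallel fraction p and n processors, the maximum
   speedup is 1 / ((1 - p) + p / n).  The sample values below are
   illustrative assumptions only. */
double amdahl_speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    double p = 0.90;                  /* assume 90% of the work is parallel */
    int counts[] = { 1, 2, 4, 8, 16, 1024 };
    for (int i = 0; i < 6; i++)
        printf("%4d processors: speedup %.2f\n",
               counts[i], amdahl_speedup(p, counts[i]));
    return 0;
}

Even with 1,024 processors the speedup in this example stays below 10, because the 10 percent serial portion puts a ceiling on the gain from adding more hardware.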

   In the mid-1960s Amdahl helped establish IBM’s Advanced Computing Systems Laboratory in Menlo Park, California, which he directed. However, he became increasingly frustrated with what he saw as IBM’s too-rigid approach to designing and marketing computers. He decided to leave IBM again and, this time, challenge it in the marketplace.

Creator of “Clones”
Amdahl resolved to make computers that were more powerful than IBM’s machines, but that would be “plug compatible” with them, allowing them to use existing hardware and software. To gain an edge over the computer giant, Amdahl took advantage of early developments in integrated electronics, putting more circuits on each chip while keeping the chips large enough that the transistors were not too crowded.

   Thanks to the use of larger-scale circuit integration, Amdahl could sell machines with technology superior to that of the IBM 360 or even the new IBM 370, and at a lower price. IBM responded belatedly to the competition, making more compact and faster processors, but Amdahl met each new IBM product with a faster, cheaper alternative. However, IBM also countered with a sales technique that opponents called FUD (fear, uncertainty, and doubt): IBM salespeople promised customers that IBM would soon be coming out with much more powerful and economical alternatives to Amdahl’s machines. As a result, many would-be customers were persuaded to postpone purchasing decisions and stay with IBM. Amdahl Corporation began to falter, and Gene Amdahl gradually sold his stock and left the company in 1980.

   Amdahl then tried to repeat his success by starting a new company called Trilogy. The company promised to build much faster and cheaper computers than those offered by IBM or Amdahl Corporation. He believed he could accomplish this by using new very-large-scale integration technology in which circuits were deposited in layers on a single silicon wafer rather than being distributed among separate chips on a printed circuit board. But the problems of dealing with the electrical characteristics of such dense circuitry, as well as some design errors, crippled the new computer design. Amdahl was forced to repeatedly delay the introduction of the new machine, and Trilogy failed in the marketplace.

   Amdahl’s achievements could not be overshadowed by the failures of his later career. He has received many industry awards, including Data Processing Man of the Year from the Data Processing Management Association (1976), the Harry Goode Memorial Award from the American Federation of Information Processing Societies, and the SIGDA Pioneering Achievement Award (2007).

Amazon.com

Beginning modestly in 1995 as an online bookstore, Amazon.com became one of the first success stories of the early Internet economy (see also e-commerce).

     Named for the world’s largest river, Amazon.com was the brainchild of entrepreneur Jeffrey Bezos (see Bezos, Jeffrey P.). Like a number of other entrepreneurs of the early 1990s, Bezos had been searching for a way to market to the growing number of people who were going online. He soon decided that books were a good first product, since they were popular, nonperishable, relatively compact, and easy to ship.

     Several million books are in print at any one time, with about 275,000 titles or editions added in 2007 in the United States alone. Traditional “brick and mortar” (physical) bookstores might carry anywhere from a few thousand titles up to perhaps 200,000 for the largest chains. Bookstores in turn stock their shelves mainly through major book distributors that serve as intermediaries between publishers and the public.

     For an online bookstore such as Amazon.com, however, the number of titles that can be made available is limited only by the amount of warehouse space the store is willing to maintain, and no intermediary between publisher and bookseller is needed. From the start, Amazon.com’s business model has capitalized on this potential for variety and the ability to serve almost any niche interest. Over the years the company’s offerings have expanded beyond books to 34 different categories of merchandise, including software, music, video, electronics, apparel, home furnishings, and even nonperishable gourmet food and groceries. (Amazon.com also entered the online auction market, but remains a distant runner-up to market leader eBay.)

Expansion and Profitability
Because of its desire to build a very diverse product line, Amazon.com, unusually for a business startup, did not expect to become profitable for about five years. The growing revenues were largely poured back into expansion. In the heated atmosphere of the Internet boom of the late 1990s, many other Internet-based businesses echoed that philosophy, and many went out of business following the bursting of the so-called dot-com bubble of the early 2000s. Some analysts questioned whether even the hugely popular Amazon.com would ever be able to convert its business volume into an operating profit. However, the company achieved its first profitable year in 2003 (with a modest $35 million surplus). Since then growth has remained steady and generally impressive: In 2005, Amazon.com earned $8.49 billion in revenue with a net income of $359 million. By then the company had about 12,000 employees and had been added to the S&P 500 stock index.

     In 2006 the company maintained its strategy of investing in innovation rather than focusing on short-term profits. Its latest initiatives include selling digital versions of books (e-books) and magazine articles, new arrangements to sell video content, and even a venture into moviemaking. By year end, annual revenue had increased to $10.7 billion.

     In November 2007 Amazon announced the Kindle, a book reader (see e-books and digital libraries) with a sharp “paper-like” display. In addition to books, the Kindle can also subscribe to and download magazines, content from newspaper Web sites, and even blogs.

     As part of its expansion strategy, Amazon.com has acquired other online bookstore sites including Borders.com and Waldenbooks.com. The company has also expanded geographically with retail operations in Canada, the United Kingdom, France, Germany, Japan, and China.

     Amazon.com has kept a tight rein on its operations even while continually expanding. The company’s leading market position enables it to get favorable terms from publishers and manufacturers. A high degree of warehouse automation and an efficient procurement system keep stock moving quickly rather than taking up space on the shelves.

Information-Based Strategies
Amazon.com has skillfully taken advantage of information technology to expand its capabilities and offerings. Examples of such efforts include new search mechanisms, cultivation of customer relationships, and the development of new ways for users to sell their own goods.

     Amazon’s “Search Inside the Book” feature is a good example of leveraging search technology to take advantage of having a growing amount of text online. If the publisher of a book cooperates, its actual text is made available for online searching. (The amount of text that can be displayed is limited to prevent users from being able to read entire books for free.) Further, one can see a list of books citing (or being cited by) the current book, providing yet another way to explore connections between ideas as used by different authors. Obviously for Amazon.com, the ultimate reason for offering all these useful features is that more potential customers may be able to find and purchase books on even the most obscure topics.

     Amazon.com’s use of information about customers’ buying histories is based on the idea that the more one knows about what customers have wanted in the past, the more effectively they can be marketed to in the future by customizing their view of the site. Users receive automatically generated recommendations for books or other items based on their previous purchases (see also customer relationship management). There is even a “plog,” or customized Web log, that offers postings related to the user’s interests and allows the user to respond.

     There are other ways in which Amazon.com tries to involve users actively in the marketing process. For example, users are encouraged to review books and other products and to create lists that can be shared with other users. The inclusion of both user and professional reviews in turn makes it easier for prospective purchasers to determine whether a given book or other item is suitable. Authors are given the opportunity through “Amazon Connect” to provide additional information about their books. Finally, in late 2005 Amazon replaced an earlier “discussion board” facility with a wiki system that allows purchasers to create or edit an information page for any product (see wikis and Wikipedia).

     The company’s third major means of expansion is to help small businesses and even individual users market their own goods. Amazon Marketplace, a service launched in 2001, allows users to sell a variety of items, with no fees charged unless the item is sold. There are also many provisions for merchants to set up online “storefronts” and take advantage of online payment and other services.

     Another aspect of Amazon’s marketing is its referral network. Amazon’s “associates” are independent businesses that provide links from their own sites to products on Amazon. For example, a seller of crafts supplies might include on its site links to books on crafting on the Amazon site. In return, the referring business receives a commission from Amazon.com.

     Although often admired for its successful business plan, Amazon.com has received criticism from several quarters. Some users have found the company’s customer service (which is handled almost entirely by e-mail) to be unresponsive. Meanwhile, local and specialized bookstores, already suffering in recent years from the competition of large chains such as Borders and Barnes & Noble, have seen in Amazon.com another potent threat to the survival of their business. (The company’s size and economic power have elicited occasional comparisons with Wal-Mart.) Finally, Amazon.com has been criticized by some labor advocates for paying low wages and threatening to terminate workers who sought to unionize.

algorithm

When people think of computers, they usually think of silicon chips and circuit boards. Moving from relays to vacuum tubes to transistors to integrated circuits has vastly increased the power and speed of computers, but the essential idea behind the work computers do remains the algorithm. An algorithm is a reliable, definable procedure for solving a problem. The idea of the algorithm goes back to the beginnings of mathematics, and elementary school students are usually taught a variety of algorithms. For example, the procedure for long division, which works digit by digit through successive division, subtraction, and bringing down the next digit, is an algorithm. Since a bona fide algorithm is guaranteed to work given the specified type of data and the rote following of a series of steps, the algorithmic approach is naturally suited to mechanical computation.
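     To make the idea concrete, here is a minimal C sketch of that schoolbook long-division procedure (the function name and the sample numbers are assumptions chosen for illustration): at each step it divides the running value by the divisor, writes down a quotient digit, keeps the remainder, and brings down the next digit.

#include <stdio.h>

/* Illustrative sketch of schoolbook long division: divide a decimal
   number (given as a string of digits) by a small divisor, producing
   the quotient digit by digit and carrying the remainder along. */
void long_division(const char *dividend, int divisor)
{
    int remainder = 0;
    printf("%s / %d = ", dividend, divisor);
    for (const char *p = dividend; *p != '\0'; p++) {
        int partial = remainder * 10 + (*p - '0');  /* bring down next digit */
        printf("%d", partial / divisor);            /* next quotient digit   */
        remainder = partial % divisor;              /* carry the remainder   */
    }
    printf(", remainder %d\n", remainder);
}

int main(void)
{
    long_division("93218", 7);    /* prints 13316, remainder 6 */
    return 0;
}

Because every step is completely specified and the procedure is guaranteed to terminate, a machine can carry it out as readily as a student can.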

Algorithms in Computer Science
Just as a cook learns both general techniques such as how to sauté or how to reduce a sauce and a repertoire of specific recipes, a student of computer science learns both general problem-solving principles and the details of common algorithms. These include a variety of algorithms for organizing data (see sorting and searching), for numeric problems (such as generating random numbers or finding primes), and for the manipulation of data structures (see list processing and queue).

     A working programmer faced with a new task first tries to think of familiar algorithms that might be applicable to the current problem, perhaps with some adaptation. For example, since a variety of well-tested and well-understood sorting algorithms have been developed, a programmer is likely to apply an existing algorithm to a sorting problem rather than attempt to come up with something entirely new. Indeed, for most widely used programming languages there are packages of modules or procedures that implement commonly needed data structures and algorithms (see library, program).

     If a problem requires the development of a new algorithm, the designer will first attempt to determine whether the problem can, at least in theory, be solved (see computability and complexity). Some kinds of problems have been shown to have no guaranteed answer. If a new algorithm seems feasible, principles found to be effective in the past will be employed, such as breaking complex problems down into component parts or building up from the simplest case to generate a solution (see recursion). For example, the merge-sort algorithm divides the data to be sorted into successively smaller portions until each is trivially sorted, and then merges the sorted portions back together.
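     The following C sketch shows that divide-and-merge structure in miniature; it is an illustrative implementation written for this discussion, not code taken from any particular library:

#include <stdlib.h>
#include <string.h>

/* Minimal merge sort: split the range in half, sort each half
   recursively, then merge the two sorted halves into order. */
static void merge(int a[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];   /* take the smaller   */
    while (i < mid) tmp[k++] = a[i++];                 /* copy any leftovers */
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(&a[lo], &tmp[lo], (size_t)(hi - lo) * sizeof(int));
}

static void sort_range(int a[], int tmp[], int lo, int hi)
{
    if (hi - lo < 2)
        return;                           /* zero or one element: sorted */
    int mid = lo + (hi - lo) / 2;
    sort_range(a, tmp, lo, mid);          /* sort the left half   */
    sort_range(a, tmp, mid, hi);          /* sort the right half  */
    merge(a, tmp, lo, mid, hi);           /* merge the two halves */
}

void merge_sort(int a[], int n)
{
    int *tmp = malloc((size_t)n * sizeof(int));   /* scratch space */
    if (tmp == NULL)
        return;                                   /* allocation failed */
    sort_range(a, tmp, 0, n);
    free(tmp);
}

Each level of splitting halves the problem, which is why merge sort needs on the order of n log n comparisons rather than the roughly n squared required by simpler methods.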

     Another important aspect of algorithm design is choosing an appropriate way to organize the data (see data structures). For example, a sorting algorithm that uses a branching (tree) structure would probably use a data structure that implements the nodes of a tree and the operations for adding, deleting, or moving them (see class).
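     As a sketch of what such a structure might look like in C (the type and function names here are illustrative assumptions), each node holds one value and pointers to two subtrees, and insertion walks down the tree until it finds an empty spot; visiting the finished tree in left-to-right order then yields the values in sorted order, which is the essence of a tree sort.

#include <stdlib.h>

/* Illustrative binary-tree node for a tree-based sort. */
struct tree_node {
    int value;
    struct tree_node *left;    /* subtree of smaller values         */
    struct tree_node *right;   /* subtree of equal or larger values */
};

/* Insert a value, returning the (possibly new) root of the subtree. */
struct tree_node *tree_insert(struct tree_node *root, int value)
{
    if (root == NULL) {                          /* empty spot: make a node */
        struct tree_node *node = malloc(sizeof *node);
        if (node != NULL) {
            node->value = value;
            node->left = node->right = NULL;
        }
        return node;
    }
    if (value < root->value)
        root->left = tree_insert(root->left, value);
    else
        root->right = tree_insert(root->right, value);
    return root;
}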

     Once the new algorithm has been outlined (see pseudocode), it is often desirable to demonstrate that it will work for any suitable data. Mathematical techniques such as finding and proving loop invariants (assertions that remain true before and after each pass through the loop, and so still hold when the loop terminates) can be used to demonstrate the correctness of the implementation of the algorithm.
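     A small example, written for this discussion rather than drawn from the text: in the loop below the invariant is that total always equals the sum of the first i elements. It holds trivially before the first pass (both are zero), each pass preserves it, and so when the loop ends with i equal to n the invariant tells us the function has computed the sum of all n elements.

/* Invariant: at the top of each pass, total == scores[0] + ... + scores[i-1]. */
int sum_scores(const int scores[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += scores[i];     /* re-establishes the invariant for i + 1 */
    return total;
}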

Practical Considerations
It is not enough that an algorithm be reliable and correct; it must also be accurate and efficient enough for its intended use. A numerical algorithm that accumulates too much error through rounding or truncation of intermediate results may not be accurate enough for a scientific application. An algorithm that works by successive approximation or convergence on an answer may require too many iterations even for today’s fast computers, or may consume too much of other computing resources such as memory. On the other hand, as computers become more and more powerful and processors are combined to create more powerful supercomputers (see supercomputer and multiprocessing), algorithms that were previously considered impracticable might be reconsidered. Code profiling (analysis of which program statements are being executed most frequently) and techniques for creating more efficient code can help in some cases. It is also necessary to keep in mind special cases where an otherwise efficient algorithm becomes much less efficient (for example, a tree sort may work well for random data but will become badly unbalanced and slow when dealing with data that is already sorted or mostly sorted).

     Sometimes an exact solution cannot be mathematically guaranteed or would take too much time and resources to calculate, but an approximate solution is acceptable. A so-called greedy algorithm can proceed in stages, testing at each stage whether the partial solution is “good enough.” Another approach is to use an algorithm that can produce a reasonable if not optimal solution. For example, if a group of tasks must be apportioned among several people (or computers) so that all tasks are completed in the shortest possible time, the time needed to find an exact solution rises exponentially with the number of workers and tasks. But an algorithm that first sorts the tasks by decreasing length and then distributes them among the workers one at a time, like cards dealt at a bridge table, will, as demonstrated by Ron Graham, give an allocation guaranteed to be within 4/3 of the optimal result, which is quite suitable for most applications. (A procedure that can produce a practical, though not perfect, solution is actually not an algorithm but a heuristic.)
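     A compact C sketch of that task-assignment heuristic follows. One detail is an assumption on my part about the exact rule: in Graham’s longest-processing-time-first scheme each task, after the descending sort, goes to whichever worker currently has the least total work, a slight refinement of dealing the tasks strictly in rotation.

#include <stdlib.h>

static int compare_desc(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (y > x) - (y < x);                    /* longest tasks first */
}

/* Assign each task to the currently least-loaded worker; on return,
   load[w] holds the total time given to worker w. */
void assign_tasks(int tasks[], int n_tasks, int load[], int n_workers)
{
    for (int w = 0; w < n_workers; w++)
        load[w] = 0;                             /* no work assigned yet */
    qsort(tasks, (size_t)n_tasks, sizeof(int), compare_desc);
    for (int t = 0; t < n_tasks; t++) {
        int lightest = 0;                        /* find least-loaded worker */
        for (int w = 1; w < n_workers; w++)
            if (load[w] < load[lightest])
                lightest = w;
        load[lightest] += tasks[t];              /* hand the task over */
    }
}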


     An interesting approach to optimizing the solution to a problem is allowing a number of separate programs to “compete,” with those showing the best performance surviving and exchanging pieces of code (“genetic material”) with other successful programs (see genetic algorithms). This of course mimics evolution by natural selection in the biological world.

Algol

The 1950s and early 1960s saw the emergence of two high-level computer languages into widespread use. The first was designed to be an efficient language for performing scientific calculations (see FORTRAN). The second was designed for business applications, with an emphasis on data processing (see COBOL). However, many programs continued to be coded in low-level languages (see assembler) designed to take advantage of the hardware features of particular machines.

     In order to be able to easily express and share methods of calculation (see algorithm), leading programmers began to seek a “universal” programming language that was not designed for a particular application or hardware platform. By 1957, the German GAMM (Gesellschaft für angewandte Mathematik und Mechanik) and the American ACM (Association for Computing Machinery) had joined forces to develop the specifications for such a language. The result became known as the Zurich Report or Algol-58, and it was refined into the first widespread implementation of the language, Algol-60.

Language Features
Algol is a block-structured, procedural language. Each variable is declared to belong to one of a small number of kinds of data, including integer, real number (see data types), or a series of values of either type (see array). While the number of types is limited and there is no facility for defining new types, the compiler’s type checking (making sure a data item matches the variable’s declared type) introduced a level of security not found in most earlier languages.

     An Algol program can contain a number of separate procedures or incorporate externally defined procedures (see library, program), and variables with the same name in different procedure blocks do not interfere with one another. A procedure can call itself (see recursion). Standard control structures (see branching statements and loop) were provided.

    The following simple Algol program stores the numbers from 1 to 10 in an array while adding them up, then prints the total: 

begin
   comment store 1 through 10 in the array while summing them;
   integer array ints[1:10];
   integer counter, total;
   total := 0;
   for counter := 1 step 1 until 10 do
   begin
      ints[counter] := counter;
      total := total + ints[counter];
   end;
   printstring ("The total is:");
   printint (total);
end

Algol’s Legacy
The revision that became known as Algol-68 expanded the variety of data types (including the addition of boolean, or true/false values) and added user-defined types and “structs” (records containing fields of different types of data). Pointers (references to values) were also implemented, and flexibility was added to the parameters that could be passed to and from procedures.

     Although Algol was used as a production language in some computer centers (particularly in Europe), its relative complexity and unfamiliarity impeded its acceptance, as did the widespread corporate backing for the rival languages FORTRAN and especially COBOL. Algol achieved its greatest success in two respects: for a time it was the language of choice among computer scientists for describing new algorithms, and its structural features were adopted by the new procedural languages that emerged in the 1970s (see Pascal and C).