Tuesday, 22 October 2013

client-server computing

It is often more efficient to have a large, relatively expensive computer provide an application or service to users on many smaller, inexpensive computers that are linked to it by a network connection. The term server can apply to both the application providing the service and the machine running it. The program or machine that receives the service is called the client.
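
As a rough, hypothetical sketch (not part of the original entry), the Python fragment below shows the two roles: a server process that listens and performs a trivial service, and a client that connects over the network to request it. The host address, port, and "service" (uppercasing text) are invented for illustration.

    import socket

    HOST, PORT = "127.0.0.1", 9000   # illustrative address and port

    def run_server():
        """Server role: wait for a client, perform the service, reply."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()
            conn, _addr = srv.accept()          # a client has connected
            with conn:
                request = conn.recv(1024)       # read the client's request
                conn.sendall(request.upper())   # perform the "service" and answer

    def run_client(text):
        """Client role: connect, send a request, read the server's reply."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(text.encode())
            return cli.recv(1024).decode()

Run the server in one process and call run_client("hello") from another; the same division of labor scales up to real applications.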

A familiar example is browsing the Web. The user runs a Web browser, which is a client program. The browser connects to the Web server that hosts the desired Web site. Another example is a corporate server that runs a database. Users’ client programs connect to the database over a local area network (LAN). Many retail transactions are also handled using a client-server arrangement. Thus, when a travel or theater booking agent sells a ticket, the agent’s client program running on a PC or terminal connects to the server containing the database that keeps track of what seats are available (see terminal).
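
A small hedged example of the first case: any program that speaks HTTP can play the Web browser's client role. The URL here is just a placeholder.

    from urllib.request import urlopen

    # Act as a Web client: request a page from a Web server and read its reply.
    with urlopen("http://example.com/") as response:
        print(response.status)                      # e.g. 200 if the server succeeded
        print(len(response.read()), "bytes of HTML received")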

There are several advantages to using the client-server model. Having most of the processing done by one or more servers means that these powerful and more costly machines can be put to the most efficient use. If more processing capacity is needed, more servers can be brought online without having to revamp the whole system. Users, on the other hand, only need PCs (or terminals) powerful enough to run the smaller client program that connects to the server.

Keeping the data in a central location helps ensure its integrity: If a database is on a server, transactions can be committed in an orderly way to ensure that, for example, the same ticket isn’t sold to two people. A client-server model also offers flexibility to users. Any client program that meets the standards supported by the server can be used to make a connection. (The marketplace generally decides which clients will be supported: most Web sites today, for example, support both Microsoft Internet Explorer and Firefox, though they may cater to features unique to one or the other, and other browsers will also work to some extent.)
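
To make the ticket example concrete, here is a hedged sketch using Python's built-in SQLite library; the seats table and its columns are invented for illustration. Because the check and the update happen inside one transaction on the central database, two clients cannot both buy the same seat.

    import sqlite3

    def sell_seat(db_path, seat_id, buyer):
        """Return True if the seat was sold to this buyer, False otherwise."""
        con = sqlite3.connect(db_path, isolation_level=None)
        try:
            con.execute("BEGIN IMMEDIATE")          # take the write lock first
            row = con.execute(
                "SELECT sold_to FROM seats WHERE id = ?", (seat_id,)
            ).fetchone()
            if row is None or row[0] is not None:   # no such seat, or already sold
                con.execute("ROLLBACK")
                return False
            con.execute(
                "UPDATE seats SET sold_to = ? WHERE id = ?", (buyer, seat_id)
            )
            con.execute("COMMIT")                   # both clients see one winner
            return True
        finally:
            con.close()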

Client-server computing does have potential disadvantages. If there is only one server, a failure of the server (whether from a hardware failure, a bug, or a hacker attack) brings the whole system to a halt, since the client has no ability to complete transactions on its own. The clients’ access to the server is also dependent on the network that connects them. A network failure or traffic bottleneck will also prevent the client from getting any work done.
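
The sketch below (illustrative only; the host, port, and retry policy are assumptions) shows what that dependence looks like in practice: when the server or network is unavailable, all the client can do is time out, back off, and retry, because it cannot complete the work locally.

    import socket
    import time

    def request_with_retry(host, port, payload, attempts=3, timeout=2.0):
        for attempt in range(attempts):
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    s.sendall(payload)
                    return s.recv(1024)      # the server's reply
            except OSError:                  # refused, unreachable, or timed out
                time.sleep(2 ** attempt)     # wait longer before each retry
        return None                          # give up; there is no local fallback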

Extending the Model

One way used in larger organizations to improve the efficiency of the client-server model is to introduce an intermediary between the client and the server. The intermediary program can cache frequently requested data so it can be supplied immediately rather than having to be retrieved from the server (see cache). The intermediary can also act as a “traffic cop” to route client requests to the server that currently has the least load or the fastest network access.
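
A hedged sketch of such an intermediary follows; the fetch_from_server callable stands in for real network access, and the load-tracking scheme is deliberately simplified. Cached requests are answered immediately, and cache misses are routed to whichever server currently has the fewest outstanding requests.

    class Intermediary:
        def __init__(self, servers, fetch_from_server):
            self.cache = {}                        # request -> cached reply
            self.load = {s: 0 for s in servers}    # outstanding requests per server
            self.fetch = fetch_from_server         # stand-in for network access

        def handle(self, request):
            if request in self.cache:              # supply cached data immediately
                return self.cache[request]
            server = min(self.load, key=self.load.get)   # "traffic cop" routing
            self.load[server] += 1
            try:
                reply = self.fetch(server, request)
            finally:
                self.load[server] -= 1
            self.cache[request] = reply            # remember for the next client
            return reply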

Another design consideration is the distribution of processing between the client and the server. At one extreme is the “thin client,” where the client machine may do little more than display forms, send the user’s input to the server, and display the information the server returns. A POS (point of sale) terminal typifies this approach. At the other extreme, a “fat client” running on a full-featured desktop PC may perform functions such as verifying the completeness and validity of data before sending it to the server, or using information from the server to generate graphics (this is typical with online games, where limiting the amount of information that must be sent over the network can be crucial to speed).
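
As a small illustration of the fat-client idea (the form fields and rules are invented), the client below checks an order for completeness and validity before anything is sent, sparing the server and the network a round trip for obviously bad input.

    def validate_order(form):
        """Return a list of problems; submit to the server only if it is empty."""
        errors = []
        if not form.get("customer_name", "").strip():
            errors.append("customer name is required")
        quantity = str(form.get("quantity", ""))
        if not quantity.isdigit() or int(quantity) < 1:
            errors.append("quantity must be a positive whole number")
        if "@" not in form.get("email", ""):
            errors.append("e-mail address looks invalid")
        return errors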

The ultimate extension of the client-server model is “distributed object computing.” This is an application of object-oriented programming principles to the organization of the resources needed for data processing. In this model each object (such as a database table) is accessible throughout the network by all other objects, regardless of their physical location. This scheme provides the ultimate in flexibility, because objects can be moved freely among physical machines in order to even out the load. One popular implementation of distributed object computing is CORBA (Common Object Request Broker Architecture; see corba). For Windows-based programs, Microsoft has developed DCOM (the Distributed Component Object Model), which allows controls (that is, objects with functional interfaces) written using ActiveX to communicate with each other in a networked environment. (For example, an Excel spreadsheet in an ActiveX control can be embedded in a Word document and instructed to update itself regularly by obtaining data from a Microsoft Access database table on another machine.) The Microsoft .NET initiative is also geared toward creating applications that can fluidly interoperate over the Internet (see Microsoft .NET).
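
CORBA and DCOM are too large to show here, but Python's standard XML-RPC modules give a rough, hedged feel for the idea of calling methods on an object that lives elsewhere on the network; this is an analogy, not CORBA or DCOM code, and the SeatTable object is invented for the example.

    from xmlrpc.server import SimpleXMLRPCServer

    class SeatTable:
        """An object whose methods will be callable from other machines."""
        def __init__(self):
            self.available = {"A1": True, "A2": False}

        def is_available(self, seat):
            return self.available.get(seat, False)

    def serve():
        server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
        server.register_instance(SeatTable())   # publish the object's methods
        server.serve_forever()

    # From a client elsewhere on the network, the object looks almost local:
    #   from xmlrpc.client import ServerProxy
    #   remote = ServerProxy("http://localhost:8000")
    #   print(remote.is_available("A1"))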
