
electronic voting systems

There are a variety of ways to electronically register, store, and process votes. In recent years older manual systems (paper ballots or mechanical voting machines) have been replaced in many areas with systems ranging from purely digital (touch screens) to hybrid systems where marked paper ballots are scanned and tabulated by machine. However, voting systems have been subject to considerable controversy, particularly following the Florida debacle in the 2000 U.S. presidential election.

The criteria by which voting systems are evaluated include:

•  how easy it is for the voter to understand and use the system

•  accessibility for disabled persons

•  whether the voter’s intentions are accurately recorded

[Figure: There are several types of electronic voting systems, such as this box that automatically tallies specially marked ballots. Common concerns include the potential for tampering and the need to provide for independent verification of results. (Lisa McDonald/iStockphoto)]

•  the ability to make a permanent record of the vote

•  prevention of tampering (physical or electronic)

•  provisions for independent auditing of the votes in case of dispute

The degree to which a given system meets these criteria can vary considerably because of both design and implementation issues.

Early Systems

The earliest form of voting system consisted of paper ballots marked and tabulated entirely by hand. The first generation of “automatic” voting systems involved mechanical voting machines (where votes were registered by pulling levers). Next came two types of hybrid systems where votes were cast mechanically but tabulated automatically: punch cards (see punched cards and paper tape) and “mark-sense” ballots, where the voter filled in small squares that were then scanned and tabulated automatically.

The ultraclose and highly disputed 2000 U.S. presidential election “stress-tested” voting systems that most people had previously believed were reasonably accurate. The principal problems were the interpretation of punch cards that were not properly punched through (so-called dimpled or hanging chads) and the fact that some ballot layouts proved to be confusing or ambiguous. Two types of voting systems have been proposed as replacements for the problematic earlier technology.

Touchscreen

This type of system uses a screen display that can be directly manipulated by the voter (see touchscreen). In the most common type, called DRE (direct-recording electronic), a computer program interprets and tabulates the vote as it is cast, storing an image in a removable memory unit and (usually) printing out a copy for backup. After voting is complete, the memory module can be sent to the central counting office. (Alternatively, votes can be transmitted over a computer network in batches throughout the day.) In a few cases, voting has also been implemented through secure Internet sites.


Optical Scan

Concern about potential tampering with computers has led many jurisdictions to begin to replace touchscreen systems with optical-scan systems, where the voter marks a sturdy paper ballot. (About half of U.S. counties now use optical-scan systems.) The advantage of optical systems is that the voter physically marks the ballot and can see how he or she has voted, and after tabulation the physical ballots are available for review in case of problems. However, optical-scan ballots must be properly marked using the correct type of pencil, or they may not be read correctly. Unlike the touchscreen, it is not possible to give the voter immediate feedback so that any errors can be corrected. Optical-ballot systems may cost more because of paper and printing costs for the ballots, which may have to be prepared in several languages. However, this cost may be offset by not having to develop or validate the more complicated software needed for all-electronic systems.

Whatever system is used, federal law requires that visually or otherwise disabled persons be given the opportunity, wherever possible, to cast their own vote in privacy. With optical-scan ballots, this is accommodated with a special device that plays an audio file listing the candidates for each race, with the voter pressing a button to mark the choice. However, disability rights advocates have complained that existing systems still require another person to physically insert the marked ballot into the scanner. Touchscreen systems, by contrast, can be used with the aid of audio cues by visually disabled persons without anyone else present, and are thus preferred by some advocates for the disabled.

Reforms and Issues

In response to the problems with the 2000 election, Congress passed the Help America Vote Act in 2002. Since then, the federal government has spent more than $3 billion to help states replace older voting systems, in many cases with touchscreen systems.

The biggest concern raised about electronic voting systems is that they, like other computer systems, may be susceptible to hacking or manipulation by dishonest officials. In 2007 teams of researchers at the University of California, Davis, were invited by the state to try to hack into its voting systems. For the test, the researchers were provided with full access to the source code and documentation for the systems, as well as physical access. The hacking teams were able to break into and compromise every type of voting system tested. In their report, the researchers outlined what they claimed to be surprisingly weak electronic and physical security, including flaws that could allow hackers to introduce computer viruses and take over control of the systems.

Manufacturers and other defenders of the technology have argued that the testing was unrealistic and that real-world hackers would not have had nearly as much information about or access to the systems. (This may underestimate the resourcefulness of hackers, as shown with other systems, such as the phone system and computer networks.)

Another issue is who will be responsible for independently reviewing the programming (source) code for each system to verify that it does not contain flaws. Manufacturers generally resist such review, considering the source code to be proprietary. (A possible alternative might be an open-source voting system. Advocates of open-source software argue that it is safer precisely because it is open to scrutiny and testing; see open-source movement.)

One common response to these security concerns is to require that all systems generate paper records that can be verified and audited. Some defenders of existing technology say that adding a parallel paper system is unnecessarily expensive and introduces other problems such as printer failures. They argue that all-electronic systems can be made safer and more secure, such as through the use of encryption. (A proposed compromise would be for the machine to print out a simple receipt with a code that the voter could use to verify online that the vote was tabulated.)

As of 2007, 28 states had passed laws requiring that voting systems produce some sort of paper receipt or record that shows the voter what has been voted and that can be used later for an independent audit or recount.

Although control of elections is primarily a state or local responsibility, the federal government does have jurisdiction over elections for federal office. As a practical matter, any changes in voting technology or procedures mandated by Congress for federal elections will end up being used in local elections as well.

In 2007, congressional leaders decided not to require a major overhaul of the nation’s election systems until at least 2012. However, the inclusion of some sort of paper record is being mandated for the 2008 election. For users of touchscreen systems, the simplest way to accommodate this is to add small paper-spool printers, but some states have complained that their systems would require more expensive accommodations.


Meanwhile, a lively debate continues in many states and other jurisdictions about how to meet the need for accessible but secure voting systems without breaking the budget.

Electronic Arts


Electronic Arts (NASDAQ symbol: ERTS) is a pioneering and still prominent maker of games for personal computers (see computer games). Its fortunes largely mirror those of the game industry itself.

In 1982 Trip Hawkins and several colleagues left Apple Computer and founded Amazin’ Software, a company with the stated goal of making “software that makes a personal computer worth owning.” Hawkins also had the ambitious goal of turning it into a billion-dollar company, though that would not be achieved until the mid-1990s. Meanwhile, after considerable internal debate, the company changed its name to Electronic Arts in late 1982. The name reflected Hawkins’s belief that computer games were an emerging art form and that their developers should be respected as artists, a belief also reflected in game box covers that looked like record jackets and prominently featured the developers’ names.

In 1983 EA published three games for the Atari 800 computer that typified playability and diversity. Archon combined chesslike strategy with arcade-style battles; Pinball Construction Set let users create and play their own layouts; and the unique M.U.L.E. was a deceptively simple game of strategic resources, and one of the first multiplayer video games. EA titles published in the later 1980s include the exploration game Seven Cities of Gold, the graphically innovative space-conquest game Starflight, and the role-playing series The Bard’s Tale.

In its early years the company published games developed by independent programmers, but in the late 1980s it began to develop some games in house. EA sought out innovative games and promoted them directly to retailers. While it was difficult at first to market often-obscure games to stores, as the games became successful and regular retail channels were established, EA’s revenue began to outpace that of competitors. (Hawkins left in 1991 to found the game company 3DO.)

Challenges and Criticism

By the 2000s EA, now under Larry Probst, had lost its once-dominant position in what had become an increasingly diverse industry. EA was criticized by some investment analysts for declining to follow the trend toward ultraviolent, M-rated games such as Grand Theft Auto, though the company later softened that stand. In recent years the company’s big sellers have been its graphically intense and realistic sports simulations, notably John Madden Football. (Besides the NFL, EA has contracts with NASCAR, FIFA [soccer], and the PGA and Tiger Woods.)

In 2007 EA announced that it would come out with Macintosh versions of many of its top titles. However, critics have noted that the company seems to be publishing fewer original titles in favor of yearly updates (particularly in its sports franchises).

Along with much of the game industry, EA has increasingly focused on console games (see game console). EA currently develops games for the leading consoles; in fact, about 43 percent of EA’s 2005 revenue came from sales for the Sony PlayStation 2 alone. (Total revenue in 2008 was $4.02 billion.) EA has also been expanding into online games, starting in 2002 with an online version of The Sims, a “daily life simulator” (see online games).

Some critics have objected to EA’s practice of buying smaller companies in order to get control over their popular games, and then releasing versions that had not been properly tested. Perhaps the most-cited example is EA’s acquisition of Origin Systems and its famous Ultima series of role-playing games: after the acquisition, EA produced two new titles in the series that many gamers consider not up to the Ultima standard.

The company has also been criticized for requiring very long work hours from developers; it eventually settled suits from game artists and programmers demanding compensation for unpaid overtime.

EA has shown continuing interest in promoting the profession of game development. In 2004 the company made a significant donation toward the development of a game design and production program at the University of Southern California.


Meanwhile, EA founder Hawkins has gone on to start Digital Chocolate, a company focusing on games for mobile devices.

Eiffel


Eiffel is a programming language developed by Bertrand Meyer and his company Eiffel Software in the 1980s. The language was named for Gustave Eiffel, the engineer who designed the famous tower in Paris. The language and its accompanying methodology attracted considerable interest at software engineering conferences.

Eiffel fully supports (and in some ways pioneered) programming concepts found in more widely used languages today (see class and object-oriented programming). Syntactically, Eiffel emphasizes simple, reusable declarations that make the program easier to understand, and tries to avoid obscure or lower-level code such as compiler optimizations.

Program Structure

An Eiffel program is called a “system,” emphasizing its structure as a set of classes that represent the types of real-world data that need to be processed. A simple class might look like this:

class
   COUNTER

feature -- access counter value

   total: INTEGER

feature -- manipulate counter value

   increment is
         -- increase counter by one
      do
         total := total + 1
      end

   decrement is
         -- decrease counter by one
      do
         total := total - 1
      end

   reset is
         -- reset counter to zero
      do
         total := 0
      end

end

(In the original listing, language keywords appear in bold and user-defined identifiers in italics; the Eiffel programming environment applies this formatting automatically as the user enters the text.) Once the class is defined, making an instance of it is very simple:

my_counter: COUNTER

create my_counter
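
The counter’s features are then invoked with dot notation. The following is a minimal usage sketch (hypothetical client code based on the class above; in a real program the declaration and calls would sit inside a routine of a client class):

test_counter is
      -- Exercise the COUNTER class (illustrative only).
   local
      my_counter: COUNTER
   do
      create my_counter          -- total starts at 0
      my_counter.increment       -- total is now 1
      my_counter.increment       -- total is now 2
      my_counter.decrement       -- total is back to 1
      print (my_counter.total)   -- display the current count
   end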

The Eiffel compiler itself compiles to an intermediate “bytecode” that, in the final stage, is compiled into C, taking advantage of the ready availability of optimized C compilers.

A unique feature of Eiffel is the ability to set up “contracts” that specify in detail how classes will interact with one another. (This goes well beyond the usual declarations of parameters and enforcement of data types.) For example, with the COUNTER class an “invariant” can be declared such that total >= 0. This means that this condition must always remain true no matter what. A method can also require that the caller meet certain conditions. After processing and before returning to the caller, the method can ensure that a particular condition is true. The point of these specifications is that they make explicit what a given unit of code expects and what it promises to do in return. This can also improve program documentation.
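
As a concrete sketch of how such contracts might look (the assertion labels and the specific conditions are illustrative additions, not from the original listing), COUNTER could be annotated with an invariant, a precondition (require), and postconditions (ensure):

class
   COUNTER

feature -- access counter value

   total: INTEGER

feature -- manipulate counter value

   increment is
         -- increase counter by one
      do
         total := total + 1
      ensure
         one_more: total = old total + 1
      end

   decrement is
         -- decrease counter by one; the caller must not go below zero
      require
         not_at_zero: total > 0
      do
         total := total - 1
      ensure
         one_less: total = old total - 1
      end

invariant
   never_negative: total >= 0

end

When assertion monitoring is enabled, a violated assertion raises an exception that identifies whether the caller (a failed require) or the routine itself (a failed ensure or invariant) broke the contract.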


Implementation and Uses

Eiffel’s proponents note that it is more than a language: It is designed to provide consistent ways to revise and reuse program components throughout the software development cycle. The current implementation of Eiffel is available for virtually all platforms and has interfaces to C, C++, and other languages. This allows Eiffel to be used to create a design framework for reusing existing software components in other languages. Eiffel’s consistent object-oriented design also makes it useful for documenting or modeling software projects (see modeling languages).

Eiffel was developed around the same time as C++. Eiffel is arguably cleaner and superior in design to the latter language. However, two factors led to the dominance of C++: the ready availability of inexpensive or free compilers and the existence of thousands of programmers who already knew C. Eiffel ended up being a niche language used for teaching software design and for a limited number of commercial applications using the EiffelStudio programming environment.


Eiffel has been recognized for its contributions to the development of object-oriented software design, most recently by the Association for Computing Machinery’s 2006 Software System Award for Impact on Software Quality.

e-government


Just as the way business is organized and conducted has been profoundly changed by information and communications technology, the operation of government at all levels has been similarly affected. The term e-government (or electronic government) is a way of looking at these changes as a whole and of considering how government uses (or might use) various computer applications.

The use of information technology in government can involve changes in the organization and internal communications of government departments, changes in how services are delivered to the public, and new ways for the public to interact with agencies.

Internally, government agencies have many of the same information management and sharing needs as private enterprises (see data mining, database administration, e-mail, groupware, personal information manager, and project management software). However, government agencies are likely to have to adapt their information systems to account for complex, specialized regulations (both those the agency administers and others it is subject to). The standards of openness and accountability are generally different from and stricter than those that apply to private organizations.

A major focus of e-government is expanding agencies’ presence on the Web and making government sites more useful. This can include providing summaries of regulations or other complicated information, offering online assistance, allowing tax and other forms to be filed electronically, and helping with applications for programs such as Social Security or Medicare. Where applicants must physically visit the office, a computerized system can make it easy to schedule appointments and reduce time waiting in line (a welcome option now offered by many state departments of motor vehicles).

Implementation

Finding employees with the skills needed to maintain sophisticated information systems and modern dynamic Web sites is not easy. The government hiring process tends to be cumbersome and slow to respond to changing needs, and government must often compete with a private sector that is willing to pay high prices for top talent.

In many cases, adopting comprehensive e-government would require a rethinking of an agency’s purpose and priorities. There is also a tension between the Web culture, which focuses on linking information across conventional boundaries, and the tendency of bureaucracies to compartmentalize and centrally control information. Nevertheless, even without fundamentally restructuring how agencies operate, there has been considerable success with bringing information to the public through a central portal (USA.gov, formerly FirstGov).

Once a service is offered, it has to be promoted. While some services (such as “e-filing” of tax returns) can be readily promoted for their convenience, other services are more obscure or may be of interest only to a narrow constituency.

Social and Political Impact

A survey by the Hart-Teeter poll found that respondents considered the most important potential benefit of e-government to be greater government accountability; the second was greater access to information; and, perhaps surprisingly, convenience came third.

One criticism of e-government initiatives is that they often lack central coordination and may be implemented without keeping in mind an agency’s need to provide uniform, consistent, and impartial treatment to all citizens. For example, if an agency focuses its resources on developing its Web site, people who lack online access may come to feel that they are receiving “second-class” service (see digital divide). This is particularly unfortunate because unconnected people tend to live in the poor and isolated communities that are most in need of government services.

As with private enterprise, there can also be important online privacy issues. Information that has been collected digitally is easy to transfer to other agencies or even (as in the case of DMV information in some states) to sell to private companies. Having a clearly spelled-out privacy policy is crucial.


Besides keeping private what people expect to be private, government agencies must also provide information that helps ensure public accountability. Information collected by government agencies is often subject to the Freedom of Information Act (FOIA). This may require that data be provided in a format that is readily accessible.

education in the computer field


Education and training in computer-related fields runs the gamut from courses in basic computer concepts in adult education or junior college programs to postgraduate programs in computer science and engineering. Curricula can be roughly divided into the following areas:

•  computer literacy and applications

•  computer science

•  information systems

Computer Literacy and Applications

There is a general consensus that basic knowledge of computer terminology and mastery of widely used types of software will be essential for a growing number of occupations (see computer literacy). The elementary and junior high school curriculum now generally includes computer classes or “labs” where students learn the basics of word processing, spreadsheets, databases, graphics software, and use of the World Wide Web. There may also be introductory courses in programming, usually featuring easy-to-use programming languages such as Logo or BASIC.

Some high schools offer a track geared toward preparation for college studies in computer science. This track may include courses in more advanced languages such as C++ or Java. Because of public interest and marketability, courses in graphics (such as use of Adobe Photoshop), multimedia, and Web design are also increasingly popular. Adult education and community college programs feature a similar range of courses. Many of today’s adult workers went to school at a time when personal computers were not readily available and computer literacy was not generally emphasized. The career prospects of many older workers are thus increasingly limited if they do not receive training in basic computer skills.

Technical or vocational schools offer tightly focused programs that are geared toward providing a set of marketable skills, often in conjunction with gaining industry certifications (see certification of computer professionals).

Computer Science

In the early 1950s, knowledge of computing tended to have an ad hoc nature. On the practical level, computing staffs tended to train newcomers in the specific hardware and machine-level programming languages in use at a particular site. On the theoretical level, programmers in scientific fields were likely to come from a background in electronics, electrical engineering, or similar disciplines.

As it became clear that computers were going to play an increasingly important role, courses specific to computing were added to curricula in mathematics and engineering. By the late 1950s, however, leading people in the computing field had become convinced that a formal curriculum in computer science was necessary for further advance in an increasingly sophisticated computing arena (see computer science). By the early 1960s, efforts at the University of Michigan, University of Houston, Stanford, and other institutions had resulted in the creation of separate graduate departments of computer science. By the mid-1960s, the National Academy of Sciences and the President’s Science Advisory Committee had both called for a major expansion of efforts in computer science education, to be aided by federal funding. During the 1970s and 1980s, mathematical and engineering societies (in particular the Association for Computing Machinery [ACM] and the Institute of Electrical and Electronics Engineers [IEEE]) worked to establish detailed computer science curricula that extended to undergraduate study. By 2000, there were 155 accredited programs in computer science in the United States.

Information Systems

The traditional computer science curriculum emphasizes theoretical matters such as algorithm and program design and computer architecture. Hiring managers in corporate information systems departments have observed that computer science graduates often have little experience in such practical considerations as systems analysis, or the designing of computer systems to meet business requirements. There has also been an increasing need for systems administrators, database administrators, and networking professionals who are well versed in the management and maintenance of particular systems.

In response to demand from industry, many universities have instituted degree programs in information systems (sometimes called MIS or Management Information Systems) as an alternative to computer science. While these programs include some study of theory, they focus on practical considerations and often include internships or other practical work experience. Some programs offer more ambitious students a dual track leading to an MBA.

Challenges

There has always been a gap between the emphases in computer and information science programs and the needs of a rapidly changing marketplace. However, additional challenges face education in the computer field today. The number of undergraduate computer science degrees awarded in Ph.D.-granting universities in the United States has steadily declined since 2000. In part this may be a delayed reaction to the decline in employment of programmers early in the decade (due to the bursting of the “dot-com bubble”), employment that has since leveled off but has not significantly grown (see employment in the computer field). This, together with the outsourcing of many jobs (see globalism and the computer industry), may in turn have discouraged young people from entering the field.


At the same time, many observers insist that prospects are good for educators and students who can target emerging high-demand skills. These include areas such as computer security, data mining, bioinformatics, Web content management, and even aspects of business management. Educators will be challenged to strike a balance between a comprehensive treatment of concepts that have many potential applications and the need to provide specific skills that are in demand in the market.

education and computers

Computers are widely used in educational institutions from elementary school to college. While computers have as yet had little impact on the structure or organization of schools, educational software and the use of the Internet have had a growing impact on how education is delivered.


History

During the 1950s and early 1960s, computer resources were generally too scarce, expensive, and cumbersome to be used for teaching, although universities aspired to have computers to aid their graduate and faculty researchers. However, during the 1960s computer engineers and educators at the Computer-based Education Research Laboratory at the University of Illinois, Urbana, formed a unique collaboration and designed a computer system called PLATO. The PLATO system used mainframe computers to deliver instructional content to up to 1,000 simultaneous users at terminals throughout the University of Illinois and other educational institutions in the state. PLATO pioneered the interactive approach to instruction and the use of graphics in addition to text. The PLATO system was later marketed by Control Data Corporation (CDC) for use elsewhere. During this time Stanford University also set up a system for delivering computer-assisted instruction (CAI) to users connected to terminals throughout the nation. (See computer-aided instruction.)

By the early 1980s, microcomputers had become relatively affordable and capable of running significant educational software, including graphics. Apple Computer’s Apple II became an early leader in the school market, and the introduction of the Macintosh in 1984, followed by the HyperCard scripting language, inspired many teachers and other enthusiasts to create their own educational software. By the early 1990s, IBM-compatible PCs with Windows were catching up. Commercially available computer games (such as Civilization or Railroad Tycoon) also offered ways to enrich social studies and other classes (see computer games).

The advent of the World Wide Web and graphical Web browsing in the mid-1990s spurred schools to connect to the Internet. The Web offered the opportunity for educators to create resources that could be accessed by colleagues and students anywhere in the world. The use of Web portals such as Yahoo!, library catalogs, and online encyclopedias gave teachers and students potential access to a far greater variety of information than could possibly be found in textbooks. The Web also offered the opportunity for students at different schools to participate in collaborative projects, such as community surveys or environmental studies.

Applications

Educational applications of computing can be divided into several broad categories. While small compared to the business market, the educational software industry is a significant one, targeting both schools and parents seeking to improve their children’s academic performance. However, the educational use of computers extends far beyond specialized software. Schools are in effect a major industry in themselves, requiring much of the same support software as large businesses.

Trends

The growth of the World Wide Web has led to some shift of emphasis away from stand-alone, CD-ROM-based applications running on local PCs or networks. Educators are excited about the possibilities for online collaboration. Public concern about children achieving an adequate level of technical skill (see computer literacy) has fueled an increasing commitment of funds for computer hardware, software, and networking for schools.


Some visionaries speak of a 21st-century “virtual school” that has no classroom in the conventional sense but uses the Internet and conferencing software to bring teachers and students together. While there has been only limited experimentation in creating virtual secondary schools, thousands of university courses are now offered online, and many complete degree programs are available. Some institutions, such as the University of Phoenix, have made such “distance learning” a core part of their growth strategy.

Several factors have caused other observers to have misgivings about the rush to get schools onto the “information superhighway.” Many schools lack adequate physical facilities and teacher training, and under those circumstances other priorities might deserve precedence over the installation of technology that may not be effectively utilized. At the same time, lagging access to technology among minorities and the poor suggests that schools must play a significant role in providing such access and enabling the coming generation to catch up (see digital divide).


The debate over how best to use technology in the schools also reflects fundamental theories about teaching and learning. Critics of information technology such as Clifford Stoll (see Stoll, Clifford) have reacted against the mechanical, rote nature of much educational software. They also decry the hype of some advocates who have suggested technology as a panacea for the problems of low performance, poor motivation, and lack of accountability in many schools.

Some advocates of computer use agree with the criticism of uncreative and poorly planned “e-learning” programs, but argue that the answer is to use technology that helps good teachers unlock creativity. For example, Seymour Papert and his Logo language are based on “constructivist” principles, in which students learn through doing (see Papert, Seymour and logo). From this point of view, “computer literacy” should not be a focus in itself, but one outcome of a program that creates literate and capable learners (see computer literacy).