How the Computer Was Invented Based on Alan Turing's Work in the 1930s and John von Neumann's in the 1950s

Published: 2021-09-22 15:55:10

Category: Scientist

Type of paper: Essay

Introduction
The invention of computers, based on the work of Alan Turing in the 1930s and John von Neumann in the 1950s, quickly gave rise to the notion of artificial intelligence, or AI: the claim that such nonhuman machines can exhibit intelligence because they mimic (or so its proponents argue) what humans do when they do the things we regard as evidence of intelligence.
From about the late 1960s to the middle of the 1980s there was a great deal of excitement and debate among philosophers, psychologists, learning theorists, and others concerning the possibility and status of AI. Mostly there were AI champions and AI detractors, with little middle ground. That controversy seems to have cooled of late, but new developments in the computer-engineering field may now take us past those earlier debates. (Agre & Chapman 1987)

Research in the provocatively named field of artificial intelligence (AI) evokes both spirited and divisive arguments from friends and foes alike. The very concept of a “thinking machine” has provided fodder for the mills of philosophers, science fiction writers, and other thinkers of deep thoughts. Some postulate that it will lead to a frightening future in which superhuman machines rule the earth with humans as their slaves, while others foresee utopian societies supported by mechanical marvels beyond present ken. Cultural icons such as Lieutenant Commander Data, the superhuman android of Star Trek: The Next Generation, show a popular willingness to accept intelligent machines as realistic possibilities in a technologically advanced future. (Albus 1996)
However, superhuman artificial intelligence is far from the current state of the art and probably beyond the range of projection for even the most optimistic AI researcher. This seeming lack of success has led many to think of the field of artificial intelligence as an overhyped failure: yesterday's news. Where, after all, are even simple robots to help vacuum the house or load the dishwasher, let alone the Lieutenant Commander Datas? It therefore may amaze the reader, particularly in light of critical articles, to learn that the field of artificial intelligence has actually been a significant commercial success.
In fact, according to a 1994 report issued by the U.S. Department of Commerce, (Critical technology assessment…1994) the world market for AI products in 1993 was estimated to be over $900 million!
The reason for this is, in part, that the fruits of AI have not been intelligent systems that carry out complex tasks independently. Instead, AI research to date has primarily resulted in small improvements to existing systems or relatively simple applications that interact with humans in narrow technical domains. While selling well, these products certainly don’t come close to challenging human dominance, even in today’s highly computerized and networked society.
Some systems that have grown out of AI technology might surprise you. For example, at tax time many of you were probably sitting in front of your home computer running packages such as TURBOTAX, MACINTAX, and other “rule-based” tax-preparation software. Fifteen years ago, that very technology sparked a major revolution in the field of AI, resulting in some of the first commercial successes of a burgeoning applied-research area. Perhaps on the same machine you might be writing your own articles, using GRAMMATIK or other grammar-checking programs. (Araujo & Grupen 1996) These grew out of technology in the AI subfield of “natural language processing,” a research area “proven” to be impossible back in the late 1960s. Other examples range from computer chips in Japanese cameras and TVs that use a technique ironically called fuzzy logic to improve image quality and reduce vibration, to an industrial-scale “expert system” that plans the loading and unloading of cargo ships in Singapore.
If you weren't aware of this, you are not alone. Rarely have the hype and controversy surrounding an entire research discipline been as overwhelming as they have for the AI field. (AI: The Tumultuous History…1993) The history of AI includes raised expectations and dashed hopes, small successes sold as major innovations, significant research progress taking place quietly in an era of funding cuts, and an emerging technology that may play a major role in helping people deal with the information overload of our current data-rich society.
Where Artificial Intelligence Has Been
Roughly speaking, AI is about forty years old, the field as a coherent area of research usually being dated from the 1956 Dartmouth conference. That summer-long conference gathered ten young researchers united by a common dream: to use the newly designed electronic computer to model the ways that humans think. They started from a relatively simple-sounding hypothesis: that the mechanisms of human thought could be precisely modeled and simulated on a digital computer. This hypothesis forms what is, essentially, the technological foundation on which AI is based.
In that day and age, such an endeavor was incredibly ambitious. Now, surrounded by computers, we often forget what the machines of forty years ago looked like. In those early days, AI was largely performed by entering a program on difficult-to-use, noisy teletypes interfaced with large, snail-paced computers. After starting the program (assuming one had access to one of the few interactive, as opposed to batch, machines), one would head off to lunch, hoping the program would complete a run before the computer crashed. In those days, 8K of core memory was considered a major amount of computing memory, and a 16K disk was sometimes available to supplement the main memory. In fact, anecdote has it that some of the runs of Herb Simon's earliest AI systems used his family and students to simulate the computations; it was faster than using the computer! (Beer 1990)
Within a few years, however, AI seemed really to take off. Early versions of many ambitious programs seemed to do well, and the thinking was that the systems would progress at the same pace.
In fact, the flush of success of the young field led many of the early researchers to believe that progress would continue at this pace and that intelligent machines would be achieved in their lifetimes. A checkers-playing program beat human opponents, so could chess be far behind? Translating sentences from military codes (like those developed during World War II and the Korean War) into human-understandable words by computer was possible, so could translation from one human language to another be that much harder? Learning to distinguish some kinds of patterns from others worked in certain cases, so could other kinds of learning be much different? (Beer 1995)
Unfortunately, the answers to all of these questions turned out to be yes. For technical reasons, chess is much harder than checkers to program.
Translating human languages turns out to have very different complexities from those encountered in decoding messages. The learning algorithms were shown to be severely limited in how far they could go. In short, the early successes were misleading, and the expectations they raised were not fulfilled.
At this point, things started getting complicated. Waiting in the wings and watching carefully were a number of people who were sure that this new technology would be a failure. Both philosophers and computer scientists were sure that getting computers to “think” was impossible, and they confused the early difficulties with fundamental limits.
The problems were magnified tremendously by the naysayers, who were using arguments about the theoretical limits (a current example of such theoretical arguments is Searle’s Chinese room argument) to describe failures of current technology. In short, those waiting to call the field a flop felt sure they were seeing evidence to that effect. (Beer 1997)
One can dwell at length on the early failures; there were plenty to go around. But as should have been clear to AI's critics, these failures were not tragic. In fact, often they were extremely informative. This should come as no surprise; after all, this is how science works. Past failures coupled with new technologies led to many of the major advances in science's history. It was the failure of alchemy coupled with better measurement techniques that led to elemental chemistry; the newly invented telescope coupled with the failures of epicycles led to acceptance of the heliocentric model of the solar system, and so forth.
In AI, these breakthroughs were less dramatic, but they were occurring. The exponential improvements in computing technology (doubling in speed and memory size every few years), coupled with increasingly powerful programming languages, made it easier for AI scientists to experiment with new approaches and more ambitious models. In addition, each “failure” added more information for the next project to build on. Science progressed and much was learned, often to the chagrin of AI’s critics. (Boden 1996)
Critics vs. Technology: The Example of Computer Chess
A good example of how this progression occurred is in the area of chess-playing programs. By the end of the 1950s, computers were playing a pretty good game of checkers. A famous checkers-playing program written by Arthur Samuel (who was not a very good player) had actually beaten him by the late 1950s, and in 1962 it beat Robert Nealey, a well-respected player (ex-Connecticut state champion). Chess seemed just around the corner, and claims that “in ten years, the best player in the world will be a machine” were heard.
It turns out, however, that checkers can be played fairly well using a simple strategy called “minimaxing.” Each move in checkers has at most a few responses, and searching for the best move doesn’t require examining too many possibilities. The complications of chess, on the other hand, grow very quickly. Consider:
There are twenty moves the first player can make, each followed by twenty possible responses. Thus, after each player has moved once, there are about four hundred possible chess boards that could result. The first player then moves again (another twenty or more possibilities), and thus there are now 400 x 20 = 8,000 possible ways the game could have gone. This sort of multiplying goes on for a long time; in fact, the total number of possibilities in a game of chess has been estimated at about 10^120. For even today's fastest supercomputer to examine all of the possibilities would take over 10^100 years, well beyond the probable death of the universe. (Brooks 1991)
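To make that arithmetic concrete, the growth can be reproduced in a few lines of Python. This is only a back-of-the-envelope sketch: the constant branching factor of twenty comes from the text above, while the machine speed of 10^12 positions per second is an assumed, hypothetical figure.

```python
# Back-of-the-envelope sketch of the combinatorial explosion in chess.
# Assumptions: a constant branching factor of ~20 legal moves per
# position (from the text) and a hypothetical machine that examines
# 10**12 positions per second.

BRANCHING_FACTOR = 20

def positions_after(plies: int) -> int:
    """Number of possible move sequences after `plies` half-moves."""
    return BRANCHING_FACTOR ** plies

print(positions_after(2))   # 400   -- one move by each player
print(positions_after(3))   # 8000  -- 400 x 20, as in the text

# Classic estimate for the whole game tree: ~10**120 possibilities.
GAME_TREE = 10 ** 120
NODES_PER_SECOND = 10 ** 12            # assumed supercomputer speed
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = GAME_TREE / (NODES_PER_SECOND * SECONDS_PER_YEAR)
print(f"about {years:.0e} years")      # on the order of 10**100 years
```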
Given the complexity of chess, it’s hardly surprising that early programs didn’t do very well. In fact, it wasn’t until 1967 that a chess program (written by Richard Greenblatt, an MIT graduate student) competed successfully against human players at the lowest tournament levels. Greenblatt used a number of techniques being explored by AI scientists and tailored them for chess play. His program played chess at a rating reported to be between 1400 and 1500, below (but not far below) the average rating for players in the U.S. Chess Federation — and certainly better than most human neophytes.
In 1965, while researchers were still trying to figure out how to get the chess-playing programs to overcome the combinatoric problems (i.e., the plethora of possible moves), Hubert Dreyfus, one of the most outspoken critics of AI to this day, produced a report for the RAND Corporation that trashed AI. He argued, both philosophically and computationally, that computers could never overcome combinatoric problems. In fact, he stated categorically that no computer could ever play chess at the amateur level and certainly that no computer could beat him at chess. A couple of years later, he attempted to prove this by playing against Greenblatt's program. He lost. (Dorffner 1997a)
Now, nearly thirty years later, the world's best chess player is still not a machine. However, today there are a number of computer programs playing at the master level, and a few that are breaking into the rank of grand master. In a recent official long game, a computer beat a player ranked as the thirtieth best in the United States. Reportedly, in an “unofficial” short game recently, a chess program running on a supercomputer beat Garry Kasparov, arguably the best human player in the world. Most AI researchers believe that it is only a matter of a few years until computer chess programs can beat players of Kasparov's caliber in official long matches.
What was happening in chess was happening (although somewhat less dramatically) in many other parts of the field. It turned out that most of the problems being looked at by AI researchers suffered from combinatoric problems, just as chess had. As in chess, coming up with both better machines and, more important, better techniques for “pruning” the large number of possibilities led to significant successes in practice. In fact, in the late 1970s and early ’80s, AI was ready to come out of the laboratories, and it would have a great impact on the business (and military) world. (Dorffner 1997b)
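The kind of “pruning” referred to here is usually illustrated with alpha-beta pruning layered on top of a depth-limited minimax search, the strategy mentioned earlier for checkers. The sketch below is a generic textbook version, not the code of any particular chess program; the game interface (legal_moves, apply, evaluate, is_over) is a hypothetical placeholder.

```python
import math

def alphabeta(game, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Depth-limited minimax with alpha-beta pruning (generic sketch).

    `game` is a hypothetical object exposing legal_moves(), apply(move),
    evaluate(), and is_over(); it stands in for a real game engine.
    """
    if depth == 0 or game.is_over():
        return game.evaluate()              # static score of this position

    if maximizing:
        value = -math.inf
        for move in game.legal_moves():
            value = max(value, alphabeta(game.apply(move), depth - 1,
                                         alpha, beta, maximizing=False))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent would never allow this line,
                break                       # so the remaining moves are pruned
        return value
    else:
        value = math.inf
        for move in game.legal_moves():
            value = min(value, alphabeta(game.apply(move), depth - 1,
                                         alpha, beta, maximizing=True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```

Limiting the search depth and cutting off branches that cannot affect the final choice is what lets a program examine millions of positions rather than the 10^100-plus of the full game tree.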
The Breakthrough: What’s Hard Is Simple
What was the realization that led to the first successes of AI technology? The intuition was very simple. Many of the first problems that AI looked at were ones that seemed easy. If one wanted to try to get the computer to read books, why not start with children's stories–after all, they're the easiest, right? If one wanted to study problem solving, try basic logic puzzles like those given on low-level intelligence tests. In short, it seems obvious that to try to develop intelligent programs, one should first attack the problems that humans find easy. The real breakthrough in AI was the realization that this was just plain wrong! (Franklin 1995)
In fact, it turns out that many tasks that humans find easy require having a broad knowledge of many different things. To see this, consider the following example from the work of Roger Schank in the mid-1970s. If a human (or AI program) reads this simple story: “John went to a restaurant. He ordered lobster. He ate and left.” and is asked “What did John eat?,” the answer should be “Lobster.” However, the story never says that. Rather, your knowledge about eating in restaurants tells you that you eat what you order. Similarly, you could figure out that John most likely used a fork, that the meal was probably on the expensive side, that he probably wore one of those silly little bibs, and so forth. Moreover, if I mentioned “Mary” was with him in the restaurant, you’d think about social relations, dating customs, and lots more. In fact, to understand simple stories like this, you must bring to bear tremendous amounts of very broad knowledge. (Funes & Pollack 1997)
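A toy version of this idea, loosely in the spirit of Schank's restaurant “script” but in no way his actual system, can make the point concrete: the background knowledge lives in a stereotyped script, and the program answers questions by filling the story's unstated slots from the script's defaults. All names and structures below are invented for illustration.

```python
# Toy script-based story understanding, loosely in the spirit of
# Schank's restaurant script (not his actual implementation).
# The script supplies the background knowledge the story never states:
# in a restaurant, you eat what you order.

RESTAURANT_SCRIPT = {
    "scenes": ["enter", "order", "eat", "pay", "leave"],
    # default inferences: missing slot -> slot whose value it inherits
    "defaults": {"food_eaten": "food_ordered"},
}

def understand(story_facts):
    """Fill in unstated slots using the script's default inferences."""
    filled = dict(story_facts)
    for missing, source in RESTAURANT_SCRIPT["defaults"].items():
        if missing not in filled and source in filled:
            filled[missing] = filled[source]   # e.g., he ate what he ordered
    return filled

# "John went to a restaurant. He ordered lobster. He ate and left."
facts = {"actor": "John", "food_ordered": "lobster"}
model = understand(facts)
print("What did John eat?", model["food_eaten"])   # -> lobster
```

The point is not the code, which is trivial, but the amount of world knowledge that has to be packed into such scripts (forks, bibs, prices, dating customs) before “simple” stories can be understood.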
Now consider the following “story” from the manual for a personal computer hard-disk drive.
If this equipment does cause interference to radio or TV reception, which can be determined by turning the equipment on and off, the user is encouraged to try to correct the interference by one or more of the following measures: reorient the receiving antenna, relocate the computer with respect to the antenna, plug the computer into a different outlet so that the computer and receiver are on different branch circuits.
For a human, this story seems much harder to understand than the one about the lobster. However, if you think about it, you’ll realize that if your computer was given a fairly narrow amount of knowledge (about antennas, circuits, TVs, etc.) it would be able to recognize most of the important aspects of this story. No broad knowledge is needed to handle this. Rather, “domain specific” information about a very narrow aspect of the world is sufficient. In fact, this is much easier information to encode. Thus, developing a system that is an “expert” in hard disks is actually much easier than developing one that can handle simple children’s stories. (Johnson 1989)
Many of these narrow technical domains can be of great use. Recognizing what disease someone has from a set of symptoms, deciding where to drill for oil based on core samples, figuring out what machine can be used to make a mechanical part, configuring a computer system, troubleshooting a diesel locomotive, and hundreds of other problems require narrow knowledge about a specific domain. Building a system that has an expertise in a specific area proves, in many ways, to be easier than making one for “simple” tasks.
Spurred by this realization, AI researchers developed programming technologies, known as rule-based systems or blackboard architectures, in the mid- to late 1970s. By the early 1980s, the term expert system came to be used to describe a program that could reason (or more often, help a human reason) through a specific hard problem. The rule bases could be embedded as parts of larger programs (such as control systems, decision support tools, CAD/CAM tools, and others) or used by themselves with humans providing inputs and outputs. As more and more industries and government agencies began to realize the potential for these systems, small AI companies were started, major companies started AI laboratories, and the AI boom of the 1980s was on. (Langton 1989)
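At its core, a rule-based system of the kind described here is a forward-chaining loop: rules of the form “if these facts hold, conclude this” fire repeatedly until nothing new can be derived. The sketch below shows the idea with made-up troubleshooting rules; it does not reflect the syntax of any commercial shell.

```python
# Minimal forward-chaining rule-based ("expert system") sketch.
# Each rule is (set of required facts, fact to conclude).
# The rules and facts are invented for illustration only.

RULES = [
    ({"disk_light_off", "no_spin_noise"}, "drive_has_no_power"),
    ({"drive_has_no_power", "cable_loose"}, "advise_reseat_power_cable"),
    ({"drive_has_no_power", "cable_seated"}, "suspect_power_supply"),
]

def forward_chain(initial_facts):
    """Apply rules until no new conclusions can be drawn."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)        # the rule "fires"
                changed = True
    return facts

print(forward_chain({"disk_light_off", "no_spin_noise", "cable_loose"}))
# -> includes 'drive_has_no_power' and 'advise_reseat_power_cable'
```

Narrow technical domains work precisely because a modest number of such rules can cover most of what matters, whereas the restaurant story above would need vastly broader knowledge.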
The Artificial Intelligence Boom
The 1980s was an exciting time to be an AI scientist. One didn't know, upon waking up in the morning, whether one would hear a news story about how AI was the magic bullet that would solve all the world's woes or a critical piece about why expert systems weren't really “intelligent,” written by the critics who had crowed over AI's failures (and were now eating crow over its perceived successes). Attendance at AI conferences swelled from hundreds in the late 1970s to thousands in the early 1980s. Any company that could build rule bases and afford some basic equipment declared itself an AI expert.
The early days of AI entrepreneurship were very similar to those of biotechnology or other high-technology industries. Many companies started, but most were unsuccessful. The few successes had to change products and techniques based on market forces; many look very different than what their founders expected. (Newell 1982)
However, AI technology, as it has matured and transitioned, has also become easier to use and more integrated with the rest of the software environment. So prevalent is this technology today that virtually every major U.S. high-technology firm employs some people trained in AI techniques–in fact, according to the Commerce Department report cited earlier, an estimated 70-80 percent of Fortune 500 companies use AI technology to some degree.
Strangely, despite the economic success of this technology, its long-run effect was to give AI something of a black eye in the marketplace. There are many reasons for this, but basically they boil down to a striking phenomenon: AI is a victim of its own success. So fast was the transition of this technology into the marketplace that in only ten years the necessary technology fell in cost and complexity by about one to two orders of magnitude.
Ten years ago, a special computer costing tens of thousands of dollars was needed to develop expert systems, but now they can be developed on generic workstations or even personal computers costing only a thousand dollars or so. Where the development environment for an expert system (called a shell) used to cost $20,000, today one can be bought for the price of the manuals. (In fact, numerous shells are available free on the Internet.) Thus, having the ability to build expert systems is no longer a high-cost investment; now anyone technically competent can do it, and do it cheaply. (Pfeifer & Scheier 1998)
The Artificial Intelligence ‘Bust’?
Unfortunately, there was a negative consequence of this drop in cost that emerged in the mid- to late 1980s. Because a lot of money was being invested in AI and anyone could enter the field, a great many people did so. Many of these newcomers, however, had not learned the historical lessons of AI. The field's rapid progress on some problems caused many to feel this would be easy to extend to other problems: if AI could handle hard tasks, certainly it could handle “easier” tasks such as reading newspapers, translating languages, playing games, and the like. Even worse, people with little grasp of the combinatorics of AI tasks would underbid on big development projects and then be unable to deliver two or three years later. Thus, many who joined late, unaware of the field's history, made many of the same mistakes as had been made in the earliest days of the discipline. (Steels 1994)
Moreover, it turned out that many of the best expert systems didn't function by themselves. Instead of being stand-alone systems that dispensed wisdom, expert systems turned out to be most useful when hidden behind larger applications. Take, for example, the DART (Dynamic Analysis and Replanning Tool) system developed in the early 1990s by Bolt, Beranek, and Newman. DART is a military transport planning program that was used by the U.S. military in Desert Shield and Desert Storm. It works by providing a graphical interface in which humans enter information about what materiel is going where and when. The system uses its knowledge to project delivery dates and to recognize possible problems in meeting those dates.
When a problem is found, DART does not fix it. Rather, it reports the information to the human user and asks what to do. Thus, the expertise in this system is not in making the “intelligent” decisions about what to do but rather in taking into account fairly prosaic low-level details and managing them for the user. In fact, this is true of most successful expert systems–the system functions more like a well-trained assistant than like an expert.
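The division of labor described here (project the consequences, flag the problems, and leave the decision to a person) is easy to caricature in code. The sketch below is purely illustrative of that style of decision support; it reflects nothing of DART's actual design, and every name and number in it is hypothetical.

```python
# Illustrative decision-support check in the style described above:
# project delivery dates and *report* problems to the human user
# rather than trying to fix them. Not DART; all data is made up.

from datetime import date, timedelta

def projected_arrival(departs, transit_days):
    return departs + timedelta(days=transit_days)

def flag_problems(shipments):
    """Return human-readable warnings for shipments that miss their deadline."""
    warnings = []
    for s in shipments:
        eta = projected_arrival(s["departs"], s["transit_days"])
        if eta > s["required_by"]:
            warnings.append(
                f"{s['cargo']}: projected arrival {eta} is after required "
                f"date {s['required_by']}; please replan."
            )
    return warnings

shipments = [
    {"cargo": "fuel", "departs": date(1991, 1, 3),
     "transit_days": 21, "required_by": date(1991, 1, 20)},
]
for warning in flag_problems(shipments):
    print(warning)   # the system reports; the human decides what to do
```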
This is not a condemnation, however. DART is credited by the personnel at the U.S. Advanced Research Projects Agency (ARPA), the main government funder of AI research, as having “more than offset all the money that [ARPA] had funneled into AI research in the last 30 years.” (Vaario & Ohsuga 1997)
Unfortunately, despite the success of programs like DART, their interactive nature helped feed a subtle negative perception that expert systems were not successful. Basically, once the tools reached a certain point of maturity, it became relatively easy to see how these systems worked. Understanding that the programs were only manipulating simple facts or recognizing simple patterns, people realized the programs were not “intelligent” at all, that humans were providing most of the “thinking” and the AI systems were just managing details. This gave the naysayers more ammunition: expert systems clearly were not intelligent by any obvious definition. Given the hype over these systems, many people were disappointed to find out that they were just relatively straightforward computer programs. In short, what was an industrial success proved insufficient to refute our critics' condemnations; they won't be satisfied until we build Mr. Data. (Tani & Nolfi 1998)
The Debate Goes On
Unfortunately, even great strides in information technology will not bring a “smart” computer. As the technology reaches fruition, the AI field will again be accused of merely adding technology, not “developing intelligence.” I suspect that each time AI surpasses our current expectations and achieves results that change the way we live, work, and interact with computers, the ever-present critics will still be there to fight with us. In fact, probably no level of success will still the voices that accuse us of inflated claims, deflate our successes, and deny, to the very end, the very possibility of artificial intelligence.
