Paola Mello
I. What is artificial intelligence
1. Introduction. We can define 'artificial intelligence', here abbreviated AI, as the "set of studies and techniques that aim to create machines, especially electronic computers, capable of solving problems and carrying out activities proper to human intelligence" (T. De Mauro, Grande dizionario italiano dell'uso, Torino 2000). The term 'artificial intelligence' is an obvious oxymoron, since it attributes to the 'artificial' something that is essentially 'natural', indeed the most jealously guarded prerogative of human nature: intelligence. And the oxymoron is quite a challenging one, because some seriously question whether a machine can really be 'intelligent' in the sense in which this term is applied to the human mind (see below, IV). On the other hand, definitions like this one are too general, since they fit the whole of computer science equally well, including, for example, automation techniques, disciplines that are not part of AI. In the sections that follow (see no. 2), which attempt a historical sketch, these distinctions will appear more clearly. But it is objectively difficult to give a more precise definition, both because AI is a rapidly evolving field, so that a definition delimiting its territory might exclude a priori future developments that would naturally belong to it, and because it is at once a science and a technology, a frontier discipline, a sort of fascinating 'multiple point' where different domains of knowledge meet: logic, computer science, psychology, neuroscience, philosophy. So, rather than define it, one prefers to list and describe its fundamental characteristics and its main areas of application. More specific definitions have nevertheless been attempted, but they turn out to be quite different from one another and mutually inconsistent. Let us look at a few.
Russell and Norvig propose two fundamental distinctions. The first is between machines that "think" and machines that merely "act" in a way that is, to some extent, similar to that of humans. The second concerns the standard of comparison used to assess their performance, which may be the real human being or an idealized rationality (see Artificial Intelligence, 1998, p. 4). It must be said that technical applications (see below, II) mainly pursue "rational action", while the philosophical debate, to which the last section is devoted, insists on the possibility of "human action" and especially of "human thought". Another fundamental distinction, around which the philosophical debate revolves, is between so-called "weak AI" and "strong AI": supporters of weak AI are content to consider machines that act "as if" they were intelligent; strong AI, by contrast, asserts the possibility of machines that, like humans, possess a self. It is easy to see how these distinctions intertwine: in particular, strong AI concerns only "machines that think in a human manner", while weak AI is concerned, preferably, with "machines that act".
Finally, in terms of technical implementation, we can distinguish between a "functional" or "behavioral" approach, for which the structure of the computer that houses the 'intelligence' does not matter, and a "structural", "constructivist" or "connectionist" approach, which aims to achieve the performance of the human brain by reproducing, in some way, its structure. With a slight shift in perspective, the first approach has been called "emulationist" and the second "simulationist". Proponents of the latter believe that performance comparable to that of the brain can be obtained only by reproducing the brain as closely as possible; proponents of the former believe instead that the essence of the brain's functioning lies not in its structure but in its performance, and that this performance, and perhaps even better results, can be obtained from completely different structures. Both approaches have borne fruit, but the second, although a minority position, has a special importance because it has led to the creation of 'neural networks'. These are imitations of the animal brain, extremely rough ones, but interesting beyond their technical value, because they establish a rapport with the neurosciences that is very useful both for AI and for the neurosciences themselves. In practice the two approaches converge, because after a few attempts to produce "dedicated" physical structures (i.e., at the hardware level), neural networks are now realized mostly as computer programs (i.e., at the software level) that run on general-purpose computers.
2. A little history. The idea of delegating certain characteristic tasks of the mind to machines is very old. Just think of the arithmetic operations carried out with the abacus, probably invented around 5000 BC by the Chinese, to whom we also owe the first well-known example of an automatic control device, used to regulate the water level in rice fields: a float moved a gate, reducing the water flow when the level tended to rise. In modern times, the first computing machines are due to Pascal (1623-1662), who in the mid-seventeenth century built a mechanical adding machine (the Pascaline), and to Leibniz (1646-1716), who at the end of that century perfected it to allow multiplication and division. The first "programmed" machine, that is, one able to execute sequences of operations automatically, was conceived by Charles Babbage (1792-1871) around 1830, but was never built because of mechanical difficulties. In the field of automation, mention should be made of the speed governor of James Watt (1736-1819), which toward the end of the eighteenth century opened the way to industrial automation. These episodes document the interest in transferring to machines not only material work - work that involves the expenditure of physical energy - but also intellectual effort, whether to perform tedious sequences of calculations or to monitor and control the proper operation of other devices (automation has been defined by the formula "machines that control other machines").
But the developments that more specifically concern the birth of AI took place in the mid-twentieth century. To Alan Turing (1912-1954) we owe two fundamental contributions. In 1936 he proposed an ideal model of a "universal" automatic computer (known, indeed, as the "Turing machine"): it is the prototype of all the computers that were later developed from the mid-forties onward. In 1950, then, Turing proposed the 'imitation game', a paradigm for determining whether a machine is "intelligent". In his famous article Computing Machinery and Intelligence (1950), he suggested placing an observer in front of two teleprinters. One of them is operated by a man, the other by a woman. The observer, who does not know which terminal corresponds to the man and which to the woman, may try to find out by putting questions of any kind. One of the two parties must answer sincerely, while the other must pretend to be of the opposite sex. The lying person is then replaced by a computer programmed to "pretend" to be a human being. When the number of errors made in attempts to identify the computer equals the number recorded in identifying the dissembling human interlocutor, then one can say that the computer is "intelligent". It must be said that the imitation game has been attempted several times, with results so far rather disappointing. In the classification of Russell and Norvig, Turing's thought experiment provides a good example of a "machine that behaves in a human manner"; it represents a "behaviorist" position, which other AI researchers, however, judge insufficient.
In fact, the first work now ascribed to artificial intelligence dates back to 1943, when Warren McCulloch and Walter Pitts designed a neural network. But the most important developments - both on the theoretical level and in the preparation of computer programs that count as prototypes for later work - belong to the decade following Turing's provocation. In particular, in 1956 another pioneer, John McCarthy, gathered at Dartmouth the leading scholars of the time (including Marvin Minsky, Allen Newell, Claude Shannon and Herbert Simon) at a seminar in which, among other things, he proposed the name "artificial intelligence". The year 1958 was particularly fruitful: McCarthy produced Lisp, a high-level programming language devoted especially to AI (followed later, in 1973, by Prolog), and work began on programs for the general solution of problems. In the same period began the study of what are now called "genetic algorithms", that is, programs capable of modifying themselves automatically so as to improve their performance. In the following decades research continued with ups and downs. The sixties were characterized by results that may not seem exceptional if assessed by today's yardstick, but were then exciting, both because of the limitations of the computational tools with which they were obtained and because they systematically disproved the skeptics who claimed that "a machine will never do such a thing". Those years also saw interesting developments, particularly theoretical ones, in research on neural networks. But the first difficulties were also encountered, and researchers were forced to become aware of limits that still appear insurmountable. One major difficulty is the 'combinatorial explosion', that is, the explosive increase in computing time as the number of variables of the problem grows. Another limit, to which we shall return, is that the computer can process only "syntactic" links and not the "semantic" content, i.e. the meaning, of the variables on which it operates.
The seventies saw the birth of 'expert systems' and their first applications to medical diagnosis, as well as early attempts at "understanding" natural language (in the restrictive sense of preordained answers to a limited number of questions).
From 1980 onward, AI emerged from the scientific laboratories and found significant practical applications, several of which will be described in the next section. At the same time, and as a consequence, manufacturing companies, especially American and Japanese ones, began to market programs based on expert systems, pattern recognition and the like, and built entire microcircuits and computers specialized for AI applications. Neural networks, after nearly two decades of almost complete neglect, received renewed attention from 1985, particularly thanks to the definition of new, more powerful optimization algorithms. Over the last decade of the century, the improvement of neural networks was accompanied by the development of new methods of calculation, mainly derived from the theory of probability and decisions; on the applications side, effective methods were developed for the construction of expert systems and for speech and pattern recognition, especially those intended for robotics and computer vision.
II. What AI does
From an engineering, pragmatically reductive perspective, AI is valued simply for its capacity and performance, regardless of the methods and mechanisms used to achieve them. The viewpoint is therefore "emulationist" and not "simulationist": the underlying idea is to build machines that do not necessarily "simulate" the behavior of the human brain by reproducing it, but are able, more simply, to "emulate" it selectively in the final result of certain operations. This is the argument underlying Turing's imitation game, which we have already described: it proposes to "measure" the intelligence of a machine only by its ability to sustain a conversation indistinguishable from that of a human being. This approach has certainly been dominant in the history of AI and has led to the construction of programs that reach a high level of competence in representing knowledge and in solving problems considered complex. These programs are designed as "manipulators" of uninterpreted formal symbols, so the machine can be seen simply as a syntactic transformer with no "semantic" knowledge of the problem (see below, III.4).
1. Basic architecture of artificial intelligence systems. The application software underlying an AI system is not an immutable set of instructions representing the solution of a problem, but an "environment" in which a body of knowledge can be represented, used and modified. The system examines a large number of possibilities and dynamically constructs a solution. Any such system must keep two kinds of components separate and modular: a knowledge base and an inference engine.
The "knowledge base" is the module that contains the knowledge about the "domain", that is, about the problem at hand. The knowledge base can be further divided into two sections: a) the block of assertions or facts (temporary or short-term memory); b) the block of relations and rules (long-term memory). The temporary memory contains the "declarative knowledge" about a particular problem: a representation made up of facts that are introduced as true at the beginning of the consultation or proved true by the system during the session. The long-term memory, on the other hand, holds the rules: a set of recommendations, advice and strategic directions aimed at building up the body of knowledge available for solving the problem. Rules are expressed as statements made of two parts. The first is called the "antecedent" and expresses a situation or premise, while the second is called the "consequent", and it triggers the action to be applied when the premise is recognized as true. The general syntax is therefore: "if antecedent, then consequent".
The "inference engine" is the module that uses the knowledge base to reach a solution to the proposed problem and to provide explanations. The inference engine is entrusted with choosing which piece of knowledge should be used, moment by moment, in the solution process. In this way the various cells of knowledge, which taken individually would appear of limited use, are combined in order to draw new conclusions and assert new facts. Each rule, which represents the domain knowledge in general terms, must, in order to apply to a particular instance, be compared with the set of facts representing the current knowledge about the case at hand, and thereby satisfied. This is done through the matching operation, in which the system tries to match the antecedent of the rule against the various facts in the temporary memory. If the matching succeeds, the actions listed in the consequent are executed. If the consequent contains a conclusion instead, the satisfaction of the antecedent allows that assertion to be added as a new fact to the short-term memory. The matching process generates "inference chains", which indicate the way in which the system uses the rules to make new inferences; they also make it possible to give the user an explanation of how certain conclusions were reached. To generate inference chains from a set of rules, two methods are basically used: a) "forward chaining". This technique attempts to reach a conclusion starting from the facts initially present in the temporary memory by applying the production rules. The inference is said to be antecedent-driven, since the search for rules to apply is based on matching the facts in memory against the conditions logically combined in the antecedent of each rule; b) "backward chaining". In this case one proceeds by reducing the main objective (goal) to subproblems. Once the thesis to be proved has been identified, the production rules are applied backward, trying to establish consistency with the initial data.
The interpreter searches for a rule whose consequent contains the assertion whose truth must be proved. From there it moves on to proving the sub-goals that constitute the antecedent of the rule found. The inference is therefore said to be consequent-driven.
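The forward-chaining procedure just described can be sketched in a few lines of code. The rule format and the small diagnostic domain below are invented for illustration; a real inference engine would add unification, conflict resolution and explanation facilities.

```python
# A minimal forward-chaining sketch: facts are strings, and each rule pairs a
# set of antecedent facts with a single consequent fact ("if antecedent, then
# consequent"), as in the rule syntax described above.
def forward_chain(facts, rules):
    """Repeatedly fire every rule whose antecedents are all satisfied,
    adding its consequent to short-term memory until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if set(antecedents) <= facts and consequent not in facts:
                facts.add(consequent)   # matching succeeded: assert the new fact
                changed = True
    return facts

# Hypothetical toy knowledge base: a tiny diagnostic fragment.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_breath"}, "see_doctor"),
]
derived = forward_chain({"fever", "cough", "short_breath"}, rules)
```

Here the fact "see_doctor" is reached through the inference chain fever + cough, then flu_suspected + short_breath, exactly the kind of antecedent-driven chain the text describes.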
2. Expert systems. Expert systems are the best known example of an application resulting from this approach. An expert system, i.e. a "knowledge-based system", is a tool for solving problems in a bounded domain, but with performance comparable to that of a human expert of the domain. The fundamental task of an expert system is thus to assist the professional activity of its users in situations where one would usually seek the advice of a specialist with human expertise and judgment. AI research has brought to light the problems involved in building such tools, establishing the need to restrict the field of application as much as possible. Thus, compared to a human expert, these applications will undoubtedly prove more limited and superficial, lacking the completeness and general culture of the competent person. Nor can one hope that an expert system will reach conclusions intuitively or by skipping some logical steps, relying on "common sense" or on the mechanism of analogy, which are the prerogative of human beings. Ultimately, an expert system simulates a human expert in a more or less sketchy way, and provides the ability to solve small, temporary or secondary tasks. The first and best known of these systems is Mycin, developed by E.H. Shortliffe from 1972 onward and applied in the medical field.
As regards, more specifically, the types of problems an expert system can be called upon to solve, one can draw up a list of tasks, certainly not exhaustive: a) "diagnosis": identifying, on the basis of the recognition of certain symptoms, the possible causes of a "failure", and suggesting a course of treatment; b) "monitoring": following the temporal development of a process, acquiring and processing data of various kinds, and providing summary information on its state together with estimates of its evolution; c) "planning": given the available resources, identifying their best use in order to achieve a certain goal within a given time, possibly in parallel with the acquisition of new resources; d) "interpretation of information and signals": given a set of input data over a certain interval, making an overall assessment in order to recognize the occurrence of certain predetermined situations.
3. Games. Another field of application where this symbolic, engineering approach has found success is that of games. Artificial intelligence generally considers two-player games with alternating moves, and interprets the unfolding of the game as a "tree" in which the "root" is the starting position and the "leaves" are the final positions (winning or losing). Obviously, because of the complexity of the games considered, it would be unthinkable, even for a very powerful computer, to develop the entire tree in order to decide on the "best" move. Hence the need to apply appropriate heuristics to "prune" some branches of the tree and make the problem tractable. Consider the game of chess, where the size of the problem is enormous: there are already 400 possible positions at the first move, which become more than 144,000 at the second; developing the full game tree, we would have on the order of 35^100 nodes. Nevertheless, by applying symbolic manipulation techniques and using powerful methods to reduce the size of an otherwise intractable search space, systems have been produced that can play chess better than humans, although, of course, using techniques quite different from human ones. It is well known that in May 1997, in New York, a machine (Deep Blue) beat the world champion Kasparov in a six-game match. It is interesting to point out that this machine, designed at the hardware level to develop and examine search spaces in parallel at very high speed (Deep Blue can examine about 10^11 positions in 3 minutes), relies on "brute force" rather than on refined heuristic techniques to reach the best solution quickly.
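As an illustration of how a game tree is searched and pruned, here is a minimal sketch of minimax search with alpha-beta pruning, a standard pruning technique (the tiny toy tree below is invented and stands in for a real game):

```python
# Minimax with alpha-beta pruning over an abstract game tree given as nested
# lists; leaves are numeric static evaluations of final positions.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):        # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # cut-off: prune remaining siblings
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy tree: the root is a maximizing node with two minimizing children.
tree = [[3, 5], [2, 9]]
best = alphabeta(tree, True)   # the maximizer can guarantee a value of 3
```

On this tree the leaf 9 is never evaluated: once the second child yields 2, which is worse for the maximizer than the 3 already secured, the branch is cut. It is exactly this kind of pruning, applied at enormous scale, that makes tree search feasible at all.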
4. Mathematical proofs and logic programming languages. The use of logic and the automation of mathematical proofs is another field of application in which AI has achieved remarkable results. Logic is certainly one of the most ancient and rigorous tools man has used to formalize and explain his reasoning. It is semantically well defined, highly declarative, and has a very general deductive apparatus. This explains why classical logic (especially first-order logic) is much used in AI to represent knowledge about a problem, even though it has obvious limitations (see below, III.3) and does not command unanimous consensus: Minsky argues that the formulas and methods of logical deduction are not the most natural way in which we reason, nor the way in which man organizes his knowledge and displays intelligent behavior. The knowledge base in this case becomes a collection of statements of first-order predicate logic. The inference rules make it possible to deduce new statements ("theorems") not explicitly contained in the initial knowledge base. The sequence of inference rules used in the derivation of a theorem is called a "proof" of the theorem. Obviously, if the process is to be automated, the efficiency of the proof becomes a requirement. Most programs that use logic in AI are based on studies of the automatic proving of theorems of logic, in particular on the resolution method developed by J.A. Robinson in the sixties and on the development of strategies to streamline the proof. Offspring of these studies are logic programming and the Prolog language in particular (from PROgramming in LOGic), which has emerged as one of the most interesting and innovative programming paradigms for the development of "intelligent" applications.
The concept of 'logic programming' was born in the early seventies, thanks especially to researchers at the universities of Edinburgh and Marseilles. To Robert Kowalski, then at the University of Edinburgh, we owe the definition of the theoretical foundations of logic programming and, in particular, the proposal of the procedural interpretation of logic clauses, which reduces the process of proving a theorem to the more traditional process of computation in programming languages. The group of Alain Colmerauer in Marseilles is credited with having first built, in 1972, an "interpreter" for the Prolog language, thus demonstrating the practical feasibility of the concept of logic programming. This is radically different from the programming techniques normally used to write programs in traditional languages. The most common programming languages, from Fortran to Pascal to C, are in fact based on the imperative paradigm, according to which a program consists of a sequence of commands that specify in great detail the operations the computer must perform to solve the given problem. Conversely, in logic programming a problem is described in a much more abstract way, by a set of logical formulas. This way of representing problems yields a declarative reading of knowledge, which describes a problem without specifying in detail how the solution can be obtained. In other words, logic programming shares with automatic theorem proving the use of logic to represent knowledge and the use of deduction to solve problems. However, it emphasizes the fact that logic can be used to express programs and that specialized proof techniques can be used to execute them.
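The procedural interpretation of clauses can be illustrated with a minimal backward-chaining interpreter. For brevity the sketch is propositional (no variables or unification, which a real Prolog system provides); the clauses and facts are invented for illustration.

```python
# Each clause (head, body) is read procedurally, as Prolog reads
# "head :- body": to prove the head, prove every sub-goal in the body.
def prove(goal, clauses, facts):
    """Backward chaining: succeed if the goal is a known fact, or if some
    clause with this head has a body whose sub-goals can all be proved."""
    if goal in facts:
        return True
    for head, body in clauses:
        if head == goal and all(prove(g, clauses, facts) for g in body):
            return True
    return False

# Hypothetical toy program; in Prolog syntax it would read:
#   wet_ground :- rain.
#   slippery   :- wet_ground.
clauses = [("wet_ground", ["rain"]), ("slippery", ["wet_ground"])]
facts = {"rain"}
# The query ?- slippery. succeeds by reduction to its sub-goals.
```

This is exactly Kowalski's reduction: proving the theorem "slippery" becomes an ordinary computation that descends the chain of sub-goals down to the known facts.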
5. Learning. Beyond the due acknowledgment of these systems, considered successful from a technical point of view though with obvious limitations when evaluated from a less reductive perspective, it is almost universally acknowledged that machines cannot be said to be intelligent until they are able to increase their knowledge and improve their skills. Simon writes (1981): "Learning denotes changes in the system that are adaptive, in the sense that they enable the system to do the same task more efficiently and effectively the next time." One way to address this problem, even if only partially, is to equip symbolic machines with inductive as well as deductive reasoning abilities. Inductive reasoning moves from singular statements about particular facts or phenomena ("examples") to universal statements expressing hypotheses or theories able to explain the given facts and to predict new ones. However, while deductive inference preserves "truth" (in the sense of logical correctness), inductive inference cannot guarantee it, so these systems may tend to over-generalize and produce errors. This is still a symbolic approach, in that the result of the procedure is a new theory, new rules and, in general, a new or updated knowledge base. One of the best known examples of learning programs is ID3, developed by J. Ross Quinlan between 1979 and 1983, from which commercial products for automatic classification were born. ID3 and its "descendants" have explored thousands of databases, producing identification rules in different areas (e.g. the diagnosis of diseases). Currently, learning programs are widely used in practice to meet the need to exploit the wealth of information contained in the large collections of data available over networks or in corporate databases, extracting regularities among the data, hidden information and knowledge (data mining).
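The flavor of ID3-style induction can be conveyed by a much-simplified sketch (not Quinlan's actual program): at each node the attribute whose split yields the highest information gain is chosen, and the procedure recurses. The toy dataset and attribute layout below are invented.

```python
import math
from collections import Counter

def entropy(rows):
    """Shannon entropy of the class labels (each row ends with its label)."""
    counts = Counter(label for *_, label in rows)
    total = len(rows)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def best_attribute(rows, attributes):
    """Pick the attribute index with the highest information gain."""
    def gain(a):
        values = {r[a] for r in rows}
        remainder = sum(
            entropy([r for r in rows if r[a] == v]) *
            len([r for r in rows if r[a] == v]) / len(rows)
            for v in values)
        return entropy(rows) - remainder
    return max(attributes, key=gain)

def id3(rows, attributes):
    labels = {r[-1] for r in rows}
    if len(labels) == 1:                 # pure node: emit a leaf
        return labels.pop()
    if not attributes:                   # nothing left to split on: majority
        return Counter(r[-1] for r in rows).most_common(1)[0][0]
    a = best_attribute(rows, attributes)
    rest = [x for x in attributes if x != a]
    return (a, {v: id3([r for r in rows if r[a] == v], rest)
                for v in {r[a] for r in rows}})

# Toy examples: (outlook, windy, play?) -- invented data in which the
# attribute at index 1 (windy) alone separates the two classes.
rows = [("sunny", "no", "yes"), ("sunny", "yes", "no"),
        ("rain", "no", "yes"), ("rain", "yes", "no")]
tree = id3(rows, [0, 1])
```

The induced tree is itself a symbolic object, a set of "if attribute = value then class" rules, which is precisely what makes this kind of learning compatible with the knowledge-based architecture described earlier.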
6. Neural networks. Neural networks represent an approach significantly different from the symbolic one analyzed so far, and belong to the current of AI that we have called "structural" or "connectionist" (see above, I.1). The basic idea is to reproduce intelligence and, in particular, learning in the computer by simulating the neural structure of the animal brain. Computers can easily store vast amounts of information, work in nanoseconds and carry out enormous masses of arithmetic without error, while human beings cannot come close to such performance. There is no doubt, however, that humans usually carry out "simple" tasks such as walking, talking, interpreting a visual scene, understanding a sentence, reasoning about common-sense events and handling uncertain situations far more brilliantly and efficiently than the refined and expensive AI programs produced by the symbolic, functional approach.
The idea of building an intelligent machine out of artificial neurons goes back to the birth of AI: some results were already obtained in 1943, when McCulloch and Pitts produced the first neural model, and were then investigated by other researchers. In 1962 Rosenblatt proposed a new model of neuron, the 'perceptron', able to learn from examples. A perceptron describes the operation of a neuron as performing a weighted sum of its inputs and emitting an output of "1" if the sum exceeds an adjustable threshold value, and "0" otherwise. Learning, thus understood, is a process of changing the values of the weights. The great enthusiasm for this approach suffered a sharp setback a few years later, when Minsky and Papert demonstrated severe limits of perceptron learning. More recently, new neural network architectures have been proposed, no longer subject to the limitations of perceptron theory, called "connectionist" and using powerful learning algorithms (such as back-propagation). This has reawakened strong interest in neural networks and has enabled the development of successful applications. The connectionist architecture, an alternative to Von Neumann's, is characterized by: a) a large number of very simple processing elements, similar to neurons; b) a large number of weighted connections (synapses) among the elements; c) highly parallel, distributed control. The weights, in fact, encode the knowledge of a network, and the changes that occur during learning can be seen as dynamic variations of the connection weights. Different modes of learning can be distinguished according to how the network is "trained".
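The perceptron and its learning rule can be sketched directly. The task below, learning the logical AND function, is a classic linearly separable toy problem on which the perceptron rule is guaranteed to converge; the learning rate and epoch count are illustrative choices.

```python
# A minimal perceptron: a weighted sum of inputs compared with a threshold
# (folded into the bias), trained by the classic perceptron rule.
def predict(weights, bias, x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0    # emit "1" if the weighted sum exceeds the threshold

def train(examples, epochs=20, lr=1.0):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            error = target - predict(weights, bias, x)
            # learning is a change of the weight values, driven by the error
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# The logical AND function: output 1 only when both inputs are 1.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(examples)
```

Minsky and Papert's point can be seen in the same code: replace AND with XOR, which is not linearly separable, and no setting of two weights and a bias will ever classify all four examples correctly.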
In particular, learning paradigms can be divided into three basic classes: a) supervised learning by examples (Supervised Learning): a teacher provides the answers that the network's neurons should produce after the training phase; b) unsupervised learning (Unsupervised Learning): the neurons specialize through an internal competition so as to discriminate among the stimuli presented as input; c) reinforcement learning (Reinforcement Learning): the network is given only qualitative information about the goodness of its response; a critic evaluates the network's response and sends the neurons a positive reinforcement signal if the evaluation is good, a negative one otherwise.
In connectionist systems it seems easier to build systems that learn, but such learning, hidden in the changes of real-valued weights, is wired into the network and cannot be expressed in symbolic form. Neural networks prove most useful in conceptually "low-level" but technically difficult classification and perception tasks, such as speech recognition, process control and image recognition, while conceptually complex tasks such as design, diagnosis and planning remain the province of symbolic AI. While neural network models are based on simulation of the human brain, many other AI techniques are inspired by the evolution of the animal world and of social groupings. Genetic algorithms, for example, are algorithms based on evolution, in which learning takes place through a selective process applied to a large population of random programs.
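A minimal genetic-algorithm sketch can make the selective process concrete. Here the "programs" are simply bit strings, and the target, population size and rates are all invented for illustration; real genetic algorithms evolve far richer structures.

```python
import random

# Invented task: evolve an 8-bit string toward the all-ones target.
TARGET = [1, 1, 1, 1, 1, 1, 1, 1]

def fitness(ind):
    return sum(1 for a, b in zip(ind, TARGET) if a == b)

def evolve(pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)           # fixed seed keeps the sketch reproducible
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(TARGET))          # point mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

No individual is ever told how to improve: better candidates simply survive and recombine, which is exactly the selective process, rather than explicit instruction, that distinguishes this family of techniques.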
7. Limits and new goals. Many criticisms have been leveled at existing AI systems: they are decidedly poor and disappointing when compared with the early expectations of artificial intelligence. There have been, in fact, no great breakthroughs, and the most challenging problems, such as learning and the representation of common sense, even if addressed and partly solved, are far from a complete solution. As for the functional approach to AI, despite the many points in favor of the architectural model of knowledge-based systems, such as the modularity of the architecture and the possibility of incremental growth of the knowledge base, few truly expert systems are in commercial operation, and they constitute a very small percentage compared with conventional programs. A heavy "bottleneck" for their dissemination is undoubtedly the acquisition of knowledge: it is particularly difficult, in fact, to extract knowledge completely from the human expert and formalize it in the knowledge base. Moreover, these systems have high maintenance and updating costs. On the other hand, the alternative to the functional approach represented by connectionism and neural networks also has successful applications, but these are often limited to tasks considered lower-level, such as perception and recognition.
As for future prospects, the technological revolution now driving the information society makes it possible to access an enormous amount of material, which however must be managed and interpreted correctly. It ranges from large corporate archives to online information updated in "real time", from the possibility of capturing knowledge in its most practical forms, such as the experience gained in the field by a specialist, to market surveys aimed at ever more precise targets. Every development effort is thus faced with a mass of unstructured, heterogeneous and redundant data. It therefore seems justified not only to strengthen, but to revolutionize, the means of extraction and analysis, in order to exploit this great wealth of knowledge to its best potential. To this end it is essential to use the methods for the extraction of knowledge mentioned above, employing symbolic learning techniques and neural networks.
Currently there is also a strong push toward the integration of AI systems, in particular expert systems, with the rest of the world of information engineering, where technologies such as object-oriented programming and object-oriented databases (Object Oriented Programming and Object Oriented Data Bases) and graphical user interfaces (GUIs, Graphic User Interfaces) are in common use, some of them originally born within AI. An important phenomenon is the tendency of the expert system to disappear as a stand-alone application, in favor of an integrated vision: the trend is to create modules that perform intelligent tasks, tightly integrated with general software applications and information systems. The idea is to build intelligent agents endowed with deductive and inductive reasoning abilities, responsible for particular tasks and able to coordinate with other distributed agents in order to achieve a common goal. The functions carried out by these intelligent agents must be integrated with those carried out by other modules, possibly pre-existing ones, which interface with the operator, with database management systems (DBMS, Data Base Management System), with data acquisition and with graphics systems. In these fields AI is already achieving, and may yet achieve, new prestigious results.
III. What artificial intelligence cannot do
The debate on AI was, in the last two decades of the twentieth century, and still is, among the most passionate and exciting in philosophical research. This may be considered natural, because AI reopens, with great provocative force, the question of what the mind is, what intelligence is, what conscious intelligence is. The discussion covers two main themes: what AI can do, and what it is lawful to do with AI.
1. Weak AI and strong AI. It is appropriate, from the outset, to distinguish between "weak AI" and "strong AI": the meaning of these terms has already been mentioned at the beginning (see above, I.1), but it is now worth recalling it more precisely. Weak AI wants to build machines that behave "as if" they were intelligent, that is, machines capable of solving "all" the problems that human intelligence can solve. Strong AI wants more: it claims that a machine that acts in an intelligent way must have a "conscious intelligence", a conscious mind indistinguishable from the human mind. Note that weak AI is concerned with the concrete constructability of "thinking" machines, while strong AI wants to give an abstract answer to the problem of what "thinking" is. Therefore, as noted by Russell and Norvig, one can believe in strong AI and be skeptical about weak AI (see Artificial Intelligence, 1998, p. 884): that is, one can think that intelligent machines, if they were built, would have a conscious intelligence, but believe that they cannot be built.
Some of the objections concern weak AI, but the most radical ones are brought against strong AI. The debate starts from claims such as: "The brain is a machine and therefore, in principle, one can build a machine that does everything the brain does." Applied to the properties of the mind, this statement takes on a clearly reductionist flavour, as it implicitly assumes that "mind" is the same as "brain". This approach is questionable and extremely materialistic; but since the whole debate takes place within it, its discussion can be postponed (see below, IV.1).
2. What weak AI cannot do. Let us start from the criticisms of weak AI, noting as we do so that many of them apply, by natural extension, to strong AI as well. The first group consists of apodictic statements such as: "the machine will never be able to do such a thing." Objections of this sort have always accompanied technical innovations, and are mostly a manifestation of psychological denial, of panic in the face of an incomprehensible "new" that can only be viewed with extreme suspicion. In general they have not prompted debate: they were almost always contradicted by the facts. More precisely, the technicians took up the challenge and used it to build machines that did exactly what had been declared impossible. A striking example is the game of chess, already discussed (see above, II.3), because the belief that the machine could never beat a grand master lasted longer than perhaps any other, and challenged the engineers most severely. But it is instructive to see "how" the computer beats the master: the master "sees" the right move on the chessboard, he "senses" it in a manner apparently similar to the intuition of the artist, with a mental process which we call "genius" but of which we know nothing. To all this the machine opposes the "brute force" of a huge number of fast circuits, which allow it to make a great number of attempts in search of the move that ensures the greatest chance of victory. One begins to glimpse here a fundamental difference between man and machine, one not amenable to reductionist analysis.
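The "brute force" search described here can be illustrated with a minimal sketch of the minimax algorithm, applied for brevity not to chess but to a simple misère game of Nim: each player removes 1 or 2 sticks, and whoever takes the last stick loses. The machine simply tries every line of play to the end.

```python
def minimax(sticks, my_turn):
    """Exhaustively evaluate a position: +1 if the machine can force a win,
    -1 if it must lose (the player who takes the last stick loses)."""
    if sticks == 0:
        # the player who just moved took the last stick and lost,
        # so the player now to move is the winner
        return 1 if my_turn else -1
    outcomes = [minimax(sticks - take, not my_turn)
                for take in (1, 2) if take <= sticks]
    # the machine maximizes its outcome; the opponent minimizes it
    return max(outcomes) if my_turn else min(outcomes)

def best_move(sticks):
    """Choose the removal (1 or 2) that maximizes the machine's outcome."""
    return max((take for take in (1, 2) if take <= sticks),
               key=lambda take: minimax(sticks - take, False))
```

With 5 sticks the machine removes 1, leaving the opponent in a lost position. No "intuition" is involved: the whole game tree is examined, which is feasible here but, for chess, runs straight into the combinatorial explosion discussed below.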
But Turing, in the very article in which he proposes the "imitation game" (an article that anticipates many of the issues that would occupy philosophers in subsequent decades), also presents and discusses an imaginative list of operations that, according to his opponents, "the machine will never be able to do." Some of the items on the list, such as "learning from experience", have since been achieved, at least at some level: Turing's foresight has already prevailed over the unwary opponents of AI. Others simply do not concern the machine as a subject of AI. For example, "being kind" or "making someone fall in love with it": science-fiction narrative abounds in beautiful humanoid robots and humans who fall in love with them, but this would concern, if anything, robotics, certainly not AI. Or "enjoying strawberries and cream": one can imagine (and something similar has been built) a robot equipped with sensors of taste and smell, and a program that discriminates pleasing tastes. Again the issue relates primarily to robotics, but with a further problem: "enjoying" already implies a typical power of the human mind and therefore calls into question the basic problem of strong AI. The same applies to the most disturbing operation: "being the subject of its own thoughts", which involves self-consciousness.
A variant of this objection argues that "the machine can only do what we know how to order it to do", that is, that it is devoid of free will and that its choices are predetermined. One can answer in two opposite ways. On the one hand, genetic algorithms, and many of the applications described (see above, II), show that the machine can greatly expand and change the range of its possibilities. On the other hand, however, it could be argued that these changes are, in some way, planned, and thus potentially included in the original programming. This highlights the essential and indissoluble dependence of the machine on man, but it also introduces a substantial element of novelty, namely the transgression of the fundamental methodological paradigm of engineering: planning, which requires that every technical object be designed in its entirety before its construction is undertaken. Learning systems, instead, and neural networks to the greatest degree, are initially objects that are to some extent undetermined: their formal technical structure is defined, but the "weights" of the connections that characterize the particular neural network with respect to the task to be performed are specified during learning, in a manner that depends not on the intentionality of the designer but on the information provided; in the end (granted that there is an end, that is, that the learning process does not last for the lifetime of the system), they take on numerical values that are unpredictable and, moreover, of no interest either to the engineer who designed the network or to the one who uses it. In this technique AI seems to anticipate a trend that has since spread to many fields of engineering, especially information engineering: the generation and use of "unplanned" systems. Consider, for example, the case of the Internet.
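The point that the connection weights are fixed by the data rather than by the designer can be sketched with the simplest neural model, a perceptron learning the logical AND function: the designer specifies only the structure and the learning rule, and the final weights emerge from the examples.

```python
# training examples for logical AND: (bias input, x1, x2) -> target
examples = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]

weights = [0.0, 0.0, 0.0]   # the designer does not choose these values

def predict(x):
    """Fire (output 1) if the weighted sum of the inputs is positive."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# perceptron learning rule: adjust the weights only where the net errs
for _ in range(20):                       # a few passes over the data
    for x, target in examples:
        error = target - predict(x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]

# after training the network computes AND, whatever weights emerged
outputs = [predict(x) for x, _ in examples]
```

The final numerical values of `weights` are exactly what the text describes: a by-product of the training data, of no direct interest to the designer, who cares only that the trained behaviour is correct.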
Similar to the previous ones are the objections of a "quantitative" kind, which may take the form: "you will never succeed in building a machine powerful enough to solve this problem." For most of these objections the facts have taken charge of the denial, but the issue cannot be said to be entirely outdated. When they were raised with particular force, in the sixties, the solution of mathematical problems was running into the "combinatorial explosion" which has already been discussed. In applied mathematics one speaks of "intractable" problems when the computation time grows at least exponentially with the number of variables. It is therefore a "practical" impossibility: the same problem can be solved if the variables are few, but becomes insoluble (in a reasonable time) when their number increases. In this regard, Turing argued that often "quantity" turns into "quality", in the sense that above a certain size the behaviour of a system may change substantially, suddenly making possible what before was not. The example given by Turing, nuclear reactors which, above a certain size, pass from "sub-critical" to "critical" and produce energy, is only a special case of a well-known property of most dynamical systems: that of moving from stability to instability upon changes in the value of some of their parameters. So there could be a "structure" that, when it reaches a certain size (for example, a network with enough neurons), becomes able to control the combinatorial explosion.
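The "combinatorial explosion" can be made concrete with a small count (the figures below are exact counts, not measured timings): exhaustive search over n binary variables must examine 2^n assignments, so each added variable doubles the work.

```python
def assignments(n):
    """Number of complete assignments an exhaustive search must
    examine for n binary (yes/no) variables."""
    return 2 ** n

# each extra variable doubles the search space
for n in (10, 20, 30, 40):
    print(n, assignments(n))

# even at a billion assignments per second, 60 variables
# already require decades of computation
seconds = assignments(60) / 1e9
years = seconds / (3600 * 24 * 365)
```

This is the "practical" impossibility described in the text: the procedure is perfectly well defined at every size, but the time it demands ceases to be reasonable long before the problem becomes large by human standards.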
Another problem of "size", widely discussed by Hubert Dreyfus (1988), concerns the enormous amount of information (the "knowledge base") required, for example, to "contextualize" the spoken word and thus eliminate its inevitable ambiguity. This knowledge base is nothing more than what man accumulates through learning. So the problem is twofold: creating a "memory" of appropriate size, and entering the information into it. And the second problem consists in turn of several subproblems: a) how to construct a "background knowledge" from which learning can set out; b) how to organize the learning process (which will be, in general, a process of the "reinforcement" type) so as to optimize its performance; c) how to carry out the inductive processes that generate knowledge from experience; d) how to control the acquisition of sensory data. Dreyfus had posed these issues in a negative way, with a strong tint of pessimism about the possibility of solving them, but his objections turned into a powerful stimulus towards their resolution.
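Learning of the "reinforcement" type mentioned in point b) can be sketched minimally (the toy environment below is invented for illustration): an agent on a corridor of five cells learns, from reward alone and with no prior knowledge, that moving right leads to the goal.

```python
# tabular Q-learning on a 5-cell corridor: the goal (reward 1) is cell 4.
# Actions: 0 = left, 1 = right. The agent learns from reward alone.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # estimated value of each action
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # update the estimate towards reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# after training, "right" has the higher estimated value in every cell
policy = [0 if q[0] > q[1] else 1 for q in Q[:GOAL]]
```

Nothing in the program encodes the layout of the corridor; the policy is extracted from experience, which is precisely the sense of "generating knowledge from experience" in point c).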
3. The limits of mathematics and logic. The implementation of AI, even in its weak form, encounters some serious difficulties of a theoretical nature, on which a considerable part of the objections has focused. There is, for example, the "halting problem": will the execution of a program come to an end, or could it go on, in theory, forever? This problem has no general solution: Turing showed that for every algorithm charged with determining whether a given program terminates, one can find a program for which that algorithm is unable to answer. A greater difficulty comes from Gödel's "incompleteness theorem", on which a debate, at times fierce, has concentrated. The incompleteness theorem states that in any formal logical system (if sufficiently powerful) it is possible to formulate true propositions whose truth, however, the tools of the system itself do not allow one to prove. John Lucas observes, in a famous article, Minds, Machines and Gödel (1961): "I think that Gödel's theorem proves that mechanism is false, that is, that minds cannot be explained as machines."
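How hard it can be to decide termination in advance is shown by a classic example (not taken from this article, but standard in the literature): the loop below has halted on every input ever tried, yet no proof that it halts for every input is known (the Collatz conjecture).

```python
def collatz_steps(n):
    """Repeat: halve n if it is even, otherwise replace n by 3n + 1.
    Returns the number of steps taken before reaching 1.
    Nobody has proved that this loop terminates for every n > 0."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# running the program shows that it halts on these inputs,
# but running it proves nothing about all the others
trajectory = [collatz_steps(n) for n in (6, 27)]
```

Turing's result is stronger still: no single algorithm can answer the termination question correctly for every possible program, so the difficulty is not an accident of this example but a limit of computation itself.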
So there is something that machines cannot do: determine the truth of undecidable propositions. But man can, because he knows how to "step outside the system": for example, by applying Gödel's theorem to the system itself. Douglas Hofstadter, who quotes this article at length and with admiration, replies somewhat sarcastically, showing that man "cannot" step outside the system, because this would lead to an infinite regress (see Hofstadter, 1984, pp. 508-510). This, from a reductionist point of view, is correct; it contradicts, however, the common experience which shows that man can in fact exceed the limits of pure logic and is able to look at logical problems "from outside". The matter was taken up by Roger Penrose (1992), who suggests a way out. Penrose, a renowned physicist, in support of Lucas's view observes, first, that if the mind is able to understand non-computational mathematics, it cannot be merely a formal logical system. He then adds an argument of his own. He starts from the observation that there is a radical dichotomy between the mathematical description of quantum mechanics and that of classical physics, which remains valid at the macroscopic level, and from the fact that the laws of physics are reversible with respect to time, that is, they do not account for the irreversibility of time shown by the second law of thermodynamics and recognizable in our consciousness. So Penrose suggests that we may discover a "new" physics, fuller and deeper, "that makes possible the fusion between the classical and the quantum world, that is 'asymmetric' with respect to time and that makes it physically possible to understand the nature of the mind" (G. Piccinini, 1994, p. 141). But this would also imply a new type of mathematics for physics, which should "contain essentially non-computable elements" (ibid.).
This non-computational mathematics would exceed the limits of AI, which relies instead on computational mathematics, and could include operations that Gödel's theorem forbids to AI but not to the human mind.
4. Strong AI: syntactic processing and semantic content. John Searle, in 1980, raised against strong AI a completely different objection, to which he gave the form of an entertaining fable: the "Chinese room thought experiment". Suppose, he says, that I am in a room full of Chinese ideograms and that, not knowing the language, I am given a handbook of rules by which to associate ideograms with other ideograms. The rules identify the ideograms unambiguously on the basis of their shape, and do not require that I understand them. Now suppose that people who understand Chinese introduce groups of ideograms into the room, and that, in response, I manipulate these ideograms according to the rules of the handbook and return to them other groups of ideograms. If the rules of the handbook specify accurately enough which groups of ideograms may be associated with the ideograms introduced, so that my "answers" make sense and are consistent with the questions, whoever is outside the room may mistakenly conclude that whoever is inside knows Chinese: namely, that he has carried out the "syntactic" processing of the message on the basis of an understanding of its "semantics", whereas the semantics has in fact remained outside the room (see Searle, 1990 and 1992). This is what happens in every computer (and not only in AI), which performs syntactic operations on the messages introduced, entirely independent of their semantic content. The semantics stops, so to speak, at the entrance of the message into the computer, and rejoins the message with whoever receives it at the exit. Note that with this Searle addresses one of the problems which, as already mentioned (see above, I.2), twenty years earlier had considerably dampened the initial enthusiasm for nascent AI.
Perhaps also because of its force, this argument has received a great number of rebuttals, at times picturesque. Paul and Patricia Churchland (husband and wife, colleagues of Searle in the University of California, Searle at Berkeley and the Churchlands at San Diego), wishing to highlight the fact that, in their view, the Chinese room experiment is nothing but a captious syllogism, opposed to it the "luminous room experiment" (see Churchland and Churchland, 1990). But the end point of these rebuttals consists in denying that there is an essential, qualitative distinction between syntax and semantics: since every mental process takes place in the brain, both would be closely correlated aspects of cerebral activity; and since semantics resides in the brain, its apparent difference from syntactic processing would be correlated with the extreme complexity of the brain's structure. Therefore even semantics could be transferred to machines, provided they were equipped with sufficient circuital and algorithmic complexity. This position, within a reductionist perspective, seems unexceptionable. But it is opposed, with further arguments, by Hubert Dreyfus (1988), who denies that computers possess not only semantic competence but even the syntactic abilities of the higher level, those that serve to "thematize, in Heidegger's sense, one's own presence in the world, to put it in question to the point of being able to go beyond one's initial context and to place oneself in other realities that may contain the first, all the while retaining consciousness. From this point of view, the limit of artificial intelligence is not the rather mundane one [...] of failing to take note that, in order to make computers that really simulate intelligence and consciousness, one must use non-computable physical structures, of the quantum-relativistic type.
It consists rather in the far more fundamental fact that real, not artificial, intelligence and consciousness have the ability to connect logically, syntactically and semantically different levels, and to put them constantly in question, as no physical support of a thinkable computer (whether physical-chemical or biological, in any case artificial) is conceivably able to do" (Rossi, 1998, pp. 90-91).
IV. Artificial intelligence and consciousness
With the above considerations we arrive at a central point, namely the "problem of consciousness". Searle himself proposes the thought experiment of the "brain prosthesis": suppose that, by a refined microsurgical intervention, we could replace one by one all the neurons of a brain with as many electronic microcircuits that work exactly like the neurons and reproduce all their synaptic connections. What would happen, one asks, to the man's consciousness? For Searle it would vanish; for Hans Moravec, instead, who took up the question eight years later and re-examined it from a "functionalist" point of view, it would remain unchanged. But the point is, above all, to define what consciousness is. And this, in fact, is not a problem solvable with the scientific method.
1. Critique of reductionism. We have spoken of a "reductionist perspective" (see above, III.1), meaning by these words the identification of the mind with its material "support", the brain, seen as a "machine" fully reproducible with artificial devices. Thus, if there is a difference between mind and machine, it is due either to a temporary shortcoming of the machines, to be remedied in the future, or to a limit that the mind does not yet know it has. More generally, reductionism means here the idea that the human mind can be simulated, at least in principle, by artificial systems capable of reproducing its performance so perfectly as to make the one indistinguishable from the other (see Rossi, 1998, pp. 43-44). And, for some authors, the simulation goes so far that the artificial system possesses properly human attributes such as consciousness and intentionality. Almost none of the authors cited here makes clear what this position of principle amounts to, what diversifications it admits internally and for what reasons: it is taken as "natural", as if it were the only possible one. Jerry A. Fodor prefers to call this position "materialism", and contrasts it with Cartesian "dualism", rejected for "its inability to give a proper account of mental causation. If the mind is something non-physical, it has no place in physical space. How is it possible, then, that a mental cause gives rise to a behavioural effect which instead has a position in space? To put it differently, how can the non-physical cause the physical without violating the laws of conservation of mass, energy and momentum?" (1981, p. 100). Within materialism one can then distinguish "behaviourist" positions, derived from psychology, and positions which instead favour the neurophysiological aspects.
And then, for Fodor, distinct from both materialism and dualism, there are the "functionalist" positions, which ignore the structure of the brain or of the simulating systems and focus on "the possibility that systems as diverse as human beings, calculating machines and disembodied spirits may all have mental states" (ibid.). But Fodor does not consider the possibility that their "mental states", and the intellectual faculties that produce them, are essentially different.
The reader may try to recognize these different positions in the exposition above. The outcome, as can be seen, is fully consistent with the "metaphysical" attitude that dominates post-Galilean and post-Cartesian scientific research. But here, where this borderland between body and mind, between the physical and the metaphysical, is at stake, that attitude reaches contradictory and, to some extent, paradoxical effects. Hofstadter denies, with a daring and fascinating series of arguments, what everyone can experience directly: that the human mind does not stop before Gödelian undecidability. The replies to the paradox of the Chinese room claim that there is no distinction between syntax and semantics. And on a practical level, the "aesthetic" ability of the chess master is downgraded to the laborious search for the right path in an incredibly intricate tree of decisions. Typically, these positions favour the rational-deductive activity of the mind over its other faculties. In particular, they ignore "intuitive" intelligence, which should in fact raise some suspicion precisely in the case of the chess player. There have been, admittedly, a few generous attempts to overcome the barrier of ratio (in the etymological sense of calculation) and the resulting aporias of the incompleteness theorem. Such attempts may be observed especially in Searle and in Penrose, with their hypothesis of a "non-algorithmic rationality"; but they run out quickly, perhaps through insufficient openness to the metaphysical perspective, but certainly also because of the angry reaction of their opponents, which forces them into a tiring defence of rearguard positions.
Let us quote here a passage from the article by Searle already mentioned, illuminating because it reveals the true knot of the matter, namely the different nature of brain and machine, of the "natural" and the "artificial": "The computer simulations of brain processes provide models of the formal aspects of these processes, but the simulation should not be confused with reproduction. The computational model of mental processes is no more real than the computational model of any other natural phenomenon. One can imagine a computer simulation of the action of peptides on the hypothalamus that is accurate down to the last synapse. But one can equally imagine a computer simulation of the oxidation of hydrocarbons in a car engine, or of the digestive processes in a stomach dealing with a pizza. In the case of the brain the simulation is no more real than in the case of the car or the stomach. Barring a miracle, we could not run our car with a computer simulation of the oxidation of gasoline, nor could we digest a pizza by running the program that simulates digestion. It seems equally obvious that the simulation of a cognitive process does not produce the same effects as the neurobiology of cognition" (Searle, 1990, p. 19). Beyond all polemic, AI is only a "simulation model" of natural intelligence (and, for now, only of some of its aspects), useful for practical purposes as all simulation models are, but nothing more; and if it were ever to give some sign of something like consciousness or intentionality, we should say that it is only a simulation of consciousness and intentionality. Thus AI takes on the role of a "sign of contradiction" for scientific research. It reveals, on the one hand, the "meta-scientific" character of the choice that Fodor graphically calls "materialist", and the futility, reported above, of trying to justify it while remaining within the terms of the positive sciences.
On the other hand, it shows the limits of that choice, potentially a source of insoluble aporias and of results in contrast with experience: limits which other positions, not programmatically closed to the metaphysical perspective, might perhaps overcome.
2. What it is lawful to do with AI. The moral problem of the extent to which, and the ways in which, it is permitted to make use of artificial intelligence techniques is merely a particular case of the general problem of the proper use of technological tools. But here it takes on a special connotation, since it appears to entrust to the machine choices that belong to man. Consider, for example, the bioethical implications of therapies "decided" automatically by an expert system. In other words, while in general the use of technology meets quantitative restrictions (for example: "you should not use too much energy, because it depletes resources and destroys the environment"), here limits of a qualitative nature would instead appear, since one would have to know "which" kinds of action it is lawful to entrust to artificial intelligence and which are not. Perhaps the question should not be dramatized. If the reductionist paradigm held, so that the machine were also endowed with intentionality, then the delegation would be deeply disturbing. But if, more reasonably, one observes that the machine is programmed by man and depends on him, even when it is equipped with "genetic" algorithms that develop its programming in unplanned ways, then the problem reduces to determining to what extent one may rely on "intellectual prostheses" to design a therapy, or an intervention of significant economic or social weight.
The problem thus turns out to be, once again, a somewhat quantitative one, namely the prudent use of technical instruments. But it cannot be denied that the availability of programs that "decide by themselves" could lead the operator to take an irresponsible attitude, giving up his responsibility by "delegating" it to the machine: in which case we could no longer say that the machine depends on man. In this sense one can see, in general terms, that these tools with apparently "autonomous capabilities" pose in a clearer way the need, already felt for example by Romano Guardini, that man retain at every moment full "moral" dominion over technological systems (see The End of the Modern World, Brescia 1989², p. 88).
Bibliography:
Mainly technical aspects of AI: W.S. McCulloch, W. Pitts, A Logical Calculus of the Ideas Immanent in Nervous Activity, "Bulletin of Mathematical Biophysics" 5 (1943), pp. 115-137; F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books, Washington DC 1962; J.A. Robinson, A Machine-Oriented Logic Based on the Resolution Principle, "Journal of the ACM" 12 (1965), n. 1, pp. 23-41; M. Minsky, S. Papert, Perceptrons, MIT Press, Cambridge (MA) 1969; E.H. Shortliffe, Computer-Based Medical Consultations: MYCIN, Elsevier, New York 1976; R. Davis, B. Buchanan, E. Shortliffe, Production Rules as a Representation for a Knowledge-Based Consultation Program, "Artificial Intelligence" 8 (1977), pp. 15-45; H.A. Simon, The Sciences of the Artificial, MIT Press, Cambridge (MA) 1981; R. Davis, D.B. Lenat, Knowledge-Based Systems in Artificial Intelligence, McGraw-Hill, New York 1982; S. Grossberg, Studies of Mind and Brain. Neural Principles of Learning, Perception, Development, Cognition and Motor Control, Reidel, Boston 1982; R.S. Michalski, J.G. Carbonell, T.M. Mitchell (eds.), Machine Learning: An Artificial Intelligence Approach, Springer, Berlin-New York 1984; J.R. Quinlan, Induction of Decision Trees, "Machine Learning" 1 (1986), n. 1, pp. 81-106; I. Bratko, Programmare in Prolog per l'Intelligenza Artificiale, Masson and Addison-Wesley, Milano 1988; E. Charniak, D. McDermott, Introduzione all'Intelligenza Artificiale, Masson and Addison-Wesley, Milano 1988; M. Minsky, The Society of Mind, Touchstone Editions, New York 1988; D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading (MA) 1989; J.A. Freeman, D.M. Skapura, Neural Networks: Algorithms, Applications and Programming Techniques, Addison-Wesley, Reading (MA) 1991; J.A. Hertz, A. Krogh, R.G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, Redwood City (CA) 1991; U.M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, R. Uthurusamy, Advances in Knowledge Discovery and Data Mining, AAAI Press, Menlo Park (CA) 1992; K. Knight, E. Rich, Intelligenza Artificiale, McGraw-Hill, Milano 1992²; P.H. Winston, Artificial Intelligence, Addison-Wesley, Reading (MA) 1992; M. Ginsberg, Essentials of Artificial Intelligence, Morgan Kaufmann, San Mateo (CA) 1993; S. Haykin, Neural Networks, Macmillan, New York 1994; J. Doyle, T. Dean, Strategic Directions in Artificial Intelligence, "ACM Computing Surveys" 28 (1996), n. 4, pp. 653-670; L. Console, E. Lamma, P. Mello, M. Milano, Logic Programming and Prolog, Utet, Torino 1997²; T.M. Mitchell, Machine Learning, McGraw-Hill, New York 1997; N.R. Jennings, M.J. Wooldridge (eds.), Agent Technology, Springer, Berlin-New York 1998; S.J. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall International - Utet, Torino 1998; Intelligent Systems, "The Mill" 12 (2000), n. 1.
Interdisciplinary aspects: A. Turing, Computing Machinery and Intelligence, "Mind" 59 (1950), pp. 433-460; J. Lucas, Minds, Machines and Gödel, "Philosophy" 36 (1961), pp. 112-127; M. Bunge, The Mind-Body Problem. A Psychobiological Approach, Oxford University Press, Oxford 1980; J.A. Fodor, The Mind-Body Problem, "The Sciences" 14 (1981), n. 151, pp. 100-110; K. Popper, J. Eccles, The Self and Its Brain, 3 vols., Armando, Roma 1981; E. Nagel, J.R. Newman, Gödel's Proof, Boringhieri, Torino 1982; J.-P. Changeux, L'uomo neuronale, Feltrinelli, Milano 1983; D.R. Hofstadter, Gödel, Escher, Bach: an Eternal Golden Braid, Adelphi, Milano 1984; J.R. Searle, Minds, Brains and Programs. A Debate on Artificial Intelligence, Clued-Clup, Milano 1984; H. Dreyfus, What Computers Can't Do. The Limits of Artificial Intelligence, Armando, Roma 1988; J.R. Searle, Della intenzionalità, Bompiani, Milano 1988; J. Searle, Is the Mind a Program?, "The Sciences" 23 (1990), n. 259, pp. 16-21; P.M. Churchland, P.S. Churchland, Can a Machine Think?, ibid., pp. 22-27; G. Basti, The Mind-Body Problem in Philosophy and Science, ESD, Bologna 1991; R. Penrose, The Emperor's New Mind, Rizzoli, Milano 1992; J. Searle, The Rediscovery of the Mind, Bollati Boringhieri, Torino 1992; G. Piccinini, On a Critique of "Strong" Artificial Intelligence, "Rivista di Filosofia" 85 (1994), n. 1, pp. 141-146; R. Penrose, Shadows of the Mind. A Search for Consciousness, Rizzoli, Milano 1996; D.R. Hofstadter and the Fluid Analogies Research Group, Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, Adelphi, Milano 1996; A. Rossi, The Phantom of Intelligence. In Search of the Artificial Mind, Cuen, Napoli 1998; J. Searle, The Mystery of Consciousness, R. Cortina, Milano 1998; F. Bertelè, A. Olmi, A. Salucci, A. Strumia, Science, Analogy, Abstraction. Thomas Aquinas and the Sciences of Complexity, Il Poligrafo, Padova 1999; R.J. Russell et al. (eds.), Neuroscience and the Person. Scientific Perspectives on Divine Action, Vatican Observatory Publications - Center for Theology and the Natural Sciences, Vatican City-Berkeley (CA) 1999.