Disciplines From Which The Field of AI Borrows Ideas

2.1. Philosophy
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?

2.1.A. Concepts borrowed from "Philosophy"

# Rationalism: The mind operates, at least in part, according to logical rules. It is one thing to say this, and to build physical systems that emulate some of those rules; it is another to say that the mind itself is such a physical system. - Rene Descartes (1596–1650)

# Dualism: There is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines.

# Materialism: The brain's operation according to the laws of physics constitutes the mind. Free will is simply the way that the perception of available choices appears to the choosing entity.

# Empiricism: "Nothing is in the understanding, which was not first in the senses." - John Locke (1632–1704)

# Induction: General rules are acquired by exposure to repeated associations between their elements.

# Confirmation Theory: Rudolf Carnap (1891–1970) and Carl Hempel (1905–1997) attempted to analyze the acquisition of knowledge from experience.

2.2. Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?

2.2.A. Concepts Borrowed From "Mathematics"

# Algorithm: Once the importance of logic and computation was established, the next step was to determine the limits of what could be done with them. The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors. (Note: An algorithm is a set of instructions designed to perform a specific task; a short sketch of Euclid's algorithm appears at the end of this subsection.)

# Incompleteness Theorem: Kurt Gödel's incompleteness theorem showed that in any formal theory as strong as Peano arithmetic (the elementary theory of natural numbers, named after Giuseppe Peano, 1858–1932), there are true statements that are undecidable in the sense that they have no proof within the theory. [Ref 1, 2, 3]

# Computable: This fundamental result can also be interpreted as showing that some functions on the integers cannot be represented by an algorithm, that is, they cannot be computed. This motivated Alan Turing (1912–1954) to try to characterize exactly which functions are computable, i.e., capable of being computed. For example, no machine can tell in general whether a given program will return an answer on a given input or run forever. [Ref 4] (A sketch of this argument appears below.)

# Tractability: Roughly speaking, a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances. This matters because exponential growth means that even moderately large instances cannot be solved in any reasonable time (illustrated below). Therefore, one should strive to divide the overall problem of generating intelligent behavior into tractable subproblems rather than intractable ones. (Cobham, 1964; Edmonds, 1965)

# NP-Completeness: How can one recognize an intractable problem? The theory of NP-completeness, pioneered by Stephen Cook (1971) and Richard Karp (1972), provides a method. Cook and Karp showed the existence of large classes of canonical combinatorial search and reasoning problems that are NP-complete. Any problem class to which the class of NP-complete problems can be reduced is likely to be intractable. (See more in Ref 5.)

# Probability: Probability theory tells us how to deal with uncertain measurements and incomplete theories, and it underlies modern approaches to uncertain reasoning in AI systems. (A small worked example follows.)
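To make the Algorithm entry concrete, here is a minimal Python sketch of Euclid's algorithm; the function name gcd is just illustrative (Python's standard library already provides math.gcd).

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).

    The gcd is unchanged by this step, and b shrinks on every
    iteration, so the loop terminates with the answer in a.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```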
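The Computable entry refers to Turing's halting problem. The sketch below, built around a hypothetical oracle halts() (the names halts and contrary are mine), shows why no such total function can exist. It is an illustration of the diagonal argument, not a real decision procedure.

```python
# Suppose, for contradiction, that some total function
# halts(program, data) could decide whether program(data) halts.

def halts(program, data):
    """Hypothetical halting oracle (cannot actually be implemented)."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Does contrary(contrary) halt? If the oracle says yes, contrary
# loops forever; if it says no, contrary halts. Either way the
# oracle is wrong, so no such oracle can exist.
```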
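The Tractability entry turns on a simple arithmetic fact: polynomial growth stays manageable while exponential growth does not. A few concrete values, computed below, make the gap vivid (the instance sizes are arbitrary).

```python
# Compare a polynomial cost (n**3) with an exponential one (2**n).
for n in (10, 30, 50, 70):
    print(f"n={n:>2}   n^3={n**3:>10,}   2^n={2**n:>25,}")

# Even at a billion operations per second, 2**70 steps would take
# tens of thousands of years, while 70**3 finishes instantly.
```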
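As a small worked example of the Probability entry, here is Bayes' rule applied to an unreliable sensor; all of the numbers are made up for illustration.

```python
# Bayes' rule: P(fault | alarm) = P(alarm | fault) * P(fault) / P(alarm).
p_fault = 0.01          # prior probability that a component is faulty
p_alarm_fault = 0.95    # sensor fires given a fault (true positive rate)
p_alarm_ok = 0.10       # sensor fires given no fault (false positive rate)

p_alarm = p_alarm_fault * p_fault + p_alarm_ok * (1 - p_fault)
posterior = p_alarm_fault * p_fault / p_alarm
print(f"P(fault | alarm) = {posterior:.3f}")   # ~0.088: still unlikely
```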
References for Section 2.2 (cited above):

Ref 1: Peano's Axioms
1. Zero is a number.
2. If "a" is a number, the successor of "a" is a number.
3. Zero is not the successor of any number.
4. Two numbers whose successors are equal are themselves equal.
5. (Induction axiom) If a set S of numbers contains zero and also the successor of every number in S, then every number is in S.
Peano's axioms are the basis for the version of "number theory" known as "Peano arithmetic".

Ref 2: The incompleteness theorem in layman's terms
Any axiomatic system capable of arithmetic is either incomplete or inconsistent. An incomplete system is one in which there are theorems that may be true but cannot be proven. An inconsistent system, on the other hand, is one that contains contradictions. Since we do not want contradictions, every axiomatic system capable of arithmetic must be incomplete. Moreover, a "stronger" result is due to Turing, who showed that the process of finding theorems that cannot be proven is undecidable. In general, a problem is undecidable if there exists no algorithm that, in finite time, will lead to a correct yes-or-no answer. In this case, the problem is undecidable because there is no algorithm that, in finite time, will tell you whether the theorem you want to prove can be proven. Note that the proof of Gödel's theorem is essentially a sophisticated Liar's Paradox, and the proof of Turing's theorem uses Cantor's diagonal argument.

Ref 3: The incompleteness theorem as "choose two":
% Complete
% Consistent
% Non-trivial
This is more a wave in the direction of the meaning than an exact statement, but it is what is generally remembered; the precise definitions can be looked up as required.
Complete: a system is complete if anything that can be stated can be proved (within the system).
Consistent: a system is consistent if you can never prove both a statement and its opposite.
Non-trivial: a system is non-trivial if it can be used for arithmetic calculations. Propositional logic, for example, is trivial in this sense, which is why it can be both complete and consistent.

Ref 4: Computability theory
In computability theory we do not worry about the time and space efficiency of algorithms; we ask only whether an algorithm exists at all. Computability theory is interesting for its negative results: if there is no algorithm at all, there is certainly no practical algorithm that works for all instances.

Ref 5: NP-completeness
An NP problem is an algorithmic problem such that, for an instance of size n, a proposed answer can be checked in a number of steps bounded by some polynomial in n. This does not mean one can find an answer in a polynomial number of steps, only check one. An NP-complete problem is an NP problem such that, if one could find answers to it in a polynomial number of steps, one could also find answers to all NP problems in a polynomial number of steps. This makes NP-complete decision problems the hardest problems in NP (they are NP-hard). People have spent a great deal of time looking for algorithms that find answers to some NP-complete problem in a polynomial number of steps, but none has been found. Because of this, once a problem is shown to be NP-complete, it is unlikely that there is an algorithm solving it in a polynomial number of steps.
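Ref 1's axioms translate almost literally into code if a natural number is represented by the number of times a successor is applied to zero. Below is a toy Python encoding; the names zero, succ, add, and to_int are my own, not from the source.

```python
zero = ()                 # axiom 1: zero is a number

def succ(n):              # axiom 2: the successor of a number is a number
    # () is never (n,), giving axiom 3; tuples are equal only when
    # their contents are, giving axiom 4.
    return (n,)

def add(m, n):
    """Addition by recursion on axiom 5: m + 0 = m, m + succ(k) = succ(m + k)."""
    if n == zero:
        return m
    return succ(add(m, n[0]))

def to_int(n):
    return 0 if n == zero else 1 + to_int(n[0])

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two, three)))   # 5
```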
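Ref 5's key distinction, that checking an answer is easy even when finding one is hard, can be shown with a polynomial-time verifier for propositional satisfiability (SAT), the canonical NP-complete problem from Cook's work. The CNF encoding below is my own illustrative convention.

```python
# A CNF formula is a list of clauses; each clause is a list of literals,
# where literal  k  means variable k and  -k  means its negation.
# Verifying a candidate assignment touches each literal once, so the
# check runs in time linear in the size of the formula.

def satisfies(formula, assignment):
    """Return True iff every clause contains at least one true literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
print(satisfies(formula, {1: True, 2: True, 3: False}))   # True
print(satisfies(formula, {1: True, 2: False, 3: True}))   # False

# Finding a satisfying assignment, by contrast, has no known
# polynomial-time algorithm.
```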
2.3. Economics
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?

2.3.A. Concepts Borrowed From "Economics"

# Utility: Most people think of economics as being about money, but economists will say that they are really studying how people make choices that lead to preferred outcomes. When McDonald's offers a hamburger for a dollar, it is asserting that it would prefer the dollar and hoping that customers will prefer the hamburger. The mathematical treatment of "preferred outcomes", or utility, was first formalized by Leon Walras (pronounced "Valrasse") (1834–1910) and was improved by Frank Ramsey (1931) and later by John von Neumann and Oskar Morgenstern in their book The Theory of Games and Economic Behavior (1944).

# Decision Theory: Decision theory, which combines probability theory with utility theory, provides a formal and complete framework for decisions (economic or otherwise) made under uncertainty, that is, in cases where probabilistic descriptions appropriately capture the decision maker's environment. This is suitable for "large" economies where each agent need pay no attention to the actions of other agents as individuals. For "small" economies, the situation is much more like a game: the actions of one player can significantly affect the utility of another (either positively or negatively). (A small worked example appears at the end of this subsection.)

# Game Theory: For some games, a rational agent should adopt policies that are (or at least appear to be) randomized. Unlike decision theory, game theory does not offer an unambiguous prescription for selecting actions. (A toy illustration follows below.)

# Operations Research: For the most part, economists did not address the third question listed above, namely, how to make rational decisions when payoffs from actions are not immediate but instead result from several actions taken in sequence. This topic was pursued in the field of operations research, which emerged in World War II from efforts in Britain to optimize radar installations, and later found civilian applications in complex management decisions.

# Satisficing: The pioneering AI researcher Herbert Simon (1916–2001) won the Nobel Prize in economics in 1978 for his early work showing that models based on satisficing, making decisions that are "good enough" rather than laboriously calculating an optimal decision, gave a better description of actual human behavior (Simon, 1947). (A sketch of the contrast appears below.)
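To illustrate the Decision Theory entry: under uncertainty, a rational agent picks the action with the highest expected utility, the probability-weighted average of the utilities of its outcomes. The actions, probabilities, and utilities below are invented for the example.

```python
# Expected utility: EU(action) = sum of P(outcome) * U(outcome).
# Hypothetical choice: carry an umbrella when rain has probability 0.4?
actions = {
    "take umbrella":  [(1.0, 70)],             # dry either way, mild hassle
    "leave umbrella": [(0.4, 0), (0.6, 100)],  # soaked if it rains
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in actions.items():
    print(f"{name}: EU = {expected_utility(outcomes):.0f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("rational choice:", best)   # take umbrella (70 vs 60)
```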
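The Game Theory entry's point about randomized policies can be seen in matching pennies: any deterministic choice is exploitable, while the 50/50 mixed strategy breaks even against every opponent. The payoff table and function below are a toy illustration of my own.

```python
# Matching pennies: each player shows Heads or Tails; the "even" player
# wins 1 if the coins match and loses 1 otherwise (zero-sum).
import itertools

payoff = {("H", "H"): 1, ("T", "T"): 1, ("H", "T"): -1, ("T", "H"): -1}

def expected_payoff(p_heads, q_heads):
    """Even player's expected payoff when it shows H with probability
    p_heads and the opponent shows H with probability q_heads."""
    mine = {"H": p_heads, "T": 1 - p_heads}
    opp = {"H": q_heads, "T": 1 - q_heads}
    return sum(mine[a] * opp[b] * payoff[(a, b)]
               for a, b in itertools.product("HT", repeat=2))

# A pure strategy (always Heads) loses badly to a best response...
print(expected_payoff(1.0, 0.0))   # -1.0
# ...while the randomized strategy breaks even against anything.
print(expected_payoff(0.5, 0.0))   # 0.0
print(expected_payoff(0.5, 1.0))   # 0.0
```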
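And a minimal contrast between optimizing and satisficing, in the spirit of the Satisficing entry: the optimizer scores every option, while the satisficer stops at the first option that is "good enough". The option list and threshold are invented.

```python
options = [("plan A", 62), ("plan B", 75), ("plan C", 91), ("plan D", 88)]

def optimize(options):
    """Examine everything; return the best (cost: all evaluations)."""
    return max(options, key=lambda o: o[1])

def satisfice(options, good_enough=70):
    """Stop at the first acceptable option (cost: often far fewer)."""
    for name, value in options:
        if value >= good_enough:
            return (name, value)
    return optimize(options)   # nothing acceptable: fall back to the best

print(optimize(options))    # ('plan C', 91), after scoring all four
print(satisfice(options))   # ('plan B', 75), after scoring only two
```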
2.4. Neuroscience
• How do brains process information?

2.4.A. Concepts borrowed from "Neuroscience"

# Neuroscience: Neuroscience is the study of the nervous system, particularly the brain. Although the exact way in which the brain enables thought is one of the great mysteries of science, the fact that it does enable thought has been appreciated for thousands of years, because of the evidence that strong blows to the head can lead to mental incapacitation.

# Neuron: In 1873, Camillo Golgi (1843–1926) developed a staining technique allowing the observation of individual neurons in the brain. In 1936–1938, Nicolas Rashevsky was the first to apply mathematical models to the study of the nervous system.

# Singularity: The point in time at which computers reach a superhuman level of performance.

2.5. Psychology
• How do humans and animals think and act?

2.5.A. Concepts Borrowed From "Psychology"

# Behaviorism: Behaviorism, also known as behavioral psychology, is a theory of learning based on the idea that all behaviors are acquired through conditioning, which occurs through interaction with the environment. Behaviorists believe that our responses to environmental stimuli shape our actions.

# Cognitive Psychology: Cognitive psychology views the brain as an information-processing device; it is the scientific study of the mind as an information processor. Cognitive psychologists try to build cognitive models of the information processing that goes on inside people's minds, including perception, attention, language, memory, thinking, and consciousness.

2.6. Computer Engineering
• How can we build an efficient computer?
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice.

2.7. Control Theory and Cybernetics
• How can artifacts operate under their own control?

2.7.A. Concepts borrowed from "Control Theory"

# Control Theory: Control theory deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability, often with the aim of achieving a degree of optimality. (Related topics: dynamical systems, stability theory, optimal control. A minimal feedback sketch appears at the end of this subsection.)

# Cybernetics: Cybernetics is a transdisciplinary approach for exploring regulatory and purposive systems: their structures, constraints, and possibilities. The core concept of the discipline is circular causality, or feedback, where the outcomes of actions are taken as inputs for further action. Cybernetics is concerned with such processes however they are embodied, including in environmental, technological, biological, cognitive, and social systems, and in the context of practical activities such as designing, learning, managing, and conversation.

# Homeostasis: Ashby's "Design for a Brain" (1948, 1952) elaborated on his idea that intelligence could be created by the use of homeostatic devices containing appropriate feedback loops to achieve stable adaptive behavior. Homeostasis is any self-regulating process by which an organism tends to maintain the stability needed for its survival while adjusting to changing conditions. If homeostasis is successful, life continues; if it is unsuccessful, the result is disaster or the death of the organism.

# Objective Function: An objective function is the quantity to be maximized (or minimized), expressed as a numeric value. In the real world it could be the cost of a project, a production quantity, a profit figure, or the materials saved by a streamlined process. With the objective function, you are trying to arrive at a target for output, profit, resource use, and so on. (A small example follows below.)
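Here is a minimal sketch of the feedback idea behind the Control Theory and Homeostasis entries: a proportional controller that nudges a system toward a setpoint, much like a thermostat. The plant model and gain are invented for illustration.

```python
# Proportional control: input = gain * (setpoint - measurement).
# Each step the "plant" (here, a room temperature) moves by the input.
setpoint = 21.0    # desired temperature, deg C
temp = 15.0        # current temperature
gain = 0.5         # controller gain (too high -> overshoot/oscillation)

for step in range(10):
    error = setpoint - temp    # feedback: compare output to goal
    heating = gain * error     # control signal proportional to error
    temp += heating            # simplistic plant response
    print(f"step {step}: temp = {temp:.2f}")

# temp converges toward 21.0; the error shrinks by half each step.
```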
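And a tiny example of optimizing an objective function: minimizing a made-up quadratic cost by gradient descent. The cost function and learning rate are arbitrary choices for the sketch.

```python
# Minimize cost(x) = (x - 3)^2 + 2, whose minimum is at x = 3.
def cost(x):
    return (x - 3) ** 2 + 2

def grad(x):
    return 2 * (x - 3)     # derivative of the cost

x = 0.0                    # starting guess
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * grad(x)   # step downhill along the gradient

print(f"x = {x:.4f}, cost = {cost(x):.4f}")   # x ~ 3.0000, cost ~ 2.0
```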
2.8. Linguistics
• How does language relate to thought?

2.8.A. Concepts borrowed from "Linguistics"

Modern linguistics and AI were "born" at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing. The problem of understanding language soon turned out to be considerably more complex than it seemed in 1957. Understanding language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences. This might seem obvious, but it was not widely appreciated until the 1960s. Much of the early work in knowledge representation (the study of how to put knowledge into a form that a computer can reason with) was tied to language and informed by research in linguistics, which was connected in turn to decades of work on the philosophical analysis of language.