Teresa Escrig

News and opinion about Cognitive AI & Robotics

A more or less concise Historical Evolution of Robotics and Artificial Intelligence. From Plato to the 1990’s


Let's first consider the historical evolution that shows man's desire to build a machine like ourselves.  From the philosophers and alchemists of the Middle Ages to 21st-century scientists, the fascinating idea of creating a machine like the human being has endured.

The origins of Artificial Intelligence are attributed to the philosophers of antiquity.  Plato (428 BC) wanted to know the characteristics of piety in order to determine whether an action could be regarded as pious – which could be considered the first algorithm.  Aristotle conceived an informal system of syllogistic reasoning by which one could draw conclusions from premises, a precursor of formal reasoning.

Philosophers delineated the most important ideas related to artificial intelligence, but a mathematical formalization was also needed in three areas: computation, logic and probability.  The idea of expressing a calculation as a formal algorithm is due to the 9th-century mathematician al-Khwarizmi, whose work also introduced Arabic numerals and algebra to Europe.

From the 13th century comes the legend of Friar Roger Bacon's talking brass head, which was to reveal how to build a wall of brass around England to protect it from invaders.  The physician and alchemist Paracelsus described a recipe, based on the alchemical tradition, for creating a homunculus: a living being like a child, although smaller.  In the second half of the 16th century came the legend of the golem, a being created from clay through magic, with supernatural physical powers, who defended the Jews of Prague.

Parallel to this set of legends and stories, there is a prehistory of the robot as a machine.  The gears and cams of the mechanical technology developed in the Middle Ages were used in windmills and water wheels for grinding grain.  The first pipe organs were driven by air bellows.  The first mechanical men were the figures on church towers in the late Middle Ages.  The first robots were built to be offered as toys to monarchs: Leonardo da Vinci built a lion (1500); Gianello de la Torre, a girl playing a lute (1540); Isaac and Salomon de Caus built ornamental fountains with moving figures (1600); Descartes built a robot (1640).  The art of watchmaking also boosted the development of robotics; in fact Jacques Vaucanson (1738) constructed a duck and two very realistic androids using this technology.  Each of the duck's wings contained over four hundred articulated pieces.  The androids were musicians (a flute player and a drummer) that produced the sounds of their instruments directly, rather than playing recordings.

In 1740 David Hume proposed what is now known as the principle of induction: general rules are obtained through repeated exposure to associations among elements.  Wolfgang von Kempelen (1769) invented an android that played chess.  It was never clear whether it was a fraud, with a person hidden inside the box moving the android's hands to move the chess pieces – it played well and correctly – but it was the first time that the distinction between man and machine was blurred.  Between 1770 and 1773, Pierre and Henri-Louis Jaquet-Droz built three androids – the writer, the draughtsman and the musician – made of clockwork and cams.  By setting levers on a control dial, the writer could write any text, the draughtsman could draw a few predetermined pictures, and the musician truly played the organ.  The three androids had very precise movements and imitated human behavior in these areas very realistically.

Industrial robots today are direct descendants of these three automata, with three differences: 1) the adoption of a functional form rather than a human form; 2) the use of hydraulics and other sources of energy instead of springs and clockwork; 3) the use of programming methods more sophisticated than cams.

In 1788, James Watt designed the first feedback control mechanism, for maintaining the set speed of a steam engine.  To solve the problem of processing the census data of the United States of America, the statistician Herman Hollerith developed, in 1889, an Electric Tabulating System that encoded the data on punched cards.  The cards passed over a surface of mercury; a set of wires was then lowered onto each card, and wherever a wire passed through a hole it made electrical contact and advanced a counter by one unit.  The data was read off the counters' dials.

During the Industrial Revolution, which brought great strides in science and engineering, there was a gap in the evolution of robotics that lasted a century and a half (1790-1940).  For further progress, the machine needed to be able to store information and make decisions, which was not possible until the beginning of World War II.  Charles Babbage (19th century) had the idea of a general-purpose digital computer with a stored program.  He tried to build it with the clockwork technology of his time, but the achievable precision was not sufficient.

In the science fiction of that time, the writer Mary Shelley took up the idea of the golem and gave it a scientific basis to create Frankenstein (1818).  Karel Capek's play R.U.R. (1920) depicted a world in which robots were used as laborers and soldiers of war.  When pain and emotions were given to the robots, they rebelled against humans and virtually exterminated the human race – their creators.

In 1937, Alan Turing showed that with a simple set of basic operations on two-state elements – like switches that are either open or closed – a machine could perform any mathematical calculation that can be completed in a finite number of steps.

The automatic controls developed in the Second World War – radar, jet propulsion, the V-2 rocket (whose destination could be set), the B-29 heavy bomber and the atomic bomb – led to the post-war programmable robots in use today.  During World War II the Germans encrypted their communications with the Enigma machine.  Codebreakers at Bletchley Park, among them Turing, developed machines to break the German ciphers – including Colossus, a valve-based computer that read data from punched tape – and their work marked a turning point in the war.

In literature, to counter the pessimistic ideas put forth in R.U.R., the science fiction writer Isaac Asimov defined, in 1942, the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, except where this would conflict with the first two laws.

An example of robots in science fiction that follow Asimov's laws are R2-D2 and C-3PO in the Star Wars trilogy (1977).

In 1847, George Boole introduced a formal language for making logical inferences.  In 1879, Gottlob Frege constructed first-order predicate logic, which is still used today as a basic system of knowledge representation.

The programmable computer appeared next.  In 1941, the German Konrad Zuse invented the Z-3 computer and Plankalkül – the first high-level programming language.  The first electronic computer – the ABC – was assembled between 1940 and 1942 in the United States by John Atanasoff and Clifford Berry.  The first digital, electronic and programmable computer was ENIAC – Electronic Numerical Integrator and Computer – (1946).  Two differences separate ENIAC from current computers: it used the decimal system rather than binary, and it could only be programmed by physically making connections on a plugboard, inserting pins one by one into sockets, which allowed only a rigid set of instructions.

John von Neumann had the idea that the computer should store its program using the same electronic code used to represent the data it manipulated.  The program would not contain a fixed sequence of steps, but could include conditions that allowed a new sequence of steps to be chosen for execution.  The EDVAC, designed with von Neumann's involvement in the late 1940s, used binary arithmetic with the program stored in an electronic memory – like today's computers.  The most significant improvement in computers came with the IBM 701, built in 1952 by Nathaniel Rochester and his team.

At that time, the idea of what would later be known as Artificial Intelligence was introduced.  The first work in AI was done by Warren McCulloch and Walter Pitts in 1943.  It consisted of a model of artificial neurons, each of which could be switched on or off by sufficient stimulation from neighboring neurons; networks of such neurons could compute any computable function.  Donald Hebb modified the model in 1949, adding connection strengths between neurons that could be adjusted through learning.  In 1951 Marvin Minsky and Dean Edmonds built the first neural network computer.  In 1962, Frank Rosenblatt proved the perceptron convergence theorem, which showed that the learning algorithm can adjust a perceptron's connection strengths to match any input data, provided that such a match is possible.
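
As a rough illustration of that idea (a minimal sketch in modern Python, not historical code; the toy AND dataset and learning rate are assumptions), a perceptron nudges its weights toward every example it misclassifies:

    # Minimal perceptron sketch: weights are nudged toward each misclassified example.
    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        """samples: list of feature tuples; labels: +1 or -1 for each sample."""
        weights = [0.0] * len(samples[0])
        bias = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                activation = sum(w * xi for w, xi in zip(weights, x)) + bias
                prediction = 1 if activation >= 0 else -1
                if prediction != y:  # misclassified: adjust the connection strengths
                    weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                    bias += lr * y
        return weights, bias

    # Toy example: learn the logical AND of two inputs (a linearly separable problem).
    data = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [-1, -1, -1, 1]
    print(train_perceptron(data, labels))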

In 1956, a group of researchers – McCarthy, Shannon and Minsky among them – coined the term Artificial Intelligence.  The first 'intelligent' program, called the Logic Theorist, could prove theorems from Russell and Whitehead's Principia Mathematica.  In fact, one of the proofs produced by the program was shorter and more elegant than the one given by Russell and Whitehead.  The next step was a program called the General Problem Solver, based on 'means-end' analysis and planning.  Means-end analysis consists of looking at where we are, comparing it with where we want to be, and searching for actions that reduce the difference.  Planning means identifying intermediate goals along the way and checking which of them bring us closer to the desired state.  This program aspired to generality, and worked well for logical problems and puzzles.  Meanwhile McCarthy was working on a program that sought to use new knowledge acquired while the program was running, combined with what it already knew.  This approach proved very difficult to implement, but it originated the concept of time-sharing: a computer used simultaneously by a large number of people.  In 1958 McCarthy defined the high-level language LISP, which would become one of the oldest programming languages still in use today.  McCarthy dedicated his efforts to representation and reasoning in formal logic, which was integrated in the late 1960s into Shakey the Robot at the Stanford Research Institute (SRI).
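
A toy sketch of the means-end idea (an illustration only – the errand-running operators and state representation below are invented for the example, not taken from the original General Problem Solver):

    # Toy means-end analysis: choose operators whose effects reduce the
    # difference between the current state and the goal, recursively
    # achieving each operator's preconditions first.
    operators = {
        # name: (preconditions, facts added, facts removed) -- hypothetical domain
        "walk_to_shop": (set(), {"at_shop"}, {"at_home"}),
        "buy_milk": ({"at_shop"}, {"have_milk"}, set()),
        "walk_home": ({"at_shop"}, {"at_home"}, {"at_shop"}),
    }

    def apply_plan(state, plan):
        """Apply a sequence of operators to a state (a set of facts)."""
        for name in plan:
            pre, add, rem = operators[name]
            state = (state - rem) | add
        return state

    def solve(state, goal, depth=8):
        """Return a list of operator names after which `goal` holds, or None."""
        if goal <= state:
            return []
        if depth == 0:
            return None
        for name, (pre, add, rem) in operators.items():
            if not add & (goal - state):
                continue  # this operator does not reduce the difference
            pre_plan = solve(state, pre, depth - 1)  # first achieve its preconditions
            if pre_plan is None:
                continue
            mid_state = apply_plan(state, pre_plan)
            rest = solve((mid_state - rem) | add, goal, depth - 1)
            if rest is not None:
                return pre_plan + [name] + rest
        return None

    print(solve({"at_home"}, {"have_milk", "at_home"}))
    # -> ['walk_to_shop', 'buy_milk', 'walk_home']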

The initial successes in the field of Artificial Intelligence created expectations among researchers that, they soon realized, could not be met.  Programs that worked for simple examples failed miserably when applied to real problems, because the complexity of the problems AI was trying to solve grew exponentially with the size of the problem, i.e. with the number of variables considered.

At the same time the first industrial robot, called Unimate, was introduced; it had a feedback control system and computer memory.  A user guided the robot through a sequence of steps, which the robot would then repeat.  Unimate's first task was to tend a die-casting machine.  In this industrial process metal parts are manufactured by injecting molten zinc or aluminum into a steel die – an unpleasant task for people and even dangerous to their health.

In the late 1960s, Stanford University and the Massachusetts Institute of Technology (MIT) developed a system in which a manipulator arm was guided by images received from a television camera located next to the arm; its task was to build a water pump from car parts spread out on a table.  The problem was not at all trivial: the robot had to understand the order in which to assemble the pump, identify the parts, handle any possible errors and make corrections.  At SRI International, a second version of Shakey was built, which exceeded the previous one in mobility.  It combined learning capabilities, pattern-recognition software that processed information from images, parts of the General Problem Solver, and programs to represent information about the outside world.

In 1978 Unimation, the company that commercialized the Unimate, brought out a smaller manipulator arm called PUMA – Programmable Universal Machine for Assembly – specifically designed to handle smaller parts in the assembly of instruments and engines.  The company sold more than 50 robots per month.  These robots helped revolutionize the automotive industry.  For example, in a Detroit Chrysler plant, 50 robots working in two shifts performed the work previously done by two hundred human welders.  In a Texas factory, a robot selected bits from a tool rack and drilled holes with a tolerance of 0.1 mm.  The robot manufactured parts five times faster than a skilled person, without wasting any parts.

The census conducted in 1980 counted 10,000 robots in Japan, 3,000 in the U.S.A., 850 in Germany, 300 in Sweden, 500 in Italy, 360 in Poland, 200 in France, 200 in Norway, 200 in England and 85 in the Soviet Union.  The large number of robots used by Japanese companies was due to a national strategy: one company purchased a large number of robots to lease to manufacturers, so business owners could rent robots and test the results without the risk of buying them only to find that they didn't serve their needs.

By 1980 the U.S. was producing 1,500 robots per year, while Japan was producing 7,500 robots per year.  In a television factory near Osaka, 80% of the parts of each unit were assembled by robots.  Quality was so high that there was only one checkpoint at the end of production!  Intermediate checkpoints were removed after realizing that they were unnecessary.  In the U.S. there were no businesses doing that.

All these robots were 'blind', performing tasks without the use of intelligence.  They received the parts for each stage of the assembly process in an exact position and had to place them in a predefined position.  If something shifted even an inch from the set position, the robot failed.  Even today, most robots in industry have these limitations.

Much effort has been devoted to artificial vision.  It is easy to take a video image, fragment it into small elements and convert it into numbers.  What remains an open research issue is the interpretation of the scene: recognizing the objects that appear in it, even when they are partially hidden.

Much effort has also been devoted to reading and processing printed text.  Machine translation between languages has been worked on for many years.  In 1949 Warren Weaver proposed treating translation like code-breaking, using the methods developed during the war by Turing and his colleagues.  According to legend, a researcher who worked with Weaver asked his computer to translate the phrase "The spirit is willing but the flesh is weak" into Russian and back into English; the result was "The vodka is nice, but the steak is rotten".  The first translation programs offered even worse results.

Another early linguistic program was ELIZA, created in 1966 by Joseph Weizenbaum at MIT.  ELIZA sidestepped genuine language understanding by using a clever system of canned responses: it identified keywords in the user's input and used them to form questions back to the interlocutor.
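
A minimal sketch of that keyword-and-template trick (an illustration of the general idea only – the patterns and responses below are invented, not Weizenbaum's actual script):

    import re

    # Hypothetical keyword -> response-template rules in the spirit of ELIZA.
    rules = [
        (r"\bI need (.+)", "Why do you need {0}?"),
        (r"\bI am (.+)", "How long have you been {0}?"),
        (r"\bmy (\w+)", "Tell me more about your {0}."),
    ]

    def respond(utterance):
        """Return a canned question built from the first matching keyword rule."""
        for pattern, template in rules:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default reply when no keyword matches

    print(respond("I am worried about my robot"))
    # -> "How long have you been worried about my robot?"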

During the 1970s and 1980s, expert systems were developed: programs that contain specialized knowledge of a particular subject, reason with it, and issue diagnoses or recommendations.  DENDRAL interpreted data from chemical instruments and provided advice on the structure of unknown compounds; MACSYMA carried out complex calculations and symbolic manipulations of higher mathematics; PROSPECTOR, for geology, identified the location of molybdenum deposits; MYCIN diagnosed meningitis and blood infections.  MYCIN incorporated explanations of the deductions it reached, the management of uncertainty, and the separation of the reasoning process from the knowledge.
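
A toy sketch of that last idea – knowledge kept as plain data, separate from a generic inference engine, with crude certainty handling (the rules and certainty factors below are illustrative inventions, not MYCIN's actual rule base):

    # Knowledge base: rules are plain data, separate from the inference engine.
    # Each rule: (antecedent facts, conclusion, certainty factor) -- illustrative only.
    knowledge_base = [
        ({"fever", "stiff_neck"}, "possible_meningitis", 0.7),
        ({"possible_meningitis", "rash"}, "bacterial_infection", 0.6),
    ]

    def forward_chain(facts):
        """Generic inference engine: fire rules until no new conclusions appear."""
        certainties = {f: 1.0 for f in facts}
        changed = True
        while changed:
            changed = False
            for antecedents, conclusion, cf in knowledge_base:
                if antecedents <= set(certainties) and conclusion not in certainties:
                    # Crude certainty propagation: rule CF scaled by the weakest antecedent.
                    certainties[conclusion] = cf * min(certainties[a] for a in antecedents)
                    changed = True
        return certainties

    print(forward_chain({"fever", "stiff_neck", "rash"}))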

The first expert system to be commercialized, R1, was used by Digital Equipment Corporation to configure orders for new computer systems – which provided substantial cost savings for the company.

In 1981 the Japanese announced the 'Fifth Generation' project, which lasted ten years.  It was intended to produce intelligent computers that would execute PROLOG – an AI language which, like LISP, is commonly used in Artificial Intelligence, and which is based on first-order predicate logic – instead of ordinary machine code.  The computer would work through a huge number of rules representing knowledge about a part of reality, performing millions of inferences per second.  The human and financial resources that the Japanese devoted to this project were matched by European and American efforts, out of fear that the Japanese would dominate the field.  The project was ultimately considered a huge failure and was abandoned without satisfactory results.

Since the late 1980s, Artificial Intelligence has changed its methodology: it has gone from trying to solve general problems to addressing specific ones, from toy examples to real-world applications, and it has taken mathematical theorems and solid experiments as its theoretical basis.

For example, the field of natural language understanding adopted hidden Markov models, which rest on rigorous mathematical theory and are generated through learning processes from large volumes of real data.  More recently, this area has been transitioning to industrial applications.

In 1986 the notion of normative expert systems was introduced: systems that act rationally according to the laws of decision theory, without attempting to imitate human experts.  That same year, Brooks developed the reactive (subsumption) architecture for autonomous agents.  In artificial vision, a connection was established between perception and action, usually in the context of a robot.

During the 1990s there were advances in the connectionist paradigm, fuzzy logic, genetic algorithms, and qualitative models.  New methodologies for knowledge acquisition, such as KADS, appeared.  In 1990 Newell, Laird and Rosenbloom introduced the idea of a complete agent architecture, which studied the behavior of agents embedded in real environments with continuous sensory inputs.

This work raised awareness of the need to link all the fields of artificial intelligence in order to fully define the concept of an agent (virtual) or a robot (physical).

The origins of cognitive robots – and of the development of the first Cognitive Brain for Service Robotics – lie in the 1990s, with qualitative models that were first applied to simulated robots and then, when the technology was ready, to real robots.

Written by Teresa Escrig

March 14th, 2012 at 10:50 pm
