History of Artificial Intelligence: Can Machines Think?

Let's get a few points out in the open before we dive into the topic.

What Are Thinking and Intellect?

Thinking is the conscious process of cognition. In plain English: the process of building mental awareness of a situation is called thinking.

According to The Dictionary of Psychology (London: Routledge): in the study of the human mind, intellect is the ability of the human mind to reach correct conclusions about what is true and what is false in reality; it includes capacities such as reasoning, conceiving, judging, and relating. (Ref: Wikipedia)

To take it further, correct conclusions are not universally correct; not all humans agree on all things. But at the same time, large groups of humans have agreed on the basics of right and wrong, trade rules, common laws, and so on. This process of agreement can be described as humans of common intellect coming to similar conclusions.

Limits of Human Intelligence

Some people believe that human intelligence is limited by biological, physical, or environmental factors, while others believe that it is rendered unlimited by cultural, social, or technological factors. I think both perspectives have some validity and some difficulty, and there is no easy or general answer to this question. Human intelligence is a complex and mysterious phenomenon.

My conclusion is that we do not know what complete intelligence is, and that no human or group of humans has it. Complete intelligence is an elusive and undefined concept, and human intelligence is always partial and diverse.

Machine Thinking and Artificial Intelligence

Machine thinking and artificial intelligence are related but not identical concepts. Machine thinking is the ability of machines to perform cognitive functions that are usually associated with humans, such as reasoning, learning, problem-solving, and creativity. Artificial intelligence is the field of computer science that aims to create machines and software that can exhibit machine thinking.

In response to a comment at a lecture that it was impossible for a machine (at least one created by humans) to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!"

John von Neumann (quoted by E.T. Jaynes)

History of Artificial Intelligence

Realistic humanoid automata were built by craftsmen from many ancient civilizations, notably in the works of Yan Shi and Al-Jazari. Yan Shi's work on humanoid automata is one of the earliest accounts of robotics in history. Al-Jazari's work on humanoid automata was remarkable for its time and influenced later inventors and engineers. He demonstrated a high level of craftsmanship, creativity, and technical skill, and contributed to the development of robotics and mechanical engineering.

Artificial Brain and Machine Intelligence

A group of scientists from various disciplines started to explore the idea of creating an artificial brain in the mid-20th century.

Alan Turing was one of the pioneers of this field. He coined the term “Machine Intelligence” and proposed the famous Turing test to measure the intelligence of machines.

Dartmouth workshop

In 1956, the field of artificial intelligence research was founded as an academic discipline at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), a seminal workshop that brought together some of the leading figures in the field, such as John McCarthy, Marvin Minsky, Claude Shannon, and Herbert Simon. They discussed the goals, methods, and challenges of artificial intelligence and laid the foundations for future research.

Today, artificial intelligence is not a single or unified field but a collection of subfields and approaches that deal with different aspects and applications of intelligence, such as machine learning, natural language processing, computer vision, robotics, and more.

AI Winters

AI winters are periods of reduced funding and interest in artificial intelligence research. They occur when the expectations and hype about AI are not met by the actual results and value, causing disappointment and criticism. The field of AI has experienced several winters, such as in the 1970s and the 1990s, when many projects and initiatives were abandoned or cut back.

AI winters primarily refer to times when technological limitations, such as a lack of data and processing capability, constrained the testing of the latest research. But there were other factors as well: limited social acceptance of and trust in the concept, the lack of scalability and robustness of neural networks, and economic conditions that limited funding.

Rise of Big Tech and Social Media and GPUs

The rise of Big Tech and social media helped fuel modern AI research by creating commercial problems that demanded solutions: big data analytics, statistical approaches to data, machine vision needs, gaming advancements, and more.

GPUs were originally designed for graphics processing, especially for video games and 3D rendering. They were optimized for parallel computing, which means they can perform many calculations at the same time. This made them suitable for AI applications that require high-speed and high-volume data processing, such as machine learning and deep learning.
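The data-parallel style that GPUs excel at can be illustrated in miniature with NumPy: instead of a Python-level loop touching one element at a time, a single vectorized call applies the same operation across a whole array at once. This is a sketch of the programming model only (the names below are illustrative, and NumPy runs on the CPU), not GPU code:

```python
import numpy as np

def scale_and_shift_loop(xs, w, b):
    """Serial version: one element at a time, like a scalar CPU loop."""
    return [w * x + b for x in xs]

def scale_and_shift_vectorized(xs, w, b):
    """Data-parallel version: the same multiply-add applied to the
    whole array in one call, the style GPUs accelerate."""
    return w * np.asarray(xs) + b

data = [1.0, 2.0, 3.0, 4.0]
assert scale_and_shift_loop(data, 2.0, 1.0) == [3.0, 5.0, 7.0, 9.0]
assert np.allclose(scale_and_shift_vectorized(data, 2.0, 1.0),
                   [3.0, 5.0, 7.0, 9.0])
```

Both versions compute the same result; the difference is that the vectorized form expresses the work as one bulk operation, which is exactly the shape of computation that parallel hardware can spread across thousands of cores.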

We can summarize:

  • The availability and accessibility of large and diverse data sets, such as the World Wide Web, social media, and digital media, that provided rich and varied sources of information and knowledge for AI systems to learn from.
  • The development and improvement of machine learning algorithms and techniques, such as neural networks, deep learning, reinforcement learning, and natural language processing, that enabled AI systems to perform complex and sophisticated tasks, such as image recognition, speech synthesis, and game playing.
  • The advancement and innovation of hardware and software technologies, such as cloud computing, GPUs, and open source platforms, that increased the speed, power, and efficiency of AI systems, and lowered the cost and barriers of entry for AI research and development.
  • Big tech and social media firms helped fuel the research in AI by providing funding, data, and platforms for AI projects like DeepMind and OpenAI etc.

Milestone Achievements

  • Deep Blue, a chess-playing computer by IBM, beat the world champion Kasparov in 1997, with twice the speed of its previous version.
  • A Stanford robot and a CMU team won the DARPA challenges in 2005 and 2007, by driving autonomously in desert and urban settings.
  • Watson, a question answering system by IBM, beat the top Jeopardy! players in 2011, by a large margin.

These achievements were based on engineering skill and computational power, not a new paradigm.

Moore’s Law, which observes that the transistor density of chips (and with it, roughly, the memory and speed of computers) doubles about every two years, had finally caught up with and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie.

It offers a bit of an explanation to the roller coaster of AI research; we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore’s Law to catch up again.
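The compounding effect behind this "wait for Moore's Law" pattern is easy to quantify. A minimal sketch, assuming a fixed doubling period (the function name is illustrative):

```python
def moores_law_growth(years, doubling_period=2):
    """Growth factor of computing capacity under a fixed doubling period:
    after `years` years, capacity has multiplied by 2 ** (years / period)."""
    return 2 ** (years / doubling_period)

# With a 2-year doubling period, two decades yield a ~1000x increase:
assert moores_law_growth(20) == 1024
```

So the roughly two decades between Deep Blue (1997) and AlphaGo's era correspond, on this simple model, to about a thousandfold increase in available computing capacity.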

The history and development of artificial intelligence is not just a simple function of computer storage and processing speed, but rather a complex and dynamic interaction of multiple factors, such as data, algorithms, technologies, and human creativity and collaboration.

  • IBM’s Deep Blue, Watson, and other AI systems achieved remarkable feats in chess and other domains, thanks to engineering skill and computational power.
  • Moore’s Law, which predicts the exponential growth of computer memory and speed, enabled and limited the progress of AI research over time.
  • AI research is also influenced and supported by other factors, such as data, algorithms, technologies, and human creativity and collaboration.

A brief timeline of events in AI evolution

This is a selection of events, not an exhaustive list.
10th century BC: Yan Shi presented King Mu of Zhou with mechanical men.
1st century: Hero of Alexandria created mechanical men and other automatons.
~800: Jabir ibn Hayyan developed the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory.
9th century: The Banū Mūsā brothers created a programmable music automaton described in their Book of Ingenious Devices.
9th century: al-Khwārizmī wrote textbooks with precise step-by-step methods for arithmetic and algebra, used in the Islamic world, India, and Europe until the 16th century. The word “algorithm” is derived from his name.
1206: Ismail al-Jazari created a programmable orchestra of mechanical human beings.
1620: Francis Bacon developed an empirical theory of knowledge and introduced inductive logic in his work Novum Organum.
1641: Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote “…for reason is nothing but reckoning”.
1654: Blaise Pascal described how to find expected values in probability.
1676: Leibniz derived the chain rule. The rule is used in AI to train neural networks.
1822–1859: Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.
1837: The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.
1854: George Boole set out to “investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus”, inventing Boolean algebra.
1910–1913: Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which showed that all of elementary mathematics could be reduced to mechanical reasoning in formal logic.
1931: Kurt Gödel encoded mathematical statements and proofs as integers, and showed that there are true theorems that are unprovable by any consistent theorem-proving machine. He thus identified fundamental limits of algorithmic theorem proving, computing, and any type of computation-based AI, laying foundations of theoretical computer science and AI theory.
1935: Alonzo Church extended Gödel’s proof and showed that the decision problem of computer science does not have a general solution.
1937: Alan Turing published “On Computable Numbers”, which laid the foundations of the modern theory of computation by introducing the Turing machine, a physical interpretation of “computability”. He used it to confirm Gödel’s result by proving that the halting problem is undecidable.
1948: Alan Turing produced the “Intelligent Machinery” report, regarded as the first manifesto of artificial intelligence.
1951: The first working AI programs were written to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
1958: John McCarthy (Massachusetts Institute of Technology, or MIT) invented the Lisp programming language.
1959: John McCarthy and Marvin Minsky founded the MIT AI Lab.
1960s: Ray Solomonoff laid the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1979: The Stanford Cart, built by Hans Moravec, became the first computer-controlled, autonomous vehicle when it successfully traversed a chair-filled room and circumnavigated the Stanford AI Lab.
Late 1970s: Stanford’s SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrated the power of the ARPAnet for scientific collaboration.
1980: The first National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford.
1982: The Fifth Generation Computer Systems project (FGCS), an initiative by Japan’s Ministry of International Trade and Industry, was launched.
1989: The development of metal–oxide–semiconductor (MOS) Very Large Scale Integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.
Late 1990s: Web crawlers and other AI-based information-extraction programs became essential to widespread use of the World Wide Web.
Late 1990s: Demonstration of an Intelligent Room and Emotional Agents at MIT’s AI Lab.
Late 1990s: Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.
2004: DARPA introduced the DARPA Grand Challenge, requiring competitors to produce autonomous vehicles for prize money.
2005: Recommendation technology based on tracking web activity or media usage brought AI to marketing.
2008: Cynthia Mason at Stanford presented her idea of Artificial Compassionate Intelligence in her paper “Giving Robots Compassion”.
2009: Google built an autonomous car.
2011–2014: Apple’s Siri (2011), Google’s Google Now (2012), and Microsoft’s Cortana (2014) were smartphone apps that use natural language to answer questions, make recommendations, and perform actions.
January 2015: Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI.
2016: Google DeepMind’s AlphaGo (version Lee) defeated Lee Sedol 4–1. Lee Sedol is a 9-dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016.
2017: Google Lens, an image analysis and comparison tool released in October 2017, associates millions of landscapes, artworks, products, and species with their text descriptions.
2019: DeepMind’s AlphaStar reached Grandmaster level at StarCraft II, outperforming 99.8 percent of human players.
February 2020: Microsoft introduced its Turing Natural Language Generation (T-NLG), the “largest language model ever published at 17 billion parameters”.
November 2020: AlphaFold 2 by DeepMind, a model that predicts protein structure, won the CASP competition.
May 2020: OpenAI introduced GPT-3, a state-of-the-art autoregressive language model that uses deep learning to produce computer code, poetry, and other text exceptionally similar to, and almost indistinguishable from, text written by humans. Its capacity was ten times greater than that of the T-NLG. It entered beta testing in June 2020.
November 2022: ChatGPT, an AI chatbot developed by OpenAI, debuted.
January 2023: ChatGPT reached more than 100 million users, making it the fastest-growing consumer application to date.
16 January 2023: Three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, filed a class-action copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists.
March 2023: OpenAI’s GPT-4 model was released and is regarded as an impressive improvement over GPT-3.5.
7 March 2023: Nature Biomedical Engineering wrote that “it is no longer possible to accurately distinguish” human-written text from text created by large language models.
March 2023: Google released, in a limited capacity, its chatbot Google Bard, based on the LaMDA and PaLM large language models.
29 March 2023: A petition with over 1,000 signatures, signed by Elon Musk, Steve Wozniak, and other tech leaders, called for a 6-month halt to what the petition refers to as “an out-of-control race” producing AI systems that their creators cannot “understand, predict, or reliably control”.
May 2023: Google announced Bard’s transition from LaMDA to PaLM 2, a significantly more advanced language model.
May 2023: A Statement on AI Risk was signed by Geoffrey Hinton, Sam Altman, Bill Gates, and many other prominent AI researchers and tech leaders, with the following succinct message: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
30 October 2023: US President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
November 2023: The first global AI Safety Summit was held at Bletchley Park in the UK to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.