AI: The emerging Artificial General Intelligence debate


The AI market report provides an executive-level overview of the current global AI market, with detailed forecasts of key indicators up to 2026. Published annually, the report offers a detailed analysis of near-term opportunities, competitive dynamics, and the evolution of AI trends.

A heated debate has erupted within the artificial intelligence community since Google's AI subsidiary DeepMind published a paper a few weeks ago describing Gato, a generalist agent that can perform a variety of tasks using the same trained model, with some of its researchers claiming that artificial general intelligence (AGI) can be achieved through sheer scale alone. The debate may appear academic, but the reality is that our society, including our laws, regulations, and economic models, is not prepared for AGI if it is indeed imminent.

The generalist agent Gato can play Atari games, caption images, chat, and stack blocks with a real robot arm, all with the same trained model. Based on its context, it decides whether to output text, joint torques, button presses, or other tokens. As a result, it appears to be a much more adaptable AI model than the well-known GPT-3, DALL-E 2, PaLM, or Flamingo, all of which have become very good at specific, narrow tasks such as generating natural language, understanding language, or creating images from descriptions.
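To make the idea of one model handling many modalities more concrete, here is a minimal, hypothetical sketch of how text, image, and robot-action data could be serialized into a single shared token vocabulary. The token ranges and binning scheme below are illustrative assumptions, not Gato's actual implementation.

```python
import numpy as np

# Hypothetical token-range layout (illustrative only, not Gato's actual vocabulary):
TEXT_VOCAB = 32_000          # sub-word text tokens occupy ids [0, 32000)
DISCRETE_BINS = 1_024        # continuous values are bucketed into 1024 bins
CONT_OFFSET = TEXT_VOCAB     # continuous-value tokens occupy ids [32000, 33024)

def tokenize_continuous(values, low=-1.0, high=1.0):
    """Map continuous observations/actions (e.g. joint torques) to discrete token ids."""
    values = np.clip(values, low, high)
    bins = ((values - low) / (high - low) * (DISCRETE_BINS - 1)).astype(int)
    return bins + CONT_OFFSET

def detokenize_continuous(tokens, low=-1.0, high=1.0):
    """Invert the mapping so the model's output tokens can drive a robot arm."""
    bins = np.asarray(tokens) - CONT_OFFSET
    return low + bins / (DISCRETE_BINS - 1) * (high - low)

# A single training sequence can then interleave text tokens, image-patch tokens,
# and action tokens, and one transformer learns to predict the next token
# regardless of which modality it belongs to.
torques = np.array([0.12, -0.80, 0.45])
print(tokenize_continuous(torques))                         # ids in the continuous range
print(detokenize_continuous(tokenize_continuous(torques)))  # ≈ original values, up to quantization error
```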
Nando de Freitas, a DeepMind scientist and professor at the University of Oxford, declared, "It's all about scale now! The game is over!" and argued that simply scaling up (larger models, larger training datasets, and more computing power) can lead to artificial general intelligence (AGI). But which "game" is Mr. de Freitas referring to? And what exactly is the debate about?


The AI argument: strong versus weak AI
Before getting into the specifics of the debate and its implications for society as a whole, it's helpful to take a step back to understand the background.

The term "artificial intelligence" has come to mean many different things over time, but at its most basic level it can be described as the study of intelligent agents: any system that perceives its environment and takes actions that increase its chances of achieving its goals. Whether the agent or machine actually "thinks" is deliberately left out of this definition, since that question has been intensely debated for a very long time. In his well-known 1950 paper "Computing Machinery and Intelligence," which introduced the "imitation game," British mathematician Alan Turing argued that rather than asking whether machines can think, we should focus on "whether or not it is possible for machinery to show intelligent behavior."
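As a toy illustration of that definition (my own sketch, not from the original article), consider an agent that perceives a room's temperature and chooses whichever action best serves its goal:

```python
import random

class ThermostatAgent:
    """A trivially simple intelligent agent: its goal is to keep a room near a target temperature."""

    def __init__(self, target: float):
        self.target = target

    def act(self, observed_temperature: float) -> str:
        # Perceive the environment, then pick the action most likely to reach the goal.
        if observed_temperature < self.target - 0.5:
            return "heat"
        if observed_temperature > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21.0)
temperature = 18.0
for _ in range(5):
    action = agent.act(temperature)
    # Crude environment model: the chosen action nudges the temperature.
    temperature += {"heat": 1.0, "cool": -1.0, "idle": random.uniform(-0.2, 0.2)}[action]
    print(f"{action:>4}  ->  {temperature:.1f} C")
```

Nothing in this loop requires the agent to "think"; it only has to behave in a way that furthers its goal, which is exactly Turing's point.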
This distinction between behaving intelligently and actually thinking gives rise to two main conceptual AI subfields: weak and strong AI. Strong AI, also known as artificial general intelligence (AGI) or general AI, is a theoretical form of AI in which a machine would be as intelligent as a human. It would possess a self-aware consciousness and the capacity to learn, solve problems, and plan for the future. This is the most ambitious definition of AI, often called the "holy grail of AI," but for the time being it remains purely theoretical. Traditionally, strong AI has been pursued through symbolic AI, in which a machine builds an internal, abstract symbolic representation of the "world" and applies rules or reasoning to it in order to learn and make decisions.

Because internal or symbolic representations of the world quickly become unmanageable when scaled up, research in this area has had limited success in resolving real-world issues.
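To make the symbolic approach concrete, here is a toy sketch (an illustrative assumption of mine, not a production system) of rule-based forward chaining; it also hints at why hand-crafted facts and rules multiply quickly as the "world" being modelled grows:

```python
# Facts are symbols, rules map sets of premises to a conclusion, and forward
# chaining repeatedly applies rules until no new facts can be derived.
facts = {"is_raining", "has_umbrella"}
rules = [
    ({"is_raining", "has_umbrella"}, "stays_dry"),
    ({"is_raining"}, "ground_is_wet"),
    ({"ground_is_wet"}, "roads_are_slippery"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# {'is_raining', 'has_umbrella', 'stays_dry', 'ground_is_wet', 'roads_are_slippery'}
```

Every new concept the system must reason about requires someone to write more symbols and rules by hand, which is precisely what becomes unmanageable at real-world scale.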

Weak AI, also known as "narrow AI," is a less ambitious approach to artificial intelligence that relies on human intervention to define the parameters of its learning algorithms and provide the relevant training data to guarantee accuracy. This approach focuses on performing a specific task, such as answering questions based on user input, recognizing faces, or playing chess.

Face recognition algorithms, natural language models like OpenAI's GPT-n, virtual assistants like Siri or Alexa, Google/DeepMind's chess-playing program AlphaZero, and, to a certain extent, driverless cars are among the well-known examples of weak AI.

Weak AI has typically been achieved with artificial neural networks, systems inspired by the biological neural networks that make up animal brains. They are collections of interconnected nodes, or neurons, each with an activation function that determines its output based on the weights of its incoming connections and the data presented at the "input layer." The network is "trained" by exposing it to many data examples and "backpropagating" the output loss in order to adjust the connection weights until the output becomes useful or correct.
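As a minimal sketch of that training loop (illustrative only, using a tiny network and plain NumPy rather than any particular framework), the following code learns the XOR function by backpropagating the output error to adjust the connection weights:

```python
import numpy as np

# Toy 2-layer network learning XOR with plain gradient descent (illustrative only).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # data at the input layer
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each neuron's activation is decided by its weights and activation function.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagate the output loss (squared error) to adjust the connection weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0] after training
```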

There is also a third, emerging subfield known as "neuro-symbolic AI," which combines rule-based symbolic AI with neural networks. Although it is conceptually appealing and appears closer to how our biological brains work, it is still in its infancy.
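One hypothetical flavour of what this combination could look like is a learned model supplying soft perceptual judgements while explicit rules reason over them; the function names and thresholds below are invented purely for illustration:

```python
import numpy as np

def neural_perception(image: np.ndarray) -> dict:
    """Stand-in for a trained network: returns soft beliefs about the scene."""
    # A real system would run a CNN or transformer here; the scores are faked.
    return {"light_is_red": 0.92, "pedestrian_present": 0.10}

RULES = [
    # (premise symbol, belief threshold, conclusion)
    ("light_is_red", 0.5, "must_stop"),
    ("pedestrian_present", 0.5, "must_stop"),
]

def symbolic_decision(beliefs: dict) -> set:
    """Apply hard, human-readable rules on top of the network's soft outputs."""
    conclusions = set()
    for symbol, threshold, conclusion in RULES:
        if beliefs.get(symbol, 0.0) >= threshold:
            conclusions.add(conclusion)
    return conclusions

beliefs = neural_perception(np.zeros((64, 64, 3)))
print(symbolic_decision(beliefs))   # {'must_stop'}
```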

Is scale really everything?
The central question in the current debate is whether, given sufficient scale, AI and machine learning models can achieve artificial general intelligence (AGI), making symbolic AI unnecessary altogether. Is there more to be discovered and developed in AI algorithms and models, or is the remaining challenge simply one of hardware scaling and optimization?

It would appear that Tesla is also adopting the Google/DeepMind perspective. The Tesla Bot, also known as Optimus, was unveiled at Tesla's Artificial Intelligence (AI) Day event in 2021. This general-purpose humanoid robot will be controlled by the same AI system Tesla is developing for its advanced driver assistance system. Interestingly, Elon Musk, the company's CEO, has stated that he expects the robot to be ready for production by 2023 and that Optimus will eventually be able to do "anything that humans don't want to do," implying that he anticipates something close to general-purpose AI being a reality by then.

However, other AI researchers, most notably Yann LeCun, Chief AI Scientist at Meta and NYU professor, who prefers the less ambitious term human-level AI (HLAI), believe that many problems remain to be solved and that merely increasing computational power will not be enough to address them; alternative software paradigms or models will be required.

The machine's inability to predict how to influence the world through its actions, to deal with the world's inherent unpredictability, to predict the effects of sequences of actions so that it can reason and plan, and to represent and predict in abstract spaces are among these issues. In the end, the question is whether gradient-based learning with our existing artificial neural networks is sufficient to achieve this, or whether additional breakthroughs are required.
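One way to picture the "predict, reason, and plan" requirement is a model-based planning loop, in which the agent uses a world model to simulate candidate action sequences before committing to one. The sketch below is a deliberately simplified illustration with a hand-coded stand-in for the learned model, not any lab's actual method:

```python
import itertools

def world_model(state: float, action: float) -> float:
    """Stand-in for a learned dynamics model: predicts the next state."""
    return state + action  # a real model would be learned from experience

def plan(state: float, goal: float, horizon: int = 3) -> tuple:
    """Search over short action sequences and keep the one predicted to end nearest the goal."""
    actions = (-1.0, 0.0, 1.0)
    best_seq, best_err = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        s = state
        for a in seq:
            s = world_model(s, a)   # predict the effect of the action, don't act yet
        err = abs(goal - s)
        if err < best_err:
            best_seq, best_err = seq, err
    return best_seq

# Prints a 3-step sequence whose predicted end state reaches the goal, e.g. (0.0, 1.0, 1.0).
print(plan(state=0.0, goal=2.0))
```

The open question is whether an accurate, general world model of this kind can emerge from gradient-based learning alone, at whatever scale, or whether it needs new ideas.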


Because deep learning models can make "key features" emerge from the data without human intervention, it is tempting to believe that, given just more data and computational power, they will also uncover and resolve the remaining issues. To use a straightforward analogy, designing and building ever faster and more powerful cars would not make them fly; before we can solve the problem of flying, we need to understand aerodynamics.

The impressive progress made with deep learning AI models raises the question of whether the optimism of weak AI practitioners is merely a case of the "law of the instrument," or Maslow's hammer: "if the only tool you have is a hammer, you tend to see every problem as a nail."

Is the game over or should we team up?
Fundamental research, such as that conducted by Google/DeepMind, Meta, or Tesla, typically sits uncomfortably within private corporations: despite their large budgets, these businesses tend to prioritize competition and speed to market over academic collaboration and long-term thinking.

It's possible that achieving AGI requires both approaches rather than a contest between the proponents of strong and weak AI. A comparison with the human brain, which is capable of both conscious and unconscious learning, is not far-fetched. Our cerebellum is responsible for maintaining posture, balance, and equilibrium, as well as for coordinating the movements associated with motor skills, particularly those involving the hands and feet; it accounts for approximately 10% of the brain's volume yet contains over 50% of its neurons. We don't really know how we do these things, because they happen quickly and without conscious thought. Our conscious brain, on the other hand, can deal with abstract concepts, plan, and predict, albeit at a much slower pace. Moreover, professional sportsmen and sportswomen excel at consciously acquiring skills and then automating them through repetition and training.

If nature has evolved the human brain in this way over hundreds of thousands of years, why would an all-encompassing artificial intelligence system use a single model or algorithm?

Impact on society and investors
Achieving AGI would have huge repercussions for our society, just as the wheel, the steam engine, electricity, and the computer did, regardless of which AI technology is used to achieve it. Our capitalist economic model would have to change if businesses could completely replace their human workforces with robots, or else there would be social unrest.

Having said that, the current debate is probably just corporate PR, and AGI is likely further away than we currently believe, so we have time to work out what it could mean. What is evident, however, is that in the shorter term the pursuit of AGI will continue to drive investment in particular technology areas, such as software and semiconductors.

The success of specific use cases within the weak AI framework is putting more and more strain on the capabilities of our current hardware. For instance, the well-known Generative Pre-trained Transformer 3 (GPT-3) model that OpenAI introduced in 2020 has 175 billion parameters, takes months to train, and can already write original prose with a fluency comparable to that of a human. It can be argued that many currently available semiconductor products, such as CPUs, GPUs, and FPGAs, can compute deep learning algorithms more or less effectively. However, as the models grow, their performance deteriorates and bespoke designs optimized for AI workloads become necessary. Leading cloud service providers such as Amazon, Alibaba, Baidu, and Google, as well as Tesla and a number of semiconductor start-ups including Cambricon, Cerebras, Esperanto, Graphcore, Groq, Mythic, and SambaNova, have taken this approach.
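Some back-of-the-envelope arithmetic (my own, based on the publicly stated 175-billion-parameter count and typical precision and optimizer-state assumptions) helps explain why such models outgrow general-purpose hardware:

```python
# Rough sizing of a 175-billion-parameter model (illustrative assumptions, not vendor figures).
params = 175e9

bytes_per_param_fp16 = 2           # half-precision weights
weights_gb = params * bytes_per_param_fp16 / 1e9
print(f"fp16 weights alone: ~{weights_gb:.0f} GB")            # ~350 GB

# Training with an Adam-style optimizer keeps extra state per parameter
# (fp32 master weights, gradients, and two moment estimates), roughly 16 bytes each.
training_state_gb = params * 16 / 1e9
print(f"rough training state: ~{training_state_gb:.0f} GB")   # ~2800 GB

gpu_memory_gb = 80                 # memory of a single high-end accelerator card
print(f"cards needed just to hold training state: ~{training_state_gb / gpu_memory_gb:.0f}")
```

Numbers at this scale make memory capacity, interconnect bandwidth, and power efficiency the binding constraints, which is exactly what the custom AI accelerators listed above are designed around.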