

Sunday, 3 October 2021

Abductive Inference & the Future Path of #AI

The Myth of Artificial Intelligence

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.

Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain.

But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson’s new book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, discusses how widely publicized misconceptions about intelligence and inference have led AI research down narrow paths that are limiting innovation and scientific discoveries.

And unless scientists, researchers, and the organizations that support their work change course, Larson warns, they will be doomed to “resignation to the creep of a machine-land, where genuine invention is sidelined in favor of futuristic talk advocating current approaches, often from entrenched interests.”

The myth of artificial intelligence

The Myth of Artificial Intelligence, by Erik J. Larson

From a scientific standpoint, the myth of AI assumes that we will achieve artificial general intelligence (AGI) by making progress on narrow applications, such as classifying images, understanding voice commands, or playing games. But the technologies underlying these narrow AI systems do not address the broader challenges that must be solved for general intelligence capabilities, such as holding basic conversations, accomplishing simple chores in a house, or other tasks that require common sense.

“As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking the low-hanging fruit,” Larson writes.

The cultural consequence of the myth of AI is ignoring the scientific mystery of intelligence and endlessly talking about ongoing progress on deep learning and other contemporary technologies. This myth discourages scientists from thinking about new ways to tackle the challenge of intelligence.

“We are unlikely to get innovation if we choose to ignore a core mystery rather than face up to it,” Larson writes. “A healthy culture for innovation emphasizes exploring unknowns, not hyping extensions of existing methods… Mythology about inevitable success in AI tends to extinguish the very culture of invention necessary for real progress.”

Deductive, inductive, and abductive inference


You step out of your home and notice that the street is wet. Your first thought is that it must have been raining. But it’s sunny and the sidewalk is dry, so you immediately rule out rain. As you look to the side, you see a road-washing tanker parked down the street. You conclude that the street is wet because the tanker washed it.

This is an example of “inference,” the act of going from observations to conclusions, and is the basic function of intelligent beings. We’re constantly inferring things based on what we know and what we perceive. Most of it happens subconsciously, in the background of our minds, without focused, direct attention.

“Any system that infers must have some basic intelligence, because the very act of using what is known and what is observed to update beliefs is inescapably tied up with what we mean by intelligence,” Larson writes.  

AI researchers base their systems on two types of inference machines: deductive and inductive. Deductive inference uses prior knowledge to reason about the world. This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI. Engineers create symbolic systems by endowing them with a predefined set of rules and facts, and the AI uses this knowledge to reason about the data it receives.
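To make the rule-and-fact approach concrete, here is a minimal Python sketch of deductive, forward-chaining inference. The facts, rules, and predicate names are invented for illustration, not taken from any historical system:

```python
# Minimal sketch of deductive, rule-based inference: facts and if-then
# rules are hand-written, and the engine derives new facts until
# nothing more follows. All facts/rules here are made up.

facts = {"street_is_wet", "sky_is_clear"}

# Each rule: if all premises are known facts, add the conclusion.
rules = [
    ({"sky_is_clear"}, "it_is_not_raining"),
    ({"street_is_wet", "it_is_not_raining"}, "street_was_washed"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new fact can be deduced."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'street_is_wet', 'sky_is_clear', 'it_is_not_raining', 'street_was_washed'}
```

The brittleness Larson describes is visible even here: the system can only ever conclude what its hand-written rules anticipate.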

Inductive inference, which has gained more traction among AI researchers and tech companies in the past decade, is the acquisition of knowledge through experience. Machine learning algorithms are inductive inference engines. An ML model trained on relevant examples will find patterns that map inputs to outputs. In recent years, AI researchers have used machine learning, big data, and advanced processors to train models on tasks that were beyond the capacity of symbolic systems.
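By contrast, here is a toy sketch of inductive inference: instead of hand-written rules, a model generalizes an input-to-output mapping from labeled examples. The dataset is made up, and scikit-learn is assumed to be installed:

```python
# Toy illustration of inductive inference: learn a mapping from
# examples rather than from rules. The data is invented.
from sklearn.tree import DecisionTreeClassifier

# Features: [rained_recently, tanker_seen_nearby]; label: street wet?
X = [[1, 0], [0, 1], [0, 0], [1, 1], [0, 0], [1, 0]]
y = [1, 1, 0, 1, 0, 1]  # 1 = wet, 0 = dry

model = DecisionTreeClassifier().fit(X, y)

# The model generalizes from past observations to a new case.
print(model.predict([[0, 1]]))  # [1]: tanker nearby -> wet
```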

A third type of reasoning, abductive inference, was first introduced by American scientist Charles Sanders Peirce in the 19th century. Abductive inference is the cognitive ability to come up with intuitions and hypotheses, to make guesses that are better than random stabs at the truth.

American scientist Charles Sanders Peirce proposed abductive inference in the 19th century. Source: New York Public Library, Public Domain

For example, there can be numerous reasons for the street to be wet (including some that we haven’t directly experienced before), but abductive inference enables us to select the most promising hypotheses, quickly eliminate the wrong ones, look for new ones and reach a reliable conclusion. As Larson puts it in The Myth of Artificial Intelligence, “We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible.”

Abductive inference is what many refer to as “common sense.” It is the conceptual framework within which we view facts or data and the glue that binds the other types of inference together. It enables us to focus at any moment on what’s relevant among the vast store of information in our minds and the flood of data arriving through our senses.
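A deliberately crude Python sketch shows what hypothesis selection for the wet-street example might look like. Note that this only caricatures abduction: the hypothesis space and the plausibility scores are fixed in advance, which is precisely the limitation Larson’s argument turns on. All names and numbers here are invented:

```python
# Crude sketch of "inference to the best explanation" for the
# wet-street example. Real abduction is open-ended; here every
# hypothesis and score is enumerated up front.

hypotheses = {
    "it_rained":          {"prior": 0.60, "explains": {"cloudy_sky", "wet_sidewalk"}},
    "tanker_washed_road": {"prior": 0.10, "explains": {"tanker_nearby"}},
    "water_main_burst":   {"prior": 0.05, "explains": {"flooding"}},
}

# Evidence that rules a hypothesis out entirely.
contradicts = {"it_rained": {"sunny_sky", "dry_sidewalk"}}

observations = {"sunny_sky", "dry_sidewalk", "tanker_nearby"}

def best_explanation(hypotheses, observations):
    """Discard contradicted hypotheses, then rank the survivors."""
    viable = {
        name: h for name, h in hypotheses.items()
        if not (contradicts.get(name, set()) & observations)
    }
    # Score = prior plausibility + how much observed evidence it explains.
    return max(viable, key=lambda n: viable[n]["prior"]
               + len(viable[n]["explains"] & observations))

print(best_explanation(hypotheses, observations))  # tanker_washed_road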

The problem is that the AI community hasn’t paid enough attention to abductive inference.

AI and abductive inference

Abduction entered the AI discussion with attempts at Abductive Logic Programming in the 1980s and 1990s, but those efforts were flawed and later abandoned. “They were reformulations of logic programming, which is a variant of deduction,” Larson told TechTalks.

Erik J. Larson, author of “The Myth of Artificial Intelligence”

Abduction got another chance in the 2010s with the rise of Bayesian networks, inference engines that try to compute causality. But like the earlier approaches, the newer ones shared the flaw of not capturing true abduction, Larson said, adding that Bayesian and other graphical models “are variants of induction.” In The Myth of Artificial Intelligence, he refers to them as “abduction in name only.”
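To make “variants of induction” concrete, here is a pure-Python Bayes’-rule calculation on the wet-street scenario, with invented probabilities. It updates beliefs given evidence, but the candidate causes must still be enumerated up front, which is the gap Larson points to:

```python
# Bayes' rule on the wet-street scenario, with made-up numbers.
# This is what Bayesian-network-style inference does: it reweights
# a fixed set of hypotheses; it never proposes a new one.

p_rain = 0.30                # prior P(rain)
p_tanker = 0.05              # prior P(tanker washed the street)
p_wet_given_rain = 0.95      # likelihood P(wet | rain)
p_wet_given_tanker = 0.99    # likelihood P(wet | tanker)
p_wet_given_neither = 0.01

# Total probability of the evidence (causes treated as exclusive
# for simplicity).
p_wet = (p_wet_given_rain * p_rain
         + p_wet_given_tanker * p_tanker
         + p_wet_given_neither * (1 - p_rain - p_tanker))

# Posteriors via Bayes' rule.
print("P(rain | wet)   =", p_wet_given_rain * p_rain / p_wet)
print("P(tanker | wet) =", p_wet_given_tanker * p_tanker / p_wet)
```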

For the most part, the history of AI has been dominated by deduction and induction.

“When the early AI pioneers like [Allen] Newell, [Herbert] Simon, [John] McCarthy, and [Marvin] Minsky took up the question of artificial inference (the core of AI), they assumed that writing deductive-style rules would suffice to generate intelligent thought and action,” Larson said. “That was never the case, really, as should have been earlier acknowledged in discussions about how we do science.”

For decades, researchers tried to expand the powers of symbolic AI systems by providing them with manually written rules and facts. The premise was that if you endow an AI system with everything that humans know, it will be able to act as smartly as humans. But pure symbolic AI has failed for various reasons. Symbolic systems can’t acquire and add new knowledge, which makes them rigid. Creating symbolic AI becomes an endless chase of adding new facts and rules, only to find the system making new mistakes that it can’t fix. And much of our knowledge is implicit and cannot be expressed in rules and facts and fed to symbolic systems.

“It’s curious here that no one really explicitly stopped and said ‘Wait. This is not going to work!’” Larson said. “That would have shifted research directly towards abduction or hypothesis generation or, say, ‘context-sensitive inference.’”

In the past two decades, with the growing availability of data and compute resources, machine learning algorithms—especially deep neural networks—have become the focus of attention in the AI community. Deep learning technology has unlocked many applications that were previously beyond the limits of computers. And it has attracted interest and money from some of the wealthiest companies in the world.

“I think with the advent of the World Wide Web, the empirical or inductive (data-centric) approaches took over, and abduction, as with deduction, was largely forgotten,” Larson said.

But machine learning systems also suffer from severe limits, including the lack of causality, poor handling of edge cases, and the need for too much data. And these limits are becoming more evident and problematic as researchers try to apply ML to sensitive fields such as healthcare and finance.

Abductive inference and future paths of AI


Some scientists, including reinforcement learning pioneer Richard Sutton, believe that we should stick to methods that can scale with the availability of data and computation, namely learning and search. For example, as neural networks grow bigger and are trained on more data, the argument goes, they will eventually overcome their limits and lead to new breakthroughs.

Larson dismisses the scaling up of data-driven AI as “fundamentally flawed as a model for intelligence.” While both search and learning can provide useful applications, they are based on non-abductive inference, he reiterates.

“Search won’t scale into commonsense or abductive inference without a revolution in thinking about inference, which hasn’t happened yet. Similarly with machine learning, the data-driven nature of learning approaches means essentially that the inferences have to be in the data, so to speak, and that’s demonstrably not true of many intelligent inferences that people routinely perform,” Larson said. “We don’t just look to the past, captured, say, in a large dataset, to figure out what to conclude or think or infer about the future.”

Other scientists believe that hybrid AI, which brings together symbolic systems and neural networks, holds more promise for dealing with the shortcomings of deep learning. One example is IBM Watson, which became famous when it beat world champions at Jeopardy! More recent proof-of-concept hybrid models have shown promising results in applications where symbolic AI and deep learning alone perform poorly.

Larson believes that hybrid systems can fill in the gaps left by machine-learning-only or rules-only approaches. As a researcher in the field of natural language processing, he is currently working on combining large pre-trained language models like GPT-3 with older work on the semantic web, in the form of knowledge graphs, to create better applications in search, question answering, and other tasks.
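A hypothetical sketch of what such a combination might look like follows; it is not Larson’s actual system. The knowledge graph is a toy dictionary, and query_language_model is a stand-in for a real LM API call:

```python
# Hypothetical sketch: ground a language model's answer in a
# knowledge graph. The graph is a toy dict of (subject, predicate)
# -> object triples; the LM call is a placeholder, not a real API.

knowledge_graph = {
    ("Charles Sanders Peirce", "proposed"): "abductive inference",
    ("abductive inference", "is_a"): "type of reasoning",
}

def retrieve_facts(entity):
    """Pull triples mentioning the entity to ground the answer."""
    return [f"{s} {p} {o}." for (s, p), o in knowledge_graph.items() if s == entity]

def query_language_model(prompt):
    return "<LM completion would go here>"  # placeholder only

def answer(question, entity):
    context = " ".join(retrieve_facts(entity))
    # Facts from the graph constrain what the model generates.
    return query_language_model(f"Context: {context}\nQuestion: {question}\nAnswer:")

print(answer("Who proposed abductive inference?", "Charles Sanders Peirce"))
```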

“But deduction-induction combos don’t get us to abduction, because the three types of inference are formally distinct, so they don’t reduce to each other and can’t be combined to get a third,” he said.

In The Myth of Artificial Intelligence, Larson describes attempts to circumvent abduction as the “inference trap.”

“Purely inductively inspired techniques like machine learning remain inadequate, no matter how fast computers get, and hybrid systems like Watson fall short of general understanding as well,” he writes. “In open-ended scenarios requiring knowledge about the world like language understanding, abduction is central and irreplaceable. Because of this, attempts at combining deductive and inductive strategies are always doomed to fail… The field needs a fundamental theory of abduction. In the meantime, we are stuck in traps.”

The commercialization of AI


The AI community’s narrow focus on data-driven approaches has centralized research and innovation in a few organizations that have vast stores of data and deep pockets. With deep learning becoming a useful way to turn data into profitable products, big tech companies are now locked in a tight race to hire AI talent, driving researchers away from academia by offering them lucrative salaries.

This shift has made it very difficult for non-profit labs and small companies to become involved in AI research.

“When you tie research and development in AI to the ownership and control of very large datasets, you get a barrier to entry for start-ups, who don’t own the data,” Larson said, adding that data-driven AI intrinsically creates “winner-take-all” scenarios in the commercial sector.

The monopolization of AI is in turn hampering scientific research. With big tech companies focusing on creating applications in which they can leverage their vast data resources to maintain the edge over their competitors, there’s little incentive to explore alternative approaches to AI. Work in the field starts to skew toward narrow and profitable applications at the expense of efforts that can lead to new inventions.

“No one at present knows how AI would look in the absence of such gargantuan centralized datasets, so there’s nothing really on offer for entrepreneurs looking to compete by designing different and more powerful AI,” Larson said.

In his book, Larson warns about the current culture of AI, which “is squeezing profits out of low-hanging fruit, while continuing to spin AI mythology.” The illusion of progress on artificial general intelligence can lead to another AI winter, he writes.

But while an AI winter might dampen interest in deep learning and data-driven AI, it can open the way for a new generation of thinkers to explore new pathways. Larson hopes scientists start looking beyond existing methods.

In The Myth of Artificial Intelligence, Larson provides an inference framework that sheds light on the challenges the field faces today and helps readers see through the overblown claims about progress toward AGI or the singularity.

“My hope is that non-specialists have some tools to combat this kind of inevitability thinking, which isn’t scientific, and that my colleagues and other AI scientists can view it as a wake-up call to get to work on the very real problems the field faces,” Larson said.

