5.6 Abduction
Abduction is a form of reasoning where assumptions are made to explain observations. For example, if an agent observes that some light is not working, it can hypothesize about what is happening in the world to explain why the light is not working. An intelligent tutoring system could try to explain why a student gives some answer in terms of what the student does and does not understand.
The term abduction was coined by Peirce (1839-1914) to differentiate this type of reasoning from deduction, which involves determining what logically follows from a set of axioms, and induction, which involves inferring general relationships from examples.
In abduction, an agent hypothesizes what may be true about an observed case. An agent determines what implies its observations - what could be true to make the observations true. Observations are trivially implied by contradictions (as a contradiction logically implies everything), so we want to exclude contradictions from our explanation of the observations.
To formalize abduction, we use the language of Horn clauses and assumables (the same input that was used for proving from contradictions). The system is given
- a knowledge base, KB, which is a set of Horn clauses, and
- a set A of atoms, called the assumables; the assumables are the building blocks of hypotheses.
Instead of adding observations to the knowledge base, observations must be explained.
A scenario of ⟨KB,A⟩ is a subset H of A such that KB∪H is satisfiable. KB∪H is satisfiable if a model exists in which every element of KB and every element of H is true. This happens if no subset of H is a conflict of KB.
An explanation of proposition g from ⟨KB,A⟩ is a scenario that, together with KB, implies g.
That is, an explanation of proposition g is a set H, H⊆A, such that
KB∪H ⊨ g and KB∪H ⊭ false.
A minimal explanation of g from ⟨KB,A⟩ is an explanation H of g from ⟨KB,A⟩ such that no strict subset of H is also an explanation of g from ⟨KB,A⟩ .
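These definitions can be prototyped directly. The following Python sketch is our own illustrative toy, not code from the text: the names Clause, consequences, is_explanation, and minimal_explanations are invented for this sketch. It derives atoms by forward chaining and enumerates subset-minimal sets of assumables that, together with the knowledge base, imply the goals without implying false.

from itertools import combinations

class Clause:
    """A propositional Horn clause head <- body; the head "false" marks an integrity constraint."""
    def __init__(self, head, body=()):
        self.head, self.body = head, tuple(body)

def consequences(kb, atoms):
    """All atoms derivable from `atoms` using `kb`, by bottom-up forward chaining."""
    derived = set(atoms)
    changed = True
    while changed:
        changed = False
        for c in kb:
            if c.head not in derived and all(b in derived for b in c.body):
                derived.add(c.head)
                changed = True
    return derived

def is_explanation(kb, hypothesis, goals):
    """H explains the goals if KB ∪ H implies every goal and does not imply false."""
    derived = consequences(kb, hypothesis)
    return "false" not in derived and all(g in derived for g in goals)

def minimal_explanations(kb, assumables, goals):
    """All subset-minimal explanations of `goals`, found by trying subsets of the
    assumables in increasing size (exponential, but fine for small examples)."""
    found = []
    for size in range(len(assumables) + 1):
        for hyp in combinations(sorted(assumables), size):
            if any(set(f) <= set(hyp) for f in found):
                continue  # a smaller explanation is already contained in hyp
            if is_explanation(kb, hyp, goals):
                found.append(hyp)
    return [set(h) for h in found]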
Consider the following simplified knowledge base and assumables, aimed at explaining symptoms such as coughing and wheezing:
bronchitis ← influenza.
bronchitis ← smokes.
coughing ← bronchitis.
wheezing ← bronchitis.
fever ← influenza.
soreThroat ← influenza.
false ← smokes ∧ nonsmoker.
assumable smokes, nonsmoker, influenza.
If the agent observes wheezing, there are two minimal explanations:
{influenza} and {smokes}
These explanations imply bronchitis and coughing.
If wheezing ∧ fever is observed, there is one minimal explanation:
{influenza}.
The explanation {smokes} is no longer part of a minimal explanation; it does not account for the fever.
Notice how, when wheezing is observed, the agent reasons that it must be bronchitis, and so influenza and smokes are the hypothesized culprits. However, if fever were also observed, the patient must have influenza, so there is no need for the hypothesis of smokes; it has been explained away.
If wheezing ∧ nonsmoker were observed instead, there is one minimal explanation:
{influenza, nonsmoker}
The other explanation of wheezing is inconsistent with being a non-smoker.
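As a usage illustration (assuming the Clause and minimal_explanations helpers from the sketch above), the medical knowledge base can be encoded and queried; the output mirrors the minimal explanations discussed above:

kb = [
    Clause("bronchitis", ["influenza"]),
    Clause("bronchitis", ["smokes"]),
    Clause("coughing", ["bronchitis"]),
    Clause("wheezing", ["bronchitis"]),
    Clause("fever", ["influenza"]),
    Clause("soreThroat", ["influenza"]),
    Clause("false", ["smokes", "nonsmoker"]),   # integrity constraint
]
assumables = {"smokes", "nonsmoker", "influenza"}

print(minimal_explanations(kb, assumables, ["wheezing"]))
# [{'influenza'}, {'smokes'}]
print(minimal_explanations(kb, assumables, ["wheezing", "fever"]))
# [{'influenza'}]  -- smokes does not account for the fever
print(minimal_explanations(kb, assumables, ["wheezing", "nonsmoker"]))
# [{'influenza', 'nonsmoker'}]  -- {smokes, nonsmoker} is inconsistent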
Determining what is going on inside a system based on observations about the behavior is the problem of diagnosis or recognition. In abductive diagnosis, the agent hypothesizes diseases and malfunctions, as well as that some parts are working normally, to explain the observed symptoms. This differs from consistency-based diagnosis in that the designer models faulty behavior in addition to normal behavior, and the observations are explained rather than added to the knowledge base. Abductive diagnosis requires more detailed modeling and gives more detailed diagnoses, because the knowledge base has to be able to actually prove the observations. It also allows an agent to diagnose systems in which there is no normal behavior. For example, in an intelligent tutoring system, by observing what a student does, the tutoring system can hypothesize what the student understands and does not understand, which can guide the action of the tutoring system.
Abduction can also be used for design, in which what is to be explained is a design goal and the assumables are the building blocks of the designs. The explanation is the design. Consistency means that the design is possible. The implication of the design goal means that the design provably achieves the design goal.
Consider the electrical domain of the house wiring example. A user could observe that light l1 is lit or that it is dark. We write rules that axiomatize how the system must be for these observations to be true. Light l1 is lit if it is ok and there is power coming in; it is dark if it is broken or there is no power. The agent can assume that l1 is ok or broken, but not both:
lit_l1 ← live_w0 ∧ ok_l1.
dark_l1 ← broken_l1.
dark_l1 ← dead_w0.
assumable ok_l1.
assumable broken_l1.
false ← ok_l1 ∧ broken_l1.
Wire w0 is live or dead depending on the switch positions and whether the wires coming in are live or dead:
live_w0 ← live_w1 ∧ up_s2 ∧ ok_s2.
live_w0 ← live_w2 ∧ down_s2 ∧ ok_s2.
dead_w0 ← broken_s2.
dead_w0 ← up_s2 ∧ dead_w1.
dead_w0 ← down_s2 ∧ dead_w2.
assumable ok_s2.
assumable broken_s2.
false ← ok_s2 ∧ broken_s2.
The other wires are axiomatized similarly. Some of the wires depend on whether the circuit breakers are okay or broken:
live_w3 ← live_w5 ∧ ok_cb1.
dead_w3 ← broken_cb1.
dead_w3 ← dead_w5.
assumable ok_cb1.
assumable broken_cb1.
false ← ok_cb1 ∧ broken_cb1.
For the rest of this example, we assume that the other light and wires are represented analogously.
The outside power can be live or the power can be down:
live_w5 ← live_outside.
dead_w5 ← outside_power_down.
assumable live_outside.
assumable outside_power_down.
false ← live_outside ∧ outside_power_down.
The switches can be assumed to be up or down:
assumable up_s1.
assumable down_s1.
false ← up_s1 ∧ down_s1.
The other switches are represented analogously.
There are two minimal explanations of lit_l1:
{live_outside, ok_cb1, ok_l1, ok_s1, ok_s2, up_s1, up_s2}
{down_s1, down_s2, live_outside, ok_cb1, ok_l1, ok_s1, ok_s2}.
This could be seen in design terms as a way to make sure the light is on: put both switches up or both switches down, and ensure that the light, the switches, and the circuit breaker are working and the outside power is live. It could also be seen as a way to determine what is going on if the agent observed that l1 is lit; one of these two scenarios must hold.
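As an illustration, the l1 part of the axiomatization can be run through the earlier sketch. The rules for wires w1 and w2 below are our reconstruction of the wires the text says are axiomatized similarly, so their exact form is an assumption; with them, the two explanations above are recovered:

# Live rules for the path to l1; the w1 and w2 rules are our reconstruction
# of the wires described as "axiomatized similarly".
kb_elec = [
    Clause("lit_l1", ["live_w0", "ok_l1"]),
    Clause("live_w0", ["live_w1", "up_s2", "ok_s2"]),
    Clause("live_w0", ["live_w2", "down_s2", "ok_s2"]),
    Clause("live_w1", ["live_w3", "up_s1", "ok_s1"]),
    Clause("live_w2", ["live_w3", "down_s1", "ok_s1"]),
    Clause("live_w3", ["live_w5", "ok_cb1"]),
    Clause("live_w5", ["live_outside"]),
    # Integrity constraints: nothing is both ok and broken, up and down, etc.
    Clause("false", ["ok_l1", "broken_l1"]),
    Clause("false", ["ok_s1", "broken_s1"]),
    Clause("false", ["ok_s2", "broken_s2"]),
    Clause("false", ["ok_cb1", "broken_cb1"]),
    Clause("false", ["live_outside", "outside_power_down"]),
    Clause("false", ["up_s1", "down_s1"]),
    Clause("false", ["up_s2", "down_s2"]),
]
assumables_elec = {
    "ok_l1", "broken_l1", "ok_s1", "broken_s1", "ok_s2", "broken_s2",
    "ok_cb1", "broken_cb1", "live_outside", "outside_power_down",
    "up_s1", "down_s1", "up_s2", "down_s2",
}

for expl in minimal_explanations(kb_elec, assumables_elec, ["lit_l1"]):
    print(sorted(expl))
# ['down_s1', 'down_s2', 'live_outside', 'ok_cb1', 'ok_l1', 'ok_s1', 'ok_s2']
# ['live_outside', 'ok_cb1', 'ok_l1', 'ok_s1', 'ok_s2', 'up_s1', 'up_s2']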
There are ten minimal explanations of dark_l1:
{broken_l1}
{broken_s2}
{down_s1, up_s2}
{broken_s1, up_s2}
{broken_cb1, up_s1, up_s2}
{outside_power_down, up_s1, up_s2}
{down_s2, up_s1}
{broken_s1, down_s2}
{broken_cb1, down_s1, down_s2}
{down_s1, down_s2, outside_power_down}
There are six minimal explanations of dark_l1 ∧ lit_l2:
{broken_l1, live_outside, ok_cb1, ok_l2, ok_s3, up_s3}
{broken_s2, live_outside, ok_cb1, ok_l2, ok_s3, up_s3}
{down_s1, live_outside, ok_cb1, ok_l2, ok_s3, up_s2, up_s3}
{broken_s1, live_outside, ok_cb1, ok_l2, ok_s3, up_s2, up_s3}
{down_s2, live_outside, ok_cb1, ok_l2, ok_s3, up_s1, up_s3}
{broken_s1, down_s2, live_outside, ok_cb1, ok_l2, ok_s3, up_s3}
Notice how the explanations cannot include outside_power_down or broken_cb1 because they are inconsistent with the explanation of l2 being lit.
The bottom-up and top-down implementations for assumption-based reasoning with Horn clauses can both be used for abduction. The bottom-up implementation of Figure 5.9 computes, in C, the minimal explanations for each atom. The pruning discussed in the text can also be used. The top-down implementation can be used to find the explanations of any g by generating the conflicts and, using the same code and knowledge base, proving g instead of false. The minimal explanations of g are the minimal sets of assumables collected to prove g such that no subset is a conflict.
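The following sketch illustrates the bottom-up idea in the same illustrative Python style (our own code, not the implementation of Figure 5.9): it propagates, for each atom, the minimal sets of assumables under which the atom can be derived, and then discards any set that contains a conflict, that is, a minimal support of false.

from itertools import product

def bottom_up_explanations(kb, assumables):
    """Map each atom to its minimal explanations, computed bottom-up."""
    # Every assumable is supported by assuming itself.
    support = {a: [frozenset([a])] for a in assumables}

    def add(atom, hyp):
        """Record hyp as a support of atom unless a subset is already recorded."""
        sets = support.setdefault(atom, [])
        if any(s <= hyp for s in sets):
            return False
        sets[:] = [s for s in sets if not hyp <= s] + [hyp]
        return True

    changed = True
    while changed:
        changed = False
        for c in kb:
            body_sets = [support.get(b, []) for b in c.body]
            if any(not s for s in body_sets):
                continue  # some body atom has no support yet
            for combo in product(*body_sets):
                changed |= add(c.head, frozenset().union(*combo))

    # A support set is an explanation only if it is consistent, i.e., it does
    # not contain a conflict (a minimal support of "false").
    conflicts = support.get("false", [])
    return {atom: [set(h) for h in sets
                   if not any(conf <= h for conf in conflicts)]
            for atom, sets in support.items() if atom != "false"}

# On the medical example, bottom_up_explanations(kb, assumables)["wheezing"]
# gives [{'influenza'}, {'smokes'}] (possibly in a different order).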