Case-based reasoning (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems.[1][2]
In everyday life, an auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning. A lawyer who advocates a particular outcome in a trial based on legal precedents, or a judge who creates case law, is using case-based reasoning. So, too, an engineer copying working elements of nature (practicing biomimicry) is treating nature as a database of solutions to problems. Case-based reasoning is a prominent type of analogical problem solving.
It has been argued[by whom?] that case-based reasoning is not only a powerful method for computer reasoning, but also a pervasive behavior in everyday human problem solving; or, more radically, that all reasoning is based on past cases personally experienced. This view is related to prototype theory, which is most deeply explored in cognitive science.

Case-based reasoning has been formalized[clarification needed] for purposes of computer reasoning as a four-step process:[3]
1. Retrieve: Given a target problem, retrieve from memory the cases most relevant to solving it. A case consists of a problem, its solution, and, typically, annotations about how the solution was derived.
2. Reuse: Map the solution from the retrieved case to the target problem. This may involve adapting the solution as needed to fit the new situation.
3. Revise: Having mapped the previous solution to the target situation, test the new solution in the real world (or a simulation) and, if necessary, revise it.
4. Retain: After the solution has been successfully adapted to the target problem, store the resulting experience as a new case in memory.
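The four steps can be organized in code in many ways. The following is a minimal sketch in Python, assuming a flat attribute-value representation of problems and a simple feature-overlap similarity measure; these representational choices, and the caller-supplied adapt and verify hooks, are illustrative assumptions rather than part of the cited formalization.

```python
from dataclasses import dataclass

@dataclass
class Case:
    problem: dict   # attribute -> value description of the problem
    solution: str   # the solution that worked for that problem

def overlap_similarity(a: dict, b: dict) -> float:
    """Fraction of attributes on which two problem descriptions agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 0.0

class CaseBase:
    def __init__(self, cases=()):
        self.cases = list(cases)

    def retrieve(self, target: dict) -> Case:
        # 1. Retrieve: the stored case most similar to the target problem.
        return max(self.cases, key=lambda c: overlap_similarity(c.problem, target))

    def solve(self, target: dict, adapt, verify) -> Case:
        retrieved = self.retrieve(target)
        # 2. Reuse: adapt the retrieved solution to the target problem.
        proposed = adapt(retrieved.solution, target)
        # 3. Revise: test the proposed solution and repair it if needed.
        confirmed = verify(proposed, target)
        # 4. Retain: store the confirmed solution as a new case for future reuse.
        new_case = Case(problem=dict(target), solution=confirmed)
        self.cases.append(new_case)
        return new_case
```

Real CBR systems differ mainly in how each step is realized: richer case representations, domain-specific similarity measures, and adaptation knowledge replace the trivial versions shown here.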
At first glance, CBR may seem similar to the rule-induction algorithms[note 1] of machine learning. Like a rule-induction algorithm, CBR starts with a set of cases or training examples; it forms generalizations of these examples, albeit implicit ones, by identifying commonalities between a retrieved case and the target problem.[4]
If, for instance, a procedure for plain pancakes is mapped to blueberry pancakes, a decision is made to use the same basic batter and frying method, thus implicitly generalizing the set of situations under which the batter and frying method can be used. The key difference, however, between the implicit generalization in CBR and the generalization in rule induction lies in when the generalization is made. A rule-induction algorithm draws its generalizations from a set of training examples before the target problem is even known; that is, it performs eager generalization.
For instance, if a rule-induction algorithm were given recipes for plain pancakes, Dutch apple pancakes, and banana pancakes as its training examples, it would have to derive, at training time, a set of general rules for making all types of pancakes. It would not be until testing time that it would be given, say, the task of cooking blueberry pancakes. The difficulty for the rule-induction algorithm is in anticipating the different directions in which it should attempt to generalize its training examples. This is in contrast to CBR, which delays (implicit) generalization of its cases until testing time – a strategy of lazy generalization. In the pancake example, CBR has already been given the target problem of cooking blueberry pancakes; thus it can generalize its cases exactly as needed to cover this situation. CBR therefore tends to be a good approach for rich, complex domains in which there are myriad ways to generalize a case.
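The contrast between eager and lazy generalization can be made concrete with a toy sketch. The recipe representation and the rule learner below are deliberately simplistic assumptions, intended only to show where the generalization happens in each approach.

```python
# Eager generalization (rule induction): general rules are committed to at
# training time, before any target problem is known.
def induce_rules(training_recipes):
    """Toy rule learner: keep only the steps shared by every training recipe."""
    common = set(training_recipes[0])
    for recipe in training_recipes[1:]:
        common &= set(recipe)
    return common

# Lazy generalization (CBR): nothing is generalized until a target arrives;
# the most similar stored case is then adapted just enough to cover it.
def solve_by_cbr(case_base, target, similarity, adapt):
    best = max(case_base, key=lambda case: similarity(case, target))
    return adapt(best, target)

plain  = {"mix batter", "fry", "serve plain"}
apple  = {"mix batter", "add apple", "fry"}
banana = {"mix batter", "add banana", "fry"}
print(induce_rules([plain, apple, banana]))  # rules fixed before "blueberry" is ever seen
```

The eager learner must decide up front which commonalities are worth keeping; the lazy learner defers that decision until the target problem (here, blueberry pancakes) is actually posed.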
In law, there is often explicit delegation of CBR to courts, recognizing the limits of rule-based reasoning: delay, limited knowledge of future contexts, the limits of negotiated agreement, and so on. While CBR in law and cognitively inspired CBR have long been associated, the former is more clearly an interpolation of rule-based reasoning and judgment, while the latter is more closely tied to recall and process adaptation. The difference is clear in their attitudes toward error and appellate review.
Another name for case-based reasoning in problem solving is the symptomatic strategy. It requires a priori domain knowledge, gleaned from past experience, that establishes connections between symptoms and causes. This knowledge is variously referred to as shallow, compiled, evidential, history-based, or case-based knowledge. It is the strategy most associated with diagnosis by experts. Diagnosis of a problem proceeds as a rapid recognition process in which symptoms evoke appropriate situation categories.[5] An expert knows the cause by virtue of having previously encountered similar cases. Case-based reasoning is the most powerful strategy, and the one used most commonly. However, the strategy will not work on its own for truly novel problems, or where a deeper understanding of what is taking place is sought.
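As a loose illustration of this recognition-style retrieval, the sketch below matches observed symptoms against a small library of previously encountered fault cases. The symptom names and causes are invented for the example and are not drawn from the cited source.

```python
# Hypothetical fault cases for an engine-diagnosis example.
FAULT_CASES = [
    ({"engine_cranks": True, "engine_starts": False, "clicking_noise": True},
     "weak battery or corroded terminals"),
    ({"engine_starts": True, "rough_idle": True, "check_engine_light": True},
     "fouled spark plug or vacuum leak"),
    ({"engine_starts": True, "overheating": True, "coolant_low": True},
     "coolant leak"),
]

def match_score(observed: dict, stored: dict) -> int:
    """Count how many stored symptoms are confirmed by the observation."""
    return sum(observed.get(symptom) == value for symptom, value in stored.items())

def diagnose(observed: dict) -> str:
    """Symptomatic strategy: the observed symptoms evoke the best-matching past case."""
    symptoms, cause = max(FAULT_CASES, key=lambda case: match_score(observed, case[0]))
    return cause

print(diagnose({"engine_cranks": True, "engine_starts": False, "clicking_noise": True}))
```

The limitation discussed above is visible in the sketch: a truly novel fault has no well-matching stored case, so the retrieved cause may be wrong, and nothing in the case library explains why the symptoms arise.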
An alternative approach to problem solving is the topographic strategy, which falls into the category of deep reasoning. With deep reasoning, in-depth knowledge of a system is used. Topography in this context means a description or an analysis of a structured entity, showing the relations among its elements.[6]
Also known as reasoning from first principles,[7] deep reasoning is applied to novel faults when experience-based approaches are not viable. The topographic strategy is therefore linked to a priori domain knowledge that is developed from a more fundamental understanding of a system, possibly using first-principles knowledge. Such knowledge is referred to as deep, causal or model-based knowledge.[8] Hoc and Carlier[9] noted that symptomatic approaches may need to be supported by topographic approaches because symptoms can be defined in diverse terms. The converse is also true: shallow reasoning can be used abductively to generate causal hypotheses, and deductively to evaluate those hypotheses, in a topographic search.
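By contrast with the symptomatic sketch above, a topographic search works from a structural model of the system rather than from remembered cases. The sketch below assumes a simple chain of components, each with a modeled input-output behavior (all component names and behaviors are illustrative assumptions), and localizes a fault at the first point where prediction and observation diverge.

```python
# Toy structural model: each component transforms its input; the topographic
# strategy walks the structure comparing expected and observed behavior.
def model_pump(x):   return x * 2        # expected behavior of the pump
def model_filter(x): return x - 1        # expected behavior of the filter
def model_valve(x):  return max(x, 0)    # expected behavior of the valve

SYSTEM = [("pump", model_pump), ("filter", model_filter), ("valve", model_valve)]

def localize_fault(system, system_input, observed_outputs):
    """Walk the topology; the first component whose predicted output disagrees
    with the observation is the suspected fault location."""
    signal = system_input
    for name, behavior in system:
        predicted = behavior(signal)
        if abs(predicted - observed_outputs[name]) > 1e-9:
            return name
        signal = predicted
    return None  # no discrepancy found

# Example: the filter's measured output deviates from the model's prediction.
print(localize_fault(SYSTEM, 3, {"pump": 6, "filter": 4, "valve": 4}))  # -> "filter"
```

Such a search needs no prior experience of the particular fault, which is why it remains applicable to novel problems, at the cost of requiring an explicit model of the system.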
Critics of CBR[who?] argue that it is an approach that accepts anecdotal evidence as its main operating principle. Without statistically relevant data to back it, the implicit generalization carries no guarantee of correctness. However, all inductive reasoning where data is too scarce for statistical relevance is inherently based on anecdotal evidence.
CBR traces its roots to the work of Roger Schank and his students at Yale University in the early 1980s. Schank's model of dynamic memory[10] was the basis for the earliest CBR systems: Janet Kolodner's CYRUS[11] and Michael Lebowitz's IPP.[12]
Other schools of CBR and closely allied fields emerged in the 1980s, directed at topics such as legal reasoning, memory-based reasoning (a way of reasoning from examples on massively parallel machines), and combinations of CBR with other reasoning methods. In the 1990s, interest in CBR grew internationally, as evidenced by the establishment of an International Conference on Case-Based Reasoning in 1995, as well as European, German, British, Italian, and other CBR workshops[which?].
CBR technology has resulted in the deployment of a number of successful systems, the earliest being Lockheed's CLAVIER,[13] a system for laying out composite parts to be baked in an industrial convection oven. CBR has been used extensively in applications such as the Compaq SMART system[14] and has found a major application area in the health sciences,[15] as well as in structural safety management.
There is recent work[which?][when?] that develops CBR within a statistical framework and formalizes case-based inference as a specific type of probabilistic inference. Thus, it becomes possible to produce case-based predictions equipped with a certain level of confidence.[16] One description of the difference between CBR and induction from instances is that statistical inference aims to find what tends to make cases similar, while CBR aims to encode what suffices to claim similarity.[17][full citation needed]
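The cited statistical formalization is not reproduced here. As a rough illustration of the general idea of attaching a confidence level to a case-based prediction, the sketch below uses similarity-weighted voting among the k most similar cases and reports the winning share of the weight as a crude confidence score; the weighting scheme, the case representation, and all names are assumptions made for the example.

```python
def predict_with_confidence(case_base, target, similarity, k=3):
    """Similarity-weighted vote among the k most similar cases; the winning
    share of the total weight serves as a rough confidence estimate."""
    neighbours = sorted(case_base, key=lambda c: similarity(c["problem"], target),
                        reverse=True)[:k]
    weights = {}
    for case in neighbours:
        w = similarity(case["problem"], target)
        weights[case["solution"]] = weights.get(case["solution"], 0.0) + w
    total = sum(weights.values()) or 1.0
    solution, weight = max(weights.items(), key=lambda kv: kv[1])
    return solution, weight / total  # (prediction, confidence in [0, 1])
```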
An earlier version of the above article was posted on Nupedia.