During a panel at Fair Isaac’s Interact conference last week, a banker from Abbey National in the UK suggested that part of the credit crunch was due to the use of the FICO score. Unlike other panelists, who were former Fair Isaac employees, this gentleman was formerly of Experian! So there was perhaps some friendly rivalry, but his point was a good one. He cited an earlier presentation by the founder of Strategic Analytics that touched on the divergence between FICO scores and the probability of default. The panelist’s key point was that some part of the mortgage crisis could be blamed on credit scores, a point that was first raised in the media last fall.
The FICO score is not a probability.
Fair Isaac people describe the FICO score as a ranking of creditworthiness. And banks rely on the FICO score for pricing and qualification for mortgages. The loan-to-value (LTV) ratio is also critical, but for any two applicants seeking a loan with the same LTV, the one with the better FICO score is more likely to qualify and to receive the better price.
Ideally, a bank’s pricing and qualification criteria would accurately reflect the likelihood of default. The mortgage crisis demonstrates that their assessment, expressed with the FICO score, was wrong. Their probabilities were off.
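To make the distinction between ranking and probability concrete, here is a minimal sketch with purely illustrative numbers (not real data): a score can rank-order borrowers correctly in two different years even while the default probability attached to any given score band shifts dramatically.

```python
# Hypothetical default rates by score band (illustrative numbers only).
# In both years the score rank-orders risk correctly: the lower the score,
# the more defaults. But a lender pricing off the 2003 mapping would badly
# underestimate default probabilities in 2007.
default_rate_2003 = {580: 0.12, 620: 0.08, 660: 0.05, 700: 0.03, 740: 0.01}
default_rate_2007 = {580: 0.24, 620: 0.17, 660: 0.11, 700: 0.07, 740: 0.03}

def rank_orders_correctly(rates):
    """True if default rates fall strictly as the score band rises."""
    bands = sorted(rates)
    return all(rates[a] > rates[b] for a, b in zip(bands, bands[1:]))

print(rank_orders_correctly(default_rate_2003))  # True: ranking holds
print(rank_orders_correctly(default_rate_2007))  # True: ranking still holds
# ...yet the probability implied by a 700 score more than doubled:
print(default_rate_2007[700] / default_rate_2003[700])
```

The score remains a valid ranking throughout; it is the mapping from score to probability of default that moved.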
Ian Ayres, the author of Super Crunchers, gave a keynote at Fair Isaac’s Interact conference in San Francisco this morning. He made a number of interesting points related to his thesis that intuitive decision making is doomed, but I found his remarks on randomized trials much more interesting.
In one of his examples on “The End of Intuition”, a computer program using six variables did a better job of predicting Supreme Court decisions than a team of experts. He focused on the fact that the program “discovered” that one justice would most likely vote against an appeal if it was labeled a liberal decision. By discovered we mean that the decision tree for this justice’s vote had a top-level split on whether the decision was liberal, in which case the program had no further concern for any other information.
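The shape of that tree can be sketched as follows. Only the top-level test on the “liberal” label comes from the example above; the feature names and the secondary splits are hypothetical, included just to show the structure.

```python
def predict_vote(case: dict) -> str:
    """Toy decision tree for one justice's vote (illustrative only)."""
    # Top-level split: a decision labeled liberal settles the prediction
    # outright; no other variable is ever examined.
    if case["liberal_decision"]:
        return "against appeal"
    # Only the remaining cases fall through to the rest of the tree.
    # These secondary splits are invented purely for illustration.
    if case.get("lower_court_disagreement"):
        return "for appeal"
    return "against appeal"

predict_vote({"liberal_decision": True})  # → "against appeal", nothing else consulted
```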
I was prompted to post this by request from Mark Proctor and Peter Lin and in response to recent comments on CEP and backward chaining on Paul Vincent’s blog (with an interesting perspective here).
I hope those interested in artificial intelligence enjoy the following paper. I wrote it while Chief Scientist of Inference Corporation. It was published in the proceedings of the International Joint Conference on Artificial Intelligence over twenty years ago.
The bottom line remains:
- intelligence requires logical inference and, more specifically, deduction
- deduction is not practical without a means of subgoaling and backward chaining
- subgoaling using additional rules to assert goals or other explicit approaches is impractical
- backward chaining using a data-driven rules engine requires automatic generation of declarative goals
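The last two bullets can be illustrated with a toy interpreter. This is only a sketch — ART integrated goal generation into the Rete network itself, which a naive recursion like this does not capture — but it shows the key point: when a rule needs a fact that is not in working memory, the engine generates the subgoal itself, rather than obliging the rule author to write extra goal-asserting rules.

```python
def prove(goal, facts, rules):
    """Establish `goal`, automatically subgoaling on rule premises."""
    if goal in facts:                      # already in working memory
        return True
    for premises, conclusion in rules:
        if conclusion == goal:
            # The engine, not the rule author, generates these subgoals.
            if all(prove(p, facts, rules) for p in premises):
                facts.add(goal)            # cache the derived fact
                return True
    return False

rules = [
    (("wet", "cold"), "icy"),      # wet and cold, therefore icy
    (("raining",), "wet"),         # raining, therefore wet
]
facts = {"raining", "cold"}
print(prove("icy", facts, rules))  # True: the goal for "wet" was generated automatically
```

No rule here asserts a goal explicitly; the subgoal for “wet” arises from the premises of the rule concluding “icy”, which is what declarative, automatically generated goals buy you.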
We implemented this in Inference Corporation’s Automated Reasoning Tool (ART) in 1984. And we implemented it again at Haley a long time ago, in a rules language we called “Eclipse” years before Java.
Regrettably, to the best of my knowledge, ART is no longer available from Inference spin-off Brightware or its further spin-off, Mindbox. To the best of my knowledge, no other business rules engine based on the Rete Algorithm automatically subgoals, including CLIPS, JESS, TIBCO Business Events (see above), Fair Isaac’s Blaze Advisor, and ILOG Rules/JRules. After reading the paper, you may understand that the resulting lack of robust logical reasoning capabilities is one of the reasons that business rules has not matured into a robust knowledge management capability, as discussed elsewhere in this blog.