
June 25, 2008

Zigtag for social semantic tagging

[Image: Zigtag screenshot]

I started to use Radar Networks’ Twine at the invitation of CEO Nova Spivack after writing this earlier this year (also see this). I enjoyed it for a while, especially because a lot of technology folks, particularly the semantic web community, were hooking up with each other on Twine. But I found it tedious to work through beta issues and to be bothered with recommendations or news about who was saying or bookmarking things about what. (I should have turned off the emails sooner!)

I was also disappointed that Twine was taking an apparently folksonomic approach to tagging. It was as if Radar Networks was riding semantic web buzz without really embracing it openly or sharing the momentum that the invite-only community was investing in. That may not sound fair – I believe that there are semantics in the back room, but that’s how it felt and it’s still the way it looks. But probably the worst part is the process that you have to go through to add a bookmark – which is the whole point, of course! (I ultimately sacrificed popup blockers, but the process still seems laborious compared to other alternatives.)

I stumbled across Zigtag almost accidentally while working for a VC firm with a portfolio of semantic startups. What I like most about Zigtag is that they make it obvious that they are building an ontology of tags and encourage users to select semantic tags (i.e., concepts) rather than folksonomic “words”. They also provide tools for managing tags that allow you to move smoothly and incrementally from a folksonomic to a more semantic approach.


The key to Zigtag’s semantic approach is that shared tags are more than strings. They are not only words – they have definitions.

Unfortunately, like Twine, Zigtag’s ontological model remains hidden.

My initial experience with Zigtag resulted in immediate jubilation. The Firefox plug-in works for me. It lets me type in tags with nice completion and recommendations from the tags that others have defined. Within 15 minutes I was writing to compliment Zigtag on a practical, elegant approach to the semantic bookmarking problem. I liked it much better than Twine right off the bat – and despite its bookmarklet, I like Twine a lot! Within a few minutes I had an email from their founder, Reg Cheramy. An hour later we were talking. We talked about his early meeting with Michael Arrington, how his work compares to the bulletin board or discussion forum emphasis in Twine, how he facilitates semantic tagging given a very large ontology and vocabulary, and so on.

Whether Reg took my advice to emphasize groups more or was already headed in that direction is unclear, but Zigtag now has group functionality that seems as good as (and in some ways better than) Twine’s. If you go to the Zigtag web site, you can find groups to join, but unlike Twine’s web site, Zigtag does not recommend groups for you based on your interests. I’m not sure this is a problem, though. Recommendations can be distracting. Nonetheless, if people want recommendations for more than content, it would be a simple step for Zigtag given that they already recommend content that others have bookmarked.

I’m not too concerned with recommendations, even of content, so I cannot comment on Zigtag versus Twine on that front. Generally, there is plenty of RSS and recommendation noise to go around. I prefer the linked approach to finding information rather than searching and I don’t expect recommendations to become excellent in the near term. For more on this, you might want to check out the recent news about Vulcan’s EVRI investment at Webware or ReadWriteWeb.

I like to use Zigtag from the sidebar in Firefox. Actually, I owe Reg additional thanks for, in effect, causing me to abandon Internet Explorer for Firefox. I use it primarily to organize my bookmarks semantically and across machines. For those who want to do the same, Mitch Kapor’s Foxmarks might also be of interest.

I’m fine with finding groups on my own and I like seeing people and what they tend to tag, too. Now that I know they are available on the web site, though, I want them in the sidebar. The fact that they are indirect on the web site, not presented in the sidebar, and not proactively recommended probably explains why there are relatively few (especially compared to Twine). It would be nice, for example, to see groups and people organized along with bookmarks according to how heavily they use tags as I pivot through various facets.

So, on a feature basis, I like Zigtag more than Twine for two primary reasons:

  1. Zigtag’s Firefox plug-in is a great user interface while Twine’s bookmarklet is awkward in every sense that matters to me.
  2. Zigtag emphasizes and leverages shared tags that have clearly documented interpretations, while Twine is too folksonomic.

The picture shown in this post shows that Zigtag already “knows” a lot about semantics. Part of the reason is that they must have a roomful of people watching for undefined tags that people enter. Quite a few of the tags I’ve added have become defined within hours (sometimes minutes) of when I enter them. We’ll see how this scales up, but I like it – a lot.

The key question for both these sites is:

Are you going to share your ontology? If not, why not? If so, when, and why not now?

Note that I am not suggesting they should. But if they have a reason not to, it would be nice to understand that.

It also would be nice to know whether the effort I expend on either site will be lost if they are acquired or I want to switch. That’s how it looks at Twine today.

Zigtag exports my bookmarks. I can get them from or over to Delicious, no problem. But I want their semantics, too. I would really appreciate preservation of the text of my tags, and preferably their semantics. Perhaps my bookmarks could simply be output as OWL referencing their ontology? At least then I could move without losing the effort that I have put into them, whether folksonomic or semantic. I also want to know whether their ontology is any good and, if so, I’d appreciate export to OWL so that I could use my bookmarks for other purposes that interest me.
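To make the ask concrete, here is a minimal sketch, in Python with rdflib, of what such an export could look like. The zigtag namespace, class, and property names are hypothetical placeholders rather than Zigtag’s actual model; the point is only that each tag is exported as a reference to a defined concept instead of a bare string.

```python
# A minimal sketch (not Zigtag's actual export) of semantic bookmark export:
# each bookmark keeps its URL and its tags, and each tag is a reference into
# the provider's ontology rather than a bare string. The namespaces, classes,
# and properties below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

ZIG = Namespace("http://example.org/zigtag/ontology#")  # hypothetical ontology namespace
BM = Namespace("http://example.org/my-bookmarks#")

g = Graph()
g.bind("zig", ZIG)

# A semantic tag is a concept with a definition, not just a word.
g.add((ZIG.SemanticWeb, RDF.type, OWL.Class))
g.add((ZIG.SemanticWeb, RDFS.label, Literal("semantic web")))
g.add((ZIG.SemanticWeb, RDFS.comment,
       Literal("An extension of the web in which data has formal, machine-readable meaning.")))

# A bookmark that references the concept, not the string "semantic web".
g.add((BM.bookmark1, RDF.type, ZIG.Bookmark))
g.add((BM.bookmark1, ZIG.url, URIRef("http://www.w3.org/2001/sw/")))
g.add((BM.bookmark1, ZIG.taggedWith, ZIG.SemanticWeb))

# RDF/XML that another tool could import without losing the semantics.
print(g.serialize(format="xml"))
```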

The background issue of data portability, for bookmarks, social networks, and other personal profile data is huge.

If I had OWL export and an open ontology, I would be less worried about my investment in Zigtag or Twine. Consider Techcrunch’s recent comments:

Zigtag’s biggest obstacle is the slew of other social bookmarking sites already available. The semantic tagging feature is fairly unique, but its appeal is still untested, especially against automated semantic taggers like Twine. Frankly, a lot of people are just going to stick with the simple but effective Delicious interface.

It’s hard to argue with the first sentence, but the second seems harsh. Twine is getting credit that it may not deserve. Also, Zigtag recommends tags, too. But the third sentence is a problem for Zigtag as well as Twine, although the latter benefits from superior PR.

Another question, of course, is how Zigtag and Twine will fare once they try to make money. Radar Networks has stated that Twine will start running ads by the end of the year. Zigtag has made no public announcements. Delicious selectively advertises (e.g., on search pages), perhaps to feed intelligence to Yahoo’s advertising network. The advertisements are so selective that the value of other bookmarking sites may be limited to the intelligence that they provide to established advertising networks. If so, this will hold down valuations and slow innovation. We’ll see, but obviously, I hope not.

May 8, 2008

Probabilities are Better than Scores

Strategic Analytics slide from Fair Isaac Interact on the 2007 mortgage meltdown

During a panel at Fair Isaac’s Interact conference last week, a banker from Abbey National in the UK suggested that part of the credit crunch was due to the use of the FICO score.  Unlike other panelists, who were former Fair Isaac employees, this gentleman was formerly of Experian!  So there was perhaps some friendly rivalry, but his point was a good one.  He cited an earlier presentation by the founder of Strategic Analytics that touched on the divergence between FICO scores and the probability of default.  The panelist’s key point was that some part of the mortgage crisis could be blamed on credit scores, a point that was first raised in the media last fall.

The FICO score is not a probability. 

Fair Isaac people describe the FICO score as a ranking of creditworthiness.  And banks rely on the FICO score for pricing and qualification for mortgages.  The ratio of the loan to value is also critical, but for any two applicants seeking a loan with the same LTV, the one with the better FICO score is more likely to qualify and receive the better price.

Ideally, a bank’s pricing and qualification criteria would accurately reflect the likelihood of default.  The mortgage crisis demonstrates that their assessment, expressed with the FICO score, was wrong.  Their probabilities were off. (more…)
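To make the distinction concrete, here is a minimal sketch with invented numbers: a ranking score only orders borrowers, while the probability of default has to be calibrated against observed outcomes per score band, and that calibration can drift even when the scores do not.

```python
# Hypothetical illustration only: score bands mapped to observed default rates.
# The rates are invented; the point is that the score ranks borrowers, while
# the probability of default must be calibrated from outcomes and can drift.
calibration_2005 = {(300, 579): 0.28, (580, 669): 0.12, (670, 739): 0.05, (740, 850): 0.01}
calibration_2007 = {(300, 579): 0.41, (580, 669): 0.22, (670, 739): 0.11, (740, 850): 0.03}

def probability_of_default(score, calibration):
    """Look up the empirically observed default rate for a score band."""
    for (low, high), rate in calibration.items():
        if low <= score <= high:
            return rate
    raise ValueError("score out of range")

# The same score implies very different risk once the calibration diverges.
print(probability_of_default(680, calibration_2005))  # 0.05
print(probability_of_default(680, calibration_2007))  # 0.11
```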

April 29, 2008

Super Crunchers: predictive analytics is not enough

Ian Ayres, the author of Super Crunchers, gave a keynote at Fair Isaac’s Interact conference in San Francisco this morning.   He made a number of interesting points related to his thesis that intuitive decision making is doomed.   I found his points on random trials much more interesting, however.

In one of his examples on “The End of Intuition”, a computer program using six variables did a better job of predicting Supreme Court decisions than a team of experts.  He focused on the fact that the program “discovered” that one justice would most likely vote against an appeal if it was labeled a liberal decision.    By discovered we mean that a decision tree for this justice’s vote had a top level decision as to whether the decision was liberal, in which case the program had no further concern for any other information.  (more…)

April 18, 2008

A Common Upper Ontology for Advanced Placement tests

I have previously written about the lack of a common upper ontology in the semantic web and commercial software markets (e.g., business rules).  For example, the lack of understanding of time limits the intelligence and ease of use of software in business process management (BPM) and complex event processing (CEP).  The lack of understanding of money limits the intelligence and utility of business rules management systems (BRMS) in financial services, the capital markets, and enterprise decision management (EDM).  And, more fundamentally, understanding time and money (among other things, such as location, which includes distance) requires a core understanding of amounts.

The core principle here is that software needs to have a common core of understanding that makes sense to most people and across almost every application.  These are the concepts of Pareto’s 80/20 Principle.  A concept like building could easily be out, but concepts like money and time (and whatever it takes to really understand money and time) are in.  Location, including distance, is in.  Luminosity could be out, but probably not if color is in.  Charge and current could be out, but not if electricity or magnetism is in.  The cutoff is less scientific than practical, but what is in has to be deeply consistent and completely rational (i.e., logically rigorous).[2] (more…)

April 16, 2008

Real AI for Games

Dave Mark’s post on Why Not More Simulation in Game AI? and the comments it elicited are right on the money about the correlation between lifespan and intelligence of supposedly intelligent adversaries in first person shooter (FPS) games.  It is extremely refreshing to hear advanced gamers agreeing that more intelligent, longer-lived characters would keep a game more interesting and engaging than current FPS games do.  This is exactly consistent with my experience with one of my employers, who delivers intelligent agents for the military.  The military calls them “computer generated forces” (CGFs).  The idea is that these things need to be smart and human enough to constitute a meaningful adversary for training purposes (i.e., “serious games”).  Our agents fly fixed-wing and rotary-wing aircraft or animate special operations forces (SOFs) on the ground.  (They even talk – with humans – over the radio.  I love that part.  It makes them seem so human.) (more…)

The Semantic Arms Race: Facebook vs. Google

As I discussed in Over $100m in 12 months backs natural language for the semantic web, Radar Networks’ Twine is one of the more interesting semantic web startups.  Their founder, Nova Spivack, is funded by Vulcan and others to provide “interest-driven [social] networking”.  I’ve been participating in the beta program at modest bandwidth for a while.  Generally, Nova’s statements about where they are and where they are going are fully supported by what I have experienced.  There are obvious weaknesses that they are improving.  Overall, the strategy of gradually bootstrapping functionality and content by controlling the ramp-up in users from a clearly alpha-stage implementation to what is still not quite beta (in my view) seems perfect.

Recently, Nova recorded a short video in which he makes three short-term predictions:

  1. Yahoo’s indexing of RDF will start the Semantic Web 3.0 arms race involving Google and Microsoft.
  2. The web will transition from pages to linked data. 
  3. Facebook “has to compete” with Google.

Personally, I liked Nova’s “the web becomes a database” comment more than the reiteration of Berners-Lee’s linked data.  The notion of the entire web being a database is the right perspective on the semantic web (i.e., RDF), in my view.  Linked data is boring (try the Tabulator if linked data excites you).  The action (and the opportunity) is in doing something with it!  When asked about ten years out, however, Nova displayed more of his deep insight and vision.  (See below.)  The truth is, beyond his first prediction, Nova was a little on the spot in the video.  (See for yourself.)

I love the pithy #3 that he decided to throw in there.  He did not invent that on the spot but found his legs just before being asked about the longer-term vision.  It makes sense, of course.  Google is attacking with OpenSocial (as is the rest of the world, including all the bookmarking sites and even Nova’s Twine).  Facebook has to shift direction, and the only target big enough given its size is search and advertising.

In his longer term vision he mentions the intelligent web that reasons and helps make decisions.  

This is where the battleground is for artificial intelligence and Semantic Web 4.0 (his term for the 4th decade of the web starting circa 2020).

Personally, I think natural language should have been in his first three.  Powerset will demonstrate that, and all the action around Reuters/ClearForest/Calais (which he mentions and expects Google to compete with) indicates that natural language is critical to populating the semantic web (of course, we have the database approach of DBpedia and Freebase, too).  In general, people are not going to tag sentences or paragraphs.  Machines will.  The only RDF people are going to add by hand is page-level meta-tags for search engine optimization, given Yahoo’s move (and the expected response from Google that Nova mentions).

Certainly, natural language understanding is a prerequisite for the Semantic Web 4.0.  We will be talking more and typing less long before then.

Learning from the Future with Nova Spivack from Maarten on Vimeo.

April 15, 2008

Adaptive Decision Management

In this article I hope you will learn about the future of predictive analytics in decision management and how tighter integration between rules and learning is being developed to adaptively improve diagnostic capabilities, especially in maximizing profitability and detecting adversarial conduct such as fraud, money laundering, and terrorism.

Business Intelligence

[Image: example business performance dashboard (hr-dashboard.jpg)]

Visualizing business performance is obviously important, but improving business performance is even more important.  A good view of operations, such as this nice dashboard[1], helps management see the forest (and, with good drill-down, some interesting trees). 

With good visualization, management can gain insights into how to improve business processes, but if the view does not include a focus on outcomes, improvement in operational decision making will be relatively slow in coming.

Whether or not you use business intelligence software to produce your reports or present dashboards, however, you can improve your operational decision management by applying statistics and other predictive analytic techniques to discover hidden correlations between what you know before a decision and what you learn afterward, improving your decision making over time.
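As a minimal sketch of that feedback loop, assuming you have logged what was known at decision time along with the eventual outcomes (the data and field meanings below are invented), even a simple model fit to the history can surface the hidden correlations:

```python
# Minimal sketch: fit a simple model to logged decisions and outcomes to find
# correlations between pre-decision facts and post-decision results.
# The data and its meaning (amount, customer tenure) are invented.
from sklearn.linear_model import LogisticRegression

# What was known before each decision: [transaction amount, customer tenure in years].
X = [[120, 1], [950, 4], [300, 2], [875, 7], [60, 1], [400, 5]]
# What was learned afterwards: 1 = good outcome, 0 = bad outcome.
y = [1, 0, 1, 1, 1, 0]

model = LogisticRegression().fit(X, y)

# The coefficients expose which pre-decision facts correlate with outcomes,
# and predict_proba supplies a probability to inform the next decision.
print(model.coef_)
print(model.predict_proba([[500, 3]]))
```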

This has become known as decision management, thanks to Fair Isaac Corporation, but not until after they acquired HNC (Hecht-Nielsen Corporation).

Enterprise Decision Management

[Image: enterprise decision management diagram (decisionmanagement.jpg)]

HNC pioneered the use of predictive analytics to optimize decision making.  Dr. Hecht-Nielsen formed the company in 1986 to apply neural network technology to predicting fraud.  The resulting application (perhaps it is more of a tool) is called Falcon.  It works.

In 2002, Fair Isaac acquired HNC (for roughly $800,000,000 in stock) to pursue a “common strategic vision for the growth of the analytics and decision management technology market”.  But shortly before the merger, HNC had acquired Blaze Software from Brokat for a song in the wake of the dot-com bust, just a month before 9/11.  This gave HNC not only great learning technology but, with a business rules management system (BRMS), the opportunity to play in broader business process management (BPM), including underwriting and rating (which is highly regulated), for example.

Of course, the business rules market has since become fairly mainstream and closely related to governance, risk and compliance (GRC), all of which were beyond the point decision making capabilities of either HNC or Fair Isaac before both these transactions.

Once Fair Isaac had predictive and rule technology under one roof, bright employees, such as James Taylor, coined “Enterprise Decision Management”, or EDM for short.

Predictive Analytic Sweet Spots

Before it merged with Fair Isaac, HNC’s machine learning technology was successful (meaning it was saving tons of money, not just an application or two) in each of the following business to consumer (B2C) application areas:

  • Credit card fraud
  • Workmen’s compensation fraud
  • Property and casualty fraud
  • Medical insurance fraud

Clearly fraud, across insurance and financial services, is a sweet spot for decision management.  Today, that includes money laundering and, in general, any form of deceit, including adversarial forms, such as those involving terrorism.

HNC also moved into retail and other B2C areas, including:

  • Targeting direct marketing campaigns
  • Customer relationship management (CRM)
  • Inventory management

Some of the specific areas in marketing and CRM included:

  • “Up-selling” (i.e., predicting who might buy something better – and more expensive)
  • “Cross-selling” (i.e., predicting which customers might buy something else)
  • Loyalty (e.g., customer retention and increasing share of wallet)
  • Profitable customer acquisition (e.g., reducing “churn”)

The inventory applications included:

  • Merchandizing and price optimization
  • SKU-level forecasting, allocation and replenishment

Predictive Analytic Challenges

The principal problem with predictive analysis is the care and feeding of the neural network or the business intelligence software.  This involves formulating models, running them against example input data with known outcomes, and examining the results.  For the most part, this is the province of statisticians or artificial intelligence folk.

A secondary challenge involves the gap between the output of a predictive model and the actual decision.  A predictive model generally outputs a continuous score rather than a discrete decision.  To make a decision, a threshold is generally applied to this score. 

  1. Yes or no questions are answered by applying a threshold to a score produced by a formula or neural network to determine “true” or “false”.
  2. Multiple choice questions are answered using a predictive model per choice and choosing the one with the highest score.
  3. More complex decisions are answered as above using a predictive model that combines the scores produced by other predictive models.
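In code, those three patterns are almost trivial, which is part of the point: the gap is between a continuous score and a discrete decision. The thresholds, weights, and scores below are invented for illustration.

```python
# Minimal illustration of turning continuous scores into discrete decisions.
# Thresholds, weights, and scores are invented.

def yes_no(score, threshold=0.6):
    """1. Apply a threshold to a single model's score."""
    return score >= threshold

def multiple_choice(scores_by_choice):
    """2. One model per choice; pick the choice with the highest score."""
    return max(scores_by_choice, key=scores_by_choice.get)

def combined(score_a, score_b, weights=(0.7, 0.3), threshold=0.6):
    """3. A higher-level model (here just a weighted sum) over other models' scores."""
    return yes_no(weights[0] * score_a + weights[1] * score_b, threshold)

print(yes_no(0.72))                                        # True
print(multiple_choice({"offer_a": 0.4, "offer_b": 0.55}))  # offer_b
print(combined(0.5, 0.9))                                  # True (0.62 >= 0.6)
```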

In general, especially where decisions are governed by policy or regulation, predictive models and decision tables are combined with rules using one of the following approaches:

  1. More complex decisions are answered as above using predictive models that are selected by rules in compliance with governing policy or regulations.
  2. More complex decisions are answered using rules that consider the scores produced by predictive models in compliance with governing policy or regulations.

In general, governance, risk and compliance (GRC) requires rules in addition to any predictive models.  Rules are also commonly used within or to select predictive models.  And special cases and exceptions are common applications of rules in combination with predictive models.
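A minimal sketch of those two approaches follows; the policy conditions, model stand-ins, and thresholds are hypothetical. Rules either select which model policy permits, or they consume a model’s score while enforcing conditions that no score can override.

```python
# Hypothetical sketch of combining rules with predictive models.
# Approach 1: rules select the model permitted by policy.
# Approach 2: rules consider the model's score alongside hard policy conditions.

def score_standard(applicant):    # stand-in for a full-history predictive model
    return 0.8

def score_thin_file(applicant):   # stand-in for a model approved for limited history
    return 0.6

def select_model(applicant):
    """Rule-driven model selection per a hypothetical policy."""
    return score_thin_file if applicant["months_of_history"] < 12 else score_standard

def decide(applicant):
    """Rules that consider the score but enforce policy regardless of it."""
    if applicant["on_sanctions_list"]:   # regulation trumps any score
        return "decline"
    score = select_model(applicant)(applicant)
    return "approve" if score >= 0.7 else "refer to underwriter"

print(decide({"months_of_history": 6, "on_sanctions_list": False}))   # refer to underwriter
print(decide({"months_of_history": 36, "on_sanctions_list": False}))  # approve
```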

Scorecards

A simple case of defining (or combining) predictive models is a scorecard.  The following example shows a scorecard from Fair Isaac’s nice brochure on predictive analytics that could be part of a creditworthiness score:

An exemplary credit scorecard from Fair Isaac

Fair Isaac is the leader in credit scoring, of course.  Their FICO score is the output of a proprietary predictive model.
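As a hypothetical illustration (the characteristics, bins, and points below are invented, not Fair Isaac’s), a scorecard simply awards points per characteristic according to the bin an applicant falls into and sums them:

```python
# Hypothetical scorecard: each characteristic contributes points depending on
# which bin the applicant falls into; the total is the score.
SCORECARD = {
    "years_at_address":      [(0, 2, 10), (2, 7, 22), (7, 100, 35)],       # (low, high, points)
    "revolving_utilization": [(0.0, 0.3, 40), (0.3, 0.7, 20), (0.7, 10.0, 5)],
    "recent_delinquencies":  [(0, 1, 30), (1, 3, 10), (3, 100, 0)],
}

def score(applicant):
    total = 0
    for characteristic, bins in SCORECARD.items():
        value = applicant[characteristic]
        for low, high, points in bins:
            if low <= value < high:
                total += points
                break
    return total

print(score({"years_at_address": 5,
             "revolving_utilization": 0.25,
             "recent_delinquencies": 0}))  # 92
```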

The following example shows how Fair Isaac’s predictive model is combined with other factors in the mortgage industry (click it for a closer look):

Rate Sheet for Mortgage Pricing

Note all the exceptions and special cases spread throughout this scorecard!

This explains why business rules have been so popular in the mortgage industry.  Pre-qualifying and quoting across many lenders clearly requires a business rules approach (which explains why Gallagher Financial embedded my stuff in their software a decade ago).  Even a single lender has to deal with its own special cases, and the bigger the lender, the more there are (which is why Countrywide Financial[2] developed its own rules technology, called Merlin, decades ago).

Decision tables

For anything but the simplest decisions, the results of predictive models are considered along with other data using rules to make decisions.  In some cases, these rules are simple enough to fit into a decision table (or a decision tree rendered as a table) such as the following:

Medical Test Decision Table for Life Insurance (left) and Base Insurance Premium Decision Tree Table (right)

Tables like the one on the left can be used during underwriting to determine what variables are appropriate for gauging the risk of death covered by a life insurance policy.  This demonstrates that rules (in this case, very simple rules) can be used to determine which predictive model (or inputs) to consider in a decision. 

Tables like the one on the right correspond to decision trees and can be used instead of scorecards to set the base premium for auto insurance.  Additional rules typically adjust for other factors like driver’s education classes, driving record, student drivers, and other special cases and exceptions.  This is similar to the use of notes in the mortgage pricing sheet shown above.
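A minimal sketch of the right-hand table’s role follows; the age bands, vehicle classes, premiums, and adjustments are invented. The decision table is a lookup keyed by its conditions, and additional rules adjust the base, much like the notes on the rate sheet above.

```python
# Hypothetical base-premium decision table plus adjustment rules.
BASE_PREMIUM = {
    ("16-25", "sports"): 2400,
    ("16-25", "sedan"):  1500,
    ("26-65", "sports"): 1400,
    ("26-65", "sedan"):   900,
}

def age_band(age):
    return "16-25" if age <= 25 else "26-65"

def premium(driver):
    base = BASE_PREMIUM[(age_band(driver["age"]), driver["vehicle_class"])]
    # Additional rules adjust the base for special cases and exceptions.
    if driver["completed_drivers_ed"]:
        base *= 0.90       # credit for driver's education
    if driver["at_fault_accidents"] > 0:
        base *= 1.25       # surcharge for driving record
    return round(base, 2)

print(premium({"age": 19, "vehicle_class": "sedan",
               "completed_drivers_ed": True, "at_fault_accidents": 0}))  # 1350.0
```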

The point is that real decisions are not as simple as a single predictive model, a scorecard, or a decision table.  And once these decisions are defined and automated using any combination of these techniques, improving those decisions can seem overwhelmingly complex (just from a technical perspective!).

Predictive analytics is not enough for EDM

Enterprise Decision Management (EDM), discussed above, is all about this multi-dimensional decision technology environment (scorecards, decision tables, and rules) but also about bringing statistical and neural network technology in to improve the decision making process more easily and less manually or subjectively.  The Fair Isaac brochure referenced above, for example, has some nice graphics showing statistical techniques (such as clustering) and graphs showing interconnected “nerves”.

There are several aspects of decision making that not even magically successful machine learning will eliminate, however:

  1. The requirement to comply with governing policies or regulations.
  2. Special cases that cannot be learned for various reasons, including:
    • Limitations on the number of variables used in predictive analytics.
    • Poorly understood, non-linear relationships in the data
    • A lack of adequate sampling for special cases
    • A need for certainty rather than probability
  3. Exceptions that cannot be learned, as with special cases.

Of course, special cases and exceptions are common in both policy and regulation.  For example, consider policies that arise from contracts or customer relationships, or the evolutionary nature of legislation, as reflected in the article on the earned income tax credit.

Rules are not enough for EDM

On the other extreme, commercial rule technology has not been capable of adaptively improving decision management.  In fact, except when they are modified by people, the use of rules in decision management is completely static, as well as entirely black and white.  There is no learning with any of the business rules management systems from leaders like Fair Isaac, Ilog, Haley, Corticon, or Pegasystems.

Amazingly, there are no mainstream rule systems today that deal with probability or other kinds of uncertainty.  Without such support, every rule in the tools from the vendors previously mentioned is black and white.  This makes them very awkward for applications such as diagnosis.  And all decision management applications, including profit maximization and all forms of fraud detection, are intrinsically diagnostic.  Prediction results in probabilities!

The earliest diagnostic expert systems were developed at Stanford.  One used subjective probabilities to diagnose bacterial infections.  Another used more rigorous probabilities to find ore deposits (it more than paid for itself when it found a $100,000,000 molybdenum deposit circa 1990!).  These applications were called MYCIN and PROSPECTOR, respectively.

This seems shocking, really, since the technology of these systems is well understood and technically almost trivial.  The truth is that the Carnegie Mellon approach to business rules won because it dealt with the closed-world assumption, which means that it could handle missing data better.  But CMU’s approach was strictly black and white.  Stanford was left behind commercially after the success of OPS5 at Digital Equipment Corporation, and the commercialization of expert systems at Carnegie Group, Inference Corporation, IntelliCorp, and Teknowledge left uncertainty in the dust during the mid-eighties.  Neuron Data, which became Blaze, followed the same trail away from the uncertain toward the black and white of tightly governed and regulated decisions.

With nothing but black and white rules, EDM leaves it up to people to adapt the decisions.  Sure they can use predictive analytics, but there is no closed loop from predictive analytics involving rules.  Any new rules or any changes to rules follow the stand-alone business rules approach.

Innovate for Rewards with Bounded Risk

One problem with black and white rules technology is that it forces you to be right.  This stifles innovation.  Ideally, you could formulate an idea and experiment with it at bounded risk.  For example, you could say “what if we offered free checking to anyone who opens a new credit card account with us” and test it out.  You don’t want to absorb the cost of thousands of people accepting your offer only to lose more on checking fees than you gain through credit card fees.  So you indicate how often or how many such offers can be made.

Not surprisingly, this approach is tried and true.  Its most common form is the champion/challenger approach.  Fair Isaac has been “championing” this approach for some time (see this from James Taylor).
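A minimal sketch of bounding the risk follows; the challenger rate and offer budget are invented. The idea is simply to route a small fraction of transactions to the challenger offer and to stop once a budgeted number of offers has been made.

```python
# Hypothetical champion/challenger allocation with bounded exposure.
import random

class ChallengerBudget:
    def __init__(self, rate=0.05, max_offers=1000):
        self.rate = rate              # fraction of traffic routed to the challenger
        self.remaining = max_offers   # hard cap on how many challenger offers are made

    def choose(self):
        if self.remaining > 0 and random.random() < self.rate:
            self.remaining -= 1
            return "challenger: free checking with new credit card"
        return "champion: standard offer"

budget = ChallengerBudget(rate=0.05, max_offers=1000)
decisions = [budget.choose() for _ in range(200)]
print(sum(d.startswith("challenger") for d in decisions), "challenger offers made")
```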

But how do you close the loop?  How do the rules learn when this new option should be used to maximize profit?  The fact is, they don’t.  People do it using predictive analytic techniques and manually refining the rules.

The problem, once again, is that the rules do not learn and that their outcomes are black and white.  The rules do not offer a probability that this will be a profitable transaction.  And they do not learn whether a transaction will be profitable over time, either.  That’s up to the users of predictive technology and managers.

Adaptive Decision Management

adaptivedecisionmanagement.jpg

 Adaptive Decision Management (ADM) is the next step in EDM.  In ADM the loop between predictive analytics and rules is closed.  At a minimum this involves learning the probabilities or reliability of rules and their conclusions.  This learning occurs using statistical or neural network techniques that can be trained, optimized, and tested off-line and – if your circumstances allow – even allowed to continue learning and adapting and optimizing while on-line.  For example, advertising, promotional (e.g., pricing) and social applications almost always adapt continuously.  Unfortunately, none of them use rules to do this yet, since the major players don’t support it!

Innovation and ADM

The adaptation of rule-based logic brings new flexibility and opportunity to the use of rules in decision management.  Adding a black and white business rule requires complete certainty that the rule will result in only appropriate decisions.  Of course, such certainty is a high hurdle.  Adaptive rules have a relatively low hurdle.

With adaptive rules an innovative idea can be introduced with a low, or even a zero probability.  As experience accumulates, the learning mechanism (again, statistical or neural) determines how reliable the rule is (i.e., how well it would have performed given outcomes).  The technology can even learn how to weight and combine the conditions of rules so as to maximize their predictive accuracy.  Without learning “inside a rule”, the probability of the rule as a whole may remain too low to be useful.  And, unlike a black box neural network, the functions that combine conditions and the probabilities of rules are readily accessible, whether for insight or oversight.

The overall impact of adaptive rules is that you can put an idea into action within a generalized, probabilistic champion/challenger framework.  And using techniques such as the certainty factors used in MYCIN or more rigorous techniques, such as the subjective Bayesian method used in PROSPECTOR, more patterns can be considered and leveraged with the continuously improving performance that EDM is all about.
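As a minimal sketch of the idea (this is generic Beta-Bernoulli updating for illustration, not Automata’s approach, which is not public, nor MYCIN’s or PROSPECTOR’s exact arithmetic), a rule’s reliability can be tracked as a probability that is updated as outcomes arrive, so an innovative rule can start near zero and earn its weight:

```python
# Minimal sketch of an adaptive rule whose probability is learned from outcomes.
# Generic Beta-Bernoulli updating for illustration only.
class AdaptiveRule:
    def __init__(self, condition, prior_successes=0, prior_failures=10):
        # A skeptical prior lets an innovative rule start with a low probability.
        self.condition = condition
        self.successes = prior_successes
        self.failures = prior_failures

    def probability(self):
        """Current estimate that acting on this rule leads to a good outcome."""
        return (self.successes + 1) / (self.successes + self.failures + 2)

    def observe(self, fact, good_outcome):
        """Close the loop: update the rule's reliability when an outcome is known."""
        if self.condition(fact):
            if good_outcome:
                self.successes += 1
            else:
                self.failures += 1

rule = AdaptiveRule(lambda f: f["new_credit_card"] and f["asked_for_checking"])
print(round(rule.probability(), 3))   # starts low (about 0.083)
for outcome in [True, True, False, True, True, True]:
    rule.observe({"new_credit_card": True, "asked_for_checking": True}, outcome)
print(round(rule.probability(), 3))   # rises as favorable experience accumulates
```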

The advantages of ADM include:

  1. the improving performance of EDM
  2. faster and more continuous improvement versus manual EDM
  3. a generalized approach to champion / challenger using probabilities
  4. better predictive performance than manually maintained scorecards or tables
  5. improved performance over black and white EDM by leveraging innovation adaptively

Although they haven’t told me about it explicitly, I would expect Fair Isaac to move in this direction first among the current leaders given their EDM focus.  I would not be surprised to see business intelligence (BI) vendors, perhaps SAS,  move in this direction, too.  I know it will happen since we are already working with one commercial source of adaptive rules technology.  Unfortunately, Automata is under NDA about their approach for now, but stay tuned…  In the meantime, if you’re interested in learning more, please drop us a note at info at haleyAI.com.  And if you see any issues or good applications, we would love to hear them.


[1] A nice dashboard from Financial Services Technology (http://www.fsteurope.com/) using Corda (http://www.corda.com/)
[2]I recently helped Countrywide upgrade to our software, just as much for usability as performance improvements.

April 3, 2008

Cyc is more than encyclopedic

I had the pleasure of visiting with some fine folks at Cycorp in Austin, Texas recently.  Cycorp is interesting for many reasons, but chiefly because they have expended more effort developing a deeper model of common world knowledge than any other group on the planet.  They are different from current semantic web startups.  Unlike Metaweb’s Freebase, for example, Cycorp is defining the common sense logic of the world, not just populating databases (which is an unjust simplification of what Freebase is doing, but is proportionally fair when comparing their ontological schemata to Cyc’s knowledge).  Not only does Cyc have the largest and most practical ontology on earth, they have an almost incomprehensible number of formulas[1] describing the world.  (more…)

March 28, 2008

Harvesting business rules from the IRS

Does your business have logic that is more or less complicated than filing your taxes?

Most business logic is at least as complicated.  But most business rule metaphors are not up to expressing tax regulations in a simple manner.  Nonetheless, the tax regulations are full of great training material for learning how to analyze and capture business rules.

For example, consider the earned income credit (EIC) for federal income tax purposes in the United States.  This tutorial uses the guide for 2003, which is available here. There is also a cheat sheet that attempts to simplify the matter, available here. (Or click on the pictures.)

[Images: EITC Publication 596 for tax year 2003 (eitc-publication-596-fy-2003.jpg) and EITC Eligibility Checklist for tax year 2003 (eitc-eligibility-checklist-for-tax-year-2003.jpg)]

What you will see here is typical of what business analysts do to clarify business requirements, policies, and logic.  Nothing here is specific to rule-based programming.  (more…)

March 26, 2008

Agile decision services without XML details

Externalizing enterprise decision management using service-oriented architecture orchestrated by business process management increases agility and allows continuous performance improvement, but…

How do you implement the rules of EDM in an SOA decision service?  (more…)

March 21, 2008

Ontology of time in progress – amounts needed

Recent posts on money and time have produced some excellent comments and correspondence.  There is even a recent OMG effort that is right on the money, at least concerning time.  For details, see the Date-Time Foundational Vocabulary RFP.  I am particularly impressed with the SBVR “Foundation” Vocabularies, which I understand Mark Linehan of IBM presented last week at an OMG meeting in DC[1].

Mark’s suggestions include establishing standard upper ontologies for:

  1. Time & dates
  2. Monetary amount
  3. Location
  4. Unit of measure
  5. Quantities, cardinalities, and ratios
  6. Arithmetic operations

I will skip operations for now since they are not taxonomic concepts but functional relationships involving such concepts.  I believe the post on CEP and BPM covered time in adequate detail and the post on Siebel’s handling of foreign exchange covered the currency exchange aspects of money.  It only touched on the more general concept of amounts that I will focus on here.

The remaining concepts are common to almost every application conceivable.  They are some of the most primitive, domain-independent concepts of a critical and practical upper ontology.  They include: (more…)

March 20, 2008

RuleBurst Re-brands as Haley Limited

For those who are interested in my former company, they are still committed to natural language business rules management technology, as shown in their most recent press release.  They have also picked up on the public sector activity, especially eligibility, as discussed here.

From the release, CEO Dominic O’Hanlon said:

  • “With our natural language rule authoring capabilities and BRMS solutions, we are uniquely positioned to make our customers more competitive and agile in a fast-paced, highly-regulated world.”
  • “For the government market, Haley is a worldwide leader in using natural language technology to rapidly transform regulations, policies and rules into automated decision-making systems, to determine eligibility for government services, and in the taxation and immigration arenas.”

As discussed here, this focus comes from (more…)

March 14, 2008

In the names of CEP and BPM

Have you heard the one about how to drive BPM people crazy?

Ask them the question that drives CEP people crazy!

Last fall, at the RuleML conference in Orlando, (more…)

March 11, 2008

Over $100m in 12 months backs natural language for the semantic web

Radar Networks is accelerating down the path toward the world’s largest body of knowledge about what people care about as people use Twine to organize their bookmarks.  Unlike social bookmarking sites, Twine uses natural language processing technology to read and categorize people’s bookmarks in a substantial ontology.  Using this ontology, Twine not only organizes their bookmarks intelligently but also facilitates social networking and collaborative filtering that result in more relevant suggestions of others’ bookmarks than other social bookmarking sites can provide.

Twine should rapidly eclipse social bookmarking sites like Digg and Reddit.  This is no small feat!

The underlying capabilities of Twine present Radar Networks with many other opportunities, too.  Twine could spider out from bookmarks and become a general competitor to Google, as Powerset hopes to become.  Twine could become the semantic web’s Wikipedia, to which Metaweb’s Freebase aspires. (more…)

Goals and backward chaining using the Rete Algorithm

I was prompted to post this by requests from Mark Proctor and Peter Lin, and in response to recent comments on CEP and backward chaining on Paul Vincent’s blog (with an interesting perspective here).

I hope those interested in artificial intelligence enjoy the following paper.  I wrote it while Chief Scientist of Inference Corporation.  It was published at the International Joint Conference on Artificial Intelligence over twenty years ago.

The bottom line remains:

  1. intelligence requires logical inference and, more specifically, deduction
  2. deduction is not practical without a means of subgoaling and backward chaining
  3. subgoaling using additional rules to assert goals or other explicit approaches is impractical
  4. backward chaining using a data-driven rules engine requires automatic generation of declarative goals

We implemented this in Inference Corporation’s Automated Reasoning Tool (ART) in 1984.  And we implemented it again at Haley a long time ago, in a rules language we called “Eclipse” years before Java.

Regrettably, to the best of my knowledge, ART is no longer available from Inference spin-off Brightware or its further spin-off, Mindbox.  To the best of my knowledge, no other business rules engine or Rete Algorithm implementation automatically subgoals, including CLIPS, JESS, TIBCO Business Events (see above), Fair Isaac’s Blaze Advisor, and Ilog Rules/JRules.  After reading the paper, you may understand that the resulting lack of robust logical reasoning capabilities is one of the reasons that business rules has not matured into a robust knowledge management capability, as discussed elsewhere in this blog.  (more…)
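As a toy illustration of the subgoaling idea in the second and fourth points above (a propositional sketch only, not the automatic, Rete-integrated goal generation described in the paper), proving a goal recursively generates the conditions of its rules as subgoals:

```python
# Toy propositional backward chainer, only to illustrate subgoaling: proving a
# goal generates its rules' conditions as subgoals, recursively. This is not
# the Rete-integrated goal generation described in the paper.
FACTS = {"has-wings", "lays-eggs"}
RULES = [
    ("bird", ["has-wings", "lays-eggs"]),
    ("can-fly", ["bird", "not-penguin"]),
]

def prove(goal, seen=frozenset()):
    if goal in FACTS:
        return True
    if goal in seen:   # guard against circular rule sets
        return False
    return any(head == goal and all(prove(g, seen | {goal}) for g in body)
               for head, body in RULES)

print(prove("bird"))     # True: subgoals has-wings and lays-eggs are facts
print(prove("can-fly"))  # False: subgoal not-penguin cannot be established
```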

March 5, 2008

Behind the CEP curtain – it’s about time, not the cache

TIBCO is the CEP vendor most focused on the market for business rules, as reflected in Paul Vincent’s post here.  Although I agree with Paul that rule vendors are not currently offering enough in terms of support for long-running processes, the conclusions that he draws in favor of considering a CEP alternative to a BRMS are not compelling yet.

Paul said that rules don’t address the following that are addressed by CEP:

  1. BAM (business activity monitoring) and the other BPM (business performance management)
  2. Complex-rule processing
  3. Customer-centric (portfolio-based) decisions / policies

I am sure Paul was just being flippant, but you may notice that there is a bit of a war going on between CEP, BPM and rules right now.  (more…)

Externalization of rules and SOA is important – for now

James Taylor’s notes on his lunch with Sandy Carter of IBM and the CEO of Ilog prompted me to write this.  Part of the conversation concerned the appeal of SOA and rules to business users.  Speaking as a former vendor, we all want business people to appreciate our technology.  We earn more if they do.  They say to IT “we want SOA” or “we want rules” and our sale not only becomes easier, it becomes more valuable.  So we try to convince the business that they are service-oriented, so they should use SOA.  Or we tell the business that they have (and make) rules, so they should use (and manage their own) rules.   And rules advocates embrace and enhance the SOA value proposition saying that combined, you get the best of both worlds.  This is almost precisely the decision management appeal.  Externalize your decisions as services and externalize rules from those services for increased agility in decision making.   This is an accurate and appropriate perspective for point decision making.  But it doesn’t cover the bigger picture that strategic business people consider, which includes governance and compliance. 

Nonetheless,

Effective SOA and business rules have one requirement (or benefit) in common: externalization.  

The externalization of services from applications (more…)

March 3, 2008

Oracle should teach Siebel CRM about location and money

Not long ago I posted on the need to understand common concepts well. My example then concerned the need to understand time well enough to answer a question like, “How much did IBM’s earnings change last quarter?”. Recently, in contemplating some training issues related to the integration of Haley Authority within Siebel, I came across example phrasings from the documentation on Siebel’s web site, including:

  • if an account’s location contains “CA” then add 50000 in “USD” for the account
  • if an account’s location contains “CA” then add 70000 in “USD” on today for the account

Two things are immediately obvious.

  1. Oracle does not understand location.
  2. Oracle has an interesting, but nonetheless poor understanding of money.

Of course, I am intimately familiar with Authority’s understanding of money. However, Siebel needs more than Authority understands. (more…)

February 29, 2008

CEP crossing the chasm into BPM by way of BRMS

Complex event processing (CEP) software handles many low-level events to recognize a high-level event that triggers a business process.  Since many business processes do not consider low-level data events, BPM may not seem to need event processing.  On the other hand, event processing would not be relevant at all if it did not occasionally trigger a business process or decision.  In other words, it appears that:

  • CEP requires BPM but
  • BPM does not require CEP

The first point is market limiting for CEP vendors.  Fortunately for CEP vendors, however, most BPM does require event-processing, however complex.  In fact, event processing is perhaps the greatest weakness of current BPM systems (BPMS) and business rules management systems (BRMS), as discussed further below. (more…)

February 25, 2008

Agile Business Rules Management Requires Methodology

Don’t miss James Taylor’s great post today about his and Ilog’s take on rule and decision management methodologies (available here).

Here’s the bottom line:

  1. Focus on what the system does or decides.
    • Focus on the actions taken during a business process and the decisions that govern them and the deductions that they rely on.
    • In priority order, focus on actions, then decisions, then deductions.
  2. Don’t expect to automate every nuance of an evolving business process on day one – iterate.
    • Iteratively elaborate and refine the conditions and exceptions under which
      • actions should be taken,
      • decisions are appropriate, and
      • deductions hold true

Another way of summing this up is:

Try not to use the word “then” in your rules!

Do check out the Ilog methodology as well as the one I developed for Haley that is available here.  The key thing (more…)


