We have introduced a new paradigm for implementing belief-network inference that is oriented towards real-world, on-line applications. The proposed framework utilizes knowledge of the query and evidence variables in an application to compile a belief network into an arithmetic expression called a Query DAG (Q-DAG). Each node of a Q-DAG represents a numeric operation, a number, or a symbol that depends on available evidence. Each root node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. Inference on Q-DAGs is linear in their size and amounts to a standard evaluation of the arithmetic expressions they represent.
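To make the evaluation process concrete, the following is a minimal sketch of a Q-DAG node and a linear-time evaluator. The names (`QNode`, `evaluate`, the `op` tags) are illustrative assumptions, not part of the framework's definition; the cache ensures each node is visited once, so the cost is linear in the size of the DAG.

```python
from dataclasses import dataclass, field

@dataclass
class QNode:
    # Hypothetical node encoding: "+" and "*" are operation nodes,
    # "const" holds a number, "evidence" holds a value set at run time.
    op: str
    children: list = field(default_factory=list)
    value: float = 0.0

def evaluate(node, cache=None):
    """Evaluate a Q-DAG bottom-up, visiting each node once."""
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    if node.op in ("const", "evidence"):
        result = node.value
    elif node.op == "+":
        result = sum(evaluate(c, cache) for c in node.children)
    else:  # "*"
        result = 1.0
        for c in node.children:
            result *= evaluate(c, cache)
    cache[id(node)] = result
    return result
```

For instance, a root encoding 0.3x + 0.7y, with evidence setting x = 1 and y = 0, evaluates to 0.3 by plain arithmetic, with no inference machinery involved.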

An important point to stress about the work reported here is that it is *not*
proposing a new algorithm for belief-network inference. Rather, we are proposing a
paradigm for implementing belief-network inference that is orthogonal to
standard inference algorithms and is engineered to meet the demands of
real-world, on-line applications.
This class of applications is typically demanding for the following reasons:

- It typically requires very short response times, on the order of milliseconds.
- It requires software to be written in specialized languages, such as Ada, C++, or assembly, before it can pass certification procedures.
- It imposes severe restrictions on the available software and hardware resources in order to keep the cost of a "unit" (such as an electromechanical device) as low as possible.

Our proposed approach still requires a belief-network algorithm to generate a Q-DAG, but it makes the efficiency of such an algorithm less of a critical factor. For example, we show that some standard optimizations in belief-network inference, such as pruning and caching, become less critical in a Q-DAG framework since these optimizations tend to be subsumed by simple Q-DAG reduction techniques, such as numeric reduction.
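The flavor of numeric reduction can be sketched as a constant-folding pass over the compiled expression. The tuple encoding below (`("const", v)`, `("evidence", name)`, `("+", subs)`, `("*", subs)`) and the function name are assumptions made for illustration; the pass folds constant subexpressions and drops neutral elements without changing the probabilities the Q-DAG computes.

```python
import math

def reduce_expr(node):
    """Fold constants in a hypothetical tuple-encoded Q-DAG."""
    tag = node[0]
    if tag in ("const", "evidence"):
        return node
    subs = [reduce_expr(s) for s in node[1]]
    consts = [s[1] for s in subs if s[0] == "const"]
    others = [s for s in subs if s[0] != "const"]
    folded = math.prod(consts) if tag == "*" else sum(consts)
    if tag == "*" and folded == 0.0:
        return ("const", 0.0)        # zero annihilates the whole product
    if not others:
        return ("const", folded)     # fully constant subexpression
    neutral = 1.0 if tag == "*" else 0.0
    if folded != neutral:
        others.append(("const", folded))
    return others[0] if len(others) == 1 else (tag, others)
```

Note how a product whose constant factors multiply out to 1 collapses to its remaining operand; this is the sense in which reductions of this kind can subsume pruning done inside the generating algorithm.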

The work reported in this paper can be extended in at least two ways. First, further Q-DAG reduction techniques could be explored, some oriented towards reducing the evaluation time of Q-DAGs, others towards minimizing the memory needed to store them. Second, we have shown that some optimization techniques that dramatically improve belief-network algorithms may have little effect on the size of Q-DAGs once Q-DAG reduction is employed. Further investigation is needed to establish formal results and guarantees on the effectiveness of Q-DAG reduction.

We close this section by noting that the framework we proposed is also applicable to order-of-magnitude (OMP) belief networks, where multiplication and addition get replaced by addition and minimization, respectively [Goldszmidt, 1992; Darwiche & Goldszmidt, 1994]. The OMP Q-DAG evaluator, however, is much more efficient than its probabilistic counterpart since, in many cases, one may evaluate a minimization node without having to evaluate all its parents. This can make a considerable difference in the performance of a Q-DAG evaluator.
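The source of this efficiency can be sketched as follows, again over an illustrative tuple encoding that is not part of the framework's definition: since kappa ranks are non-negative integers, 0 is a global lower bound, and a minimization node can stop evaluating its remaining operands as soon as one of them yields 0.

```python
def kappa_eval(node):
    """Evaluate a hypothetical OMP Q-DAG: "+" replaces multiplication,
    "min" replaces addition, and leaves carry kappa ranks."""
    tag = node[0]
    if tag in ("const", "evidence"):
        return node[1]
    if tag == "+":
        return sum(kappa_eval(s) for s in node[1])
    # "min": ranks are non-negative, so 0 cannot be improved upon
    # and the remaining operands need not be evaluated at all.
    best = float("inf")
    for s in node[1]:
        best = min(best, kappa_eval(s))
        if best == 0:
            break
    return best
```

No analogous short-circuit exists for the probabilistic evaluator, where an addition node must evaluate every operand before its value is known.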
