Constraint propagation for efficient inference in Markov logic
International Conference on Principles and Practice of Constraint Programming (CP), 2011, Springer
Abstract
Many real-world problems can be modeled using a combination of hard and soft constraints. Markov Logic is a highly expressive language that represents the underlying constraints by attaching real-valued weights to formulas in first-order logic. The weight of a formula represents the strength of the corresponding constraint; hard constraints are represented as formulas with infinite weight. The theory is compiled into a ground Markov network over which probabilistic inference can be done. For many problems, hard constraints pose a significant challenge to the probabilistic inference engine. However, solving the hard constraints (partially or fully) beforehand, outside of the probabilistic engine, can greatly simplify the ground Markov network and speed up probabilistic inference. In this work, we propose a generalized arc consistency algorithm that prunes the domains of predicates by propagating hard constraints. Our algorithm effectively performs unit propagation at a lifted level, avoiding the need to explicitly ground the hard constraints during the pre-processing phase and yielding potentially exponential savings in space and time. Our approach results in much smaller domains, thereby making inference significantly more efficient in both time and memory. Experimental evaluation over one artificial and two real-world datasets shows the benefit of our approach.
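The following is a minimal Python sketch of the propositional unit propagation that the abstract's lifted algorithm generalizes: whenever all but one literal of a hard clause are falsified, the remaining literal is forced, pruning the possible truth values of ground atoms. The paper's contribution is performing this propagation at the lifted (first-order) level without grounding the hard constraints, which this ground-level sketch does not attempt; the clause encoding and the function name unit_propagate are illustrative assumptions, not taken from the paper.

    # Minimal propositional sketch of unit propagation over hard clauses.
    # (The paper's algorithm does this pruning at the lifted level without
    # grounding; here every atom is already ground. Names are illustrative.)

    def unit_propagate(clauses, assignment=None):
        """Fix the truth values of atoms forced by hard clauses.

        clauses: list of clauses; each clause is a list of literals,
                 a literal being a pair (atom, polarity).
        Returns a dict atom -> bool of forced values, or None on conflict.
        """
        assignment = dict(assignment or {})
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                unassigned = []
                satisfied = False
                for atom, polarity in clause:
                    if atom in assignment:
                        if assignment[atom] == polarity:
                            satisfied = True  # clause already satisfied
                            break
                    else:
                        unassigned.append((atom, polarity))
                if satisfied:
                    continue
                if not unassigned:        # every literal falsified: conflict
                    return None
                if len(unassigned) == 1:  # unit clause: literal is forced
                    atom, polarity = unassigned[0]
                    assignment[atom] = polarity
                    changed = True
        return assignment

    # Example: hard clauses  Smokes(A)  and  ~Smokes(A) v Cancer(A)
    clauses = [
        [("Smokes(A)", True)],
        [("Smokes(A)", False), ("Cancer(A)", True)],
    ]
    print(unit_propagate(clauses))  # {'Smokes(A)': True, 'Cancer(A)': True}

In the example, the first clause forces Smokes(A) to be true, which falsifies the first literal of the second clause and in turn forces Cancer(A) to be true; both atoms can then be removed from the ground Markov network before probabilistic inference runs.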