Markov Logic Networks (MLNs) are an approach to Statistical Relational Learning (SRL) that combines weighted first-order logic formulas with Markov networks to enable probabilistic learning and inference over multi-relational data. Learning and inference in MLNs are supported by a number of algorithms, such as Gibbs sampling, L-BFGS, voted perceptron, and MC-SAT. A number of software packages implement MLNs, including Alchemy, Tuffy, Markov the beast, and ProbCog.
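The core idea of an MLN can be shown with a toy model. The sketch below is illustrative only: it assumes a hypothetical domain with constants A and B and a single weighted formula Smokes(x) => Cancer(x), and it computes world probabilities by brute-force enumeration, P(world) = exp(w * n(world)) / Z, where n counts the formula's true groundings.

```python
import itertools
import math

# Hypothetical toy MLN: one formula Smokes(x) => Cancer(x) with weight 1.5,
# over the constants A and B (all names here are illustrative assumptions).
WEIGHT = 1.5
CONSTANTS = ["A", "B"]
ATOMS = [f"{p}({c})" for p in ("Smokes", "Cancer") for c in CONSTANTS]

def n_true_groundings(world):
    """Count groundings of Smokes(x) => Cancer(x) that hold in a world."""
    return sum(
        1
        for c in CONSTANTS
        if (not world[f"Smokes({c})"]) or world[f"Cancer({c})"]
    )

# Enumerate all 2^4 truth assignments to the ground atoms and compute the
# partition function Z (feasible only because the toy domain is tiny).
worlds = [
    dict(zip(ATOMS, values))
    for values in itertools.product([False, True], repeat=len(ATOMS))
]
Z = sum(math.exp(WEIGHT * n_true_groundings(w)) for w in worlds)

def probability(world):
    """P(world) = exp(weight * n_true_groundings(world)) / Z."""
    return math.exp(WEIGHT * n_true_groundings(world)) / Z

# A world satisfying both groundings outweighs one that violates a grounding
# by a factor of exp(WEIGHT), but the violating world keeps nonzero mass --
# the soft-constraint behavior that distinguishes MLNs from pure logic.
all_true = {a: True for a in ATOMS}
violating = dict(all_true, **{"Cancer(A)": False})
print(probability(all_true), probability(violating))
```

Note that real systems such as Alchemy avoid this exhaustive enumeration and instead rely on the sampling and optimization algorithms listed above.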

In Markov Logic Networks, predicates can be categorized along two dimensions. The first is the component of the domain being modelled. To apply the MLN approach to a problem domain, we need to recognize that a domain is made up of three components: objects of interest, attributes of those objects, and relations between the objects. Hence a predicate can be categorized as either a class predicate, a relation predicate, or an attribute predicate. The second dimension for categorizing predicates in MLNs is the end user's data need. In most probabilistic inference systems, a user query is composed of a target query and evidence, which is used to estimate the probability of the target query. Hence, a predicate can be categorized as either an evidence predicate or a query (target) predicate.
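The two dimensions can be made concrete with a small sketch. The university-style predicates below (Professor, Student, TenureTrack, AdvisedBy) are hypothetical examples chosen for illustration; the point is only that each predicate gets a label on both dimensions independently.

```python
# Each predicate is tagged along both dimensions from the text:
#   "models" -- what the predicate captures: class / attribute / relation
#   "role"   -- its place in a user query: evidence / query
# All predicate names are hypothetical, for illustration only.
predicates = {
    "Professor(person)":         {"models": "class",     "role": "query"},
    "Student(person)":           {"models": "class",     "role": "query"},
    "TenureTrack(person)":       {"models": "attribute", "role": "evidence"},
    "AdvisedBy(person, person)": {"models": "relation",  "role": "evidence"},
}

# The second dimension directly yields the split a discriminative learner
# needs: query predicates become targets, the rest is conditioned on.
targets = [p for p, tags in predicates.items() if tags["role"] == "query"]
print(targets)
```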

In MLNs, learning can be done either discriminatively or generatively. In discriminative learning, one or more predicates whose values are unknown during testing are designated as target predicates, and learning optimizes performance with respect to those predicates, assuming that all the remaining predicates are given. In generative learning, all predicates are treated equally. Discriminative learning is used when we know ahead of time what type of prediction we want to make, whereas generative learning is used when we want to capture all aspects of the domain. So from a system point of view, the second categorization is essential if we perform discriminative learning of a model.
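The difference between the two objectives can be sketched on the same kind of toy model. This is a minimal, assumed setup (one formula Smokes(x) => Cancer(x), constants A and B, Cancer designated as the target predicate): the generative objective normalizes over all predicates jointly, while the discriminative objective normalizes only over the target predicate, conditioning on the evidence.

```python
import itertools
import math

# Hypothetical toy model: formula Smokes(x) => Cancer(x) with weight w over
# constants A and B. Cancer is the designated target (query) predicate.
CONSTANTS = ["A", "B"]

def n_true(smokes, cancer):
    """True groundings of Smokes(x) => Cancer(x) for one assignment."""
    return sum(
        1 for i in range(len(CONSTANTS)) if (not smokes[i]) or cancer[i]
    )

def score(w, smokes, cancer):
    return math.exp(w * n_true(smokes, cancer))

def generative_ll(w, smokes, cancer):
    # Generative: Z sums over joint assignments to ALL predicates.
    Z = sum(
        score(w, s, c)
        for s in itertools.product([False, True], repeat=len(CONSTANTS))
        for c in itertools.product([False, True], repeat=len(CONSTANTS))
    )
    return math.log(score(w, smokes, cancer) / Z)

def discriminative_ll(w, smokes, cancer):
    # Discriminative: Z sums only over the target predicate Cancer;
    # Smokes is held fixed as evidence.
    Z = sum(
        score(w, smokes, c)
        for c in itertools.product([False, True], repeat=len(CONSTANTS))
    )
    return math.log(score(w, smokes, cancer) / Z)

# Observed world: both people smoke and both have cancer.
smokes, cancer = (True, True), (True, True)
print(generative_ll(2.0, smokes, cancer),
      discriminative_ll(2.0, smokes, cancer))
```

Because the discriminative normalizer ranges over fewer assignments, optimizing it concentrates the model's capacity on predicting the target predicate rather than on modelling the evidence as well.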

The first categorization allows us to relate a problem to one or more common Statistical Relational Learning tasks, for example collective classification, link prediction, and social network analysis. In collective classification, the goal is to predict the class of an object given the object's attributes and relations as evidence. In link prediction, the goal is to predict a relation given the class(es) of the objects and their other relations. In social network analysis, the target query could be the attributes of the objects or the relations between objects. In general, these tasks fall under "classification" in machine learning theory.