Statistical Inference in Hidden Markov Models Using k-Segment Constraints

Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward-backward algorithm. In this article, we expand the information that can be obtained from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint on the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online.


Introduction
The use of the hidden Markov model (HMM) is ubiquitous in sequence analysis applications across a range of scientific and engineering domains, including signal processing (Juang and Rabiner, 1991; Crouse et al., 1998), genomics (Eddy, 1998; Li and Stephens, 2003) and finance (Paas et al., 2007). Fundamentally, the HMM is a mixture model whose mixing distribution is a finite-state Markov chain (Rabiner, 1989; Cappé et al., 2005). While the Markov assumption rarely corresponds to the true physical generative process, it often adequately captures first-order properties, making the HMM a useful approximating model for sequence data in many instances while remaining tractable even for very large datasets. As a consequence, HMM-based algorithms can give highly competitive performance in many applications.
Central to the tractability of HMMs is the availability of recursive algorithms that allow fundamental quantities to be computed efficiently (Baum and Petrie, 1966; Viterbi, 1967). These include the Viterbi algorithm, which computes the most probable hidden state sequence, and the forward-backward algorithm, which computes the marginal probability of a given state at a point in the sequence. Computation for the HMM has been well summarized in the comprehensive and widely read tutorial by Rabiner (1989), with a Bayesian treatment given more recently by Scott (2002). It is a testament to the completeness of these recursive methods that there have been few generic additions to the HMM toolbox since they were first described in the 1960s. However, as HMM approaches continue to be applied in increasingly diverse scientific domains and to ever larger data sets, there is interest in expanding the generic toolbox available for HMM inference to encompass unmet needs.
The motivation for our work is to develop mechanisms to allow the exploration of the posterior sequence space.
Typically, standard HMM inference limits itself to reporting a few standard quantities. For an M-state Markov chain of length N there exist M^N possible sequences, but often only the most probable sequence or the N × M marginal posterior probabilities are used to summarize the whole posterior distribution. Yet it is clear that, when the state space is large and/or the sequences long, many other sequences may be of interest. Modifications of the Viterbi algorithm allow an arbitrary number of the most probable sequences to be enumerated, while Bayesian techniques allow us to sample sequences from the posterior distribution. However, since a small change to the most likely sequence typically gives a new sequence with similar probability, these approaches do not lead to reports of qualitatively diverse sequences, by which we mean alternative sequence predictions that might lead to different decisions or scientific conclusions.
In this article we describe a set of novel recursive methods for HMM computation that incorporate segmental constraints, which we call k-segment inference algorithms. They are so called because the algorithms are constrained to consider only sequences involving no more than k − 1 specified transition events. We show that k-segment procedures provide an intuitive approach for posterior exploration of the sequence space, allowing diverse sequence predictions containing 1, 2, . . ., and k segments or specific transitions of interest. These methods can be applied prospectively during model fitting or retrospectively to an existing model. In the latter case, the utility of the methods described here comes at no cost (other than computational time) to the HMM user, and we provide illustrative examples to highlight novel insights that may be gained through k-segment approaches.

Background
The HMM encodes two types of random sequences: the hidden state sequence or path x = (x_1, . . ., x_N) and the observed data sequence y = (y_1, . . ., y_N). Individual hidden states take discrete values, such that x_n ∈ {1, . . ., M}, while observed variables can be of arbitrary type. The hidden state sequence x follows a Markov chain, so that

p(x) = p(x_1) ∏_{n=2}^{N} p(x_n | x_{n-1}).    (1)

Here, the first hidden state x_1 is drawn from some initial probability vector π_0, so that π_{0,m} = p(x_1 = m) denotes the probability of x_1 being in state m ∈ {1, . . ., M}, whereas any subsequent hidden state x_n (with n > 1) is drawn according to a transition matrix A, so that [A]_{m′m} = p(x_n = m | x_{n-1} = m′) expresses the probability of moving to state m from m′. Given a path x following the Markov chain in (1), the observed data are generated independently according to

p(y | x, φ) = ∏_{n=1}^{N} p(y_n | x_n, φ),    (2)

where the densities p(y_n | x_n = m, φ), m = 1, . . ., M, are often referred to as the emission densities and are parametrized by φ. In what follows we collectively denote all HMM parameters, i.e. π_0, A and φ, by θ.
Statistical estimation in HMMs takes advantage of the Markov dependence structure, shown in Figure 1, which allows efficient dynamic programming algorithms to be applied. For instance, maximum likelihood (ML) estimation of the parameters θ via the EM algorithm relies on the forward-backward (F-B) recursion (Baum and Petrie, 1966), which implements the Expectation step in O(M^2 N) time. A similar recursion with the same time complexity is the Viterbi algorithm (Viterbi, 1967) which, given a fixed value for the parameters, estimates the maximum a posteriori (MAP) hidden sequence. Furthermore, straightforward generalizations of the Viterbi algorithm estimate the P-best list of most probable sequences (Schwartz and Chow, 1990; Nilsson and Goldberger, 2001). In contrast to ML point estimation, a Bayesian approach assigns a prior distribution p(θ) over the parameters and seeks to estimate expectations taken under the posterior distribution p(x, θ|y). The Bayesian framework also benefits greatly from efficient recursions derived as subroutines of Monte Carlo algorithms. Specifically, the popular Gibbs sampling scheme (Scott, 2002) relies on the forward-filtering backward-sampling (FF-BS) recursion, which simulates in O(M^2 N) time a hidden sequence from the conditional posterior distribution p(x|θ, y). In summary, all recursions mentioned above have linear time complexity with respect to the length of the sequence N and are instances of more general inference tools developed in the theory of probabilistic graphical models (Cowell et al., 2003; Koller and Friedman, 2009).
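The standard recursions above are compact enough to state in code. The following is a minimal pure-Python sketch of the Viterbi forward pass and backtracking for a discrete-emission HMM; it is illustrative only (not taken from this article), and all function and variable names are our own:

```python
# Standard Viterbi recursion in log-space, O(M^2 N) time.
# Illustrative sketch; names (viterbi, pi0, A, B) are assumptions.
import math

def viterbi(y, pi0, A, B):
    """y: observation indices; pi0[m]: initial probs; A[mp][m]: transition
    probs; B[m][o]: emission probs. Returns the MAP hidden state path."""
    M, N = len(pi0), len(y)
    log = lambda p: math.log(p) if p > 0 else float("-inf")
    gamma = [log(pi0[m]) + log(B[m][y[0]]) for m in range(M)]
    back = []
    for n in range(1, N):
        prev, ptr = gamma, []
        gamma = []
        for m in range(M):
            best = max(range(M), key=lambda mp: prev[mp] + log(A[mp][m]))
            gamma.append(prev[best] + log(A[best][m]) + log(B[m][y[n]]))
            ptr.append(best)
        back.append(ptr)
    path = [max(range(M), key=lambda m: gamma[m])]
    for ptr in reversed(back):           # backtrack through stored pointers
        path.append(ptr[path[-1]])
    return path[::-1]
```

The k-segment algorithms developed later in the article generalize exactly this kind of message passing by enlarging the state on which the message is defined.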

Motivation
While the linear-time efficiency of the current HMM recursions is one of the keys to the widespread adoption of HMMs in applications, the information about the posterior distribution obtained by these algorithms is still very limited. To this end, we define novel probabilistic inference problems for exploration of the HMM posterior distribution, and we efficiently solve these problems by introducing linear-time recursions. To start with a motivating example, assume that we are interested in the event c_x = k, where

c_x = ∑_{n=2}^{N} I((x_{n-1}, x_n) ∈ T)    (3)

denotes the number of times a certain class T of transitions occurs along the hidden path x of the HMM, with I(•) the indicator function.
Then, we may wish to compute the probability

p(c_x = k | y, θ) = ∑_{x : c_x = k} p(x | y, θ),    (4)

which is a global marginal obtained after a summation over all paths having exactly k occurrences from the certain class of transitions. This probability cannot be obtained from the current F-B recursion, which allows us to compute only local marginals such as p(x_n|y, θ) or p(x_{n-1}, x_n|y, θ). The use of Monte Carlo methods to approximate the right-hand side of (4) is also unsuitable because, while fast and exact simulation from p(x|θ, y) is possible by means of the FF-BS recursion, the obtained accuracy can be insufficient when the underlying value of p(c_x = k|y, θ) is very small due to the extremeness of the event c_x = k.
We can also define several other related tasks that throughout the paper we collectively refer to as k-segment inference problems. In such problems, we insert hard constraints into the HMM involving the number and type of segments we want to see along the path x, and then we query the model to provide us with probabilities or representative paths characterizing that constraint. Additional representative examples of k-segment inference that we study in this article are the computation of the optimal MAP sequence associated with the event c_x = k and the simulation of paths from the conditional distribution p(x|c_x = k, y, θ).
To solve k-segment inference problems we develop efficient linear time dynamic programming recursions.
The solution we provide introduces auxiliary counting variables into the original HMM so that we obtain an extended state-space HMM which is consistent with the original model. The auxiliary counting variables used in this augmentation allow us to inject evidence or constraints into the model, so that the standard recursions applied to the extended HMM solve all k-segment inference problems of interest. This provides a simple and elegant solution and results in new HMM recursions that generalize the standard F-B, Viterbi and FF-BS algorithms.
The remainder of the article is organized as follows. Section 4 describes the main theory of k-segment inference, for applications to a posteriori model exploration, and derives the novel HMM recursions. Section 5 considers model fitting under k-segment constraints and presents suitable EM and Bayesian learning procedures. Section 6 presents extensions to the basic k-segment problems, while Section 7 discusses related work. Section 8 considers sequence analysis using two real-world examples from cancer genomics and text information retrieval. Finally, Section 9 concludes with a discussion and directions for future work.

Theory of k-segment inference
This section presents the theoretical foundations of k-segment inference, starting with Section 4.1, which defines such problems. Section 4.2 reformulates these problems in terms of an extended state-space HMM having auxiliary counting variables, and Section 4.3 presents efficient solutions based on linear-time recursions. Finally, Section 4.4 gives a graphical illustration of the proposed algorithms using a simulated sequence. All the algorithms described in this section assume a fixed setting for the parameters θ. Therefore, to keep our expressions uncluttered, in the following we drop θ and write, for instance, p(x|y, θ) as p(x|y) and p(y|θ) as p(y).

k-segment inference problems
Any hidden path x in an HMM can have from 0 up to N − 1 transitions, or equivalently from 1 up to N segments, where a segment is defined as a contiguous run of indices where x_{n-1} = x_n. Following the notation used in eq. (3), we define the number of all segments in x by

c_x = 1 + ∑_{n=2}^{N} I(x_{n-1} ≠ x_n),    (5)

where I(•) denotes the indicator function. c_x is the sum of the number of transitions, i.e. the locations in the hidden path where x_{n-1} ≠ x_n, and the value one that accounts for the initial segment, which is not the result of a transition.
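For any concrete path, the segment count in eq. (5) is immediate to compute; a minimal illustrative sketch (the function name is our own, not from the article):

```python
# Segment count c_x of eq. (5): one initial segment plus one
# increment at every transition x_{n-1} != x_n. Illustrative sketch.
def segment_count(x):
    return 1 + sum(x[n - 1] != x[n] for n in range(1, len(x)))
```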
Subsets of hidden paths associated with different numbers of segments comprise exclusive events, which allows us to decompose the posterior distribution p(x|y) as follows. If we introduce the events c_x = k, with k = 1, . . ., N, each corresponding to the subset of paths {x | c_x = k} having exactly k segments, the posterior distribution p(x|y) can be written as the mixture

p(x|y) = ∑_{k=1}^{N} p(x | c_x = k, y) p(c_x = k | y),    (6)

where p(x | c_x = k, y) is the posterior distribution conditional on having k segments, while p(c_x = k | y) is the posterior probability of the event c_x = k.
The mixture decomposition in eq. (6) suggests that one way to explore the posterior distribution of the HMM is to compute quantities associated with the components of this mixture. This leads to the k-segment inference problems, which can be divided into the following three types:
• Optimal decoding: Find the MAP hidden path that has k segments, that is, the path with the maximum value of p(x|c_x = k, y).
• Probability computation: Find the posterior probability of having k segments, i.e. p(c x = k|y).
• Path sampling: Draw independent samples from p(x|c x = k, y).
To this end, in Section 4.3 we introduce efficient linear-time algorithms to solve all the above tasks, together with several additional related tasks associated with more general events of the form k_1 ≤ c_x ≤ k_2, where 1 ≤ k_1 < k_2 ≤ N. These algorithms are based on a reformulation of the above k-segment inference problems that uses an extended state-space HMM containing auxiliary counting variables.

Auxiliary counting Markov chains
The basis of our algorithm is the augmentation of the Markov chain in (1) with auxiliary variables that count the number of segments. Specifically, c_x from (5) can be considered as a counter that scans the path x and increments by one any time it encounters a transition. We can represent this counting process with an N-dimensional vector of auxiliary variables s, which is a monotonically increasing sequence of non-negative integers.
Conditional on a certain path x, s is sampled deterministically according to the Markov chain

p(s|x) = δ_{s_1,1} ∏_{n=2}^{N} p(s_n | s_{n-1}, x_n, x_{n-1}),    p(s_n | s_{n-1}, x_n, x_{n-1}) = δ_{s_n, s_{n-1} + I(x_{n-1} ≠ x_n)},

where δ_{i,j} is the delta mass that equals one when i = j and zero otherwise. We refer to the above conditional distribution as the counting Markov chain, or counting chain, because it is a Markov chain that makes precise the concept of counting the segments. The counting chain starts at one, i.e. s_1 = 1 (which can be interpreted as sampling from the delta mass δ_{s_1,1}), and then increments by one, so that s_n = s_{n-1} + 1, every time a transition occurs in the hidden path, i.e. whenever x_{n-1} ≠ x_n, which implies the generation of a new segment. The joint density of the HMM is augmented with the counting chain, so that

p(y, x, s) = p(y|x) p(x) p(s|x)

is the new joint density, having the conditional independence structure shown as a directed graphical model in Figure 2. Because this augmentation is consistent, prior-to-posterior inference in the initial HMM and in the HMM augmented with auxiliary variables are in theory equivalent. However, in practice, inference in the latter model is more flexible since it allows us to solve the k-segment inference problems through the insertion of constraints in the counting process. More precisely, the final value of the counter s_N equals c_x, so the event c_x = k can be realized by adding the evidence s_N = k in the graphical model of Figure 2.
Therefore, all types of k-segment inference problems can be reformulated as follows:
• Optimal decoding: The MAP hidden path x* of p(x|c_x = k, y) can be found according to x* = arg max_x p(x, s_N = k | y).
• Probability computation: The posterior probability p(c_x = k|y) can be expressed as p(s_N = k, y)/p(y), where p(y) is known from the forward pass of the standard F-B algorithm and p(s_N = k, y) = ∑_x ∑_{s_\N} p(y, x, s_\N, s_N = k).
• Path sampling: An independent sample x from p(x|c_x = k, y) is obtained by drawing (x, s_\N) ∼ p(x, s_\N | s_N = k, y) and discarding s_\N, where s_\N denotes all counting variables apart from the final s_N, which is clamped to k.
For more general events of the form k_1 ≤ s_N ≤ k_2, where 1 ≤ k_1 < k_2 ≤ N, the above still holds with the slight modification that we additionally need to maximize, marginalize or sample over s_N, respectively for the three cases above, under the constraint k_1 ≤ s_N ≤ k_2. Simple proofs of the correctness of all the above statements can be found in Appendix A.
Furthermore, the k-segment inference problems associated with the special case of the event s_N > k can be equivalently reformulated by using a modified counting chain that absorbs at the state k + 1:

p(s_n | s_{n-1}, x_n, x_{n-1}) = δ_{s_n, s_{n-1} + I(x_{n-1} ≠ x_n) I(s_{n-1} ≤ k)},

where the indicator function I(s_{n-1} ≤ k) freezes the counter once it reaches the absorbing state k + 1. Notice that the above is an inhomogeneous chain having two modes: in the first, the segment counting proceeds normally, and in the second, counting stops once the absorbing state is visited. The k-segment problems for the event s_N > k are then solved by using the above chain and clamping s_N to the value k + 1.
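The two deterministic counter updates, plain and absorbing, can be sketched in a few lines (illustrative function names, not from the article):

```python
# Deterministic counter updates for the counting chain of Section 4.2.
# Illustrative sketch; names (step, step_absorbing) are assumptions.
def step(s_prev, x_prev, x_cur):
    # plain counting chain: increment on every transition
    return s_prev + (x_prev != x_cur)

def step_absorbing(s_prev, x_prev, x_cur, k):
    # absorbing variant for the event s_N > k: once s_{n-1} = k + 1
    # is reached, the counter stays there and counting stops
    return s_prev + (x_prev != x_cur) * (s_prev <= k)
```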
The augmentation with counting variables results in a new HMM having the pair (s_n, x_n) as the extended state variable. Given that s_N = k, any pair (s_n, x_n) can jointly take at most kM values, so we can use the Viterbi algorithm to obtain the MAP of p(x|y, s_N = k), the forward pass of the F-B algorithm to obtain p(s_N = k, y), and the FF-BS algorithm to draw an independent sample from p(x|y, s_N = k). A naive implementation of these algorithms runs in O(k^2 M^2 N) time. However, this complexity can be further reduced to O(kM^2 N) by taking into account the deterministic structure of the counting chain, as discussed in the following section.

Efficient computation via dynamic programming
Optimal decoding. We first describe the k-segment equivalent of the Viterbi algorithm for the optimal decoding problem under k-segment constraints, i.e. for obtaining the MAP of p(x|y, s_N = k). This algorithm solves at once all such problems from k = 1 up to a maximum k = k_max by applying a single forward pass for the maximum value k_max, which requires O(k_max M^2 N) operations. Then, by applying k_max backtracking operations, each scaling as O(N), we obtain all k_max optimal segmentations overall in O(k_max M^2 N) time.
More precisely, the Viterbi algorithm applies a forward pass in which p(y|x)p(x)p(s_\N, s_N = k_max|x) is recursively maximized with respect to the pair (s_{n-1}, x_{n-1}) for any value of the next pair (s_n, x_n). This can be implemented as the propagation of a message, which is a k_max M dimensional vector, as follows. The message is initialized to

γ(x_1, s_1) = log p(y_1|x_1) + log p(x_1) + log p(s_1),

which equals log p(y_1|x_1) + log p(x_1) when s_1 = 1 and −∞ when s_1 > 1. This message is then propagated recursively according to

γ(x_n, s_n) = log p(y_n|x_n) + max_{x_{n-1}, s_{n-1}} [γ(x_{n-1}, s_{n-1}) + log p(x_n|x_{n-1}) + log p(s_n|s_{n-1}, x_n, x_{n-1})],    (16)

where the auxiliary message δ(x_n, s_n) simply stores the pair (x_{n-1}, s_{n-1}) that gives the maximum in (16), needed later in backtracking. Naively, the nth recursive update can be implemented in O(k_max^2 M^2) time, since each s_n takes at most k_max values and each x_n takes M values. However, for any given configuration of (s_n, x_n) (out of the k_max M possible), the permissible values for s_{n-1} are either s_{n-1} = s_n, when x_{n-1} = x_n, or s_{n-1} = s_n − 1, when x_{n-1} ≠ x_n. For all remaining configurations, log p(s_n|s_{n-1}, x_n, x_{n-1}) = −∞, so these configurations need not be checked when maximizing over (s_{n-1}, x_{n-1}) for a certain pair (s_n, x_n). Thus, the maximization in (16) can be done in M operations, resulting in k_max M^2 operations for the whole nth update. Consequently, the full forward pass requires O(k_max M^2 N) operations. Once the forward pass is completed, we have the final message γ(x_N, s_N) (together with all auxiliary δ messages), from which we can obtain all k_max optimal segmentations using backtracking as follows. For k = 1, . . ., k_max, we first compute

x*_N = arg max_{x_N} γ(x_N, s_N = k).

Then, starting from (x*_N, s*_N = k), we backtrack recursively according to (x*_{n-1}, s*_{n-1}) = δ(x*_n, s*_n), which recovers the optimal hidden path x* having exactly k segments. Each backtracking requires O(N) simple indexing operations.
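The constrained forward pass and backtracking can be sketched in code. The following is an illustrative pure-Python implementation (not the authors' code; all names are assumptions) that exploits the restriction to the two permissible predecessors s_{n-1} ∈ {s_n, s_n − 1}, giving the stated O(k_max M^2 N) cost:

```python
# k-segment Viterbi: the state is the pair (x_n, s_n); for each (x_n, s_n)
# only s_{n-1} = s_n (no transition) or s_{n-1} = s_n - 1 (transition)
# is reachable. Illustrative sketch, not the authors' code.
import math
NEG = float("-inf")

def ksegment_viterbi(logB, logpi0, logA, kmax):
    """logB[n][m] = log p(y_n | x_n = m). Returns {k: MAP path with
    exactly k segments} for k = 1, ..., kmax."""
    N, M = len(logB), len(logpi0)
    g = [[NEG] * (kmax + 1) for _ in range(M)]      # g[m][s], s in 1..kmax
    for m in range(M):
        g[m][1] = logpi0[m] + logB[0][m]            # counter starts at s_1 = 1
    back = []
    for n in range(1, N):
        gn = [[NEG] * (kmax + 1) for _ in range(M)]
        ptr = [[None] * (kmax + 1) for _ in range(M)]
        for m in range(M):
            for s in range(1, kmax + 1):
                for mp in range(M):
                    sp = s if mp == m else s - 1    # permissible predecessor
                    if sp < 1:
                        continue
                    v = g[mp][sp] + logA[mp][m]
                    if v > gn[m][s]:
                        gn[m][s], ptr[m][s] = v, (mp, sp)
                gn[m][s] += logB[n][m]
        back.append(ptr)
        g = gn
    paths = {}
    for k in range(1, kmax + 1):
        m = max(range(M), key=lambda mm: g[mm][k])
        if g[m][k] == NEG:
            continue                                 # k segments unreachable
        cur, path = (m, k), [m]
        for ptr in reversed(back):                   # backtrack delta pointers
            cur = ptr[cur[0]][cur[1]]
            path.append(cur[0])
        paths[k] = path[::-1]
    return paths
```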
Probability computation. For the probability computation problem, we work similarly to the above Viterbi algorithm and compute all joint densities p(s_N = k, y) for k = 1 up to k_max using the forward pass of the F-B algorithm applied to the augmented HMM. This recursively sums out each pair (s_{n-1}, x_{n-1}) for any value of the next pair (s_n, x_n), essentially passing the so-called α message (Bishop, 2006). This message is a k_max M dimensional vector taking the initial value

α(x_1, s_1) = p(y_1|x_1) p(x_1) δ_{s_1,1},

which equals p(y_1|x_1)p(x_1) when s_1 = 1 and 0 otherwise. Then, the message is propagated according to the standard α recursion

α(x_n, s_n) = p(y_n|x_n) ∑_{x_{n-1}, s_{n-1}} α(x_{n-1}, s_{n-1}) p(x_n|x_{n-1}) p(s_n|s_{n-1}, x_n, x_{n-1}).

This recursion scales as O(k_max M^2), since the summation over (x_{n-1}, s_{n-1}) can be done in O(M) time by taking advantage of the structure of the counting conditional p(s_n|s_{n-1}, x_n, x_{n-1}). As in any α recursion in an HMM, the final message yields the joint densities

p(s_N = k, y) = ∑_{x_N} α(x_N, s_N = k),

which we can easily obtain for k = 1, . . ., k_max. Clearly, since the computation of a single recursion of the α message takes O(k_max M^2) time, the above computations require O(k_max M^2 N) time overall. Given that the joint density p(s_N = k, y) has been obtained, we can compute exactly the posterior probability p(s_N = k|y) by dividing by the normalization constant p(y) (i.e. the overall likelihood of the HMM) obtained from the standard forward pass.
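The α recursion on the extended state can be sketched as follows (an illustrative implementation, not the authors' code; note that normalizing by the sum over k conditions on c_x ≤ k_max, which coincides with the exact posterior p(c_x = k|y) when k_max = N):

```python
# Constrained forward (alpha) recursion on the extended state (x_n, s_n).
# Summing the final message over x_N gives p(s_N = k, y) for every k up
# to kmax in a single pass. Illustrative sketch; names are assumptions.
def ksegment_forward(B, pi0, A, kmax):
    """B[n][m] = p(y_n | x_n = m). Returns post with
    post[k] = p(c_x = k | y, c_x <= kmax); exact p(c_x = k | y) if kmax = N."""
    N, M = len(B), len(pi0)
    alpha = [[0.0] * (kmax + 1) for _ in range(M)]   # alpha[m][s], s in 1..kmax
    for m in range(M):
        alpha[m][1] = pi0[m] * B[0][m]               # counter starts at s_1 = 1
    for n in range(1, N):
        new = [[0.0] * (kmax + 1) for _ in range(M)]
        for m in range(M):
            for s in range(1, kmax + 1):
                tot = alpha[m][s] * A[m][m]          # no transition: s stays
                for mp in range(M):                  # transition: s increments
                    if mp != m and s >= 2:
                        tot += alpha[mp][s - 1] * A[mp][m]
                new[m][s] = tot * B[n][m]
        alpha = new
    joint = [sum(alpha[m][k] for m in range(M)) for k in range(kmax + 1)]
    Z = sum(joint)                                   # p(y, s_N <= kmax)
    return [jk / Z for jk in joint]
```

For example, with two equally likely sticky states (self-transition probability 0.9), uninformative emissions and N = 2, the chain has one segment with probability 0.9 and two segments with probability 0.1.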
Similarly to the above, we can also define the so-called backwards or β message in the extended state-space HMM. This message is useful when applying the EM algorithm for learning an HMM under k-segment constraints, and its computation is described in Section 5.2.1.
Path sampling. We now turn to the sampling problem, where we wish to draw a path from the conditional p(x|s_N = k, y). Such a path can be obtained by sampling a pair (x, s_\N) from p(x, s_\N|s_N = k, y) and then discarding s_\N. We apply the FF-BS algorithm, which is based on the decomposition

p(x, s_\N | s_N = k, y) = p(x_N | s_N = k, y) ∏_{n=N-1}^{1} p(x_n, s_n | x_{n+1}, s_{n+1}, y),

where the index n starts from N − 1 and decrements down to one. Applying first the forward pass described above, we have the final message α(x_N, s_N), from which we can sample

x_N ∼ p(x_N | s_N = k, y) ∝ α(x_N, s_N = k).

Then, recursively, we go backwards and each time sample (x_n, s_n), given the already sampled value of (x_{n+1}, s_{n+1}), from

p(x_n, s_n | x_{n+1}, s_{n+1}, y) ∝ α(x_n, s_n) p(x_{n+1}|x_n) p(s_{n+1}|s_n, x_{n+1}, x_n),

where the message α(x_n, s_n) = p(x_n, s_n, y_1, . . ., y_n) is known from the forward pass. Each sampling step takes O(M) time (again due to the deterministic nature of the conditional p(s_{n+1}|s_n, x_{n+1}, x_n)), and the whole backward sampling requires O(MN) time. If we wish to sample simultaneously from all conditional distributions p(x|s_N = k, y), with k = 1, . . ., k_max, we can do this using a single forward pass that scales as O(k_max M^2 N) and k_max backward sampling iterations scaling as O(k_max M N), so the overall complexity is O(k_max M^2 N).
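A sketch of the constrained FF-BS procedure, combining the augmented forward pass with backward sampling, is given below (illustrative, not the authors' code; all names are assumptions). By construction, every returned path has exactly k segments:

```python
# Constrained forward-filtering backward-sampling (FF-BS): run the
# forward pass on the extended state (x_n, s_n), clamp s_N = k, then
# sample the path backwards. Illustrative sketch.
import random

def draw(w, rng):
    """Draw an index proportionally to the non-negative weights w."""
    u, c = rng.random() * sum(w), 0.0
    for i, wi in enumerate(w):
        c += wi
        if u <= c:
            return i
    return len(w) - 1

def sample_path_with_k_segments(B, pi0, A, k, rng):
    N, M = len(B), len(pi0)
    # forward pass on (x_n, s_n), with s restricted to 1..k
    a = [[0.0] * (k + 1) for _ in range(M)]
    for m in range(M):
        a[m][1] = pi0[m] * B[0][m]
    alphas = [a]
    for n in range(1, N):
        new = [[0.0] * (k + 1) for _ in range(M)]
        for m in range(M):
            for s in range(1, k + 1):
                tot = a[m][s] * A[m][m]              # no transition
                for mp in range(M):
                    if mp != m and s >= 2:           # transition
                        tot += a[mp][s - 1] * A[mp][m]
                new[m][s] = tot * B[n][m]
        a = new
        alphas.append(a)
    # sample x_N from p(x_N | s_N = k, y) proportional to alpha(x_N, s_N = k)
    x = [draw([alphas[-1][m][k] for m in range(M)], rng)]
    s = k
    # backwards: weights alpha(x_n, s_n) p(x_{n+1}|x_n), with s_n forced
    # to s_{n+1} (same state) or s_{n+1} - 1 (different state)
    for n in range(N - 2, -1, -1):
        w = [alphas[n][m][s] * A[m][x[-1]] if m == x[-1]
             else (alphas[n][m][s - 1] * A[m][x[-1]] if s >= 2 else 0.0)
             for m in range(M)]
        m = draw(w, rng)
        s = s if m == x[-1] else s - 1
        x.append(m)
    return x[::-1]
```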
Furthermore, very simple and straightforward modifications of the above procedures can deal with the more general events. For instance, if we wish to sample a path from p(x|k_1 ≤ s_N ≤ k_2, y), we first apply the forward pass for k_max = k_2 and then perform backward sampling exactly as described above, with the only difference that initially we sample (x_N, s_N) ∼ p(x_N, s_N | k_1 ≤ s_N ≤ k_2, y) ∝ α(x_N, s_N) I(k_1 ≤ s_N ≤ k_2). Similarly, the k-segment inference problems associated with the special event s_N > k can be efficiently solved in O((k + 1)M^2 N) time by using the absorbing counting chain from (14) and then applying exactly the above algorithms with s_N clamped to k + 1. Finally, it is important to notice that running k-segment inference up to some k_max and setting k_max + 1 as the absorbing state always gives a global summary of the posterior distribution that is guaranteed to be at least as informative as the standard Viterbi MAP path. More precisely, the events c_x = 1, . . ., c_x = k_max and c_x > k_max comprise exclusive events that make up the whole set of paths for any value of k_max. Therefore, the probabilities p(c_x = 1|y), . . ., p(c_x = k_max|y) and p(c_x > k_max|y), computed from the forward pass in the augmented HMM, always sum to one, while the set of the corresponding k_max + 1 optimal paths must include the standard Viterbi MAP path, which will be either one of the paths with 1 up to k_max segments or the path with more segments than k_max. We refer to the above combined sets of probabilities and optimal paths as the k_max + 1 summary of the posterior distribution.

Illustrative Example
Here, we give a graphical illustration of optimal decoding and path sampling under k-segment constraints. For this, we simulated a data sequence according to y_n | x_n, m, σ^2 ∼ N(m_{x_n}, σ^2), n = 1, . . ., N = 1000, where the hidden sequence x = {x_n}_{n=1}^{N} was given by a Markov chain with M = 3 states, m = {−2, −1, 1} and σ = 0.9.
Using the simulated data, shown in the first row of Figure 3, we fitted a three-state HMM using the EM algorithm, which recovered parameter estimates very close to the ground-truth values. We then computed the standard Viterbi path and obtained the optimal segmentations associated with the k_max + 1 summary, using the k-segment equivalent of the Viterbi algorithm with k_max = 10. These are shown in the second row of Figure 3.
Each such path is displayed so that the three states are shown in different colors. On top, the Viterbi path is displayed, containing 14 segments, followed by the 11 paths of the k_max + 1 summary. The first 10 paths of the latter summary provide a coarse-to-fine hierarchical segmentation of the data sequence, where the number of segments increases by one each time. Notice that two consecutive segmentations do not always follow the principle used in the circular binary segmentation algorithm (Olshen et al., 2004), i.e. the (k + 1)th segmentation might not be obtained by splitting a single segment of the kth segmentation into two; such an approach would be sub-optimal. Also notice that the final path, which corresponds to the absorbing state (labelled > 10 in the figure), is precisely the standard Viterbi path. The third panel of Figure 3 illustrates path sampling under k-segment constraints using the FF-BS algorithm in the augmented HMM. In particular, 10 samples are shown that are constrained to have exactly k = 7 segments.
We remark that, so far, our k-segment algorithms have been applied entirely retrospectively to an HMM fitted using a very standard and common approach in a simple but generic model set-up. The k-segment constraints are not involved in the model-fitting process but are applied retrospectively to provide a rich exploration of the posterior sequence space where qualitatively diverse segmentations are reported. For the expenditure of some computational time, the k-segment generalizations of optimal decoding, probability computation and path sampling provide the HMM user with alternative summaries.

k-segment inference in practice
So far we have presented novel recursions for HMM inference that are applied assuming a fixed value for the parameters θ. In this section, we discuss how these recursions can be used in a general statistical estimation problem with HMMs. More precisely, we analyze the following two uses of k-segment constraints: i) the retrospective or a posteriori use, where the parameters of the HMM have been fitted beforehand and k-segment inference is used as a meta-analysis tool (Section 5.1), and ii) the prospective or a priori use, where the constraints are introduced during model fitting so that they actively influence the model parameters (Section 5.2).

Retrospective utility of k-segment constraints
The most obvious practical use of k-segment inference is the following. Given an HMM with fixed parameters, e.g. estimated by ML training, apply the recursions of Section 4.3 to explore the posterior distribution over the hidden sequences. Some questions that arise are: i) what is the justification of this approach? ii) when is it suitable? and iii) how can it be extended in a Bayesian estimation setting?
To start with the first question, one way to formalize the a posteriori use of k-segment inference is through decision-theoretic arguments, where actions are taken a posteriori given that beliefs about a system are described by some fixed and known probability model. For instance, the computation of the MAP hidden path x* of p(x|c_x = k, θ_ML, y), where θ_ML is some value obtained by ML training, can be considered as choosing the action z that maximizes the expected utility

U(z) = ∑_x u(z, x; c_x = k) p(x | θ_ML, y),

where the 0-1 utility function u(z, x; c_x = k) takes the value one only when both z = x and x ∈ {x | c_x = k}.
The introduction of constraints using the counting auxiliary variables allows for efficient maximization of the above expected utility using dynamic programming.
Regarding the second question, notice that the assumption of the a posteriori use of k-segment constraints implies that the model parameters, which quantify the structure of the hidden states, are inferred independently of whatever k-segment constraints we may wish to consider.In practice, this is sensible when the hidden states are clearly interpretable classes so that the emission densities model true class conditional densities and the transition matrix represents meaningful spatial correlation between these classes.
In a Bayesian setting, a prior distribution p(θ) is placed on the HMM parameters and is then updated to a posterior distribution p(θ|y) by conditioning on the observed data. Following arguments similar to those above, under a posteriori use of k-segment constraints we assume that the model parameters θ are conditionally independent of any constraint, say c_x = k, given the observed data y, i.e. p(θ|c_x = k, y) = p(θ|y).
Again, such an assumption is sensible when the parameters describe the structure of true classes, which is independent of any constraint in the classification procedure. Assuming now that the posterior distribution p(θ|y) is computationally realized by a set of samples {θ^(t)}_{t=1}^{T}, obtained say by some MCMC algorithm, the three k-segment inference problems can be tackled as follows. Firstly, the computation of p(c_x = k|y) can be done according to

p(c_x = k|y) ≈ (1/T) ∑_{t=1}^{T} p(c_x = k | θ^(t), y),

where the Rao-Blackwellization when summing out x is carried out by the forward recursion of the F-B algorithm in the augmented HMM with the final counting variable clamped to the value k. More precisely, for each parameter sample θ^(t), this recursion computes the probability p(c_x = k, y|θ^(t)) in O(kM^2 N) time, from which we obtain p(c_x = k|θ^(t), y) by normalizing by p(y|θ^(t)), obtained in O(M^2 N) time using the standard forward pass in the unconstrained HMM. Similarly, to find the MAP of p(x|c_x = k, y) we start from the expression

p(x|c_x = k, y) = ∫ p(x|θ, c_x = k, y) p(θ|y) dθ,    (26)

where we used the conditional independence assumption. As with the unconstrained MAP of p(x|y) (Scott, 2002), the above cannot be maximized analytically with respect to x (notice also that applying Monte Carlo to the final integral will not help), and therefore we consider an approximate solution of the form

x* = arg max_x p(x | θ̂, c_x = k, y),

obtained by applying the Viterbi algorithm in the augmented HMM. Here, θ̂ is an approximation of the MAP of p(θ|y) computed from the samples, i.e. θ̂ = θ^(t*), t* = arg max_{1≤t≤T} p(y|θ^(t)) p(θ^(t)). Regarding path sampling, if we wish to draw a path x from p(x|c_x = k, y), then following eq. (26) we can choose a parameter sample θ^(t) uniformly from the set {θ^(t)}_{t=1}^{T} and then draw a path from p(x|θ^(t), c_x = k, y) using the FF-BS algorithm in the augmented HMM.
Finally, it is worth discussing how an HMM whose hidden states are true classes can be fitted to data in practice. It is important to note that this cannot be done in a completely unsupervised manner, as in that case the classes are not identifiable. Clearly, we need to inject knowledge into the model about which state corresponds to which class. One extreme way to achieve this is to use completely supervised learning, so that the data consist of both the sequence y and the class-label sequence x, i.e. x is fully observed. ML training in this case simplifies significantly, and the standard EM algorithm for the HMM is not needed. A similar but somewhat more realistic scenario is to use semi-supervised learning, where the path x is partially observed. Such supervised or semi-supervised approaches can also be implemented in two phases, where in the first phase some of the class conditional densities are estimated using labelled data and then are used as fixed emission densities for segmental classification within an HMM. We make use of the latter approach to train an HMM for text retrieval in Section 8.2. A different way to inject knowledge about the classes is via the prior p(θ), by following a fully Bayesian approach or penalized ML. For instance, the classes might have a natural ordering or pairwise proximity which could be taken into account by choosing a suitable prior p(θ). This resolves the identifiability issues by assigning the hidden states to classes implicitly via the prior, while otherwise training can be performed in an unsupervised manner with the path x fully unobserved.

Learning with k-segment constraints
A second way to use k-segment constraints is to incorporate them into the model fitting process, so that the inferred parameters, and hence the structure of the hidden states, depend on these constraints. In contrast to the case where the hidden states are classes, here the hidden states are latent variables that simply add flexibility in fitting the data, much like the latent components in unsupervised mixture density estimation. A suitable application is optimal compression of a data sequence by minimizing the reconstruction error, or equivalently maximizing model fit, between the observed sequence and the representation provided by the model. In such a case, a k-segment constraint could represent a fixed budget in the reconstruction process, and clearly it is more efficient to incorporate the constraint during model fitting.
To this end, we next describe the technical details of using k-segment constraints for learning an HMM, either by applying the EM algorithm (section 5.2.1) or by applying Bayesian approaches (section 5.2.2).

Expectation-Maximization
Suppose that together with the data y we have an additional piece of information about the number of segments in the hidden path. Specifically, we shall assume that this number cannot be larger than k; any other constraint, such as the number being exactly k, can be dealt with in a similar manner. By incorporating this constraint in the augmented HMM we obtain the following joint density:

p(s_N ≤ k, y, x, s_\N) = p(y|x) p(x) p(s_N ≤ k, s_\N|x),

where the evidence s_N ≤ k reflects the information about the maximum number of segments allowed. Notice that incorporating the constraint simply amounts to constraining each counting variable s_n to take values in {1, . . ., k}.
We would now like to apply the EM algorithm to learn the parameters θ, for which we need to write down the auxiliary Q function and subsequently derive the E and M steps. Since the factor p(s_N ≤ k, s_\N|x) does not contain learnable parameters, the auxiliary Q function can be written as

Q(θ, θ^old) = Σ_x p(x|s_N ≤ k, y, θ^old) log [p(y|x, θ) p(x|θ)],

where θ^old denotes the current parameter values. This function has exactly the same form as the auxiliary function in the unconstrained HMM, with the only difference being that p(x|y, θ^old) is replaced by p(x|s_N ≤ k, y, θ^old).
The E step simplifies to computing all marginals p(x_n|s_N ≤ k, y, θ^old) and all pairwise marginals p(x_{n-1}, x_n|s_N ≤ k, y, θ^old), which can be obtained by applying the F-B algorithm in the augmented HMM. Given the current θ^old (omitted next for brevity), this algorithm computes the α messages, as shown in section 4.3, and the backward or β messages, where the first β message is initialized to unity (i.e. β(x_N, s_N) = 1) and subsequent β messages are obtained recursively according to

β(x_{n-1}, s_{n-1}) = Σ_{x_n, s_n} p(y_n|x_n) p(x_n|x_{n-1}) p(s_n|s_{n-1}, x_{n-1}, x_n) β(x_n, s_n).

Given that each s_n takes k values, the β messages are computed in overall O(kM^2 N) time. Having stored all α and β messages, the desired marginals and pairwise marginals are obtained from

p(x_n|s_N ≤ k, y) ∝ Σ_{s_n} α(x_n, s_n) β(x_n, s_n),
p(x_{n-1}, x_n|s_N ≤ k, y) ∝ Σ_{s_{n-1}, s_n} α(x_{n-1}, s_{n-1}) p(y_n|x_n) p(x_n|x_{n-1}) p(s_n|s_{n-1}, x_{n-1}, x_n) β(x_n, s_n),

which involve summing out the auxiliary counting variables. Given these quantities from the E step, the form of the M step remains the same as in unconstrained HMMs. Iterating between the above E and M steps leads to a local maximum of the likelihood p(c_x ≤ k, y).
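To make the E-step recursion concrete, the following is a minimal log-space sketch of the constrained forward-backward pass in the augmented HMM, with the counting variable restricted to {1, . . ., k}; the function name, array layout and use of NumPy are illustrative assumptions, and no low-level optimization is attempted:

```python
import numpy as np

def constrained_forward_backward(log_lik, log_pi, log_A, k):
    """Forward-backward under the evidence s_N <= k (at most k segments).

    log_lik : (N, M) log p(y_n | x_n = m); log_pi : (M,); log_A : (M, M).
    Returns (gamma, log_Z): marginals p(x_n | s_N <= k, y) of shape (N, M)
    and log p(s_N <= k, y). Index s = 0..k-1 represents s_n = 1..k.
    """
    N, M = log_lik.shape
    NEG = -np.inf
    # alpha[n, m, s] = log p(y_1:n, x_n = m, s_n = s+1)
    alpha = np.full((N, M, k), NEG)
    alpha[0, :, 0] = log_pi + log_lik[0]                 # first segment: s_1 = 1
    for n in range(1, N):
        for j in range(M):
            for s in range(k):
                acc = [alpha[n-1, j, s] + log_A[j, j]]   # same state: s unchanged
                if s > 0:                                # new segment: s increments
                    for i in range(M):
                        if i != j:
                            acc.append(alpha[n-1, i, s-1] + log_A[i, j])
                alpha[n, j, s] = np.logaddexp.reduce(acc) + log_lik[n, j]
    log_Z = np.logaddexp.reduce(alpha[-1].ravel())       # log p(s_N <= k, y)
    # beta[n, m, s] = log p(y_{n+1:N} | x_n = m, s_n = s+1), with beta_N = 0
    beta = np.full((N, M, k), NEG)
    beta[-1] = 0.0
    for n in range(N - 2, -1, -1):
        for i in range(M):
            for s in range(k):
                acc = [beta[n+1, i, s] + log_A[i, i] + log_lik[n+1, i]]
                if s + 1 < k:                            # room for one more segment
                    for j in range(M):
                        if j != i:
                            acc.append(beta[n+1, j, s+1] + log_A[i, j] + log_lik[n+1, j])
                beta[n, i, s] = np.logaddexp.reduce(acc)
    # marginals: sum out the counting variable and normalize
    gamma = np.exp(np.logaddexp.reduce(alpha + beta, axis=2) - log_Z)
    return gamma, log_Z
```

Paths with more than k segments are pruned automatically because the counting index cannot exceed k; setting k = N recovers the standard unconstrained forward-backward quantities.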
Further, as mentioned earlier, EM algorithms under similar constraints can be derived in the same manner. For instance, if we wish to apply the EM algorithm assuming the number of segments to be exactly k, we need to clamp the final counting variable s_N to the value k.
To give an example of using the above learning algorithms, we consider six simulated sequences (including the one from Figure 3) and apply EM under the k-segment constraint c_x ≤ 9. The six panels in Figure 4 show the optimal k-segment paths (blue dashed lines) obtained after having optimized the HMM with the constrained EM described above. The corresponding retrospective optimal paths associated with the same constraint (red solid lines), i.e. the paths found after having optimized the HMM parameters using the standard unconstrained EM, are also shown for comparison.

Bayesian approaches
It is also possible to learn an HMM under k-segment constraints using Bayesian inference, and here we briefly outline how this can be done using Gibbs sampling. Consider a Bayesian HMM with a prior distribution p(θ) on the parameters, so that the joint density becomes

p(s_N ≤ k, y, x, s_\N, θ) = p(y|x, θ) p(x|θ) p(s_N ≤ k, s_\N|x) p(θ),

where, as in the previous section, we assume that the number of segments cannot exceed k. Notice that, while θ and s are conditionally independent given x, they are marginally dependent because of the constraint s_N ≤ k.
We aim to compute the posterior distribution p(x, s, θ|s_N ≤ k, y), and since this is intractable to compute exactly we resort to a Gibbs-type sampling scheme where we iteratively sample the paths (x, s) from the conditional p(x, s|θ, s_N ≤ k, y) and the parameters θ from p(θ|x, y). The first step corresponds precisely to path sampling under a k-segment constraint, presented in section 4.3, using FF-BS in the augmented HMM. The second step requires simulating from the posterior conditional over parameters, and clearly this is always identical to the corresponding step when sampling in the unconstrained HMM. Also, when this step involves exact simulation from p(θ|x, y) the full algorithm is precisely Gibbs sampling; otherwise it is Metropolis-within-Gibbs, where θ is sampled from a proposal distribution and then accepted or rejected.

Extended k-segment inference problems
In this section, we discuss extensions to the basic k-segment inference problems considered in section 4. Specifically, in section 6.1 we show how to solve generalized k-segment inference problems where we are interested in transitions of a particular type. In section 6.2, we extend the framework in a different direction by showing how to extract highly non-Markovian events along the HMM hidden path, consisting of excursions from null states to abnormal states.

Counting segments satisfying certain constraints
In several applications of HMMs, we may wish to solve more general k-segment inference problems associated with probability events involving certain types of segments and transitions. For example, we could have a natural sub-group of states A ⊂ {1, . . ., M} and we may wish to classify the observed sequence in terms of whether or not A occurs, based on the computation of the associated posterior probability. This problem is an example of generalized k-segment inference, and in this section we show how it and related problems can be solved using auxiliary counting variables.
In a hidden path of an HMM (assuming an irreducible transition matrix) we can encounter M(M − 1) possible transitions. We can represent this set of all transitions by an M × M binary matrix C having ones everywhere and zeros on the diagonal, i.e. C(i, j) = I(i ≠ j). Such a matrix characterizes the standard k-segment inference problems described earlier, where all segments are of interest and are all counted. When we care only about a subset of transitions, we can modify C so that C(i, j) = 1 if and only if i ≠ j and the transition i → j belongs to this subset.
One way to visualise this is to think of colouring certain transitions in the HMM. Then, we are interested in counting only segments generated from those coloured transitions. Furthermore, in order to be flexible about the inclusion of the initial segment (which is not the result of a transition) in the probability event, we can define an M-dimensional binary vector µ indicating the subset of values of the initial state x_1 that are of interest. Then, analogously to equation (5), we can define

c_x = µ(x_1) + Σ_{n=2}^N C(x_{n-1}, x_n),

which denotes the number of segments along the hidden path x that are compatible with the constraints (µ, C).
Subsequently, we can define probability events of the form c_x = k, k_1 ≤ c_x ≤ k_2, the special event c_x > k, and so on, and then formulate all associated k-segment inference problems as described in section 4.1.
To solve all these new problems, we again introduce auxiliary counting variables s and define a suitable counting Markov chain p(s|x) that deterministically generates the variables in s given the path x. This chain has the same structure as eq. (9) but with the following modified conditionals:

p(s_1|x_1) = δ_{s_1, µ(x_1)},
p(s_n|s_{n-1}, x_{n-1}, x_n) = δ_{s_n, s_{n-1} + C(x_{n-1}, x_n)}.

Here, s_1 is set to one only for the subset of values of x_1 compatible with µ; otherwise it remains zero and the associated initial segments are not counted. The case of always counting the first segment corresponds to the special case where µ(x_1 = i) = 1 for each i, in which case p(s_1|x_1) simplifies to δ_{s_1,1}. Similarly, the conditional p(s_n|s_{n-1}, x_{n-1}, x_n) is such that s_n increases only when C(x_{n-1}, x_n) = 1, so that new segments for which x_{n-1} ≠ x_n but C(x_{n-1}, x_n) = 0 are not counted. Clearly, counting every segment is obtained as the special case in which µ(i) = 1 for all i and C(i, j) = I(i ≠ j). All dynamic programming recursions of section 4.3 are applicable to the above generalized k-segment inference problems. Given that we solve these problems for k = 1 up to k = k_max, the time complexity of these algorithms is either O(k_max M^2 N) when the vector µ is equal to one everywhere, or O((k_max + 1)M^2 N) when some of the elements of µ are zero. In the latter case the term k_max + 1 appears simply because each counting variable s_n can take k_max + 1 values. Finally, the standard Viterbi, F-B and FF-BS algorithms for HMMs are obtained as special cases of the generalized k-segment recursions by setting µ and C to zero. To make this clear, notice that in this case none of the segments along the path x are counted, so that k can only take the value zero, i.e. p(c_x = 0|y) = 1 and p(x|c_x = 0, y) = p(x|y). Therefore, the generalized k-segment recursions reduce to the standard HMM recursions having complexity O(M^2 N), which shows that the k-segment algorithms provide a more general inference methodology for HMMs.
Finally, to illustrate optimal decoding in a generalized k-segment setting, we consider again the simulated data of Figure 3. Suppose we would like to count segments from the second (green) state only. The constraints (µ, C) we need are µ = [0 1 0] and C = [0 1 0; 0 0 0; 0 1 0] (where ; separates the rows of C). The fourth row of Figure 3 shows several optimal paths having 0 up to 8 segments associated with counting the second state in the HMM.
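As a small illustration of the counting rule defined by (µ, C), the following sketch (the helper name is an illustrative choice) computes c_x for a given path, shown here with the example constraints of this section:

```python
def count_constrained_segments(path, mu, C):
    """Number of segments along a hidden path compatible with (mu, C):
    mu[i] = 1 if an initial segment starting in state i is counted,
    C[i][j] = 1 if a transition i -> j opens a counted segment."""
    c = int(mu[path[0]])                 # contribution of the initial segment
    for n in range(1, len(path)):
        i, j = path[n - 1], path[n]
        if i != j and C[i][j] == 1:      # a counted (coloured) transition
            c += 1
    return c

# Constraints from the example: count segments of the second (green) state only.
mu = [0, 1, 0]
C = [[0, 1, 0],
     [0, 0, 0],
     [0, 1, 0]]
```

For instance, count_constrained_segments([0, 1, 1, 2, 1, 0], mu, C) returns 2, since only the two entries into state 1 are counted; with µ all ones and C(i, j) = I(i ≠ j), the function reduces to the standard segment count.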

Extracting excursions using two layers of auxiliary variables
In several applications of HMMs where the hidden states correspond to true states of nature, there is often a subset of states (in the simplest case just a single state) considered as normal or null states, while the remaining ones represent abnormalities. In such applications the practitioner might be interested in identifying excursions, where the hidden path moves from a null state to abnormal states and then returns to a null state. Extracting such events using a k-segment formulation is challenging because an excursion has a high-order Markov structure and therefore cannot be identified by just comparing two consecutive states. To this end, we next describe a generalization of our augmentation framework with counting variables that efficiently solves the excursion problem.
We first give a precise definition of an excursion. Suppose the HMM states are divided into two groups: the null set N ⊂ {1, . . ., M} and the abnormal set N̄ = {1, . . ., M} \ N. An excursion is any sub-path (x_i, x_{i+1}, . . ., x_j), with j − i > 1, where x_i, x_j ∈ N and the intermediate hidden variables (x_{i+1}, . . ., x_{j−1}) take values from the abnormal set. In other words, an excursion is a sub-path having its start and end states clamped to null states and all intermediate variables clamped to abnormal values. Further, a special case of an excursion is a restricted excursion, where the intermediate sub-path (x_{i+1}, . . ., x_{j−1}) is clamped to the same abnormal state.
To count excursions, we introduce a new sequence of auxiliary variables e = (e_1, . . ., e_N) which signify the different phases of the excursion cycle. These variables unfold sequentially given the path x according to the following deterministic chain. Initially, e_1 is set to zero, so that p(e_1|x_1) = δ_{e_1,0}, and then any subsequent e_n is drawn according to

p(e_n|e_{n-1}, x_{n-1}, x_n) = δ_{e_n,1},        if x_{n-1} ∈ N and x_n ∈ N̄,
                               δ_{e_n,0},        if e_{n-1} = 1 and x_n ∈ N,
                               δ_{e_n,e_{n-1}},  otherwise.

Here, the first part of the conditional signals the initiation of an excursion: e_n is set to one once a transition from a null state to an abnormal state occurs. The second part signifies the end of the excursion, where we return to a null state. The third part replicates the previous value and deals simultaneously with intermediate variables in the excursion sub-path, in which case e_n = e_{n-1} = 1, and with situations where x has started in an abnormal state and an excursion has not yet been initiated, in which case e_n = e_{n-1} = 0. The key to counting excursions is then to increment a counter any time there is a transition from one to zero in the path e, signifying the completion of an excursion. This is achieved using counting variables s generated given e, so that s_1 = 0 and any subsequent s_n is drawn from

p(s_n|s_{n-1}, e_{n-1}, e_n) = δ_{s_n, s_{n-1} + I(e_{n-1} = 1, e_n = 0)}.

The initial HMM is augmented hierarchically with the above two layers of auxiliary variables, so that p(y, x, e, s) = p(y|x) p(x) p(e|x) p(s|e) is the joint density of the extended state-space HMM and each triple (x_n, e_n, s_n) constitutes the new extended hidden state. Then, by working analogously to section 4.3, we can derive recursions for all types of k-segment inference problems associated with counting excursions. For instance, by specifying a maximum number of k_max excursions we can introduce evidence into the final state s_N, such that s_N ≤ k_max, and then obtain all optimal paths containing k = 1 up to k = k_max excursions using the Viterbi algorithm. Since each variable e_n takes two possible values and s_n takes k_max + 1 possible values, the complexity of all dynamic programming algorithms is O(2(k_max + 1)M^2 N), which is twice as slow as generalized k-segment inference.
Dealing with restricted excursions requires only a modification of the third "otherwise" part in eq. (37). In particular, this part must now be modified so that, once an excursion cycle has been initiated, i.e. e_{n-1} = 1, we count any transition occurring between abnormal states. More precisely, for e_{n-1} = 1 and x_n ∈ N̄ this part becomes

p(e_n|e_{n-1}, x_{n-1}, x_n) = δ_{e_n, e_{n-1} + I(x_{n-1} ≠ x_n)}.

Then, the problem of counting restricted excursions is solved by constraining all e_n variables to take only the two values {0, 1}, so that once an excursion cycle has been initiated we cannot transit to a different abnormal state.
The time complexity of the dynamic programming recursions remains O(2(k_max + 1)M^2 N), as in the simple excursion case.
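The two-layer counting chain can be mimicked directly on a fixed path; the following sketch (the helper name is an illustrative choice) counts completed excursions by tracking the phase variable e_n and incrementing the counter on each one-to-zero transition:

```python
def count_excursions(path, null_set):
    """Count completed excursions along a hidden path: sub-paths that leave
    a null state, visit only abnormal states, and return to a null state.
    Mirrors the deterministic updates of the auxiliary variables (e_n, s_n)."""
    e, s = 0, 0
    for n in range(1, len(path)):
        prev_null = path[n - 1] in null_set
        cur_null = path[n] in null_set
        if prev_null and not cur_null:
            e_new = 1          # excursion initiated: null -> abnormal
        elif e == 1 and cur_null:
            e_new = 0          # excursion completed: back to a null state
        else:
            e_new = e          # replicate previous phase
        if e == 1 and e_new == 0:
            s += 1             # count the completed excursion
        e = e_new
    return s
```

Note that a path ending mid-excursion, or starting in an abnormal state, contributes no count, matching the definition in the text.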
To illustrate the concept of extracting excursions, we return to the dataset of Figure 3, where we would like to count excursions such that the first and second states comprise the null set and the remaining third state is taken as abnormal. The panel in the last row of Figure 3 shows several optimal paths found by counting excursions where, for clarity, only the excursion segments are displayed using black solid lines.

Relation to other methods
Our method formalizes and generalizes the approach of Kohlmorgen (2003), who provided the first solution (as far as we are aware) for a specific form of the k-segment inference problem. Kohlmorgen (2003) recognized that an exact dynamic programming solution existed for the optimal decoding MAP estimation problem. In this article, we have placed that insightful observation within a novel counting Markov chain framework and shown that dynamic programming can also be used for marginalization and sampling of random variables, allowing, for instance, the computation of marginal probabilities over subsets of hidden paths using the forward recursion and the simulation of samples with exactly k segments using the FF-BS algorithm.
The use of augmentation with auxiliary variables means that our framework is easily generalizable, since one can tackle different types of inference problems by constructing suitable counting chains. For instance, in Section 6.1, we took this forward by introducing and solving generalized k-segment inference problems in HMMs simply by generalizing the structure of the counting chain.
In addition, the way we construct counting chains has similarities with explicit duration HMMs (ED-HMMs; Mitchell et al., 1995; Murphy, 2002; Yu, 2010), which modify the original HMM so that each hidden state emits not a single observation but a sequence of observations, whose number is chosen randomly from a distribution. This can be thought of as introducing duration or segment-length constraints in the original HMM, so that the resulting model is a hidden semi-Markov model. From a technical point of view, the use of counting variables in ED-HMMs shares similarities with our methodology; however, the scope of our approach is very different. Specifically, in our case the counting variables are used to obtain probabilities and hidden paths in the original standard HMM, i.e. we do not alter the original HMM but instead perform exploratory inference in this model, while in the ED-HMM the counting variables define a new model (marginalizing out the s_n's from the joint prior distribution of the x_n's and s_n's gives a new semi-Markov model).
The task of k-segment inference in HMMs is strongly related to changepoint estimation. Traditional changepoint estimation algorithms (see, e.g., Auger and Lawrence, 1989; Fearnhead, 2006; Fearnhead and Liu, 2007) allow the computation of optimal segmentations of sequential data having one up to k_max segments in O(k_max N^2) time, i.e. these algorithms have quadratic complexity in the length of the data sequence. In contrast, our algorithms have linear complexity in the length of the sequence and can therefore be applied to massive datasets such as those encountered in bioinformatics. Recently, Killick et al. (2012) presented a linear-complexity algorithm for changepoint estimation which, however, does not solve the optimal decoding problem in k-segment inference (i.e. it does not find all optimal segmentations having one up to a maximum number of segments), but instead discovers a single segmentation with an a priori unknown number of segments.
Yau and Holmes (2013) also developed a decision-theoretic approach for segmentation using hidden Markov models, defining a loss function on transitions and identifying a Viterbi-like dynamic programming algorithm to efficiently compute the hidden state sequence that minimizes the posterior expected loss. The properties of the sequence predictions are modified through specification of the loss penalties on transitions, as opposed to altering the transition dynamics of the hidden Markov model. The k-segment algorithms developed here can also be applied to the method of Yau and Holmes (2013) to produce sequence predictions that minimize the posterior expected loss subject to a desired k-segment constraint. The combination of the decision-theoretic approach and the use of k-segments therefore provides a powerful tool for exploring the complete state space of hidden Markov models.

Examples
Next, we demonstrate the utility of k-segment methods in two real-world applications. Specifically, in section 8.1 we consider the problem of copy number identification in cancer genomic sequences, while in section 8.2 we discuss an application to text retrieval and topic modelling.

Genome-wide DNA copy number profiling in cancer
In this section, we consider the problem of genome-wide classification of somatic DNA copy number alterations (SCNAs) in cancer. SCNAs are an important constituent of the mutational landscape in cancer and refer to numerical copy number changes that result in extra or lost copies of parts of the genome. In cancer, these alterations lead to the loss of tumor suppressor genes (which restrict tumorigenic activity) or the gain of oncogenes (which promote tumorigenic activity), and many copy number alterations of such genes have been identified as being associated with cancer (Beroukhim et al., 2010). Next-generation sequencing and microarray technologies have allowed cancers to be probed on a genome-wide scale for SCNAs, and a number of statistical models have been developed to support the analysis of these data (Loo et al., 2010; Yau et al., 2010; Chen et al., 2011; Carter et al., 2012; Yau, 2013). A particularly popular class of these models has utilised hidden Markov models to model microarray intensities or sequencing reads as observations of a hidden (discrete) state process that corresponds to the unobserved copy number sequence.
Specifically, a single nucleotide polymorphism (SNP) microarray dataset consists of a sequence of bivariate measurements {y_i}_{i=1}^n at n SNP locations spread across the genome. The first dimension of the measurements, sometimes known as the Log R Ratio, consists of intensity values whose magnitude is proportional to the total copy number at that particular genomic location. In human genome analysis, the Log R Ratio values are typically normalized such that values approximately equal to zero correspond to a DNA copy number of two, since we typically inherit one copy of every gene from each parent. The second dimension, sometimes known as the B allele frequency, measures the relative contribution of one of the parental alleles to the overall signal, which can allow us to determine which parental allele is lost or gained.
In Yau et al. (2010), these data sequences are modelled using a Bayesian hierarchical model specified via the following relationships:

y_i | x_i = j ∼ N(m_j, Σ_j),    x_i | x_{i−1} = j ∼ π_j,

where x_i ∈ {1, . . ., S} denotes the copy number state at the i-th location, {m_j, Σ_j} denote the expected signal measurements and noise covariance for the j-th copy number state, and π is a transition matrix such that π_j corresponds to the transition probabilities out of the j-th copy number state. Note, we present only an abbreviated and simplified version of the complete model by Yau et al. (2010) here; for full details, see the original reference.
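As an illustrative sketch of this abbreviated model only (the function name, zero-based state labelling and use of NumPy are our assumptions; the full model of Yau et al. (2010) is considerably richer), one could simulate data from it as follows:

```python
import numpy as np

def simulate_copy_number_hmm(n, pi0, Pi, m, Sigma, rng=None):
    """Simulate from the abbreviated copy-number HMM: a Markov chain x_i over
    S states with initial distribution pi0 and transition matrix Pi, emitting
    bivariate Gaussian observations y_i ~ N(m[x_i], Sigma[x_i]), loosely
    standing in for (Log R Ratio, B allele frequency) measurements."""
    rng = np.random.default_rng(rng)
    S = len(pi0)
    x = np.empty(n, dtype=int)
    y = np.empty((n, 2))
    x[0] = rng.choice(S, p=pi0)
    for i in range(n):
        if i > 0:
            x[i] = rng.choice(S, p=Pi[x[i - 1]])    # Markov transition
        y[i] = rng.multivariate_normal(m[x[i]], Sigma[x[i]])
    return x, y
```

A "sticky" transition matrix (large diagonal entries) reproduces the long segments of constant copy number seen in real profiles.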
Table 1 shows an example set of copy number states. Yau et al. (2010) model transitions between super-states as relatively unlikely events, leading to a "sticky" HMM that produces relatively few super-state segments.
Dynamics within super-states are modelled via an embedded Markov chain that approximates the patterns of genotypes observed in real data. The primary scientific interest is in the switching between super-states, but it is necessary to fully model the complete genotypes in order to achieve this.

Table 1: Example copy number states. Each copy number state is associated with a total copy number and a genotype, which tells us the number of copies of each parental allele (A/B). Super-states correspond to subsets of copy number states with identical total copy number and/or loss of heterozygosity (LOH) status.
Full Bayesian posterior inference for this type of model is prohibited by the size of the datasets (n ≈ 10^6).
Yau et al. (2010) perform model fitting using expectation-maximization to compute MAP parameter estimates, and condition on these to obtain MAP segmentations using the Viterbi algorithm. The forward-backward algorithm can also be applied to obtain site-wise posterior probabilities of state occupation. Figure 5 shows an example copy number analysis of chromosome 1 of the colorectal cancer cell line SW837 from a SNP microarray dataset, using the OncoSNP software from Yau et al. (2010). The chromosome exhibits a number of copy number alterations leading to changes in the pattern of the Log R Ratio and B Allele Frequency along the chromosome. Genomic regions with non-normal total copy number (i.e., different from two) can be identified from the Viterbi segmentations and the site-wise posterior probabilities.
Our k-segment methods can be used to augment these standard analyses with additional exploratory information. Figure 5 shows segmentations conditional on different fixed numbers of super-state segments obtained using k-segment inference. Here, we have used the ability to count only certain transitions to count transitions between super-states and exclude transitions between copy number states within super-states. This means the k-th segmentation represents the most probable copy number segmentation that involves k different super-state segments, as opposed to k segments defined on the original state space, which would include transitions between states within super-states. These segmentations allow the exploration of alternatives that differ from the MAP solution and yet retain segmental constraints that cannot be observed from the site-wise marginal probabilities.

Application to text retrieval using hidden Markov topic models
In this section, we apply k-segment inference to an information retrieval task where the objective is to process long documents and extract segments referring to certain topics. For this purpose, we define a hidden Markov topic model, such as those proposed in (Gruber et al., 2007; Andrews and Vigliocco, 2010), that builds upon popular topic models such as probabilistic latent semantic indexing (Hofmann, 2001) and latent Dirichlet allocation (Blei et al., 2003). These latter models represent a document as a set of words y_d = (y_{d,1}, . . ., y_{d,N_d}), where each y_{d,n} ∈ {1, . . ., V} points into a vocabulary consisting of V keywords. The words are generated exchangeably from a mixture distribution whose mixing proportions are document-specific parameters, while the mixture components are multinomial distributions over the vocabulary. The latter distributions aim at capturing semantic topics, such as politics, sports and food. Learning in these models is typically carried out in an unsupervised manner using a corpus of several documents and assuming a common pool of topics, so that each document, through the document-specific mixing proportions, can contain only a subset of topics. The basic topic models ignore word ordering and do not model spatial correlation, according to which nearby words are more likely to belong to the same topic. Motivated by this limitation, HMM-based extensions have been developed in (Gruber et al., 2007; Andrews and Vigliocco, 2010) which assume that the latent topics of words in ordered text follow a Markov chain.
Next, we construct a similar HMM and use it to solve an information retrieval task based on semi-supervised learning.
Assume an unknown-content (test) document d that we would like to scan through in order to retrieve segments referring to certain topics, and let x_{d,n} denote the hidden topic of word y_{d,n}. Further, the set of these topics is divided into the relevant topics and the irrelevant topics, with the relevant topics being the ones from which we wish to extract text segments, estimate posterior probabilities of appearance and so on, while the irrelevant topics are unknown, document-specific topics of no interest to us. Without loss of generality, and to simplify our presentation, we shall assume M = 2, so that there is one relevant and one irrelevant topic. The relevant topic is described by multinomial parameters φ_r = (φ_{r,1}, . . ., φ_{r,V}), so that the emission distribution that generates a word is p(y_{d,n} = v|x_{d,n} = 1) = φ_{r,v}. The parameter φ_r is assumed to have been estimated by supervised learning using fully labelled documents according to

φ_{r,v} = (n_v + 1)/(n + V),

where n_v is the number of times the vth word appears in the labelled data and n is the total number of words in these data. Notice that the above is simply the Bayesian mean estimate under a uniform Dirichlet prior over φ_r.
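The Bayesian mean estimate just described amounts to add-one (Laplace) smoothing over the vocabulary; as a minimal sketch (the function name is an illustrative choice):

```python
import numpy as np

def topic_multinomial_estimate(word_counts):
    """Posterior-mean estimate of a topic's multinomial parameters under a
    uniform Dirichlet prior: phi_v = (n_v + 1) / (n + V), where n_v is the
    count of word v, n the total count and V the vocabulary size."""
    n_v = np.asarray(word_counts, dtype=float)
    return (n_v + 1.0) / (n_v.sum() + len(n_v))
```

The add-one term guarantees that every vocabulary word receives non-zero probability, so the trained emission distribution never assigns zero likelihood to an unseen word in a test document.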
Similarly, the emission distribution for the irrelevant topic, i.e. p(y_{d,n}|x_{d,n} = 2), is described by a document-specific multinomial parameter to be estimated. Furthermore, the prior distribution π_d and transition matrix A_d of the HMM are also document-specific parameters, and the full set of document-specific parameters can be estimated via the EM algorithm while φ_r is kept fixed. In practice, we also place a conjugate Dirichlet prior over all unknown parameters, so that EM finds MAP point estimates similar to those of eq. (44).
In the remainder of this section we demonstrate the above system using a freely available text corpus taken from the University of Oxford electronic library. Specifically, we collected a set of 119 doctoral theses on several subjects such as History, Social Sciences, Philosophy, Law, Politics, Literature and Economics. The topic of Economics was considered to be the relevant topic, while all remaining topics were taken as irrelevant. Ten out of the 119 documents were classified (according to the library database system) as being about Economics, while the remaining 109 theses were scattered across the other topics. Each document d was represented by a sequence of words from a dictionary of size V = 1260, which was defined separately by choosing all distinct words from a large set of freely accessible Wikipedia articles. The multinomial parameters for the relevant topic of Economics were obtained by supervised learning using counts of words from a small set of Wikipedia entries such as Economics, Finance and Investment. Having preprocessed each document as above, we then considered two types of prediction tasks: (i) classification and (ii) detection, which we describe next in turn.
Classification. For the classification task the objective was to predict, in a test document, the presence or absence of at least one segment from the topic Economics. The test documents consisted of the 109 theses, originally annotated as non-Economics documents, that were randomly perturbed in order to create a ground-truth dataset of known classification, as explained in Appendix B. Given this test dataset, the objective was to construct a binary classification system and classify each of the documents as relevant, i.e. as containing at least one text segment about Economics, or as irrelevant. Each test document was processed separately by applying the EM algorithm discussed earlier. Then, to achieve probabilistic classification, the posterior probability of the occurrence of at least one segment from the relevant topic is required. It can be obtained by applying k-segment inference using a counting variable c_x that increments only when a segment from the relevant topic occurs. Notice that this requires the use of generalized counting, as described in section 6.1, with suitable values for the constraints µ and C. The posterior probability p(c_x > 0|y_d) is then computed using the forward pass in the augmented HMM, which subsequently provides a probabilistic classifier. Using different thresholds on the classification probability, we can obtain decision systems of varying false positive and true positive rates, as shown by the ROC curve in Figure 7. In contrast, if we were to perform classification using the Viterbi MAP path, we could only obtain a single decision system that classifies documents as relevant or irrelevant based upon whether a segment from the relevant topic occurs in the Viterbi path. Such a system gives a single value for the true positive and false positive rate, as shown in Figure 7.
Clearly, the ability of k-segment methods to compute non-trivial posterior probabilities allows for more flexible uses of HMMs when building decision-making systems.
Detection. We now turn to the second task, which concerns the detection of individual segments within a document that belong to the relevant topic. We adopt a standard information retrieval setup referred to as top-k retrieval (Büttcher et al., 2010). This is the task of retrieving the k patterns (typically full documents) that are most relevant to a given query among a large set of other possible patterns. Our specific top-k retrieval task is to extract the top-k text segments within the same large document, and to achieve this we use the hidden Markov topic model. Also, to account for documents that may contain fewer than k segments from the relevant topic, we relax the constraint of retrieving exactly k segments to the softer constraint of retrieving at most k segments. It is worth noticing the similarity between k-segment problems in HMMs and top-k retrieval, since both involve inference under counting constraints. More precisely, k-segment methods can naturally tackle the above top-k retrieval task by applying optimal decoding, under the constraint c_x ≤ k, which finds the optimal hidden path containing at most k text segments associated with the relevant topic. Next, in order to evaluate such a system on test documents with known ground-truth segments, we randomly perturbed the 109 test documents as explained in Appendix B.
To measure performance, we make use of a popular evaluation measure from the visual object detection literature. More precisely, detecting segments of certain topics in documents is similar to detecting instances of object categories in natural images, where the detection problem is to predict a bounding box that locates an instance of an object category within the image. The well-established evaluation measure, used in the PASCAL visual object recognition challenge (Everingham et al., 2010), is the overlap area ratio. Adopting this in our case, we have that for a predicted segment S_p = [i_l, i_r], where i_l and i_r are the segment start and end locations within the test document, the overlap ratio is defined by

r = |S_p ∩ S_gt| / |S_p ∪ S_gt|.

Here, S_gt is the ground-truth segment, S_p ∩ S_gt is the intersection of the predicted and ground-truth segments, and S_p ∪ S_gt is their union. Clearly, r ∈ [0, 1], and values close to zero indicate poor detection while values close to one indicate strong detection. We consider as correct detections all cases where r exceeds the threshold of 80%; for an illustrative example of a correct detection see Figure 6. Also, to get a total document-specific performance that is normalized with respect to k, we average according to

per-document detection rate = (1/k) × (number of correct detections among the k_p predicted segments),

where k_p ≤ k is the number of predicted segments. From this we can obtain a mean detection rate that gives the overall performance on the whole test dataset. Figure 8(a) shows mean detection rates for several top-k systems with varying values of k. Confidence intervals were obtained by repeating the experiment 100 times, so that in each repeat a random test dataset of 109 documents was created using bootstrapping together with the standard randomization involved in the segment insertion (see Appendix B).
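The overlap criterion and the per-document rate can be written down directly; the helper names below are ours, and segments are represented as inclusive (start, end) index pairs.

```python
def overlap_ratio(pred, gt):
    """PASCAL-style overlap between two 1-D segments given as
    inclusive (start, end) index pairs: |intersection| / |union|."""
    il, ir = pred
    jl, jr = gt
    inter = max(0, min(ir, jr) - max(il, jl) + 1)
    union = (ir - il + 1) + (jr - jl + 1) - inter
    return inter / union

def per_document_detection_rate(preds, gts, k, thresh=0.8):
    """Fraction, out of the k requested segments, of predicted segments
    that match some ground-truth segment with overlap >= thresh."""
    hits = sum(any(overlap_ratio(p, g) >= thresh for g in gts) for p in preds)
    return hits / k
```

Averaging `per_document_detection_rate` over the test documents gives the mean detection rate reported in the figures.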
Furthermore, it is interesting to compare the k-segment based method with a system constructed using the standard Viterbi MAP path in the HMM. Standard Viterbi gives a single path that contains an a priori unknown number of segments from the relevant topic. Thus, to get top-k retrieval systems (for different values of k), we can rank all relevant-topic segments with respect to their length, so that the top-1 retrieval system simply outputs the longest segment in the list, the top-2 retrieval system outputs the two longest segments, and so forth. Using the same 100 bootstrapped repeats, we also evaluated the standard Viterbi system, and for each repeat we recorded the difference in mean detection rates (k-segment rate minus the standard Viterbi rate). Figure 8(b) displays the mean of these differences together with 95% confidence intervals for several values of k. Clearly, there is a certain range of k values where the k-segment method outperforms the standard Viterbi method. Moreover, as k increases, the k-segment constraint c_x ≤ k becomes weaker and the corresponding optimal paths converge to the standard Viterbi MAP paths, which explains why the performance of the two methods becomes identical for large k.
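The Viterbi-based baseline described above can be sketched as follows; `top_k_by_length` is an illustrative helper, not code from the paper.

```python
def top_k_by_length(path, relevant, k):
    """Extract maximal runs of relevant-topic states from a decoded
    state path and return the k longest, as inclusive (start, end)
    pairs, mimicking the length-ranked Viterbi baseline."""
    segs, start = [], None
    for n, x in enumerate(path):
        if x in relevant and start is None:
            start = n                        # a relevant run begins
        elif x not in relevant and start is not None:
            segs.append((start, n - 1))      # the run just ended
            start = None
    if start is not None:                    # run extends to the end
        segs.append((start, len(path) - 1))
    segs.sort(key=lambda s: s[1] - s[0], reverse=True)
    return segs[:k]
```

Unlike constrained decoding, this baseline cannot produce more relevant-topic segments than the single Viterbi path happens to contain.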
To summarise, both text retrieval tasks presented above indicate that k-segment inference allows for more flexible use of HMMs, providing new options when building classification and decision-making systems.

Discussion
HMMs can allow for highly efficient analysis of large quantities of sequence data. However, existing methods for reporting posterior summaries from HMMs, such as the Viterbi MAP path and the marginal probabilities, are rather blunt. Here, we demonstrated how the use of auxiliary counting variables allows for computationally efficient inference under k-segment constraints, thus facilitating the process of gaining novel insight into structural rearrangements in cancer genomes. For other types of applications, arising for instance in machine learning and pattern recognition, the proposed methods allow us to build more flexible HMM-based classification and decision-making systems, as we have demonstrated using the text retrieval example.
Regarding future work, an interesting research direction is to exploit the ability of k-segment inference to efficiently explore the HMM posterior distribution in order to provide input for constructing meta statistical models. For instance, the ability to obtain alternative explanations of the same data sequence that may have high utility to the research scientist but occur with very low probability could allow the practitioner to re-rank different explanations based on their expertise and subsequently provide feedback into the model that can be used for supervised re-training. A second future direction is to exploit the fact that training an HMM under a k-segment constraint often gives sparse transition matrices. This is not surprising, since a bound on the number of segments essentially limits the number of transitions along the hidden path, which subsequently can result in many inferred zeros in the transition matrix. Such a sparsity property could be useful when we would like to perform state selection in large state-space HMMs.
To conclude, as data sets become larger and models more complex, we expect to see an increasing need for computationally efficient methods for posterior model exploration and statistical inference under constraints. In this paper, we have presented one such approach that significantly expands the statistical algorithmic toolbox of HMMs.
A Proofs for the auxiliary variable reformulation of k-segment problems

Here, we provide proofs for the correctness of the reformulation of the three k-segment inference problems presented in Section 4.2. Firstly, we will show that p(x|y, s_N = k), computed via the augmented HMM, is equal to p(x|y, c_x = k) given by eq. (7). We have that p(x|y, s_N = k) is defined by eq. (47), i.e. p(x|y, s_N = k) ∝ p(y|x) p(x) Σ_{s_\N} p(s_\N, s_N = k|x), and since the sum Σ_{s_\N} p(s_\N, s_N = k|x) equals the indicator I(c_x = k), this reduces to the definition of p(x|y, c_x = k) from eq. (7). From that, we can immediately obtain that the term that normalizes the right-hand side of (47), i.e. the quantity p(s_N = k, y), is equal to p(c_x = k, y). This completes the proof regarding the correctness of the probability computation.
Based on the above, we can also conclude that the optimal decoding solution x* in the augmented HMM is the MAP of p(x|y, s_N = k), i.e.

x* = arg max_x p(x|y, s_N = k) = arg max_x p(x|y, c_x = k),

which shows that the reformulated optimal decoding problem is equivalent to the initial one.
Finally, regarding path sampling, the FF-BS in the augmented HMM gives a pair of paths (x̃, s̃_\N) that jointly comprise an independent sample from p(x, s_\N | s_N = k, y). Thus, x̃ alone is an independent sample from p(x | s_N = k, y).
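The three equivalences established in this appendix can be collected compactly in display form (restating, not extending, the derivations above):

```latex
\begin{align*}
p(x \mid y, s_N = k)
  &\propto p(y \mid x)\, p(x) \sum_{s_{\setminus N}} p(s_{\setminus N}, s_N = k \mid x)
   = p(y \mid x)\, p(x)\, \mathbb{I}(c_x = k), \\
x^* &= \arg\max_x\, p(x \mid y, s_N = k) = \arg\max_x\, p(x \mid y, c_x = k), \\
\tilde{x} &\sim p(x \mid s_N = k, y)
  \quad \text{(first component of the FF-BS sample } (\tilde{x}, \tilde{s}_{\setminus N})\text{)}.
\end{align*}
```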

Figure 1: HMM depicted as a directed graphical model.
as finding the MAP of p(x|c_x > k, y), sampling from p(x|c_x > k, y), and so on.

Figure 2: Directed graphical model for the HMM augmented with the counting chain.

Figure 3: The panel in the first row shows the simulated data sequence. The panel in the second row displays the standard Viterbi path and 11 optimal paths corresponding to the k_max + 1 summary (with k_max = 10) of the k-segment inference. Each path is depicted with the three states shown in different colors (the state with emission Gaussian density of ground-truth mean −2 is shown in red, the one with mean −1 in green, and the third one with mean 1 in blue). The panel in the third row shows 10 sample paths obtained by the FF-BS algorithm under the constraint that exactly 7 segments occur. The panel in the fourth row illustrates generalized counting (Section 6.1), so that the optimal paths having 0 up to 8 segments from the second state are shown. For clarity, only the segments of the second state of interest are displayed, with black solid lines. The final panel in the last row illustrates counting excursions (Section 6.2), having as the null set the first and second states, while the third one is the single state in the abnormal set. Again for clarity, only the excursion segments (more precisely, only the abnormal sub-segment, i.e. excluding the start and end points) are shown, using black solid lines.
also displayed. Notice that each path is shown as a piece-wise constant function formed by the mean values of the Gaussian emission densities indicated by the states along the path. Each piece-wise constant function gives a reconstruction of the observed sequence. The mean squared errors (MSEs) for the retrospective and prospective reconstructions are shown inside parentheses in the legend of each panel. As expected, these MSEs indicate that incorporating a k-segment constraint during model fitting gives better reconstruction. In fact, initializing the parameters in the constrained EM from the final values obtained by the standard EM should always lead to a likelihood value that is higher than or equal to the corresponding value in the retrospective model.

Figure 4: Each of the six panels corresponds to a different simulated sequence. In each panel, the prospective (blue dashed line) and the retrospective (red solid line) optimal paths, under the k-segment constraint c_x ≤ 9, are displayed together with the data. The paths are shown as piece-wise constant functions formed by the means of the Gaussian emission densities. MSEs of the two reconstructions are given in the legends of the panels, inside parentheses.
referring to certain topics. As before, the document is represented by a set of words y_d = (y_{d,1}, . . ., y_{d,N_d}) which are ordered according to their appearance in the text and assumed to have been generated from an HMM. Specifically, we assume there is a path x_d = (x_{d,1}, . . ., x_{d,N_d}) such that each x_{d,n} ∈ {1, . . ., M} indicates the

Figure 5: Copy number analysis of the colorectal cancer cell line SW837 (Chromosome 1) using site-wise marginal posterior probabilities of a copy number aberration from the Forward-Backward algorithm, the Viterbi algorithm (black lines indicate detected regions of aberrant copy number), and k-segment analysis for different fixed super-state segment numbers.

Figure 6: An example of detection of a text segment from the relevant topic of Economics. The box on top displays a piece of text from a test document before we randomly inserted segments from the topic of Economics (see Appendix B). The middle box displays the same text after a segment from the topic of Economics (shown in red) has been randomly inserted, replacing the original piece of text. The box at the bottom shows in blue the segment predicted as belonging to the relevant topic. In this case, the predicted segment was classified as a correct detection since it overlaps more than 80% with the ground-truth segment shown in the middle box.

Figure 7: The blue line shows the ROC (receiver operating characteristic) curve for the k-segment method that classifies documents by using the topic-occurrence posterior probability p(c_x > 0|y). The black star shows the pair of true and false positive rates corresponding to the classifier obtained by the standard Viterbi MAP path.
Figure 8: (a) Mean detection rates for top-k retrieval systems of varying k. (b) Mean differences in detection rates (k-segment rate minus standard Viterbi rate), with 95% confidence intervals, for several values of k.
p(x|y, s_N = k) ∝ p(y|x) p(x) Σ_{s_\N} p(s_\N, s_N = k|x).    (47)

What we need to show is that Σ_{s_\N} p(s_\N, s_N = k|x) is equal to the indicator function I(c_x = k). Since p(s_\N, s_N = k|x) is a deterministic distribution, given that x has k segments there will be a unique s*_\N such that p(s*_\N, s_N = k|x) = 1, and zero for all remaining s_\N. If x does not contain k segments, then p(s_\N, s_N = k|x) = 0 for any s_\N. Thus, when x has k segments, Σ_{s_\N} p(s_\N, s_N = k|x) = 1, and otherwise Σ_{s_\N} p(s_\N, s_N = k|x) = 0. Therefore, Σ_{s_\N} p(s_\N, s_N = k|x) = I(c_x = k) for any x, from which we conclude that p(x|y, s_N = k) reduces to p(x|y, c_x = k). Similarly, since p(s|x) is a deterministic distribution, the sum operation can be replaced by a max operation, so that

x* = arg max_x { p(y|x) p(x) max_{s_\N} p(s_\N, s_N = k|x) } = arg max_{x, s_\N} { p(y|x) p(x) p(s_\N, s_N = k|x) }.