Skew products of interval maps over subshifts

We treat step skew products over transitive subshifts of finite type with interval fibers. The fiber maps are diffeomorphisms of the interval; we assume that the end points of the interval are fixed under the fiber maps. Our paper thus extends work by V. Kleptsyn and D. Volk, who treated step skew products where the fiber maps send the interval strictly inside itself. We clarify the dynamics for an open and dense subset of such skew products. In particular we prove the existence of a finite collection of disjoint attracting invariant graphs. These graphs are contained in disjoint regions of the phase space called trapping strips. Trapping strips are either disjoint from the end points of the interval (internal trapping strips) or bounded by an end point (border trapping strips). The attracting graphs in these two kinds of trapping strips have different properties.


Introduction
We aim to describe the dynamics of specific step skew products with a shift as dynamics in the base and with interval fiber maps. That is, ω = (ω_i)_{i∈Z} is a sequence in finitely many symbols, σ is the left shift operator acting on it, and the fiber map applied at ω is determined by the symbol ω_0. We treat such systems in cases where σ is a subshift of finite type and where the f_i's are diffeomorphisms of a compact interval that fix the end points of the interval. Kleptsyn and Volk [5] conducted a study of the dynamics of generic step skew products of diffeomorphisms on the line over subshifts of finite type. They considered diffeomorphisms that map a bounded interval strictly inside itself. They showed that so-called bony graphs (after Kudryashov, see [6]) arise as attractors: these attractors are the union of a measurable graph and a zero measure set of intervals inside fibers (the bones).
A different situation occurs for diffeomorphisms on a compact interval that fix the end points of the interval. Such systems gained interest with an example by Kan [4], which gave rise to intermingled basins. This example is over a full shift on two symbols and the end points of the interval are attracting on average. Il'yashenko [2,3] similarly considered examples of diffeomorphisms over a full shift under an assumption of repulsion on average at the end points. He established attractors with positive standard measure (the standard measure is the product of the Markov measure on the shift space and the Lebesgue measure on the fiber space). The attractors are the closure of an invariant measurable graph. Note the contrast with bony graphs, which have zero standard measure.
We provide a classification of dynamics of generic step skew products of diffeomorphisms on a compact interval (all diffeomorphisms fixing end points of the interval) over subshifts of finite type. Both types of graphs, bony and thick, can arise in a single step skew product.

Step skew product systems over subshifts of finite type
Write {1, . . . , N} for the finite set of symbols. Let A = (a_{ij})_{i,j=1}^N be a matrix with a_{ij} ∈ {0, 1}. Associated to A is the set Σ_A of bilateral sequences ω = (ω_n)_{n=−∞}^∞ composed of symbols in {1, . . . , N} and compatible with the transition matrix A: a_{ω_n ω_{n+1}} = 1 for all n ∈ Z. Let (Σ_A, σ) be the subshift of finite type on Σ_A. The map σ shifts every sequence ω ∈ Σ_A one step to the left, (σω)_i = ω_{i+1}. We can also consider the left shift operator σ acting on the one-sided symbol space Σ_A^+, i.e. the space of sequences ω = (ω_n)_{n=0}^∞ composed of symbols in {1, . . . , N} with a_{ω_n ω_{n+1}} = 1 for all n ≥ 0. The spaces Σ_A and Σ_A^+ are endowed with the product topology. We assume that A is primitive, i.e. there is an integer n ≥ 1 such that all entries of A^n are positive.
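As a concrete aside, primitivity of a 0-1 matrix can be tested numerically by checking whether some power is entrywise positive; the sketch below, with illustrative 2 × 2 matrices not taken from the paper, assumes nothing beyond the definition just given.

```python
# Checking primitivity of a 0-1 transition matrix A: A is primitive if some
# power A^n is entrywise positive. Plain-list matrices; illustrative examples only.

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_primitive(A):
    """Test A^n > 0 entrywise for n up to Wielandt's bound (n-1)^2 + 1."""
    n = len(A)
    bound = (n - 1) ** 2 + 1
    P = A
    for _ in range(bound):
        if all(P[i][j] > 0 for i in range(n) for j in range(n)):
            return True
        P = mat_mult(P, A)
    return False

A = [[1, 1],
     [1, 0]]   # primitive: A^2 is entrywise positive
B = [[0, 1],
     [1, 0]]   # irreducible but periodic, hence not primitive
```

The second matrix shows why primitivity is stronger than transitivity: its powers alternate between itself and the identity and never become entrywise positive.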
This implies that the subshift σ is topologically transitive and topologically mixing. Consider the interval I = [0, 1] and a finite family {f_1, . . . , f_N} of orientation preserving (strictly increasing) C^2-diffeomorphisms defined on I, with f_i(0) = 0 and f_i(1) = 1 for every i ∈ {1, . . . , N}. Write F for the step skew product system

F : Σ_A × I → Σ_A × I,  F(ω, x) = (σω, f_ω(x)),

where the fiber maps f_ω depend only on ω_0, i.e. f_ω = f_{ω_0}. We also write F^+ for the analogous skew product system over the one-sided shift on Σ_A^+. In this paper we consider the following set of step skew product systems.

Definition 1.1: We denote by S the set of step skew product systems F as above, for orientation preserving diffeomorphisms f_i : I → I that fix the end points of I.
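For illustration, a minimal numerical sketch of the fiber dynamics of such a step skew product; the two fiber maps below are hypothetical Möbius diffeomorphisms fixing both end points of I, not maps taken from the paper.

```python
# A minimal sketch of the fiber dynamics of a step skew product, assuming two
# hypothetical fiber maps: Moebius diffeomorphisms of [0, 1] fixing 0 and 1.

def f1(x):
    # fixes 0 and 1; derivative 2 at x = 0
    return 2 * x / (1 + x)

def f2(x):
    # the inverse of f1; derivative 1/2 at x = 0
    return x / (2 - x)

FIBER_MAPS = {1: f1, 2: f2}

def skew_iterate(word, x):
    """Fiber dynamics of F along the finite word (omega_0, ..., omega_{n-1}):
    returns f_{omega_{n-1}} o ... o f_{omega_0}(x)."""
    for symbol in word:
        x = FIBER_MAPS[symbol](x)
    return x
```

Since f2 inverts f1, the word (1, 2) acts as the identity on the fiber, and both end points stay fixed under any admissible word.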

Markov measures
Let Π = (π_{ij})_{i,j=1}^N be a right stochastic matrix, i.e. π_{ij} ≥ 0 and Σ_{j=1}^N π_{ij} = 1, such that π_{ij} = 0 precisely when a_{ij} = 0. By the Perron-Frobenius theorem for stochastic matrices, there exists a unique positive left eigenvector p = (p_1, . . . , p_N) of Π that corresponds to the eigenvalue 1; i.e. pΠ = p.
We assume that p is normalized so that it is a probability vector, Σ_{i=1}^N p_i = 1. For a finite word ω_{k_1}, . . . , ω_{k_n}, k_i ∈ Z, the cylinder C^{k_1,...,k_n}_{ω_{k_1},...,ω_{k_n}} (we will also use the notation C^{k_1,...,k_n}_ω) is the set of sequences ω̄ ∈ Σ_A with ω̄_{k_i} = ω_{k_i} for 1 ≤ i ≤ n. As cylinders form a countable base of the topology on Σ_A, Borel measures on Σ_A are determined by their values on the cylinders. A Borel measure ν on Σ_A is called a Markov measure constructed from the distribution p_i and the transition probabilities π_{ij} if for every ω ∈ Σ_A and k ≤ l,

ν(C^{k,...,l}_{ω_k,...,ω_l}) = p_{ω_k} π_{ω_k ω_{k+1}} · · · π_{ω_{l−1} ω_l}.

One can easily check that with this definition ν is well defined and is a probability measure. Moreover, ν is invariant under the shift map σ; it is ergodic and supp(ν) = Σ_A. From now on, we consider a fixed ergodic Markov measure ν on Σ_A. Write π for the natural projection Σ_A → Σ_A^+. Then ν^+ = πν is the Markov measure on Σ_A^+. We do not consider measures on Σ_A that are not Markov measures; the reason is the connection of Markov measures to stationary measures for the stochastic process induced by F^+, see Section 3.

Definition 1.2: The standard measure s on Σ_A × I is the product of ν and the Lebesgue measure on the fiber.
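As a sketch, the product formula for the Markov measure of a cylinder can be evaluated directly; the 2-symbol transition matrix below is a hypothetical example, with symbols indexed from 0.

```python
# Evaluating the Markov measure of a cylinder from the product formula
# nu(C) = p_{omega_k} * pi_{omega_k omega_{k+1}} * ... * pi_{omega_{l-1} omega_l}.
# Hypothetical 2-symbol data (0-based symbols) with stationary vector p: p P = p.

P = [[0.5, 0.5],
     [0.5, 0.5]]        # right stochastic transition probabilities pi_ij
p = [0.5, 0.5]          # normalized positive left eigenvector for eigenvalue 1

def cylinder_measure(word):
    """Markov measure of the cylinder fixing `word` at consecutive positions."""
    nu = p[word[0]]
    for a, b in zip(word, word[1:]):
        nu *= P[a][b]
    return nu
```

A quick consistency check: the measures of all cylinders of a fixed length sum to 1, as they must for a probability measure determined on a partition into cylinders.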
For measurable functions ϕ_1 ≤ ϕ_2 from Σ_A to I, write S_{ϕ_1,ϕ_2} = {(ω, x) ∈ Σ_A × I ; ϕ_1(ω) ≤ x ≤ ϕ_2(ω)} for the strip bounded by their graphs. The strip S_{ϕ_1,ϕ_2} is called a trapping strip if F(S_{ϕ_1,ϕ_2}) ⊆ S_{ϕ_1,ϕ_2}, and a strict trapping strip if moreover the internal boundaries are mapped inside the interior of S_{ϕ_1,ϕ_2}.
Likewise one can consider trapping strips for F^+. It is clear that internal and border trapping strips are the only two possible kinds of trapping strips. Consider a trapping strip S with boundary functions ϕ_1 < ϕ_2. Because of the monotonicity of the fiber maps, the images F^n(S) are strips. Since for a trapping strip S also F^n(S) ⊆ S, we get that for every n ≥ 0 the image F^n(S) is a trapping strip. Therefore any trapping strip S has a non-empty maximal attractor Λ_S = ⋂_{n≥0} F^n(S). We encounter two different types of maximal attractors.

Definition 1.4: A measurable graph B in Σ_A × I is called a bony graph if it is contained in a closed set that intersects ν-almost every fiber in a single point and every other fiber in an interval, which is called a bone.
Note that the standard measure of the closure of a bony graph is zero. Following [5] we also call the closed set that is the union of the measurable graph and the bones a bony graph. A bony graph can have an empty set of bones; a bony graph with an empty set of bones is a continuous graph. It is easy to construct examples where the maximal attractor is in fact a continuous graph.

Definition 1.5: A measurable graph B in Σ_A × I is called a thick graph if its closure has positive standard measure, i.e. s(B̄) > 0. We also call the closure of the thick graph a thick graph.

Classification of dynamics for generic skew products
The Lyapunov exponent of a system F ∈ S at a point (ω, x) is

L(ω, x) = lim_{n→∞} (1/n) ln (f^n_ω)'(x),

in case the limit exists, where f^n_ω = f_{ω_{n−1}} ∘ · · · ∘ f_{ω_0}. Since for every i ∈ {1, . . . , N}, x = 0, 1 are fixed points of f_i, by the definition of the Markov measure and Birkhoff's ergodic theorem we obtain for x = 0, 1 that

L(0) = Σ_{i=1}^N p_i ln f_i'(0),  L(1) = Σ_{i=1}^N p_i ln f_i'(1),

for ν^+-almost all ω ∈ Σ_A^+. Note that generically L(0) and L(1) differ from zero.
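As a numerical illustration of these averages, with hypothetical derivatives f_i'(0) (not taken from the paper) over a full 2-shift with fair transition probabilities, the exact value of L(0) can be compared with a finite-time Birkhoff average along a random word.

```python
import math
import random

# Numerical check of L(0) = sum_i p_i ln f_i'(0), with hypothetical derivatives
# lambda_i = f_i'(0) over a full 2-shift with fair transitions, so p = (1/2, 1/2).

lam = {1: 3.0, 2: 0.5}              # assumed derivatives f_i'(0)
p = {1: 0.5, 2: 0.5}                # stationary distribution

L0 = sum(p[i] * math.log(lam[i]) for i in lam)   # = 0.5 ln 3 + 0.5 ln(1/2) > 0

# Finite-time Birkhoff average (1/n) ln(lambda_{omega_1} ... lambda_{omega_n})
# along a random word; by the ergodic theorem it converges to L(0).
random.seed(0)
n = 100_000
est = sum(math.log(lam[random.choice([1, 2])]) for _ in range(n)) / n
```

With these assumed derivatives L(0) = (1/2) ln(3/2) > 0, so the end point 0 is repelling on average, and the finite-time estimate fluctuates around this value.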
We have introduced all notions needed to present our description of the dynamics of generic step skew product systems. The following theorem holds for step skew product systems from an open and dense subset of S which is given explicitly in Section 2.1 below.

Theorem 2.1:
There is an open and dense set G of S, so that F ∈ G satisfies the following.
F admits a finite collection of disjoint trapping strips S_t, 1 ≤ t ≤ T, of the form S_t = ⋃_{k=1}^N C^0_k × I_{t,k}. Furthermore, (1) S_t contains a unique attracting invariant graph Γ_t; Γ_t is the graph of a measurable function X_t : Σ_A → I. Kleptsyn and Volk [5] show that the bony graphs in internal strict trapping strips are upper semicontinuous; they refer to these bony graphs as continuous bony graphs.

Genericity conditions
The open and dense set G of S in Theorem 2.1 is determined by a number of genericity conditions, which we list here. They coincide with those appearing in [5], with two additional conditions related to the fixed boundary points of I (items (1) and (5) below). The first condition ensures that the end points of I are repelling or attracting on average.
To formulate the further conditions we introduce the notions of simple transition and simple return.

Definition 2.1: A finite word ω_1, . . . , ω_n is called admissible if each pair of consecutive symbols ω_i ω_{i+1} is admissible, i.e. π_{ω_i ω_{i+1}} ≠ 0. A map of the form f_{ω_1,...,ω_n} = f_{ω_n} ∘ · · · ∘ f_{ω_1} is called an admissible composition if the word ω_1, . . . , ω_n is admissible.

Definition 2.2: A simple transition is an admissible composition f_{ω_1,...,ω_n} along a word in which no symbol occurs twice; a simple return corresponds to an admissible word that starts and ends with the same symbol and contains no other repetitions.

We can now state the following genericity conditions. Condition (4) precludes finite invariant sets, see [5]. The final condition relates to minimal iterated function systems. First we recall the definition of minimality of an iterated function system. Suppose we are given an iterated function system IFS {g_1, . . . , g_k} of continuous maps g_i on a metric space X. Let Y be a subset of X with g_i(Y) ⊂ Y for all i. We say that IFS {g_1, . . . , g_k} is minimal on Y if for all points x, y ∈ Y and every neighbourhood V of y, there is a composition g_{i_n} ∘ · · · ∘ g_{i_1} that maps x into V.
The proof of [2, Lemma 3] gives the following result.

Proposition 2.1: Let f, g : I → I be diffeomorphisms fixing the boundary points of I, with derivatives λ = f'(0) and μ = g'(0) at the end point 0. Then the iterated function system generated by f, g is minimal on some interval (0, u).
Here h is a local diffeomorphism. The two cases where ln(λ), ln(μ) are rationally dependent or independent are distinguished. In case ln(λ), ln(μ) are rationally dependent, the argument works if the second order derivative of h ∘ g ∘ h^{−1} at 0 is not zero. An explicit calculation shows that this gives the condition in the proposition.
The admissible returns f , g introduced in Lemma 4.6 satisfy the conditions formulated in Proposition 2.1.
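A quick numerical experiment in the spirit of Proposition 2.1, with two hypothetical diffeomorphisms fixing 0 and 1 (one repelling and one attracting at 0, not the maps of Lemma 4.6): the orbit of a point under all short compositions spreads through a subinterval.

```python
import math

# Hypothetical diffeomorphisms fixing 0 and 1: f repels at 0 (f'(0) = 2),
# g attracts at 0 (g'(0) = 1/(e-1) < 1). Orbits under all short compositions
# of f and g spread through (0, 1), consistent with minimality of the IFS.

def f(x):
    return 2 * x / (1 + x)

def g(x):
    return (math.exp(x) - 1) / (math.e - 1)

def orbit(x0, depth):
    """All images of x0 under compositions of f, g of length at most `depth`."""
    layer, seen = [x0], [x0]
    for _ in range(depth):
        layer = [h(x) for x in layer for h in (f, g)]
        seen.extend(layer)
    return seen

pts = orbit(0.5, 12)
targets = [0.1 + 0.05 * i for i in range(17)]          # grid in [0.1, 0.9]
gaps = [min(abs(x - t) for x in pts) for t in targets] # distance of orbit to grid
```

Since f lies above the diagonal and g below it, compositions can move points both up and down, and the finite orbit already comes close to every point of the grid.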

Stationary measures
A key role in our study is played by ergodic invariant measures for the skew product systems. The necessary material is collected in this section. Write 𝕀 = {1, . . . , N} × I. For every i, j ∈ {1, . . . , N}, π_{ij} equals the probability of the transition from a point (i, x) in 𝕀 to the point (j, f_i(x)). For every i we denote {i} × I ⊂ 𝕀 by I_i. We can identify I_i with I. Denote by B the Borel sigma-algebra on 𝕀. We consider Borel probability measures m on the space 𝕀 with m(I_i) = p_i. For such a measure m, define the probability measure m_i on I_i by m_i(·) = m(· ∩ I_i)/p_i. Define the transformation T on such measures by

(T m)_k = (1/p_k) Σ_{i=1}^N p_i π_{ik} f_i m_i,

where f_i m_i denotes the image of m_i under f_i, with the understanding that (T m)(I_k) = p_k for every k; stationary measures are the fixed points of T. Write ν^+_k for the restriction of the Markov measure ν^+ to the cylinder C^{+,0}_k. A direct computation gives the following correspondence between stationary measures and invariant measures for the skew product system with one-sided time.
Lemma 3.1: Let m be a stationary measure, T m = m. Then the measure μ^+ on Σ_A^+ × I with marginal ν^+ and conditional measures determined by m as in (3) is an invariant measure of F^+ with marginal ν^+ on Σ_A^+.

Let F^+ be the Borel sigma-algebra on Σ_A^+. It yields a sigma-algebra F_0 = π^{−1}F^+ on Σ_A, where π : Σ_A → Σ_A^+ is the natural coordinate projection. Write F for the Borel sigma-algebra on Σ_A. A measure μ on Σ_A × I with marginal ν has conditional measures μ_ω on the fibers {ω} × I, such that μ(A) = ∫ μ_ω(A ∩ ({ω} × I)) dν(ω) for measurable sets A. A measure μ^+ on Σ_A^+ × I with marginal ν^+ likewise has conditional measures μ^+_ω. It is convenient to consider ν^+ also as a measure on Σ_A with sigma-algebra F_0, and μ^+ also as a measure on Σ_A × I with sigma-algebra F_0 ⊗ B. When ω ∈ Σ_A we will write μ^+_ω for the conditional measure μ^+_{πω}. The spaces of measures are equipped with the weak-star topology. The following result relates invariant measures for the one-sided and the two-sided skew product systems. It is a special case of [1, Theorem 1.7.2]. We write Σ_A = Σ_A^− × Σ_A^+ and, with this, ω = (ω^−, ω^+) for ω ∈ Σ_A.

Proposition 3.1: Let μ^+ be an F^+-invariant probability measure with marginal ν^+. Then there exists an F-invariant probability measure μ with marginal ν and conditional measures given by (4), ν-almost surely. Let μ be an F-invariant probability measure with marginal ν. Then (5) defines an F^+-invariant probability measure with marginal ν^+. The correspondence μ ↔ μ^+ given by (4), (5) is one-to-one, and μ is ergodic if and only if μ^+ is ergodic. An invariant measure μ for which μ_ω depends on the past ω^− ∈ Σ_A^− only corresponds to a measure μ^+ that comes from a stationary measure m as in (3).

Bony graphs and thick graphs
The proof of Theorem 2.1 is divided into different steps. We will first discuss the case where both L(0) > 0 and L(1) > 0. The other cases are then easy to treat and will be considered later.

Repelling end points
We assume L(0) > 0 and L(1) > 0. We briefly outline the different steps in the proof of Theorem 2.1, which will be worked out below.
Step 1: Stationary measures: By a Krylov-Bogolyubov procedure on a suitable class of probability measures we construct stationary measures that assign no mass to the end points 0 and 1 of the interval [0, 1].
Step 2: Trapping strips: The convex hull of the support of an ergodic stationary measure, as constructed in the first step, provides a trapping strip. Trapping strips can be border trapping strips or internal trapping strips.
Step 3: Conditional measures: A stationary measure gives rise to an invariant measure of the skew product system with two-sided time. We prove that such an invariant measure has delta measures as conditional measures on fibers. For each trapping strip there is a unique invariant measure with support in the trapping strip.
Step 4: Attracting graphs: The points of the delta measures constitute an invariant graph. We discuss its properties in this final step.
For internal trapping strips these results have been obtained by Kleptsyn and Volk [5]. We now elaborate the different steps.
Step 1: Stationary measures. In the construction of stationary measures we iterate the transformation T, whose fixed points are the stationary measures. For k ∈ {1, . . . , N} and for any n ∈ N, the nth iterate of m under the transformation T is calculated on I_k as

(T^n m)_k = (1/p_k) Σ_{i_1,...,i_n=1}^N p_{i_1} π_{i_1 i_2} · · · π_{i_{n−1} i_n} π_{i_n k} f^n_{i_1,...,i_n} m_{i_1}.   (6)
The above sum is over all N^n possible symbol sequences of length n + 1 ending with the symbol k, and p_{i_1} π_{i_1 i_2} · · · π_{i_{n−1} i_n} π_{i_n k} is the probability of the transition to the symbol k in n steps along the symbol sequence i_1, . . . , i_n, k. We will need the following arithmetic bound connected to formula (6). Recall the assumptions L(0) > 0 and L(1) > 0. Write λ_i = f_i'(0) and λ̄_i = f_i'(1).

Lemma 4.1: For n large enough and any k, 1 ≤ k ≤ N, the bounds (7) and (8) hold.

Proof: We consider the end point 0. First note that for ν^+-almost all ω, (9) holds; a similar equality as (9) holds for L(1), the Lyapunov exponent at x = 1. The sum in (7) is an average over all symbol sequences of length n + 1 ending with a symbol k, where i = (i_1, . . . ) and P_{n,k} = {i ∈ Σ_A^+ ; σ^{n+1} i ∈ C_k}. Since ν^+ is invariant we have ν^+(C_k) = ν^+(σ^{−(n+1)}(C_k)) = ν^+(P_{n,k}) for any n ∈ N. We observe that ν^+(P_{n,k}) = p_k independently of n, and we suppress the dependence of P_{n,k} on n.
By ergodicity (2), (1/n) ln(λ_{i_1} · · · λ_{i_n}) converges to L(0) for ν^+-almost all (i_1, . . . ) ∈ Σ_A^+, as n → ∞. We therefore have that for all ε > 0 there exists M so that the set Ω(ε, M) of sequences whose averages stay ε-close to L(0) from time M on has ν^+-measure close to 1. Take the positive constant K such that |ln λ_j − L(0)| ≤ K for all j. Choose ε small and M = M(ε) accordingly. For any n ≥ M we can compute the resulting bound d_n; therefore d_n → 0 as n → ∞. Likewise for the end point 1. Since L(0) and L(1) are positive, for n large both bounds hold.

Let M be the space of all Borel probability measures on 𝕀 endowed with the weak-star topology. For small 0 < α < 1, q > 0 and c > 0, define N_c as the set of measures m ∈ M with m(I_i) = p_i and m_k[0, x) ≤ c x^α, m_k(1 − x, 1] ≤ c x^α for all k and all x ≤ q. The condition on the measure of small intervals [0, x) and (1 − x, 1] excludes measures supported on the end points 0 and 1. Note that N_c depends on α and q, but we do not include this dependence in the notation. We first show that there exist ergodic stationary measures which belong to N_c.

Proof: Note that by (1), for each k ∈ {1, . . . , N},

Σ_{i_1,...,i_n=1}^N p_{i_1} π_{i_1 i_2} · · · π_{i_{n−1} i_n} π_{i_n k} = p_k.   (10)

Let n_1 be a number such that for any n ≥ n_1 the inequality (7) holds in Lemma 4.1. In the following, fix any n ≥ n_1. Since for each k there are N^n possible transitions in n + 1 steps ending with k, we may rewrite (7) using (10). We claim that there is a small α > 0 for which, multiplying by α, we get the required estimate. A similar reasoning applies to the end point 1 of I, starting with (8). Moreover, for such δ > 0 we are able to choose a sufficiently small q = q(δ) > 0 in such a way that the corresponding distortion estimate holds for each symbol sequence i_1, . . . , i_n. Take c with c q^α > 1. Take a measure m from the N_c that corresponds to α and q. We will prove T^n m ∈ N_c. To do this we must show that if x ≤ q then (T^n m)_k[0, x) ≤ c x^α and (T^n m)_k(1 − x, 1] ≤ c x^α for all k ∈ {1, . . . , N}. Knowing that m_k[0, x) ≤ c x^α for each k and applying (11), (12) we get the required estimates. Thus, for every m ∈ N_c, the image T^n m belongs to N_c. Now we know that T^{n_1}(N_c) ⊂ N_c.
By the Krylov-Bogolyubov averaging method, for a measure m ∈ N_c on the compact metric space 𝕀 there is a subsequence of {(1/n) Σ_{r=0}^{n−1} T^{r n_1} m}_{n∈N} which converges to a probability measure m̄ ∈ N_c such that T^{n_1} m̄ = m̄. Note that m̃ = (1/n_1)(m̄ + T m̄ + · · · + T^{n_1−1} m̄) is a probability measure. Since T is linear and T^{n_1} m̄ = m̄, the measure m̃ is a fixed point of T: T m̃ = (1/n_1)(T m̄ + T^2 m̄ + · · · + T^{n_1} m̄) = m̃. We have thus found a stationary measure m̃ in N_c for some c.
The following additional reasoning shows that there is an ergodic stationary measure in N_c. Let N be the set of stationary measures on 𝕀, which is a convex compact subset of M; the ergodic stationary measures are its extreme points. Note that N_c is a convex compact subset of N. We claim that the extreme points of N_c are extreme points of N. Suppose by contradiction that there are m̄_1, m̄_2 ∈ N \ N_c whose convex combination m̄ = s m̄_1 + (1 − s) m̄_2 lies in N_c. In this case, for 0 ≤ x ≤ q, m̄_{1,k}([0, x)) ≤ (c/s) x^α and m̄_{1,k}((1 − x, 1]) ≤ (c/s) x^α, and similar estimates hold for m̄_2. That is, x ↦ m̄_{i,k}([0, x))/x^α and x ↦ m̄_{i,k}((1 − x, 1])/x^α are bounded. As T m̄ = m̄, we have by (11), (13) that m̄ ∈ N_{c̃} for some c̃ < c. It follows that t m̄_1 + (1 − t) m̄_2 ∈ N_c for t close to s, so s is an interior point of the set of values t for which t m̄_1 + (1 − t) m̄_2 ∈ N_c. Since N_c is closed it follows that m̄_i ∈ N_c and the claim is proved. Since the extreme points of N are ergodic stationary measures, we conclude that the extreme points of N_c are ergodic stationary measures. Since the set of extreme points of N_c is non-empty by the Krein-Milman theorem, there are ergodic stationary measures in N_c.
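As a computational aside, one step of the transformation T can be sketched on finitely supported measures, storing each component m_i as a list of weighted atoms; the fiber maps and transition data below are hypothetical, not those of the paper.

```python
# One step of (T m)_k = (1/p_k) sum_i p_i pi_ik (f_i m_i) on finitely supported
# measures: m maps each symbol i to a list of (weight, position) atoms of m_i.
# Fiber maps and transition data are hypothetical illustrations.

def f1(x):
    return 2 * x / (1 + x)

def f2(x):
    return x / (2 - x)

FIBER = {1: f1, 2: f2}
PI = {(1, 1): 0.5, (1, 2): 0.5, (2, 1): 0.5, (2, 2): 0.5}
p = {1: 0.5, 2: 0.5}

def transfer(m):
    """Apply T once; each output component (T m)_k is again a list of atoms."""
    out = {k: [] for k in p}
    for i, atoms in m.items():
        for k in p:
            if PI[(i, k)] > 0:
                for w, x in atoms:
                    out[k].append((p[i] * PI[(i, k)] * w / p[k], FIBER[i](x)))
    return out

m0 = {1: [(1.0, 0.5)], 2: [(1.0, 0.5)]}   # each m_i a point mass at x = 1/2
m1 = transfer(m0)
```

Each component of T m is again a probability measure on the fiber, mirroring the normalization (T m)(I_k) = p_k after dividing by p_k.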
Step 2: Trapping strips. Recall from Lemma 3.1 that a stationary measure m gives rise to an invariant measure for the one-sided skew product system, with marginal ν^+ on Σ_A^+. We will see that the supports of such invariant measures are contained in mutually disjoint trapping strips. This step closely follows [5], with adjustments to account for the fixed end points.

Definition 4.1: A subset D = ⋃_{k=1}^N D_k ⊆ 𝕀 is called a domain if for each k ∈ {1, . . . , N}, D_k is a closed interval in I_k.
A boundary point of an interval D k different from 0 or 1 is called an internal boundary point.

Definition 4.2: The domain D is called trapping if f_k(D_k) ⊆ D_l for every admissible transition k, l; it is called strictly trapping if moreover any internal boundary point of D_k is mapped inside the interior of D_l.
The following proposition is [5, Proposition 4.5] and holds also here.

Proposition 4.2: The following conditions are equivalent: the domain ⋃_{k=1}^N D_k is trapping; the strip ⋃_{k=1}^N C^0_k × D_k is a trapping strip for F.

For every admissible i, j we have f_i(supp(m_i)) ⊆ supp(m_j). Write I_{m,k} = [A_{m,k}, B_{m,k}] for the convex hull of supp(m_k), with A_{m,k} = min supp(m_k) and B_{m,k} = max supp(m_k). Since the maps f_i are monotone we have that for any admissible transition i, j,

f_i(I_{m,i}) ⊆ I_{m,j}.   (14)

Therefore the collection I_m = ⋃_{k=1}^N I_{m,k} is a domain, which is trapping by (14). The imposed genericity conditions imply that for a trapping domain no interval I_{m,k} can be a single point.

Lemma 4.2: For a trapping domain I_m, each end point A_{m,k} (respectively B_{m,k}) which does not coincide with x = 0 (respectively x = 1) is an image of a fixed point of a simple return by a simple transition.

Proof: For a chosen trapping domain I_m suppose that A_{m,k} = 0 for some k ∈ {1, . . . , N}. Then, knowing that x = 0 is a fixed point of f_k for all k, we have for any l such that k, l is admissible that f_k(A_{m,k}) = 0 ∈ I_{m,l}. Hence A_{m,l} = min supp(m_l) = 0. Since the subshift σ is transitive, A_{m,k} = 0 for all k ∈ {1, . . . , N}.
If A_{m,k} ≠ 0 for all k, [5, Lemma 6.3] applies and the result for A_{m,k} holds by that lemma. If A_{m,k} = 0 for all k, the arguments of [5, Lemma 6.3] apply to yield the same conclusion (the simple transition is redundant since 0 is a fixed point of all maps).
By Birkhoff's ergodic theorem, a generic sequence of random iterations (k_n, x_n), x_n ∈ I_{k_n}, of an m-generic initial point is distributed with respect to the measure m. If we choose such a generic initial point (k_0, x_0), then because the points (k_n, x_n) are distributed with respect to m, the set X_k = {x_n ; k_n = k} is dense in supp(m_k) for any k. We apply this observation in the proof of the next lemma, which corresponds to [5, Lemma 6.7].

Lemma 4.3: For any two trapping domains I_{m_1} and I_{m_2} of two ergodic stationary measures m_1, m_2 ∈ N_c, the corresponding intervals I_{m_1,k} and I_{m_2,k} are either disjoint for every k or coincide for every k.

Proof: Assume that the intervals I_{m_1,k} and I_{m_2,k} intersect but do not coincide. Then there is at least one end point of one of them that does not belong to the other one; without loss of generality let it be the point B_{m_1,k}. There is a neighbourhood V of B_{m_1,k} disjoint from I_{m_2,k}. By genericity condition (4), A_{m_1,k} is different from B_{m_2,k}. So there are generic points of m_1 in I_{m_1,k} ∩ I_{m_2,k}. Choose a generic point p_0 for m_1 in I_{m_1,k} ∩ I_{m_2,k} which is different from A_{m_1,k} and B_{m_2,k}. There is an admissible return g such that g(p_0) ∈ V (recall the observation that precedes the lemma), which implies g(p_0) ∉ I_{m_2,k}. On the other hand, p_0 ∈ I_{m_2,k} by assumption and g(I_{m_2,k}) ⊆ I_{m_2,k} by (14), so g(p_0) ∈ I_{m_2,k}. This is a contradiction. Therefore I_{m_1,k} and I_{m_2,k} have empty intersection or coincide.
Again consider trapping domains I_m corresponding to ergodic stationary measures m in N_c. According to Lemma 4.3 these trapping domains are non-intersecting or coincide. By Lemma 4.2, for each trapping domain I_m each end point A_{m,k} and B_{m,k} which does not coincide with x = 0 or x = 1 (respectively) is an image of a fixed point of a simple return by a simple transition. On the other hand, since the alphabet has a finite number of symbols, there is only a finite number of simple returns and simple transitions, and by condition (2.1) in Section 2.1 any simple return has only finitely many fixed points. Hence, for any k only a finite number of intervals I_{m,k} can exist in I. Therefore we conclude that there are finitely many disjoint trapping domains and, corresponding to them, finitely many disjoint trapping strips for F by Proposition 4.2. For every stationary measure m ∈ N_c the corresponding domain I_m = ⋃_{k=1}^N I_{m,k} and strip S_m = ⋃_{k=1}^N C^0_k × I_{m,k} are equal to some trapping domain and trapping strip.
We thus obtain a finite number of stationary measures m t , 1 ≤ t ≤ T, with corresponding trapping domain I t and trapping strip S t .
Step 3: Conditional measures. We will see that inside each trapping strip S_t, 1 ≤ t ≤ T, there exists a unique invariant measurable graph Γ_t to which almost every point of the trapping strip is attracted. First we show that for each 1 ≤ t ≤ T, μ_t = μ_{m_t} has δ-measures as conditional measures along fibers inside the trapping strip S_t, ν-almost surely. To prove the following lemma we follow [1, Theorem 1.8.4].

Lemma 4.4: For every ergodic stationary probability measure m, the conditional measure μ_{m,ω} of μ_m is a δ-measure for ν-almost every ω ∈ Σ_A.

The construction in the proof uses a sequence ω̃ with ω̃_0 = ω_0 and ω^+ any admissible sequence. Indeed, with such a construction the sequence ω̃ belongs to C^{−m,...,m}_ω ∩ D, f^{n+r+κ}_{σ^{−(n+r+κ+m)}ω̃}(x') ∈ U_m and f^m_{σ^{−m}ω̃}(U_m) ⊂ U. Therefore X(ω̃) = f^{n+r+κ+m}_{σ^{−(n+r+κ+m)}ω̃}(x') ∈ U and Q = (ω̃, X(ω̃)) ∈ Γ ∩ U.

An attracting end point
It remains to consider the cases with negative Lyapunov exponents at end points, i.e. where L(0) < 0 or L(1) < 0 or both. Note that in an internal trapping strip, or in a border trapping strip bounded by an end point with positive Lyapunov exponent, stationary measures are constructed as before and the analysis proceeds as in the previous sections.
The following subcases remain:
(1) L(0) and L(1) have different signs and F has no internal trapping strip,
(2) L(0) and L(1) are both negative and F has no internal trapping strip,
(3) at least one of L(0) and L(1) is negative and F admits an internal trapping strip.

Consider first subcase (1), with, say, L(0) > 0 and L(1) < 0 (compare (15)). Since the skew product system has negative Lyapunov exponent at the end point 1, Σ_A × {1} has a basin of attraction with positive standard measure. This contradicts (17). Hence there is no stationary measure m that assigns mass outside the points {0, 1}. It follows that for almost all ω ∈ Σ_A, lim_{n→∞} f^n_{σ^{−n}ω}(x) = 1 for any x ∈ (0, 1].

Next suppose L(0) and L(1) are both negative and F has no internal trapping strip. By the reasoning in the previous section, the inverse skew product map admits an attracting invariant graph. The skew product system hence has a repelling invariant graph, and attracting invariant graphs Σ_A × {0} and Σ_A × {1}.
Finally, suppose at least one of L(0) or L(1) is negative and F admits an internal trapping strip. Suppose L(0) < 0. Again by following the reasoning in the previous section, the inverse skew product map admits a border trapping strip, bounded by Σ_A × {0}, that contains an attracting invariant graph. The skew product system hence has a repelling invariant graph, and an attracting invariant graph Σ_A × {0}.

Disclosure statement
No potential conflict of interest was reported by the authors.