On the adjoint of Hilbert space operators

In general, it is a non-trivial task to determine the adjoint $S^*$ of an unbounded operator $S$ acting between two Hilbert spaces. We provide necessary and sufficient conditions for a given operator $T$ to be identical with $S^*$. In our considerations, a central role is played by the operator matrix $M_{S,T}=\left(\begin{array}{cc} I&-T\\ S&I\end{array}\right)$. Our approach has several consequences, such as characterizations of closed, normal, skew- and selfadjoint, unitary, and orthogonal projection operators in real or complex Hilbert spaces. We also give a self-contained proof of the fact that $T^*T$ always has a positive selfadjoint extension.


Introduction
The notion of the adjoint of a densely defined linear operator S acting between (real or complex) Hilbert spaces H and K originates with von Neumann [1]: the adjoint is the operator S* from K into H with domain
dom S* = {k ∈ K : (Sh|k) = (h|k*) for some k* ∈ H, for all h ∈ dom S},
acting by S*k = k*. Here the uniqueness of k* in H is guaranteed by the density of the domain dom S of S. Nevertheless, it is a non-trivial task to determine the adjoint S* of S, that is, to describe the domain dom S* of S* explicitly and to specify the action of S* on the elements of dom S*. Clearly, S and its adjoint S* fulfil the 'adjoining identity'; that is to say, T = S* is a linear operator from K into H which satisfies
(Sh|k) = (h|Tk), h ∈ dom S, k ∈ dom T. (1.1)
However, in order to have T = S*, it is not enough to demand that T satisfy (1.1). For instance, every symmetric operator S, without being selfadjoint, satisfies (1.1) with T = S. In the present paper, we are particularly interested in pairs of (not necessarily densely defined) linear operators S and T, from H into K and from K into H, respectively, which fulfil identity (1.1). We adopt the terminology of Stone [2] and say that S and T are adjoint to each other if they satisfy (1.1), and we write S ∧ T in that case (cf. also [3][4][5]). Our main purpose in this paper is to provide a method to verify whether operators S and T satisfying the weaker condition S ∧ T enjoy the stronger property S* = T, or the much stronger one of being adjoint of each other, i.e. S* = T and T* = S. In this direction, our main results are Theorems 2.2 and 3.1. A remarkable advantage of our treatment is that no density assumption on the domains of the operators S, T is imposed. On the contrary, density of the domains is obtained as a consequence of the other conditions. Furthermore, the results are not limited to complex Hilbert spaces: they remain valid in real spaces as well.
This in turn allows us to extend von Neumann's results characterizing skew-adjoint, selfadjoint and positive selfadjoint operators to the real Hilbert space setting.
The paper is organized as follows. In Section 2 we discuss the question whether a given operator T is identical with the adjoint of another operator S. The main result in this direction is Theorem 2.2, which gives an answer by means of the range of the operator matrix M_{S,T}. This result will be used extensively throughout. In Theorem 3.1 of Section 3, a full description of operators which are adjoint of each other is established; this sharpens Theorem 3.4 in [3]. We also offer a 'dual' version of the Hellinger-Toeplitz theorem, which concerns full range operators that are adjoint to each other. In Section 4 we consider sums and products of linear operators. Our purpose here is to describe the situations in which, for two given operators R, S, the equalities (R + S)* = R* + S* and (RS)* = S*R* hold. In [6] the first named author offered a metric characterization of the range of the adjoint S* of a densely defined linear operator S: namely, ran S* consists of those vectors z which fulfil a Schwarz-type inequality $|(x \mid z)| \le M_z \|Sx\|$, x ∈ dom S, with some non-negative constant M_z. In Section 5, we improve this result to describe the range of an operator T which is adjoint to an operator S. In Sections 6 and 7, we deal with skew-adjoint, selfadjoint and positive selfadjoint operators and characterize them among skew-symmetric, symmetric and positive symmetric operators. Instead of using the defect index theory developed by J. von Neumann, our method involves the range of M_{S,S}. In Theorem 7.3, we also present a new proof of the fact that T*T always has a positive selfadjoint extension (see [7]). In Section 8, we characterize densely defined closed operators. The main result of the section also establishes a converse to von Neumann's classical result: T*T and TT* are both selfadjoint operators if and only if T is densely defined and closed (see also [8]).
Finally, in Section 9 we obtain some characterizations of normal, unitary and orthogonal projection operators.

Characterization of the adjoint of a linear operator
Let H and K be real or complex Hilbert spaces and let S be a not necessarily densely defined or closed linear operator between them. The problem mentioned in the introduction consists in identifying the adjoint S* of S (provided that S is densely defined). To this end we fix another linear operator T between K and H satisfying (1.1), that is, S and T are adjoint to each other. As a main tool for our investigations, we introduce the operator matrix
$$M_{S,T}=\begin{pmatrix} I&-T\\ S&I\end{pmatrix},\qquad M_{S,T}(h,k)=(h-Tk,\,Sh+k),\quad h\in\operatorname{dom}S,\ k\in\operatorname{dom}T.$$
The central role played by M_{S,T} already underlies the recent papers of the authors [3][4][5],[9],[10]. The 'flip' operator W : K × H → H × K will also be useful for our analysis; it is defined by (see [11]) W(k, h) = (−h, k). The symbols pr_H and pr_K stand for the canonical projections of the product Hilbert space H × K onto H and K, respectively, defined accordingly by the correspondences pr_H(h, k) = h and pr_K(h, k) = k. The graph G(S) of an operator S is given by the usual identity G(S) = {(h, Sh) : h ∈ dom S}. We notice that G(S) is a linear subspace of H × K and that pr_H⟨G(S)⟩ = dom S and pr_K⟨G(S)⟩ = ran S. The orthocomplement G(S)⊥ of the graph G(S) plays a specific role in the following characteristic statement:

Lemma 2.1: Let S and T be linear operators between H and K, respectively, K and H. If S and T are adjoint to each other, then we have the identity
$$W\langle G(T)\rangle = G(S)^{\perp}\cap\operatorname{ran}M_{S,T}.$$
Proof: First of all, S ∧ T means that for each h ∈ dom S and k ∈ dom T one has ((h, Sh)|(−Tk, k)) = −(h|Tk) + (Sh|k) = 0, whence it follows that W⟨G(T)⟩ ⊆ G(S)⊥. On the other hand, M_{S,T}(0, k) = (−Tk, k) for every k ∈ dom T, so that W⟨G(T)⟩ ⊆ ran M_{S,T} as well. The reverse inclusion is obtained by the following simple argument: suppose M_{S,T}(h, k) ∈ G(S)⊥ for some h ∈ dom S and k ∈ dom T; then
$$0=(M_{S,T}(h,k)\mid(h,Sh))=(h-Tk\mid h)+(Sh+k\mid Sh)=\|h\|^{2}+\|Sh\|^{2},$$
since −(Tk|h) + (k|Sh) = 0 by S ∧ T. This gives h = 0 and therefore M_{S,T}(0, k) = (−Tk, k) ∈ W⟨G(T)⟩, which proves the lemma.
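For instance, if S is an everywhere defined bounded operator and T = S* (so that S ∧ T holds automatically), the identity of Lemma 2.1 reduces to the familiar description of the graph orthocomplement:

```latex
(u, v) \in G(S)^{\perp}
  \;\Longleftrightarrow\; (u \mid h) + (v \mid Sh) = 0 \quad (h \in H)
  \;\Longleftrightarrow\; (u + S^{*}v \mid h) = 0 \quad (h \in H)
  \;\Longleftrightarrow\; u = -S^{*}v,
```

so that $G(S)^{\perp} = \{(-S^{*}v, v) : v \in K\} = W\langle G(S^{*})\rangle$; since each such pair equals $M_{S,S^{*}}(0, v)$, the intersection formula of the lemma is verified in this special case.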
The next result gives necessary and sufficient conditions for an operator T to be the adjoint of an operator S. We emphasize that conditions (ii) and (iii), as natural concepts, make no use in any sense of the density of the domain of S, the classical condition for the existence of S*.

Theorem 2.2: Let S and T be linear operators between Hilbert spaces H and K, respectively, K and H. Then the following statements (i)-(iii) are equivalent:
(i) S is densely defined and S* = T;
(ii) S and T are adjoint to each other and W⟨G(T)⟩ = G(S)⊥;
(iii) S and T are adjoint to each other and G(S)⊥ ⊆ ran M_{S,T}.

Proof: If S is densely defined and S* = T, then S ∧ T and G(S)⊥ = W⟨G(S*)⟩ = W⟨G(T)⟩; hence (i) implies (ii). That (ii) implies (iii) is clear, because M_{S,T}(0, k) = (−Tk, k) ∈ W⟨G(T)⟩ for every k ∈ dom T. Assume finally (iii) and prove (i). First of all we show that S is densely defined. To do so, let h ∈ (dom S)⊥. In this case, (h, 0) belongs to G(S)⊥. By Lemma 2.1 and (iii), we have G(S)⊥ = G(S)⊥ ∩ ran M_{S,T} = W⟨G(T)⟩, and hence (h, 0) = (−Tk, k) for a certain k ∈ dom T. Thus k = 0 and h = −Tk = 0, as it is claimed. As a result, the adjoint operator S* exists and satisfies W⟨G(S*)⟩ = G(S)⊥ = W⟨G(T)⟩. Since W is injective, G(S*) = G(T), that is, S* = T; hence (iii) implies (i).

Operators which are adjoint of each other
Theorem 3.1: Let S and T be linear operators between Hilbert spaces H and K, respectively, K and H, such that S and T are adjoint to each other. Then the following assertions (i)-(v) are equivalent:
(i) S and T are densely defined and adjoint of each other: S* = T and T* = S (in particular, both are closed);
(ii) ran M_{S,T} = H × K;
(iii) ran M_{T,S} = K × H;
(iv) ran (I + TS) = H and ran (I + ST) = K;
(v) G(S)⊥ ⊆ ran M_{S,T} and G(T)⊥ ⊆ ran M_{T,S}.

Proof: To prove that (i) implies (ii), let us recall the well-known decomposition
H × K = G(S) ⊕ W⟨G(S*)⟩ (3.2)
regarding the densely defined closed operator S (see [12, Theorem 4.16]). Consider u ∈ H and v ∈ K. Since S* = T by assumption, identity (3.2) yields unique h ∈ dom S and k ∈ dom T such that (u, v) = (h, Sh) + (−Tk, k). Consequently, u = h − Tk and v = Sh + k; in other words, M_{S,T}(h, k) = (u, v). This in turn shows that M_{S,T} has full range, i.e.
ran M S,T = H × K.
Assume now (ii) and fix u ∈ H and v ∈ K; then by surjectivity there exist h ∈ dom S and k ∈ dom T such that M_{S,T}(h, k) = (u, v), that is, u = h − Tk and v = Sh + k. An easy calculation shows that M_{T,S}(k, −h) = (v, −u), and since (u, v) runs over the whole of H × K, it follows that M_{T,S} is surjective. A very similar argument shows that ran M_{T,S} = K × H implies ran M_{S,T} = H × K; thus we conclude that (ii) and (iii) are equivalent. To see that (ii) implies (iv), we are going to prove first that M_{−S,−T} has full range too. To this aim, fix u ∈ H and v ∈ K and choose h ∈ dom S, k ∈ dom T such that M_{S,T}(h, k) = (u, v). In other words, we have the equalities u = h − Tk and v = Sh + k, whence M_{−S,−T}(h, −k) = (h − Tk, −Sh − k) = (u, −v), so that M_{−S,−T} is surjective as well. The factorization (3.3) now shows that ran (I + TS) = H and ran (I + ST) = K, because the product of surjective operators is itself surjective. Implication (iv) ⇒ (v) goes similarly: by factorization (3.3), we see that ran M_{S,T} = H × K. By the equivalence of (ii) and (iii), we obtain that ran M_{T,S} = K × H, and hence (iv) implies (v). Finally, the implication (v)⇒(i) follows immediately from Theorem 2.2.
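The factorization referred to as (3.3) above is, presumably, the following diagonalization, verified by direct matrix multiplication on the appropriate domains:

```latex
M_{S,T}\,M_{-S,-T}
  = \begin{pmatrix} I & -T\\ S & I \end{pmatrix}
    \begin{pmatrix} I & T\\ -S & I \end{pmatrix}
  = \begin{pmatrix} I + TS & 0\\ 0 & I + ST \end{pmatrix}
  = M_{-S,-T}\,M_{S,T}.
```

In particular, if $M_{S,T}$ and $M_{-S,-T}$ are both surjective, so is the diagonal product, which yields $\operatorname{ran}(I+TS)=H$ and $\operatorname{ran}(I+ST)=K$; conversely, surjectivity of the diagonal matrix forces the left factor $M_{S,T}$ to be surjective.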

Theorem 3.2: Let S and T be linear operators between
Hilbert spaces H and K, respectively, K and H, such that they are adjoint to each other. Suppose that ker S + ran T = H and ran S + ker T = K.
Then both S and T are densely defined operators having closed range, such that they are adjoint of each other: S * = T and T * = S.
Proof: Let (v, w) ∈ G(S)⊥, so that (v|x) + (w|Sx) = 0 for all x ∈ dom S. By the assumption ker S + ran T = H, we may write v = s + Tk with s ∈ ker S and k ∈ dom T. Taking x = s gives (v|s) = 0, while (v|s) = ‖s‖² + (Tk|s) = ‖s‖², because (s|Tk) = (Ss|k) = 0; whence we get v = Tk. On the other hand, for all x in dom S, 0 = (v|x) + (w|Sx) = (k|Sx) + (w|Sx) = (k + w|Sx), whence k + w ∈ (ran S)⊥ = ker T. This in turn implies that w ∈ dom T and −Tw = Tk = v, hence M_{S,T}(0, w) = (−Tw, w) = (v, w) ∈ ran M_{S,T}, that is, G(S)⊥ ⊆ ran M_{S,T}. A very similar argument shows that G(T)⊥ ⊆ ran M_{T,S}, hence S and T are densely defined such that S* = T and T* = S. Finally, S and T have closed range because $(\overline{\operatorname{ran}S})^{\perp} = \ker T$ together with ran S + ker T = K, and $(\overline{\operatorname{ran}T})^{\perp} = \ker S$ together with ker S + ran T = H, respectively.
The generalized Hellinger-Toeplitz theorem (see e.g. [13]) says that for everywhere defined operators S and T, the relation S ∧ T implies that both S and T are bounded and adjoint of each other. The corresponding 'dual' statement is phrased below: Theorem 3.3: Let S and T be linear operators between Hilbert spaces H and K, respectively, K and H, such that they are adjoint to each other: S ∧ T. Assume in addition that S and T are of full range, i.e. ran S = K and ran T = H. Then S and T have (everywhere defined) bounded inverses, and S, T are adjoint of each other.
Proof: First of all we observe that ker S = {0} (respectively, ker T = {0}), hence the inverse operators S⁻¹ and T⁻¹ exist. Indeed, if Sh = 0 for some h ∈ dom S, then for every k from dom T one has 0 = (Sh|k) = (h|Tk), whence h = 0, being orthogonal to ran T = H. Here, S⁻¹ and T⁻¹ are adjoint to each other because for k = Sh ∈ ran S = K and l = Tm ∈ ran T = H, we have
(S⁻¹k | l) = (h | Tm) = (Sh | m) = (k | T⁻¹l).
By the remark preceding the theorem, S⁻¹ and T⁻¹ are bounded operators such that (S⁻¹)* = T⁻¹. Consequently, S and T are densely defined and fulfil (3.1).

Adjoint of sums and products
Given two densely defined linear operators R, S acting between Hilbert spaces H and K, one cannot expect in general the additive identity
(R + S)* = R* + S* (4.1)
to hold, all the more so because the operator on the left-hand side need not exist at all. A well-known sufficient condition for (4.1) is that one of the operators be bounded (see [13]). For more general results, the reader may consult [5,9,11,12,14,15]. In the next theorem, we provide necessary and sufficient conditions in order that (4.1) be satisfied (cf. also [ ]). Proof: A direct calculation shows that R + S and R* + S* are adjoint to each other, hence the statement of the theorem follows from Theorem 2.2.
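The 'direct calculation' invoked in the proof is the following one-line verification that $R+S$ and $R^{*}+S^{*}$ satisfy (1.1):

```latex
\bigl((R+S)h \mid k\bigr)
  = (Rh \mid k) + (Sh \mid k)
  = (h \mid R^{*}k) + (h \mid S^{*}k)
  = \bigl(h \mid (R^{*}+S^{*})k\bigr),
```

valid for $h \in \operatorname{dom}(R+S) = \operatorname{dom} R \cap \operatorname{dom} S$ and $k \in \operatorname{dom}(R^{*}+S^{*}) = \operatorname{dom} R^{*} \cap \operatorname{dom} S^{*}$.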
Next we are concerned with the multiplicative version
(RS)* = S*R* (4.2)
where R, S are operators acting between Hilbert spaces $H_2$ and $H_3$, respectively, $H_1$ and $H_2$. Just as in the 'additive' case, we may not expect (4.2) to hold in general. The multiplicative identity can be guaranteed under rather restrictive conditions, for example, if R is an (everywhere defined) bounded operator (see [13]) or when S admits a bounded inverse (see [11]). As an application of Theorem 2.2, we gain necessary and sufficient conditions (cf. also [5]): Theorem 4.2: Let R, S be operators acting between Hilbert spaces $H_2$ and $H_3$, respectively, $H_1$ and $H_2$. Then the following assertions are equivalent: Proof: A direct calculation shows that RS and S*R* are adjoint to each other, hence the statement of the theorem follows from Theorem 2.2.
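Similarly, the direct calculation behind Theorem 4.2 shows that $RS$ and $S^{*}R^{*}$ satisfy (1.1):

```latex
\bigl((RS)h \mid k\bigr)
  = (Sh \mid R^{*}k)
  = (h \mid S^{*}R^{*}k),
  \qquad h \in \operatorname{dom}(RS),\ k \in \operatorname{dom}(S^{*}R^{*}),
```

where $h \in \operatorname{dom}(RS)$ guarantees $Sh \in \operatorname{dom} R$, and $k \in \operatorname{dom}(S^{*}R^{*})$ guarantees $R^{*}k \in \operatorname{dom} S^{*}$.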
The closure of the sum, respectively the product, of two linear operators may coincide with the sum, respectively the product, of the closures of the operators (see also Appendix B of [14]): Corollary 4.3: Let R, S be densely defined closable operators on a Hilbert space H such that the following two relations are satisfied: Then we have the following additive property of the closure operation: Proof: Assumption (a) implies by Theorem 4.1 that (R + S)* = R* + S*. One more application of the same theorem proves the statement.

Corollary 4.4: Given two closable densely defined operators R and S on a Hilbert space H
such that they satisfy the following two conditions: Then we have the following multiplicative property of the closure operation: Proof: Assumption (a) implies (RS)* = S*R* in view of Theorem 4.2. Hence one more use of the theorem yields the statement.

Corollary 4.5:
Let R, S be linear operators in the Hilbert space H and assume they are adjoint to themselves (that is, R ∧ R and S ∧ S). Then the following two assertions are equivalent:

Proof: We need only check that RS ∧ SR: for x ∈ dom (RS) and y ∈ dom (SR) we have
((RS)x | y) = (Sx | Ry) = (x | (SR)y),
indeed.
A symmetric version of the above result reads as follows: Proof: This is a straightforward consequence of the preceding corollary.

Range of adjoint operators
For a densely defined closed linear operator S between H and K, one has the orthogonal decomposition $H = \ker S \oplus \overline{\operatorname{ran}S^{*}}$, so that the elements of the range closure $\overline{\operatorname{ran}S^{*}}$ of the adjoint operator S* are obtained as the ones orthogonal to the kernel of S. Describing the elements of the range of S* itself is more involved. A metric characterization of ran S* was given in [6] by the first named author. Below we provide a generalization of that result to the case of operators which are adjoint to each other.

Theorem 5.1: Let H, K be real or complex Hilbert spaces and let S : H → K and T : K → H be linear operators adjoint to each other, and assume that $\overline{\operatorname{ran}S} \cap \operatorname{pr}_K\langle G(S)^{\perp}\rangle \subseteq \operatorname{dom}T$. For a given z ∈ H, the following assertions are equivalent:
(i) z ∈ ran T + (dom S)⊥;
(ii) there is a non-negative constant M_z such that |(x | z)| ≤ M_z ‖Sx‖ for all x ∈ dom S.

Proof: Assume first (i), say z = Tu + w with u ∈ dom T and w ∈ (dom S)⊥. For every x from dom S, we have |(x | z)| = |(x | Tu)| = |(Sx | u)| ≤ ‖u‖ ‖Sx‖, which implies (ii). For the converse implication, observe that (ii) forces the linear functional ran S → 𝕂, Sx ↦ (x | z), to be well defined and continuous. (Here 𝕂 stands for the underlying scalar field, R or C, respectively.) By the Riesz representation theorem, there is a unique vector u ∈ $\overline{\operatorname{ran}S}$ such that
(x | z) = (Sx | u), x ∈ dom S. (5.1)
By (5.1), (z, −u) ∈ G(S)⊥, so u ∈ $\overline{\operatorname{ran}S} \cap \operatorname{pr}_K\langle G(S)^{\perp}\rangle \subseteq \operatorname{dom}T$, and therefore (x | z) = (Sx | u) = (x | Tu) for all x ∈ dom S, because T ∧ S. Note that (5.1) simultaneously yields z − Tu ∈ (dom S)⊥. Consequently, z = Tu + (z − Tu) ∈ ran T + (dom S)⊥, which completes the proof.
Since the adjoint of a densely defined linear operator S fulfils pr_K⟨G(S)⊥⟩ = dom S* and (dom S)⊥ = {0}, we obtain the identity
ran S* + (dom S)⊥ = ran S*. (5.2)
As a consequence, we retrieve Theorem 1 of [6], which characterizes the range of the adjoint operator: Corollary 5.2: Let S be a densely defined linear operator between H and K. For z ∈ H, the following conditions are equivalent:
(i) z ∈ ran S*;
(ii) there is a non-negative constant M_z such that |(x | z)| ≤ M_z ‖Sx‖ for all x ∈ dom S.

Corollary 5.3: Let T be a densely defined closed linear operator between H and K.
For z ∈ K, the following conditions are equivalent:
(i) z ∈ ran T;
(ii) there is a non-negative constant M_z such that |(y | z)| ≤ M_z ‖T*y‖ for all y ∈ dom T*.
Proof: The proof is immediate from Corollary 5.2 applied to S = T*, because T = T** by the assumptions.
From the Banach closed range theorem, it is known that the adjoint S* of a densely defined closed operator S has full range if and only if S is bounded from below. The next theorem establishes a generalization of that fact for operators which are adjoint to each other.

Theorem 5.4: Let S : H → K and T : K → H be linear operators adjoint to each other. Then the following assertions are equivalent:
(i) there is c > 0 such that ‖Sx‖ ≥ c ‖x‖ for all x ∈ dom S;
(ii) ran T + (dom S)⊥ = H.

Proof: Assume first (i) and consider z ∈ H. For any x ∈ dom S we have |(x | z)| ≤ ‖x‖ ‖z‖ ≤ c⁻¹ ‖z‖ ‖Sx‖, hence z ∈ ran T + (dom S)⊥, according to Theorem 5.1. Suppose conversely (ii). Our first claim is to check that S is one-to-one: for if x ∈ ker S, then for any y ∈ dom T and u ∈ (dom S)⊥ we have (x | Ty + u) = (x | Ty) = (Sx | y) = 0, hence x = 0, indeed. The inverse S⁻¹ of S therefore exists as an operator K ⊇ ran S → H. Furthermore, for any z ∈ dom S⁻¹, y ∈ dom T and u ∈ (dom S)⊥, we have
(S⁻¹z | Ty + u) = (S⁻¹z | Ty) = (S S⁻¹z | y) = (z | y).
This in turn shows that the set {S⁻¹z : z ∈ dom S⁻¹, ‖z‖ ≤ 1} is weakly bounded and hence also uniformly bounded, according to the Banach uniform boundedness principle. That means that there exists M > 0 such that ‖S⁻¹z‖ ≤ M ‖z‖ for all z ∈ dom S⁻¹, which clearly implies (i).

Corollary 5.5: For a densely defined linear operator S between H and K, the following statements are equivalent:
(i) There is c > 0 such that ‖Sx‖ ≥ c ‖x‖ for all x ∈ dom S. (ii) S* is a full range operator, i.e. ran S* = H.
Proof: This is an immediate consequence of the preceding theorem and (5.2).
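Corollary 5.5 can be traced through the preceding results as follows, assuming that (5.2) is the identity $\operatorname{ran}S^{*} + (\operatorname{dom}S)^{\perp} = \operatorname{ran}S^{*}$ (which holds because $(\operatorname{dom}S)^{\perp} = \{0\}$ for densely defined S):

```latex
\exists\, c > 0:\ \|Sx\| \ge c\|x\| \ (x \in \operatorname{dom} S)
  \;\overset{\text{Thm 5.4},\, T = S^{*}}{\Longleftrightarrow}\;
  \operatorname{ran} S^{*} + (\operatorname{dom} S)^{\perp} = H
  \;\overset{(5.2)}{\Longleftrightarrow}\;
  \operatorname{ran} S^{*} = H.
```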

Skew-adjoint and selfadjoint operators
An operator S acting on a real or complex Hilbert space H is called symmetric if it is adjoint to itself, i.e. S satisfies S ∧ S: (Sh | k) = (h | Sk) for all h, k ∈ dom S; S is called skew-symmetric if S ∧ (−S). We say that a densely defined operator S is selfadjoint if S* = S, and we call S skew-adjoint if S* = −S. If the underlying Hilbert space is complex, then the mapping S → iS establishes a bijective correspondence between symmetric and skew-symmetric operators, and also between selfadjoint and skew-adjoint operators. In our first result, we are going to provide a characterization of skew-adjoint operators among skew-symmetric ones. This simultaneously extends [4, Theorem 4.1] and [16, Theorem 2.1]. We emphasize again that no assumption regarding the density of the domain of S is required, and that the underlying space is allowed to be either real or complex. we just conclude the equivalence of (ii) and (iv). Finally, the missing equivalence (iv)⇔(v) follows from the observation that S² is a positive symmetric operator.
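The correspondence $S \mapsto iS$ mentioned above rests on the following computation (with the inner product linear in the first variable):

```latex
\bigl((iS)h \mid k\bigr) = i\,(Sh \mid k) = i\,(h \mid Sk) = \bigl(h \mid (-iS)k\bigr),
  \qquad h, k \in \operatorname{dom} S,
```

so S is symmetric precisely when iS is skew-symmetric; and since $(iS)^{*} = -iS^{*}$, one has $S^{*} = S$ if and only if $(iS)^{*} = -(iS)$.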
As an immediate, and somewhat surprising, consequence, we obtain that any symmetric square root of a positive selfadjoint operator is itself selfadjoint: Corollary 6.4: Let A be a positive selfadjoint operator in the real or complex Hilbert space H. If B is a symmetric operator such that B² = A, then B is selfadjoint.
A characterization of selfadjoint operators with closed range is established in the next result: Corollary 6.5: Let S be a (not necessarily densely defined or closed) symmetric operator in the real or complex Hilbert space H such that

ker S + ran S = H.

Then S is a (densely defined and) selfadjoint operator with closed range.
Proof: First we remark that (ran S)⊥ = ker S. Indeed, for if z ∈ (ran S)⊥ then z = k + Sh for suitable k ∈ ker S and h ∈ dom S, so 0 = (z | Sh) = (k | Sh) + ‖Sh‖² = ‖Sh‖², because (k | Sh) = (Sk | h) = 0; whence it follows that z = k ∈ ker S, i.e. (ran S)⊥ ⊆ ker S. The converse inclusion is straightforward. To see that S is selfadjoint, we prove that G(S)⊥ ⊆ ran M_{S,S}: for let (v, w) ∈ G(S)⊥. Choose h from dom S and k from ker S such that v = k + Sh. Then (v | k) = ‖k‖² + (Sh | k) = ‖k‖², while (v | k) = −(w | Sk) = 0; hence k = 0 and v = Sh. Then, as for all x in dom S, (h + w | Sx) = (Sh | x) + (w | Sx) = (v | x) + (w | Sx) = 0, whence h + w ∈ (ran S)⊥ = ker S. Hence w ∈ dom S and −Sw = Sh = v. Therefore M_{S,S}(0, w) = (−Sw, w) = (v, w), which gives (v, w) ∈ ran M_{S,S}. An application of Theorem 6.3 implies that S is selfadjoint.
As a straightforward consequence, we also obtain a generalization of [12, Exercise 10.4] of Weidmann: Corollary 6.6: Let S be a symmetric operator in a real or complex Hilbert space H such that ker (S + λI) + ran (S + λI) = H for some real λ. Then S is densely defined and selfadjoint.
We close the section with a 'dual' version of the classical Hellinger-Toeplitz theorem that improves [2, Theorem 2.9] of Stone: Corollary 6.7: A symmetric operator with full range is automatically densely defined and selfadjoint possessing a bounded inverse.
Proof: A surjective symmetric operator S obviously fulfils the conditions of the preceding corollary, hence S must be selfadjoint. Since S has trivial kernel, S −1 exists as an everywhere defined selfadjoint, hence closed operator. The closed graph theorem forces S −1 to be bounded.

Positive selfadjoint operators
A not necessarily densely defined linear operator A acting in a real or complex Hilbert space H is called positive if it satisfies (Ax | x) ≥ 0 for all x ∈ dom A. A positive operator in a complex Hilbert space is automatically symmetric, but this is not the case in real Hilbert spaces. To begin with, we offer a characterization of positive selfadjoint operators (see also [4, Proposition 3.1]).

Theorem 7.1: Let A be a positive linear operator in a real or complex Hilbert space H.
Then the following assertions are equivalent:

(i) A is (densely defined and) selfadjoint. (ii) A is symmetric (i.e. A ∧ A) and ran (I + A) = H.
Proof: If A is positive and selfadjoint, then I + A is a closed operator that is bounded from below, hence it has closed range. By selfadjointness, this implies ran (I + A) = H. Conversely, if ran (I + A) = H, then I + A is a full range symmetric, hence selfadjoint, operator in view of Corollary 6.7. As a result, A = (I + A) − I is selfadjoint as well.
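The lower bound used in the first half of the proof is the standard estimate for a positive operator A:

```latex
\|(I+A)x\|\,\|x\|
  \ge \bigl((I+A)x \mid x\bigr)
  = \|x\|^{2} + (Ax \mid x)
  \ge \|x\|^{2},
  \qquad x \in \operatorname{dom} A,
```

whence $\|(I+A)x\| \ge \|x\|$; a closed operator that is bounded from below has closed range.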
If T is a densely defined closed operator between H and K, then T*T and TT* are both selfadjoint operators in the Hilbert spaces H and K, respectively. It is also known that the domains of the corresponding positive selfadjoint square roots |T| := (T*T)^{1/2} and |T*| := (TT*)^{1/2} satisfy dom |T| = dom T and dom |T*| = dom T*. As an application of Theorem 6.3, we prove below an inequality relating |T| and |T*|. Let T = U|T| denote the polar decomposition of T. Then U|T|U* is a positive symmetric operator with selfadjoint square; consequently, by Theorem 6.3, U|T|U* itself is selfadjoint, and U|T|U* = |T*|, accordingly. For x ∈ dom T and y ∈ dom T*, we have therefore U*y ∈ dom |T| and (Tx | y) = (|T|x | U*y), whence |(Tx | y)| ≤ ‖|T|x‖ ‖|T|U*y‖ = ‖|T|x‖ ‖|T*|y‖, completing the proof.
As was shown in [7], T*T is not necessarily selfadjoint, but it always has a positive selfadjoint extension, namely its smallest one, the so-called Krein–von Neumann extension. Below we give an alternative proof of that statement, constructing a positive selfadjoint extension of T*T by means of the canonical graph projection of H × K onto $\overline{G(T)}$.
which yields y = −z ∈ dom T*. Consequently, G_S ⊆ $\overline{G(T)}$. Next we prove that G_S is the graph of an operator: with this aim let (0, w) ∈ G_S. Then w belongs to dom T* because of the preceding observation. Furthermore, (0, w) ∈ $\overline{G(T)}$ implies 0 = ((0, w) | (−T*z, z)) = (w | z) for all z from dom T*, hence w ∈ (dom T*)⊥ and therefore w = 0. We see now that G_S is the graph of a closable operator, say S. Our next claim is to show that S is densely defined. To this end, consider a vector u from (dom S)⊥ and observe that 0 = ((u, 0) | (x, Sx)) holds for all x from dom S. Hence (u, 0) ∈ G(S)⊥ and, consequently, P(u, 0) = 0, because ((u, 0) | P(u, 0)) = ‖P(u, 0)‖², where P denotes the orthogonal projection of H × K onto $\overline{G(T)}$. This means that (u, 0) belongs to G(T)⊥ and thus (u, 0) = (−T*z, z) for certain z from dom T*. This in turn shows that z = 0 and therefore u = −T*z = 0, as it is claimed. Now we see that S is a densely defined closable operator such that G(S) ⊆ $\overline{G(T)}$. In particular, S*S is a positive symmetric extension of T*S and thus T*S itself is positive and symmetric. According to Theorem 7.1, in order to prove selfadjointness, our only duty is to show that I + T*S has full range. To this aim, consider u from H; then (u, 0) = (x, Sx) + (−T*z, z) for certain x ∈ dom S and z ∈ dom T*. That gives Sx = −z ∈ dom T* and therefore u = x + T*Sx ∈ ran (I + T*S), as it is claimed.
In order to see that T*S extends T*T, let us consider v in dom T*T; then (v + T*Tv, 0) = (v, Tv) + (T*Tv, −Tv), where (v, Tv) ∈ G(T) and (T*Tv, −Tv) ∈ G(T)⊥, whence (v, Tv) = P(v + T*Tv, 0) ∈ G(S). In particular, v ∈ dom S and Sv = Tv. Hence, T*Sv = T*Tv.
For the uniqueness part of the statement, consider an operator R which satisfies conditions (a)-(c), with S replaced by R. By (b), Rx ∈ dom T* for every x ∈ dom R. Hence (T*Rx, −Rx) ∈ W⟨G(T*)⟩ = G(T)⊥ and (x, Rx) ∈ $\overline{G(T)}$ by part (a), and therefore P(x + T*Rx, 0) = (x, Rx), whence we infer that (x, Rx) ∈ G(S) and R ⊆ S, accordingly. For the reverse inclusion, consider u from H. By positive selfadjointness, there is a unique x in dom T*R such that u = x + T*Rx. Just as above, P(u, 0) = (x, Rx) ∈ G(R); since every element of G(S) is of the form P(u, 0) for some u ∈ H, it follows that S ⊆ R. The proof is complete.
We remark that S = T** for closable T can only happen if T is bounded. Indeed, in that case we have ran T** ⊆ dom T*, and hence T is bounded according to [19, Lemma 2.1]. For closed T, we can establish the following result: For the reverse inclusion, let x ∈ H and y ∈ dom T* be such that (x, y) ∈ G(T). Then (x + T*y, 0) = (x, y) + (T*y, −y), where (T*y, −y) ∈ G(T)⊥, whence we get (x, y) = P(x + T*y, 0) ∈ G(S) and x ∈ dom S, accordingly.
Thanks to formula (7.2) describing the domain of S, we are also able to specify the domain of T*S.

A linear operator T is called closable if the closure $\overline{G(T)}$ of its graph is the graph of an operator. A densely defined linear operator T is closable if and only if its adjoint operator T* is densely defined, i.e. dom T* is a dense linear subspace. In that case, the second adjoint operator T** of T exists and its graph G(T**) is just $\overline{G(T)}$. Densely defined closed operators are therefore characterized as being those closable operators T for which the equality T = T** holds true.
One of the most important results concerning densely defined closed operators is due to von Neumann: if T is densely defined and closed, then both T*T and TT* are selfadjoint operators. Recently the authors proved the converse of this statement ([8, Theorem 2.1]): if T*T and TT* are both selfadjoint operators (provided that T* exists), then T must be closed. Below we offer an improved version of this result: Theorem 8.1: Let T be a densely defined linear operator between the Hilbert spaces K and H. Then the following assertions (i)-(vi) are equivalent: Proof: The proof of implication (i)⇒(ii) is similar to that of implication (i)⇒(ii) of Theorem 3.1, so it is left to the reader. The equivalence of (ii) and (iii) is obtained from the corresponding factorization formula. Since T*T and TT* are positive symmetric operators, an immediate application of Theorem 7.1 shows that (iii) and (iv) are equivalent. Implications (ii)⇒(v), (v)⇒(vi) (observe that T* ∧ T and use Theorem 2.2) and (vi)⇒(i) are clear. The proof is therefore complete.
We remark that the equivalence between (i) and (iv) has been established recently by Sandovici [20] for linear relations.

Normal, unitary and orthogonal projection operators
In this section, we apply our foregoing results in order to gain some characterizations of normal and unitary operators as well as orthogonal projections.
To start with, we provide some formally weaker conditions implying normality of an operator N. Recall that a densely defined closed linear operator N in a Hilbert space H is called normal if the selfadjoint operators N*N and NN* are identical. In view of Theorem 8.1 characterizing densely defined closed operators, the definition of normality can be weakened by omitting the closedness assumption on N. The ensuing theorem says that it is moreover enough to assume N*N and NN* to be adjoint of each other: Theorem 9.1: Let N be a densely defined linear operator in a Hilbert space H. The following assertions (i)-(iv) are equivalent: Here we used the fact that N*N and NN* are densely defined symmetric operators.
We proceed to characterizations of unitary operators: (iv) U is densely defined, closed with dense range such that U −1 ⊂ U * .
Proof: From Theorem 2.2, we have (i)⇒(ii)⇒(iii)⇒(iv), so our only duty is to prove that (iv) implies (i). U⁻¹ exists as a densely defined closed operator; furthermore, we have ran U ⊆ dom U*, whence H = dom U* + ran U ⊆ dom U*. (9.1) This means that U* is everywhere defined and continuous, on account of the closed graph theorem. Consequently, U is continuous too and U* = U⁻¹. The proof is therefore complete.
We close the paper by characterizing orthogonal projections (cf. also [3, Corollary 3.7]): Theorem 9.3: Let P be a symmetric linear operator in a Hilbert space H. The following assertions (i)-(iii) are equivalent: (i) P is an orthogonal projection, i.e. P is an everywhere defined bounded operator such that P = P * = P 2 . (ii) G(P) ⊥ ⊆ ran M P,P ∩ ran M P,P 2 . (iii) P is selfadjoint and P 2 ⊂ P.

Proof:
The equivalence between (ii) and (iii) is clear by Theorem 2.2. Furthermore, (i) obviously implies (iii). Finally, if we assume (iii), then we infer that P² = PP* = P, because a selfadjoint operator has no proper selfadjoint extension. It remains to prove that P is continuous. Since we have ran M_{P,P*} = H × H by selfadjointness, and also dom P² = dom P, it follows that H = dom P + ran P* = dom P + ran P ⊆ dom P, i.e. dom P = H. By the closed graph theorem we conclude that P is bounded.

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
Zsigmond Tarcsay was supported by the Hungarian Ministry of Human Capacities [grant number NTP-NFTÖ-17].