Linear Algebra and Linear Models (Universitext)
R. B. Bapat
This book provides a rigorous introduction to the basic aspects of the theory of linear estimation and hypothesis testing, covering the necessary prerequisites in matrices, multivariate normal distribution and distributions of quadratic forms along the way. It will appeal to advanced undergraduate and first-year graduate students, research mathematicians and statisticians.
equals and the proof is complete. □

We may rewrite (5.3) as (5.1) for any positive definite matrix B. An alternative proof of (5.1) can be given using the Cauchy–Schwarz inequality.

5.2 Singular Values

Let A be an n×n matrix. The singular values of A are defined to be the eigenvalues of (A′A)^{1/2}. Since A′A is positive semidefinite, the singular values are nonnegative and we denote them by σ₁(A) ≥ ⋯ ≥ σₙ(A). If there is no possibility of confusion then we will denote the singular values of A simply by σ₁ ≥ ⋯ ≥ σₙ.
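As a numerical sanity check (a NumPy sketch, not part of the text), the definition above can be compared against a library SVD: the eigenvalues of (A′A)^{1/2}, i.e. the square roots of the eigenvalues of A′A, are exactly the singular values that `np.linalg.svd` reports.

```python
import numpy as np

# Check: the singular values of A are the eigenvalues of (A'A)^{1/2},
# i.e. the square roots of the eigenvalues of A'A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# A'A is positive semidefinite, so its eigenvalues are nonnegative.
eigvals = np.linalg.eigvalsh(A.T @ A)      # ascending order
sigma_from_def = np.sqrt(eigvals)[::-1]    # sigma_1 >= ... >= sigma_n

sigma_from_svd = np.linalg.svd(A, compute_uv=False)  # also descending

assert np.allclose(sigma_from_def, sigma_from_svd)
```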
density of y₁, that is, a N(μ₁, Σ₁₁) density, to get the conditional density of y₂ given y₁. It turns out that the conditional distribution of y₂ given y₁ is multivariate normal with mean vector μ₂ + Σ₂₁Σ₁₁⁻¹(y₁ − μ₁) and dispersion matrix Σ₂₂ − Σ₂₁Σ₁₁⁻¹Σ₁₂.

8.2 Quadratic Forms and Cochran's Theorem

8.3 Let y ∼ N(0, Iₙ) and let A be a symmetric n×n matrix. Then y′Ay has a chi-square distribution with r degrees of freedom if and only if A is idempotent and rank A = r.

Proof If A is idempotent with rank r, then there exists an
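The "if" direction of the theorem on quadratic forms can be illustrated by simulation (a hedged sketch, not the book's proof): for a symmetric idempotent A of rank r, the sample moments of y′Ay should match those of a chi-square variable with r degrees of freedom, which has mean r and variance 2r.

```python
import numpy as np

# Monte Carlo illustration: y ~ N(0, I_n), A symmetric idempotent of
# rank r, then y'Ay ~ chi-square with r degrees of freedom.
rng = np.random.default_rng(1)
n, r = 6, 3

# A symmetric idempotent A of rank r: the orthogonal projection onto
# the column space of a random n x r matrix X.
X = rng.standard_normal((n, r))
A = X @ np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(A, A.T) and np.allclose(A @ A, A)  # symmetric, idempotent
assert round(np.trace(A)) == r                        # rank = trace = r

y = rng.standard_normal((100_000, n))
q = np.einsum('ij,jk,ik->i', y, A, y)                 # y'Ay for each sample

# chi-square(r) has mean r and variance 2r.
assert abs(q.mean() - r) < 0.1
assert abs(q.var() - 2 * r) < 0.3
```

Taking A to be an orthogonal projection is the simplest way to produce a symmetric idempotent matrix of prescribed rank, since its trace (and hence its rank) equals r by construction.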
and E(SSE). We have expressions for SSA and SSE as quadratic forms in the observations. Using the assumptions in the model, the expectations E(SSA) and E(SSE) can be computed. Solving the two resulting equations we get the estimates of the two variance components. The estimate of the error variance is necessarily nonnegative. However, the estimate of the other variance component may be negative, in which case we take it to be zero. It is easy to see that both estimates are unbiased. We may compute E(SSA) and E(SSE) using the expression for the expectation of a quadratic form. We compute E(SSE) using this approach. Let y be the n×1 vector
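The displayed equations of this derivation are elided in the text, so the following sketch assumes the standard balanced one-way random effects model y_ij = μ + a_i + e_ij, with a_i ∼ N(0, σₐ²) and e_ij ∼ N(0, σₑ²) independent; the group count k, replicate count n, and variable names are assumptions, not the book's notation. It shows the moment estimates described above, including the truncation of a negative variance estimate to zero.

```python
import numpy as np

# Assumed model (not recoverable from the text): y_ij = mu + a_i + e_ij,
# i = 1..k groups, j = 1..n replicates, a_i ~ N(0, sa2), e_ij ~ N(0, se2).
rng = np.random.default_rng(2)
k, n = 30, 8
sa2, se2, mu = 4.0, 1.0, 10.0

a = rng.normal(0.0, np.sqrt(sa2), size=(k, 1))
y = mu + a + rng.normal(0.0, np.sqrt(se2), size=(k, n))

group_means = y.mean(axis=1, keepdims=True)
grand_mean = y.mean()

sse = ((y - group_means) ** 2).sum()                # within-group SS
ssa = n * ((group_means - grand_mean) ** 2).sum()   # between-group SS

# E(SSE) = k(n-1) se2 and E(SSA) = (k-1)(se2 + n*sa2); solving the two
# moment equations gives the unbiased estimates:
se2_hat = sse / (k * (n - 1))
sa2_hat = (ssa / (k - 1) - se2_hat) / n

# As the text notes, se2_hat is necessarily nonnegative, but sa2_hat may
# come out negative, in which case it is taken to be zero.
sa2_hat = max(sa2_hat, 0.0)
```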
a₁₁ + a₁₃ = a₂₂ + a₃₁.

9. If S and T are vector spaces, are S∪T and S∩T vector spaces as well?

10. For any matrix A, show that A = 0 if and only if trace(A′A) = 0.

11. Let A be a square matrix. Prove that the following conditions are equivalent: (i) A = A′. (ii) A² = AA′. (iii) . (iv) A² = A′A. (v) .

12. Let A be a square matrix with all row sums equal to 1. If AA′ = A′A, then show that the column sums of A are also equal to 1.

13. Verify that each of the following sets is a vector space and find its dimension: (i)