Linear Algebra Flashcards

Free Linear Algebra flashcards, exportable to Notion

Learn faster with 47 Linear Algebra flashcards. One-click export to Notion.

Learn fast, memorize everything, master Linear Algebra. No credit card required.

Want to create flashcards from your own textbooks and notes?

Let AI automatically create flashcards from your own textbooks and notes. Upload your PDF, select the pages you want to memorize, and let AI do the rest. One-click export to Notion.

Create Flashcards from my PDFs

Linear Algebra

47 flashcards

A vector is a quantity that has both magnitude and direction. It can be represented geometrically as an arrow in space.
A scalar is a quantity that has only magnitude, no direction. Examples include mass, time, and temperature.
A matrix is a rectangular array of numbers, symbols, or expressions arranged in rows and columns.
The dimension of a vector is the number of components or entries it has.
The order of a matrix is the number of rows and columns it has, usually written as m x n, where m is the number of rows and n is the number of columns.
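To make the dimension and order cards concrete, here is a minimal NumPy sketch (assuming NumPy is available; the array values are arbitrary examples):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])    # a vector with 3 components (dimension 3)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # a 2 x 3 matrix (2 rows, 3 columns)

print(v.shape)  # (3,)
print(A.shape)  # (2, 3)
```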
A linear transformation is a function that maps vectors in one vector space to vectors in another vector space while preserving vector addition and scalar multiplication.
A vector space is a set of vectors that satisfies certain algebraic operations, such as vector addition and scalar multiplication, and follows specific axioms.
The null space of a matrix is the set of all vectors x such that Ax = 0, where A is the matrix.
The rank of a matrix is the dimension of its column space, equivalently the number of linearly independent columns in the matrix.
The determinant of a matrix is a scalar value that is a function of the entries of the matrix. It is used to determine if the matrix is invertible and to solve systems of linear equations.
The transpose of a matrix is a new matrix obtained by interchanging the rows and columns of the original matrix.
The dot product of two vectors is a scalar value obtained by multiplying the corresponding components of the vectors and then summing the products.
The cross product of two vectors in three-dimensional space is a vector that is perpendicular to both vectors and has a magnitude equal to the area of the parallelogram formed by the two vectors.
The Euclidean norm of a vector is the square root of the sum of the squares of its components. It represents the length or magnitude of the vector.
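A short NumPy sketch of the dot product, cross product, and Euclidean norm cards above (the example vectors are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

dot = np.dot(u, w)        # sum of componentwise products: 1*4 + 2*5 + 3*6 = 32
cross = np.cross(u, w)    # vector perpendicular to both u and w
norm = np.linalg.norm(u)  # Euclidean norm: sqrt(1 + 4 + 9)

print(dot, cross, norm)
```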
The inverse of a matrix A is a matrix B such that AB = BA = I, where I is the identity matrix.
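Determinant and inverse can be checked directly in NumPy; a minimal sketch with an arbitrary invertible 2 x 2 matrix:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

det = np.linalg.det(A)    # 4*6 - 7*2 = 10, nonzero, so A is invertible
A_inv = np.linalg.inv(A)  # the matrix B with AB = BA = I

print(np.allclose(A @ A_inv, np.eye(2)))  # True
```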
A basis for a vector space is a set of linearly independent vectors that can be used to express any vector in the space as a linear combination of the basis vectors.
The span of a set of vectors is the set of all possible linear combinations of those vectors.
The Gram-Schmidt process is a method for finding an orthonormal basis for a vector space, given a set of linearly independent vectors.
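A minimal Python sketch of the classical Gram-Schmidt process (the input vectors are arbitrary linearly independent examples; numerically robust code would prefer a stabler variant such as modified Gram-Schmidt or a QR factorization):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn linearly independent vectors
    into an orthonormal set spanning the same subspace."""
    basis = []
    for v in vectors:
        # Subtract the projections onto the orthonormal vectors found so far.
        for q in basis:
            v = v - np.dot(q, v) * q
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)

Q = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0])])
print(np.allclose(Q @ Q.T, np.eye(2)))  # rows are orthonormal: True
```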
A diagonalizable matrix is a square matrix that is similar to a diagonal matrix, meaning it can be written as P^(-1)DP, where D is a diagonal matrix and P is an invertible matrix.
The trace of a square matrix is the sum of its diagonal elements.
A positive definite matrix is a symmetric matrix whose quadratic form is positive for all non-zero vectors.
The singular value decomposition of a matrix is a factorization of the matrix into the product of three matrices: U, Σ, and V^T, where U and V are orthogonal matrices, and Σ is a diagonal matrix.
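NumPy exposes the singular value decomposition directly (np.linalg.svd returns the singular values as a 1-D array); a minimal sketch with an arbitrary 3 x 2 matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s holds the singular values
print(np.allclose(U @ np.diag(s) @ Vt, A))        # reconstruction: True
```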
The LU decomposition is a matrix decomposition that writes a matrix as the product of a lower triangular matrix (L) and an upper triangular matrix (U).
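SciPy provides an LU routine; note that, because it uses partial pivoting, it returns a permutation factor P alongside L and U. A minimal sketch with an arbitrary matrix:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])

P, L, U = lu(A)                   # A = P @ L @ U with L lower and U upper triangular
print(np.allclose(P @ L @ U, A))  # True
```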
An eigenvalue of a square matrix is a scalar value λ such that Av = λv for some non-zero vector v, where v is called the eigenvector corresponding to the eigenvalue λ.
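A minimal NumPy sketch verifying the eigenvalue equation Av = λv for an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)     # columns of eigvecs are the eigenvectors
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))  # Av = lambda * v for each pair: True
```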
An orthogonal matrix is a square matrix whose rows and columns are orthonormal vectors, meaning they are mutually perpendicular and have unit length.
A unitary matrix is a square complex matrix whose inverse is equal to its conjugate transpose.
A rotation matrix is a matrix that rotates a vector or a coordinate system about a fixed origin by a certain angle in a specific plane.
A projection matrix is a matrix that projects a vector onto a subspace of the vector space.
A normal matrix is a square matrix that commutes with its conjugate transpose, meaning AA^H = A^HA, where A^H is the conjugate transpose of A.
A Hermitian matrix is a square matrix that is equal to its conjugate transpose, meaning A = A^H.
A skew-symmetric matrix is a square matrix whose transpose is equal to its negative, meaning A^T = -A.
A Markov matrix is a square matrix that describes the transitions of a Markov chain, with each element representing the probability of transitioning from one state to another.
A stochastic matrix is a square matrix with non-negative entries, and each row sums to 1. It is often used to represent transition probabilities in Markov chains.
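A minimal NumPy sketch of the Markov/stochastic matrix cards: a two-state chain with a row-stochastic transition matrix (the probabilities are arbitrary examples):

```python
import numpy as np

# Row-stochastic transition matrix: entry (i, j) is P(next = j | current = i).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

state = np.array([1.0, 0.0])  # start in state 0 with probability 1
for _ in range(50):
    state = state @ P         # one step of the Markov chain
print(state)                  # approaches the stationary distribution
```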
A nilpotent matrix is a square matrix whose powers eventually become the zero matrix, that is, A^k = 0 for some positive integer k.
An idempotent matrix is a square matrix that, when multiplied by itself, gives itself, meaning A^2 = A.
A permutation matrix is a square matrix that represents a permutation of a sequence or a matrix, having exactly one entry of 1 in each row and column, with the remaining entries being 0.
A Toeplitz matrix is a matrix in which each descending diagonal from left to right is constant, meaning the entries are constant along each diagonal.
A Vandermonde matrix is a matrix with the terms of a geometric progression in each row, often used in polynomial interpolation.
A Householder matrix is a matrix used in the Householder transformation, which reflects a vector across a plane or hyperplane containing the origin.
The Kronecker product, denoted by ⊗, is an operation on two matrices that produces a larger matrix by taking the product of each element of the first matrix with the second matrix.
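NumPy implements the Kronecker product as np.kron; a minimal sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2, dtype=int)

print(np.kron(A, B))  # 4 x 4 block matrix: each a_ij is replaced by a_ij * B
```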
A block matrix is a matrix composed of smaller matrices, or blocks, arranged in a rectangular layout.
A circulant matrix is a square matrix in which each row is a cyclic shift of the row above it, and the columns are also cyclic shifts of the first column.
A Sylvester matrix is a matrix that arises in the study of resultants and is used to solve systems of polynomial equations.
A Hankel matrix is a square matrix in which each ascending skew diagonal from left to right is constant, meaning the entries are constant along each skew diagonal.
A Cauchy matrix is a matrix whose entries have the form 1/(x_i - y_j) for two sequences of numbers x_i and y_j, often used in the study of integral equations.
A Hadamard matrix is a square matrix whose entries are either +1 or -1, and whose rows and columns are mutually orthogonal.
The Khatri-Rao product is a column-wise Kronecker product of two matrices, often used in signal processing and multilinear algebra.
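A minimal Python sketch of the Khatri-Rao product built column by column from np.kron (the helper khatri_rao and the example shapes are illustrative, not a library API):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: both inputs must have the same
    number of columns; column j of the result is kron(A[:, j], B[:, j])."""
    assert A.shape[1] == B.shape[1]
    return np.column_stack([np.kron(A[:, j], B[:, j])
                            for j in range(A.shape[1])])

A = np.arange(1, 5).reshape(2, 2)
B = np.arange(1, 7).reshape(3, 2)
print(khatri_rao(A, B).shape)  # (6, 2): row counts multiply, column count stays
```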