Mathematics and Logic as the Basis of Quantum Computing (Week 6)


Core 2 - Introduction to Quantum Computing


  Rúben André Barreiro


What is Linear Algebra?


  • Linear Algebra is the branch of Mathematics that studies, in detail, Systems of Linear Equations.


  • The Systems of Linear Equations studied in Linear Algebra can be:

    • Algebraic;

    • Differential;


  • Linear Algebra is central to almost all areas of Mathematics.


  • Linear Algebra is fundamental in modern presentations of Geometry, including for defining:

    • Geometric Objects, such as:

      • Point(s) (0 Dimensions);

      • Line(s) (1 Dimension);

      • Plane(s) or Surface(s) (2 Dimensions);

      • Solid Geometric Object(s) (3 Dimensions);


    • Geometric Transformations (also known as T.S.R. Transformations), such as:

      • Translation(s) (Moving a Geometric Object from one location to another);

      • Scaling(s) (Contracting or Expanding a Geometric Object, in terms of size);

      • Rotation(s) (Circular Movement of a Geometric Object around a location);


  • Linear Algebra uses some fundamental concepts and structures of Mathematics, such as:

    • Vectors;

    • Vectorial Spaces;

    • Linear Transformations/Operations;

    • Systems of Linear Equations;

    • Matrices;


  • Linear Algebra is also used in most sciences and fields of engineering, such as:

    • Mathematics;

    • Economics;

    • Physics;

    • Mechanics;

    • Chemistry;

    • Biochemistry;

    • Biology;

    • Electronics;

    • Computing (Computer Science);


  • Linear Algebra allows modeling many natural phenomena, and computing efficiently with such models.

  • For Non-Linear Systems, which cannot be modeled directly with Linear Algebra, First-Order Approximations are often used, taking advantage of the fact that the differential of a Multivariate Function at a point is the Linear Map that best approximates the Function near that point (a short sketch of this idea follows at the end of this section).

  • Many of the basic tools of Linear Algebra, particularly those related to the solution of Systems of Linear Equations, date back to antiquity, more precisely, to around the II Century, although many of these tools were not isolated and considered separately until the XVII and XVIII Centuries.

  • This scientific topic began to take its current form in the middle of the XIX Century, when many notions and methods from previous centuries were abstracted and generalized, becoming the basis for the beginning of Abstract Algebra.

  • The use of such notions and methods in General Relativity, Statistics and Quantum Mechanics did much to spread the topic beyond Pure Mathematics.
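
  • As a minimal illustration of the First-Order Approximation idea above (a hypothetical Python/NumPy sketch added here; the function f and its hand-computed Jacobian are invented for this example):

      import numpy as np

      # Non-linear function f : R^2 -> R^2 (invented for illustration).
      def f(v):
          x, y = v
          return np.array([x ** 2 + y, np.sin(x) * y])

      # Its differential (Jacobian Matrix) at a point: the Linear Map
      # that best approximates f near that point.
      def jacobian_f(v):
          x, y = v
          return np.array([[2 * x,         1.0],
                           [np.cos(x) * y, np.sin(x)]])

      v0 = np.array([1.0, 2.0])       # expansion point
      dv = np.array([0.01, -0.02])    # a small displacement

      exact = f(v0 + dv)
      linear = f(v0) + jacobian_f(v0) @ dv   # First-Order Approximation

      print(np.abs(exact - linear).max())    # tiny error for small dv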





Reviewing Linear Algebra


  • Vector Spaces or Vectorial Spaces:

    • A Vector Space (also called a Vectorial Space or Linear Space) is a collection of objects called Vectors, which may be added/summed together and multiplied ("scaled") by numbers, called Scalars (for example, Real Numbers, Complex Numbers, Rational Numbers, among others);

    • A Vector Space over a Field $ F $ is a Set $ V $ together with 2 (two) Operations that satisfy the 8 (eight) Axioms listed below;


    • Operations/Transformations:

      • Vector Addition ($ + : V \times V \to V $):

        • Takes any 2 (two) Vectors $ v $ and $ w $, and assigns to them a 3rd (third) Vector, commonly written as $ v + w $, called the sum/addition of these 2 (two) Vectors;

        • The resultant Vector is also an Element of the Set $ V $;


      • Scalar Multiplication ($ \cdot : F \times V \to V $):

        • Takes any Scalar $ a $ and any Vector $ v $, and gives another Vector $ av $;

        • The resultant Vector ($ av $) is also an Element of the Set $ V $;


      • Notes:

        • Elements of $ V $ are commonly called Vectors;

        • Elements of $ F $ are commonly called Scalars;


    • Axioms:

      • The Operations/Transformations of Vector Addition and Scalar Multiplication must satisfy certain requirements, called Axioms, described as following:

        • In the following list, let $ u $, $ v $ and $ w $ be arbitrary Vectors in $ V $, and $ a $ and $ b $ Scalars in $ F $:

          • Associativity of Addition: $ u + (v + w) = (u + v) + w $;

          • Commutativity of Addition: $ u + v = v + u $;

          • Identity Element of Addition: there exists an Element $ 0 \in V $, called the Zero Vector, such that $ v + 0 = v $, for all $ v \in V $;

          • Inverse Elements of Addition: for every $ v \in V $, there exists an $ -v \in V $, called the Additive Inverse of $ v $, such that $ v + (-v) = 0 $;

          • Compatibility of Scalar Multiplication with Field Multiplication: $ a(bv) = (ab)v $;

          • Identity Element of Scalar Multiplication: $ 1v = v $, where $ 1 $ (one) denotes the Multiplicative Identity in $ F $;

          • Distributivity of Scalar Multiplication with respect to Vector Addition: $ a(u + v) = au + av $;

          • Distributivity of Scalar Multiplication with respect to Field Addition: $ (a + b)v = av + bv $;


    • There are 2 (two) cases, commonly used in Engineering:

      • When the Scalar Field $ F $ is the Real Numbers $ \mathbb{R} $, the Vector Space is called a Real Vector Space;

      • When the Scalar Field $ F $ is the Complex Numbers $ \mathbb{C} $, the Vector Space is called a Complex Vector Space;


    • The general definition of a Vector Space allows Scalars to be Elements of any Fixed Field $ F $:

      • The notion is then known as a F-Vector Space or a Vector Space over $ F $;

      • A Field is, essentially, a Set of Numbers possessing the addition, subtraction, multiplication and division Operations/Transformations:

        • Real Numbers form a Field;

        • Complex Numbers form a Field;

        • Rational Numbers form a Field;
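
    • As a quick sanity check (a minimal Python/NumPy sketch added for illustration; the sample Vectors and Scalars are arbitrary), the 8 (eight) Axioms above can be verified numerically for Vectors in $ \mathbb{R}^{3} $:

        import numpy as np

        # Sample Vectors in R^3 and Scalars in R (arbitrary choices).
        u = np.array([1.0, -2.0, 0.5])
        v = np.array([3.0,  4.0, -1.0])
        w = np.array([0.0,  1.0,  2.0])
        a, b = 2.0, -3.0

        assert np.allclose(u + (v + w), (u + v) + w)    # Associativity of Addition
        assert np.allclose(u + v, v + u)                # Commutativity of Addition
        assert np.allclose(v + np.zeros(3), v)          # Identity Element of Addition
        assert np.allclose(v + (-v), np.zeros(3))       # Inverse Elements of Addition
        assert np.allclose(a * (b * v), (a * b) * v)    # Compatibility with Field Mult.
        assert np.allclose(1.0 * v, v)                  # Identity of Scalar Mult.
        assert np.allclose(a * (u + v), a * u + a * v)  # Distributivity over Vector Add.
        assert np.allclose((a + b) * v, a * v + b * v)  # Distributivity over Field Add.
        print("All 8 Axioms hold for these sample Vectors and Scalars.")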




  • Matrix:

    • Definition:

      • In Mathematics and Computing, a Matrix is a rectangular array of numbers, symbols, expressions or other mathematical objects, arranged in rows and columns, for which mathematical operations, such as addition and multiplication, are defined.

      • The previously mentioned numbers, symbols, expressions, or other mathematical objects, in the Matrix are called its entries (or, its elements).

      • The horizontal and vertical lines of entries in a Matrix are called rows and columns, respectively.

      • Most commonly, a Matrix over a field $ F $ is a rectangular array of scalars, each of which is a member of $ F $.

      • The best-known types of Matrices are the Real and Complex Matrices, i.e., Matrices whose entries (or, elements) are Real Numbers or Complex Numbers, respectively.


      • For instance, this is a Real Matrix:

        • $$ M_{1} = \begin{bmatrix} 2 & -3 \\ 8 & 5 \\ \end{bmatrix}\ (\ M_{1} \in \mathbb{R}^{2 \times 2} \ ) $$


      • And, on the other hand, for instance, this is a Complex Matrix:

        • $$ M_{2} = \begin{bmatrix} 6 & -1+2i & i \\ 3-i & 7 & 3 \\ \end{bmatrix}\ (\ M_{2} \in \mathbb{C}^{2 \times 3} \ ) $$


    • Applications:

      • A major application of Matrices is to represent Linear Transformations, that is, generalizations of Linear Functions, such as $ f(x) = 4x $.

      • Matrices can also be used to represent Rotations of Vectors in 2D Space and 3D Space (Two-Dimensional Space and Three-Dimensional Space), since Rotations are also Linear Transformations.

      • Another application of Matrices is in the solution of Systems of Linear Equations.

      • The applications of Matrices are found in most scientific fields:

        • Physics:

          • Classical Mechanics:

            • Study of Classical Phenomena, such as, the Motion of Rigid Bodies;

          • Optics;

          • Electromagnetism;

          • Quantum Mechanics;

          • Quantum Electrodynamics;


        • Computer Science and Engineering:

          • Computer Graphics:

            • Manipulation of 3D Objects/Models, and their Projection onto a 2D Screen (i.e., a Two-Dimensional Screen);

          • Artificial Intelligence:

            • Neural Networks;

            • Hidden Markov Models;

          • Machine Learning:

            • Datasets for Training and Learning Processes;

            • Confusion Matrix for Predictions;

          • Computational Game Theory:

            • Games in the Normal Form, represented through a Matrix of Outcomes for the Players involved;

          • Image Processing:

            • Digital Images/Videos, processed and manipulated through Matrices representing their Pixels and other Information (such as, Colors, Luminosity, Hue, Saturation, among others);

          • Computer Vision:

            • Facial Recognition or Recognition of Patterns;

          • Cryptography:

            • Cipher Processes for Encryption and Decryption;


        • Calculus and Numerical Analysis:

          • Generalization of Classical Analytical Notions, such as Derivatives and Exponentials, to Higher Dimensions (such as, Hessian Matrices and Jacobian Matrices, Infinite Matrices representing the Derivative Operator, which acts on the Taylor Series of a function, among others);


        • Probability Theory and Statistics:

          • Stochastic Matrices for description of Sets of Probabilities;


        • Economics:

          • Description of Systems of Economic Relationships;


    • Notation:

      • Matrices are commonly represented through Box Brackets or Parentheses, with $ r $ rows and $ c $ columns:

        • $$ M = \begin{bmatrix} m_{1,1} & m_{1,2} & \cdots & m_{1,c} \\ m_{2,1} & m_{2,2} & \cdots & m_{2,c} \\ \vdots & \vdots & \ddots & \vdots \\ m_{r,1} & m_{r,2} & \cdots & m_{r,c} \\ \end{bmatrix} = \begin{pmatrix} m_{1,1} & m_{1,2} & \cdots & m_{1,c} \\ m_{2,1} & m_{2,2} & \cdots & m_{2,c} \\ \vdots & \vdots & \ddots & \vdots \\ m_{r,1} & m_{r,2} & \cdots & m_{r,c} \\ \end{pmatrix}\ (\ M \in \mathbb{R}^{r \times c},\ m_{i,j} \in \mathbb{R} \ ) $$



      • Usually, Matrices are symbolized using upper-case letters, such as $ M $, $ A $ or $ B $, for example.

      • The entries (or, elements) of a Matrix are commonly represented by the corresponding lower-case letter, with two subscript indices denoting the number of the row and the number of the column where that entry (or, element) is placed in the Matrix, as, for example, $ m_{1,1} $, $ a_{2,3} $ or $ b_{3,5} $.

      • The entry (or, element) in the $ i $-th row and $ j $-th column of a Matrix is sometimes referred to as the $ i,j $, $ (i,j) $ or $ (i,j)^{th} $ entry (or, element) of the Matrix.

      • For example, given the following Matrix $ M $:

        • $$ M = \begin{bmatrix} 6 & 2 & -5 & 1 \\ 3 & 7 & 2 & 9 \\ 2 & 2 & -4 & 0 \\ \end{bmatrix}\ (\ M \in \mathbb{R}^{3 \times 4} \ ) $$

        • $ m_{2,1} = 3 $

        • $ m_{3,2} = 2 $

        • $ m_{1,3} = -5 $
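
      • The same entries can be checked with a small Python/NumPy sketch (an added illustration; note that NumPy indices are 0-based, while the mathematical indices above are 1-based):

          import numpy as np

          # The example Matrix M, as a NumPy array.
          M = np.array([[6, 2, -5, 1],
                        [3, 7,  2, 9],
                        [2, 2, -4, 0]])

          # NumPy is 0-based, so m_{i,j} corresponds to M[i - 1, j - 1].
          print(M[2 - 1, 1 - 1])  # m_{2,1} = 3
          print(M[3 - 1, 2 - 1])  # m_{3,2} = 2
          print(M[1 - 1, 3 - 1])  # m_{1,3} = -5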


    • Dimensions (or, Size):

      • By convention, the dimensions of a Matrix are described by the notation $ m \times n $, where $ m $ is the number of rows (or, #rows) and $ n $ is the number of columns (or, #columns); sometimes, a notation similar to $ M_{m \times n} $ is used, for a Matrix $ M $ with $ m $ rows and $ n $ columns.


      • For example, the dimension of the previously mentioned Complex Matrix $ M $ is $ 2 \times 3 $ ("two by three"), because there are two rows and three columns:

        • $$ M = \begin{bmatrix} 6 & -1+2i & i \\ 3-i & 7 & 3 \\ \end{bmatrix}\ (\ M \in \mathbb{C}^{2 \times 3} \ ) $$


      • Special Cases:

        • Row Vector (or, Row Matrix):

          • Matrices with a single Row (i.e., with dimension $ 1 \times n $) are sometimes used to represent a Vector and are called Row Vector (or, Row Matrix):

            • $$ M_{1} = \begin{bmatrix} 4 & 6 & 8 \\ \end{bmatrix}\ (\ M_{1} \in \mathbb{R}^{1 \times 3} \ ) $$


        • Column Vector (or, Column Matrix):

          • Matrices with a single Column (i.e., with dimension $ n \times 1 $) are sometimes used to represent a Vector and are called Column Vector (or, Column Matrix):

            • $$ M_{2} = \begin{bmatrix} 1 \\ 4 \\ 7 \\ \end{bmatrix}\ (\ M_{2} \in \mathbb{R}^{3 \times 1} \ ) $$


        • Square Matrix:

          • Matrices with the same number of Rows and Columns (i.e., with dimension $ n \times n $) are sometimes used to represent a Linear Transformation from a Vector Space to itself, such as Reflection, Rotation or Shearing:

            • $$ M_{3} = \begin{bmatrix} 1 & 3 & 5 \\ 8 & 5 & 2 \\ 7 & 1 & 4 \\ \end{bmatrix}\ (\ M_{3} \in \mathbb{R}^{3 \times 3} \ ) $$


        • Empty Matrix:

          • Matrices with no Rows or no Columns (i.e., with dimension $ 0 \times n $ or $ n \times 0 $) are sometimes used in some contexts, such as, Computer Algebra Programs:

            • $$ M_{4} = \begin{bmatrix} \end{bmatrix}\ (\ M_{4} \in \mathbb{R}^{0 \times 0} \ ) $$


        • Infinite Matrix:

          • Matrices with an infinite number of Rows or an infinite number of Columns (i.e., with dimension $ \infty \times n $, $ m \times \infty $ or $ \infty \times \infty $) are sometimes used in some contexts, such as, Atomic Theory or Planetary Theory:

            • $$ M_{5} = \begin{bmatrix} {m_{5}}_{[1,1]} & {m_{5}}_{[1,2]} & \cdots & {m_{5}}_{[1,\infty]} \\ {m_{5}}_{[2,1]} & {m_{5}}_{[2,2]} & \cdots & {m_{5}}_{[2,\infty]} \\ \vdots & \vdots & \ddots & \vdots \\ {m_{5}}_{[\infty,1]} & {m_{5}}_{[\infty,2]} & \cdots & {m_{5}}_{[\infty,\infty]} \\ \end{bmatrix}\ (\ M_{5} \in \mathbb{R}^{\infty \times \infty} \ ) $$
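
        • The Special Cases above (except the Infinite Matrix) can be illustrated with a small Python/NumPy sketch (added here for illustration), where the shape of an array follows the same (#rows, #columns) convention:

            import numpy as np

            row_vector = np.array([[4, 6, 8]])         # shape (1, 3): Row Vector
            column_vector = np.array([[1], [4], [7]])  # shape (3, 1): Column Vector
            square = np.array([[1, 3, 5],
                               [8, 5, 2],
                               [7, 1, 4]])             # shape (3, 3): Square Matrix
            empty = np.empty((0, 0))                   # shape (0, 0): Empty Matrix

            for m in (row_vector, column_vector, square, empty):
                print(m.shape)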



  • If two Matrices have the same dimension (or, size), i.e., each one has the same number of rows and the same number of columns as the other, then those two Matrices can be added or subtracted, element by element.


  • For instance, let $ M_{3} $ and $ M_{4} $ be the following Real Matrices:

    • $$ M_{3} = \begin{bmatrix} 6 & -8 \\ 0 & 2 \\ \end{bmatrix}\ (\ M_{3} \in \mathbb{R}^{2 \times 2} \ ) $$

    • $$ M_{4} = \begin{bmatrix} 1 & -1 \\ 9 & 5 \\ \end{bmatrix}\ (\ M_{4} \in \mathbb{R}^{2 \times 2} \ ) $$


  • Performing the Addition operation, between the Matrices $ M_{3} $ and $ M_{4} $, it will result in the following Matrix $ M_{5} $:

    • $$ M_{5} = M_{3} + M_{4} = \begin{bmatrix} 6 & -8 \\ 0 & 2 \\ \end{bmatrix} + \begin{bmatrix} 1 & -1 \\ 9 & 5 \\ \end{bmatrix} = \begin{bmatrix} (6 + 1) & (-8 + (-1)) \\ (0 + 9) & (2 + 5) \\ \end{bmatrix} = \begin{bmatrix} 7 & -9 \\ 9 & 7 \\ \end{bmatrix}\ (\ M_{3},\ M_{4},\ M_{5} \in \mathbb{R}^{2 \times 2} \ ) $$


  • Performing the Subtraction operation, between the Matrices $ M_{3} $ and $ M_{4} $, it will result in the following Matrix $ M_{6} $:

    • $$ M_{6} = M_{3} - M_{4} = \begin{bmatrix} 6 & -8 \\ 0 & 2 \\ \end{bmatrix} - \begin{bmatrix} 1 & -1 \\ 9 & 5 \\ \end{bmatrix} = \begin{bmatrix} (6 - 1) & (-8 - (-1)) \\ (0 - 9) & (2 - 5) \\ \end{bmatrix} = \begin{bmatrix} 5 & -7 \\ -9 & -3 \\ \end{bmatrix}\ (\ M_{3},\ M_{4},\ M_{6} \in \mathbb{R}^{2 \times 2} \ ) $$
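
  • A small Python/NumPy sketch (an added illustration) reproduces both results; NumPy only allows these element-wise operations when the shapes match, exactly as described above:

      import numpy as np

      M3 = np.array([[6, -8],
                     [0,  2]])
      M4 = np.array([[1, -1],
                     [9,  5]])

      print(M3 + M4)   # [[ 7 -9]
                       #  [ 9  7]]  -> the Matrix M_5 above
      print(M3 - M4)   # [[ 5 -7]
                       #  [-9 -3]]  -> the Matrix M_6 above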


  • In order to perform a Multiplication operation between two Matrices, the rules change a little: two Matrices can be multiplied only if the number of columns of the first Matrix is equal to the number of rows of the second Matrix, i.e., if the "inner dimensions" are the same.

  • Or, in other words, the Multiplication operation between two Matrices $ M_{1} $ and $ M_{2} $ is possible, if and only if, $ M_{1} $ and $ M_{2} $ have dimensions $ m \times n $ and $ n \times p $, respectively.

  • If the previously mentioned rule is verified, the Matrix Product between $ M_{1} $ and $ M_{2} $ is possible, resulting in a new Matrix $ M_{3} $ with the "outer dimensions", i.e., with dimension $ m \times p $.

  • On the other hand, if the previously mentioned rule is not verified, there is no Matrix Product between $ M_{1} $ and $ M_{2} $.

  • This property hints that Matrix Multiplication is not commutative.

  • After verifying that the previously mentioned rule holds, the Matrix Multiplication is performed by taking the dot product between the rows of the 1st (left) Matrix and the columns of the 2nd (right) Matrix.



  • In general terms, the Matrix Multiplication is formulated by the following definition:

    • Let $ A $ be an $ m \times n $ Matrix:

      • $$ A = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \\ \end{bmatrix} $$


    • Let $ B $ be an $ n \times p $ Matrix:

      • $$ B = \begin{bmatrix} b_{1,1} & b_{1,2} & \cdots & b_{1,p} \\ b_{2,1} & b_{2,2} & \cdots & b_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n,1} & b_{n,2} & \cdots & b_{n,p} \\ \end{bmatrix} $$


    • And, let $ C $ be the resultant Matrix from the Multiplication of the Matrices $ A $ and $ B $, with dimensions $ m \times p $:

      • $$ C = \begin{bmatrix} c_{1,1} & c_{1,2} & \cdots & c_{1,p} \\ c_{2,1} & c_{2,2} & \cdots & c_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m,1} & c_{m,2} & \cdots & c_{m,p} \\ \end{bmatrix} $$


    • where any entry (or, element) of $ C $ is defined as follows:

      • $$ c_{i,j} = \sum_{k = 1}^{n} a_{i,k} \times b_{k,j} = [\ ( a_{i,1} \times b_{1,j} ) + ( a_{i,2} \times b_{2,j} ) + \cdots + ( a_{i,n} \times b_{n,j} )\ ],\ \text{for}\ i = 1,\ \cdots,\ m\ \text{and}\ j = 1,\ \cdots,\ p. $$
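
    • The definition above translates directly into code; the following pure-Python sketch (added for illustration; the function name matmul is hypothetical) computes each entry $ c_{i,j} $ as the sum over $ k $:

        # Pure-Python sketch of the definition: c_{i,j} = sum_k a_{i,k} * b_{k,j}.
        def matmul(A, B):
            m, n = len(A), len(A[0])
            assert n == len(B), "inner dimensions must match"
            p = len(B[0])
            C = [[0] * p for _ in range(m)]  # result has the "outer dimensions" m x p
            for i in range(m):
                for j in range(p):
                    for k in range(n):
                        C[i][j] += A[i][k] * B[k][j]
            return C

        print(matmul([[6, 2, 1], [3, 4, 2]],
                     [[5, 7, 3, 0], [0, 2, 1, 2], [1, 4, 6, 3]]))
        # [[31, 50, 26, 7], [17, 37, 25, 14]] -- matches the worked example below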



  • For example, one possible Matrix Multiplication can be demonstrated as follows:

    • Given the following Real Matrices $ M_{1} $ and $ M_{2} $, of dimensions (or, sizes) $ 2 \times 3 $ and $ 3 \times 4 $, respectively:

      • $$ M_{1} = \begin{bmatrix} 6 & 2 & 1 \\ 3 & 4 & 2 \\ \end{bmatrix}\ (\ M_{1} \in \mathbb{R}^{2 \times 3} \ ) $$

      • $$ M_{2} = \begin{bmatrix} 5 & 7 & 3 & 0 \\ 0 & 2 & 1 & 2 \\ 1 & 4 & 6 & 3 \\ \end{bmatrix}\ (\ M_{2} \in \mathbb{R}^{3 \times 4} \ ) $$


    • And, let $ M_{3} $ be the Matrix resultant from the Matrix Multiplication between $ M_{1} $ and $ M_{2} $.

    • It's possible to calculate $ M_{3} $, because the Matrices $ M_{1} $ and $ M_{2} $ have matching "inner dimensions", i.e., an "inner dimension" of $ 3 $.

    • And, the resultant Matrix $ M_{3} $ will have the "outer dimensions" of the Matrices $ M_{1} $ and $ M_{2} $, i.e., dimension $ 2 \times 4 $.


    • Thus, the resultant Matrix $ M_{3} $ is calculated as follows:

      • $$ M_{3} = M_{1} \times M_{2} = \begin{bmatrix} 6 & 2 & 1 \\ 3 & 4 & 2 \\ \end{bmatrix} \times \begin{bmatrix} 5 & 7 & 3 & 0 \\ 0 & 2 & 1 & 2 \\ 1 & 4 & 6 & 3 \\ \end{bmatrix} = $$ $$ = \begin{bmatrix} [\ (6 \times 5) + (2 \times 0) + (1 \times 1)\ ] & [\ (6 \times 7) + (2 \times 2) + (1 \times 4)\ ] & [\ (6 \times 3) + (2 \times 1) + (1 \times 6)\ ] & [\ (6 \times 0) + (2 \times 2) + (1 \times 3)\ ] \\ [\ (3 \times 5) + (4 \times 0) + (2 \times 1)\ ] & [\ (3 \times 7) + (4 \times 2) + (2 \times 4)\ ] & [\ (3 \times 3) + (4 \times 1) + (2 \times 6)\ ] & [\ (3 \times 0) + (4 \times 2) + (2 \times 3)\ ] \\ \end{bmatrix} = $$ $$ = \begin{bmatrix} (30 + 0 + 1) & (42 + 4 + 4) & (18 + 2 + 6) & (0 + 4 + 3) \\ (15 + 0 + 2) & (21 + 8 + 8) & (9 + 4 + 12) & (0 + 8 + 6) \\ \end{bmatrix} = \begin{bmatrix} 31 & 50 & 26 & 7 \\ 17 & 37 & 25 & 14 \\ \end{bmatrix}\ (\ M_{1} \in \mathbb{R}^{2 \times 3},\ M_{2} \in \mathbb{R}^{3 \times 4},\ M_{3} \in \mathbb{R}^{2 \times 4} \ ) $$
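
    • The worked example can also be verified with a short Python/NumPy sketch (added for illustration), using NumPy's built-in @ operator for Matrix Multiplication:

        import numpy as np

        M1 = np.array([[6, 2, 1],
                       [3, 4, 2]])
        M2 = np.array([[5, 7, 3, 0],
                       [0, 2, 1, 2],
                       [1, 4, 6, 3]])

        M3 = M1 @ M2
        print(M3)          # [[31 50 26  7]
                           #  [17 37 25 14]]
        print(M3.shape)    # (2, 4): the "outer dimensions" 2 x 4
        # M2 @ M1 would raise an error here (4 != 2), another hint that
        # Matrix Multiplication is not commutative.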



  • Any Matrix can be multiplied element-wise by a scalar from its associated field $ F $.

  • The individual items in an $ m \times n $ Matrix $ M $, often denoted by $ m_{i,j} $, where $ i $ and $ j $ usually vary from $ 1 $ to $ m $ and from $ 1 $ to $ n $, respectively, are called its entries (or, elements).

  • In order to represent an entry (or, element) of the result of a Matrix Operation, the indices of the entry (or, element) are often attached to the parenthesized (or, bracketed) Matrix expression.


  • For example, given the Matrices $ M_{1} $ and $ M_{2} $:

    • $ (M_{1} + M_{2})_{i,j} $, refers to an element of a Matrix Addition, between $ M_{1} $ and $ M_{2} $;

    • $ (M_{1} - M_{2})_{i,j} $, refers to an element of a Matrix Subtraction, between $ M_{1} $ and $ M_{2} $;

    • $ (M_{1} \cdot M_{2})_{i,j} $, refers to an element of a Matrix Multiplication, between $ M_{1} $ and $ M_{2} $;


  • In the context of abstract index notation, this notation may also refer, ambiguously, to the whole Matrix Operation.


  • Once again, given the Matrices $ M_{1} $ and $ M_{2} $:

    • $ (M_{1} + M_{2}) $, refers to the whole Matrix Addition, between $ M_{1} $ and $ M_{2} $;

    • $ (M_{1} - M_{2}) $, refers to the whole Matrix Subtraction, between $ M_{1} $ and $ M_{2} $;

    • $ (M_{1} \cdot M_{2}) $, refers to the whole Matrix Multiplication, between $ M_{1} $ and $ M_{2} $;


  • In Linear Algebra, it is not possible to perform a Matrix Division "directly", i.e., given two Matrices $ M_{1} $ and $ M_{2} $, there is no "direct" Matrix Operation like $ M_{1} \div M_{2} $.


  • In these situations, the correct procedure is to follow these steps:

    1. Calculate the Inverse Matrix of the Matrix corresponding to the denominator of the supposed "Matrix Division";

    2. Calculate the Matrix Multiplication between the Matrix corresponding to the numerator and the Inverse Matrix of the Matrix corresponding to the denominator of the supposed "Matrix Division".


  • Thus, given two Matrices $ M_{1} $ and $ M_{2} $, letting $ M_{3} $ be the Matrix resultant from the Matrix Division between $ M_{1} $ and $ M_{2} $, $ M_{3} $ is calculated as:

    • $ M_{3} = M_{1} \cdot {(M_{2})}^{-1} $;
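
  • A small Python/NumPy sketch (added for illustration; the sample Matrices are arbitrary, and $ M_{2} $ must be invertible, i.e., square with non-zero determinant) follows exactly these steps:

      import numpy as np

      M1 = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
      M2 = np.array([[2.0, 0.0],
                     [1.0, 2.0]])    # invertible: non-zero determinant

      M3 = M1 @ np.linalg.inv(M2)    # M_3 = M_1 * (M_2)^{-1}
      print(M3)

      # In numerical practice, solving a linear system is usually preferred
      # over forming the inverse explicitly; X * M2 = M1 is equivalent to
      # (M2)^T X^T = (M1)^T:
      M3_alt = np.linalg.solve(M2.T, M1.T).T
      print(np.allclose(M3, M3_alt))   # True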


  • Given a Rotation Matrix $ R $ and a Column Vector $ v $ (i.e., a Matrix with only one column), describing the Position of a Point in a 2D Plane, or in a 3D Space (i.e., in the Euclidean Space):

    • The Product $ Rv $ (i.e., the Multiplication between the Rotation Matrix and the Column Vector) will result in a new Column Vector, describing the Position of that Point after the respective Rotation Transformation is applied;


  • In Linear Algebra, for the 2D case, this Rotation Matrix $ R $ is defined, by convention, as the following Matrix, for a counterclockwise Rotation by an angle $ \theta $:

    • $$ R = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix} $$
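
  • A final Python/NumPy sketch (added for illustration) applies this Rotation Matrix to a Column Vector, rotating the Point $ (1, 0) $ by $ \theta = 90° $ counterclockwise into the Point $ (0, 1) $:

      import numpy as np

      theta = np.pi / 2                   # 90 degrees, in radians
      R = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
      v = np.array([[1.0],
                    [0.0]])               # Column Vector (2 x 1 Matrix)

      print(np.round(R @ v, 10))          # [[0.]
                                          #  [1.]]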



*** © Rúben André Barreiro - Learning Quantum Computing (Online Web Course) - All rights reserved ***