Section 1.11 Operations in Quantum Computation
Subsection 1.11.1 Outer Product
The tensor product is only one of many operations on qubits. In Subsection 1.5.4 we described how the inner product could be represented as the product of a vector's transpose and the vector itself. Since the transpose of an \(n\)-dimensional vector is \(1\times n\) and the vector is \(n\times1\text{,}\) the resulting product is \(1\times1\text{,}\) which is functionally equivalent to a scalar. If we reverse the order of the multiplication and place a column vector on the left and a row vector on the right, we multiply an \(n\times1\) vector by a \(1\times m\) vector to produce an \(n\times m\) matrix. More specifically, we multiply a vector on the left by another vector's adjoint (complex conjugate transpose) on the right. This operation is known as the outer product. The outer product of two states \(\ket{\psi}\) and \(\ket{\phi}\) is shown below.
\begin{equation*}
\ket{\psi} = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}, \ket{\phi} = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_m \end{pmatrix}
\end{equation*}
\begin{equation*}
\ket{\psi}\bra{\phi} = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} \begin{pmatrix} \beta_1^* & \beta_2^* & \ldots & \beta_m^* \end{pmatrix} = \begin{pmatrix} \alpha_1 \beta_1^* & \alpha_1 \beta_2^* & \ldots &\alpha_1\beta_m^*\\ \alpha_2 \beta_1^* & \alpha_2 \beta_2^* &\ldots &\alpha_2\beta_m^*\\ \vdots&\vdots&\ddots&\vdots\\ \alpha_n \beta_1^* & \alpha_n \beta_2^* &\ldots &\alpha_n\beta_m^*\\ \end{pmatrix}
\end{equation*}
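As a quick numerical illustration (a sketch only; the example amplitudes below are chosen arbitrarily and NumPy is used just for verification), the outer product can be computed by multiplying a column vector by the conjugate of a row vector:

import numpy as np

# arbitrary example states |psi> and |phi>
psi = np.array([1/np.sqrt(2), 1j/np.sqrt(2)])
phi = np.array([1.0, 0.0])

# |psi><phi| : the column vector psi times the conjugated row vector (bra) of phi
outer = np.outer(psi, phi.conj())
print(outer)   # entries are alpha_i * beta_j^*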
Subsection 1.11.2 Completeness Relation
If a set of basis vectors for a quantum system \(\{ \ket{\mathcal{B}_1},\ket{\mathcal{B}_2},\ldots,\ket{\mathcal{B}_n} \}\) has the property that the sum of the outer products of each basis vector with itself is equal to the identity matrix:
\begin{equation*}
\sum_{i=1}^{n} \ket{\mathcal{B}_i}\bra{\mathcal{B}_i}= I
\end{equation*}
then that set is said to satisfy a completeness relation. In general, any orthonormal basis satisfies a completeness relation. Since this sum of outer products equals the identity operator, we can insert it into any vector expression without changing the expression's value.
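For instance, a short numerical check (a sketch assuming the four-dimensional computational basis as the example basis) confirms that the outer products of an orthonormal basis sum to the identity:

import numpy as np

# the computational basis of a 4-dimensional space (two qubits)
basis = [np.eye(4)[:, i] for i in range(4)]

# sum of |B_i><B_i| over all basis vectors
completeness = sum(np.outer(b, b.conj()) for b in basis)
print(np.allclose(completeness, np.eye(4)))   # True: the sum equals the identity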
Suppose \(\ket{\psi}\) is a vector in a space \(V\) and \(\{ \ket{\phi_1},\ket{\phi_2},\ldots,\ket{\phi_n} \}\) is an orthonormal basis of \(V\) with a completeness relation. We can then use the completeness relation as follows
\begin{equation*}
\begin{split} \ket{\psi} &= I \ket{\psi} \\ &= \sum_{i=1}^n \ket{\phi_i} \bra{\phi_i} \ket{\psi} \\ &= \ket{\phi_1} \bra{\phi_1} \ket{\psi} + \ket{\phi_2} \bra{\phi_2} \ket{\psi} + \ldots + \ket{\phi_n} \bra{\phi_n} \ket{\psi} \\ &= \alpha_1 \ket{\phi_1} + \alpha_2 \ket{\phi_2} + \ldots + \alpha_n \ket{\phi_n} \end{split}
\end{equation*}
where each \(\alpha_j = \braket{\phi_j | \psi}\) represents the component of the vector \(\ket{\psi}\) in the direction of the basis vector \(\ket{\phi_j}\text{.}\) Thus the completeness relation can be used to decompose a vector into its components along the basis elements.
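The same decomposition can be mirrored numerically. In this sketch the Hadamard basis \(\{\ket{+},\ket{-}\}\) plays the role of \(\{\ket{\phi_i}\}\) and the state \(\ket{\psi}\) is an arbitrary example:

import numpy as np

plus  = np.array([1, 1]) / np.sqrt(2)    # |+>
minus = np.array([1, -1]) / np.sqrt(2)   # |->
psi   = np.array([0.6, 0.8])             # arbitrary normalized example state

# components alpha_j = <phi_j|psi>
alphas = [np.vdot(b, psi) for b in (plus, minus)]

# rebuild |psi> as alpha_1|phi_1> + alpha_2|phi_2>
rebuilt = sum(a * b for a, b in zip(alphas, (plus, minus)))
print(np.allclose(rebuilt, psi))   # True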
Subsection 1.11.3 Inner Product
Here we will restate the inner product in bra-ket notation and describe some additional properties. For two states \(\ket{a}\) and \(\ket{b}\)
\begin{equation*}
\ket{a}= \begin{pmatrix}a_{1} \\a_{2} \\\vdots \\a_{n}\end{pmatrix}, \ket{b}=\begin{pmatrix}b_{1} \\b_{2} \\\vdots \\b_{n} \end{pmatrix}
\end{equation*}
their inner product is given by
\begin{equation*}
\braket{a | b} = \sum_{k=1}^{n} a_{k}^{*} b_{k}
\end{equation*}
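In NumPy this corresponds to np.vdot, which conjugates its first argument (a sketch with arbitrary example vectors):

import numpy as np

a = np.array([1/np.sqrt(2), 1j/np.sqrt(2)])
b = np.array([1.0, 0.0])

# <a|b> = sum_k a_k^* b_k ; np.vdot conjugates its first argument
print(np.vdot(a, b))
print(np.allclose(np.vdot(a, b), np.sum(a.conj() * b)))   # True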
We now define
\begin{equation*}
\delta_{k j}= \Bigg \{ \begin{array}{ll} 1, & \text { if } k=j \\ 0, & \text { if } k\neq j \end{array}
\end{equation*}
We call \(\delta_{k j}\) the Kronecker delta. It is a compact way to express a quantity that is equal to 0 unless the indices satisfy \(k=j\text{,}\) in which case it is equal to 1. For an orthonormal basis \(\{ \ket{\beta_1}, \ket{\beta_2}, \ldots, \ket{\beta_n} \}\) we have the property
\begin{equation*}
\braket{\beta_k | \beta_j} = \delta_{kj}
\end{equation*}
That is, for any two vectors in the basis, their inner product is 0, unless the two vectors are the same, in which case their inner product is 1.
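As a small sketch, we can verify this property for the Hadamard basis (an assumed example of an orthonormal basis):

import numpy as np

basis = [np.array([1, 1]) / np.sqrt(2),    # |+>
         np.array([1, -1]) / np.sqrt(2)]   # |->

# <beta_k|beta_j> should equal the Kronecker delta
for k, bk in enumerate(basis):
    for j, bj in enumerate(basis):
        print(k, j, int(np.round(np.vdot(bk, bj))))   # 1 when k == j, 0 otherwise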
For \(\alpha \in \mathbb{C}\) and \(\ket{\psi},\ket{\phi},\ket{\rho}\in\mathcal{H}\text{,}\) the inner product has the following properties:
\begin{equation*}
\begin{split} & \text{1. Distributive Law: } \bra{\psi}(\ket{\phi} + \ket{\rho}) = \braket{\psi | \phi} + \braket{\psi | \rho}
\\ & \text{2. Associative Law: } \bra{\psi} (\alpha \ket{\phi}) = \alpha \braket{ \psi | \phi}
\\ & \text{3. Hermitian Symmetry: } \braket{\psi | \phi} = \braket{\phi | \psi}^{*}
\\ & \text{4. Definite Form: } \braket{\psi | \psi} \geq 0 \text{ and } \braket{\psi | \psi} = 0 \text{ only if } \ket{\psi} = \vec{0} \end{split}
\end{equation*}
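These properties can be spot-checked numerically; the states and scalar below are arbitrary examples used only for illustration:

import numpy as np

psi = np.array([0.6, 0.8j])
phi = np.array([1, 1]) / np.sqrt(2)
xi  = np.array([0.0, 1.0])
alpha = 2 - 3j

# 1. distributes over vector addition
print(np.allclose(np.vdot(psi, phi + xi), np.vdot(psi, phi) + np.vdot(psi, xi)))
# 2. scalars pull out of the ket
print(np.allclose(np.vdot(psi, alpha * phi), alpha * np.vdot(psi, phi)))
# 3. Hermitian symmetry
print(np.allclose(np.vdot(psi, phi), np.conj(np.vdot(phi, psi))))
# 4. <psi|psi> is real and non-negative
print(np.vdot(psi, psi).real >= 0)   # True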
Subsection 1.11.4 Diagonalization
For a vector space \(V\) with an orthonormal basis \(\{ \ket{\beta_1}, \ket{\beta_2}, \ldots, \ket{\beta_n} \}\text{,}\) a diagonal representation for an operator \(A\) that acts on the space would be
\begin{equation*}
A=\sum_{i} \lambda_{i}\ket{\beta_i}\bra{\beta_i}
\end{equation*}
where the \(\lambda_i\) are the eigenvalues corresponding to the basis states \(\ket{\beta_i}\text{.}\) An operator is said to be diagonalizable if it has a diagonal representation. As an example of a diagonal representation, note that the Pauli \(Z\) matrix (see Subsection 2.2.4) may be written
\begin{equation*}
Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \ket{0}\bra{0} - \ket{1}\bra{1} = \sum_{i=1}^{2} \lambda_i \ket{\beta_i}\bra{\beta_i}
\end{equation*}
where \(\lambda_1 = 1, \lambda_2 = -1, \ket{\beta_1} = \ket{0}, \text{ and } \ket{\beta_2} = \ket{1}\text{.}\) Diagonal representations are sometimes also known as orthonormal decompositions.
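A numerical sketch of the same idea: np.linalg.eigh returns the eigenvalues and an orthonormal set of eigenvectors of a Hermitian matrix, and summing \(\lambda_i \ket{\beta_i}\bra{\beta_i}\) rebuilds \(Z\text{:}\)

import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

# eigenvalues and orthonormal eigenvectors (stored as columns) of the Hermitian matrix Z
eigvals, eigvecs = np.linalg.eigh(Z)

# rebuild Z as the sum of lambda_i |beta_i><beta_i|
rebuilt = sum(lam * np.outer(eigvecs[:, i], eigvecs[:, i].conj())
              for i, lam in enumerate(eigvals))
print(np.allclose(rebuilt, Z))   # True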
Subsection 1.11.5 Density Operator
Any state of a vector space can also be expressed with a density operator, which provides another method for studying the state of the entire system. For a vector space with a basis \(\{ \ket{\psi_1},\ket{\psi_2},\ldots,\ket{\psi_n} \}\text{,}\) the density operators for its basis states are given by
\begin{equation*}
\rho_i = \ket{\psi_i}\bra{\psi_i}
\end{equation*}
This means that the density operator \(\rho\) for any basis state is equal to the outer product of that state with itself. These density operators for states have the following properties:
\begin{equation*}
\begin{split} & \text{1. Idempotent: } \rho^2 = \ket{\psi}\braket{\psi | \psi} \bra{\psi} = \ket{\psi}\bra{\psi} = \rho \text{ (since } \braket{\psi | \psi} = 1 \text{)}
\\ & \text{2. Trace (the sum of the diagonal entries): For an } n \times n \text{ density operator: } \\ & Tr(\rho) = \sum_{i=1}^n \rho_{i,i} = 1 \text{ (where } \rho_{i,i} \text{ represents the entry in the i-th row and i-th column of }\rho)
\\ & \text{3. Hermiticity: } \rho^{\dagger} = (\ket{\psi}\bra{\psi})^{\dagger} = \ket{\psi}\bra{\psi} = \rho
\\ & \text{4. Positive Semi-Definite: For a state }\ket{\phi}, \bra{\phi}\rho\ket{\phi} = \braket{\phi|\psi} \braket{\psi|\phi} = |\braket{\phi|\psi}|^2 \geq 0 \end{split}
\end{equation*}
For a state
\begin{equation*}
\ket{\psi} = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}
\end{equation*}
the density operator would be
\begin{equation*}
\rho = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} \begin{pmatrix} \alpha_1^* & \alpha_2^* & \ldots & \alpha_n^* \end{pmatrix} = \begin{pmatrix} \alpha_1\alpha_1^* & \alpha_1\alpha_2^* & \ldots & \alpha_1\alpha_n^* \\ \alpha_2\alpha_1^* & \alpha_2\alpha_2^* & \ldots & \alpha_2\alpha_n^* \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_n\alpha_1^* & \alpha_n\alpha_2^* & \ldots & \alpha_n\alpha_n^* \end{pmatrix}
\end{equation*}
Find the density operator for the following state
\begin{equation*}
\ket{\psi} = \frac{1}{\sqrt{2}}(\ket{00} + \ket{11})
\end{equation*}
Hint. Write \(\ket{\psi}\) as a \(4 \times 1\) column vector and compute the outer product \(\ket{\psi}\bra{\psi}\text{.}\)
Solution.
\begin{equation*}
\ket{\psi} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1\\0\\0\\1 \end{pmatrix}
\end{equation*}
\begin{equation*}
\rho = \frac{1}{\sqrt{2}}\begin{pmatrix} 1\\0\\0\\1 \end{pmatrix} \frac{1}{\sqrt{2}}\begin{pmatrix} 1&0&0&1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}
\end{equation*}
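The same computation can be reproduced with NumPy (a sketch mirroring the worked solution above):

import numpy as np

# |psi> = (|00> + |11>) / sqrt(2) as a 4-dimensional column vector
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

rho = np.outer(psi, psi.conj())
print(rho)                            # matches the matrix above
print(np.isclose(np.trace(rho), 1))   # True: the trace is 1
print(np.allclose(rho @ rho, rho))    # True: idempotent, since |psi> is a pure state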
Now suppose we want to find a density operator for an entire system. The system exists within a vector space with a basis \(\{ \ket{\beta_1},\ket{\beta_2},\ldots,\ket{\beta_n} \}\) and has a probability \(p_i\) of being in the state \(\ket{\beta_i}\) after measurement. The density operator for the entire system is defined by
\begin{equation*}
\rho = \sum_{i=1}^n p_i \rho_i = \sum_{i=1}^n p_i \ket{\beta_i}\bra{\beta_i}
\end{equation*}
The density operator for the system shares most of the properties listed above: its trace is 1, it is Hermitian, and it is positive semi-definite. It is idempotent only when the system is in a pure state, that is, when some \(p_i = 1\) and all other probabilities are 0.
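As an illustrative sketch (the probabilities below are an assumed example, not from the text), consider a qubit that is in state \(\ket{0}\) with probability \(3/4\) and in state \(\ket{1}\) with probability \(1/4\text{.}\) The resulting density operator has trace 1 and is Hermitian, but it is not idempotent because the state is mixed:

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
probs = [0.75, 0.25]   # assumed example probabilities

# rho = sum_i p_i |beta_i><beta_i|
rho = sum(p * np.outer(b, b.conj()) for p, b in zip(probs, (ket0, ket1)))
print(np.isclose(np.trace(rho), 1))     # True: trace is 1
print(np.allclose(rho, rho.conj().T))   # True: Hermitian
print(np.allclose(rho @ rho, rho))      # False: a mixed state is not idempotent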
Subsection 1.11.6 The Commutator
Remember that matrix multiplication is generally not commutative; that is, for two matrices \(A\) and \(B\text{,}\) in general \(AB \neq BA\text{.}\) However, there are exceptions. The commutator of two operators \(A\) and \(B\) is defined to be
\begin{equation*}
[A, B] \equiv A B-B A
\end{equation*}
If \([A, B]=0\text{,}\) that is, \(A B=B A\text{,}\) then we say \(A\) commutes with \(B\text{.}\) Similarly, the anti-commutator of two operators \(A\) and \(B\) is defined by
\begin{equation*}
\{A, B\} \equiv A B+B A
\end{equation*}
We say \(A\) anti-commutes with \(B\) if \(\{A, B\}=0\text{,}\) that is, if \(AB = -BA\text{.}\) It turns out that many important properties of pairs of operators can be deduced from their commutator and anti-commutator.
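For example, a short sketch using the Pauli \(X\) and \(Z\) matrices computes both quantities and shows that \(X\) and \(Z\) anti-commute:

import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

commutator      = X @ Z - Z @ X   # [X, Z]
anti_commutator = X @ Z + Z @ X   # {X, Z}

print(commutator)                        # nonzero, so X and Z do not commute
print(np.allclose(anti_commutator, 0))   # True: X and Z anti-commute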