bandgap.io

Bloch's Theorem via Representation Theory

In 1928, Felix Bloch used Fourier analysis to study the motion of electrons in a periodic potential. His eponymous theorem is the dividing line between the chaos of disordered condensed matter theory and the power and elegance of solid state physics, introducing the crystal momentum and, by extension, band structure and the classification of matter into conductors, insulators, and semiconductors.

Bloch's theorem tells us that translations which leave a crystal invariant induce phase shifts in the wavefunction. More precisely, let \(\mathbf{a}_1, \mathbf{a}_2\), and \(\mathbf{a}_3\) be primitive translation vectors of the Bravais lattice, and let \(\mathbf{R} = n_1\mathbf{a}_1 + n_2\mathbf{a}_2 + n_3\mathbf{a}_3\) where \(n_1, n_2, n_3 \in \mathbb{Z}\). Then \(\psi(\mathbf{r}-\mathbf{R}) = e^{i\theta}\psi(\mathbf{r})\) for some \(\theta \in \mathbb{R}\) that depends on \(\mathbf{R}\).

We don't need group theory to prove Bloch's theorem, so let's prove it now without the heavy artillery:

Bloch's Theorem: Let \(\mathcal{H} := -\frac{1}{2} \nabla^2 + V\) where \(V(\mathbf{r}+\mathbf{R}) = V(\mathbf{r})\) for all \(\mathbf{R}\) in the lattice. Then solutions of \(\mathcal{H}\psi = E\psi\) where \(\psi\) satisfies periodic boundary conditions \(\psi(\mathbf{r} - N\mathbf{R}) = \psi(\mathbf{r})\) for some \(N \in \mathbb{N}\) satisfy \(\psi(\mathbf{r}-\mathbf{R}) = e^{i\theta}\psi(\mathbf{r})\) for some \(\theta \in \mathbb{R}\).

Proof: Let \(T_{\mathbf{R}}\psi(\mathbf{r}) := \psi(\mathbf{r}-\mathbf{R})\), so that \(T_{\mathbf{R}}^{N} = I\). From this we see that the eigenvalues of \(T_{\mathbf{R}}\) are \(N\)-th roots of unity; if \(T_{\mathbf{R}}\phi = \lambda \phi\), then \(\lambda = \exp\left(\frac{2\pi i n}{N}\right)\) for some \(n \in \mathbb{Z}\). Since the potential \(V\) is invariant under \(T_{\mathbf{R}}\), we have \([\mathcal{H}, T_{\mathbf{R}}] = 0\) and we can find simultaneous eigenstates of \(\mathcal{H}\) and \(T_{\mathbf{R}}\). Hence \begin{align*} T_{\mathbf{R}}\psi = \exp\left(\frac{2\pi i n}{N}\right)\psi \quad \mathrm{whenever} \quad \mathcal{H}\psi = E \psi \end{align*} This is the "translations \(\implies\) phase shifts" claim.
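To make the claim concrete, here is a minimal numerical sketch (Python with NumPy; the grid size, cell count, and potential strength are illustrative choices of mine, not anything from the argument above) that discretizes a 1D periodic Hamiltonian with periodic boundary conditions, checks that it commutes with translation by one lattice constant, and confirms that the translation operator's eigenvalues are \(N\)-th roots of unity:

```python
import numpy as np

# A minimal sketch: a 1D crystal with N unit cells and m grid points per
# cell, with periodic boundary conditions.  All numerical choices here
# (N, m, the potential strength) are illustrative, not from the text.
N, m = 8, 16
n = N * m
x = np.arange(n)
V = 0.3 * np.cos(2 * np.pi * x / m)      # potential with the period of one cell

# H = -(1/2) d^2/dx^2 + V, discretized on a unit-spacing grid with wraparound.
H = np.zeros((n, n))
for i in range(n):
    H[i, i] = 1.0 + V[i]
    H[i, (i + 1) % n] = -0.5
    H[i, (i - 1) % n] = -0.5

# Translation by one lattice constant: (T psi)(x) = psi(x - a), a = m points.
T = np.roll(np.eye(n), -m, axis=1)

print(np.allclose(H @ T, T @ H))                             # [H, T] = 0
print(np.allclose(np.linalg.matrix_power(T, N), np.eye(n)))  # T^N = I
lam = np.linalg.eigvals(T)
print(np.allclose(lam**N, 1.0))          # eigenvalues are N-th roots of unity
```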

Now we wish to prove the same thing using representation theory. Using representation theory to prove Bloch's theorem is a bit like using a battle ax as a letter opener, but once we're done you'll have a battle ax. The endzone is a long way away, so let's outline our strategy: first, we develop the language of group representations, irreducibility, and characters; second, we prove that the number of irreducible representations equals the number of conjugacy classes and that the squares of their dimensions sum to the order of the group; finally, we apply these results to the abelian translation group of the crystal, whose one-dimensional irreducible representations supply the Bloch phases.

Let \(G\) be a group and \(V\) be a finite dimensional complex vector space. Let \(\mathrm{GL}(V)\) denote the set of invertible linear transformations of this space. A representation of \(G\) is a map \(\Gamma\colon G \to \mathrm{GL}(V)\) which preserves the group operation, i.e., \begin{align*} \Gamma(g_1\circ g_2) = \Gamma(g_1)\Gamma(g_2), \quad \forall g_{1}, g_{2} \in G \end{align*} A map which obeys this property is called a homomorphism. By choosing a basis for \(V\), we can identify \(\mathrm{GL}(V)\) with \(\mathrm{GL}(n, \mathbb{C})\), and then for each \(g \in G\), \(\Gamma(g)\) is an \(n\times n \) matrix and the representation is said to have dimension \(n\).

An example of a group representation is the trivial representation \(\Gamma(A) = 1\). We can see that \(\Gamma(A\circ B) = \Gamma(A)\Gamma(B) = 1\), so it preserves the group operation and is a homomorphism. Of course this is a pretty useless representation because it can't distinguish between two different groups. But it exists and must be counted when we attempt to enumerate all irreducible representations of a group, as we shall soon see. A less trivial representation is called the regular representation. The idea is to take the group multiplication table and use it to create a matrix. For example, take \(C_{3} := \{g, g^2, g^3 = e\}\). This is a cyclic abelian group, perhaps representing rotations by \(2\pi/3\) about an axis. The multiplication table written so that the identity winds up on the diagonal is

\begin{align*} \begin{array}{c|ccc} C_{3} & e & g & g^2 \\ \hline e & e & g & g^2 \\ g^2 & g^2 & e & g \\ g & g & g^2 & e \end{array} \end{align*}
Each group element is then assigned a matrix with 1s in the positions where that element appears in the table and 0s elsewhere. In this case \begin{align*} \Gamma_{\mathrm{reg}}(e) &= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \\ \Gamma_{\mathrm{reg}}(g) &= \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} \\ \Gamma_{\mathrm{reg}}(g^2) &= \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \end{align*} It's easy to verify that \(\Gamma_{\mathrm{reg}}(g^2) = \Gamma_{\mathrm{reg}}(g)^2\) and \(\Gamma_{\mathrm{reg}}(g^3) = \Gamma_{\mathrm{reg}}(e)\), showing that \(\Gamma_{\mathrm{reg}}\) is a homomorphism and therefore a representation.
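As a sketch of the construction (Python; the string encoding of the table is my own bookkeeping choice), we can build these matrices directly from the multiplication table and check the homomorphism property numerically:

```python
import numpy as np

# Multiplication table of C3 with rows ordered (e, g^2, g), as above, so the
# identity sits on the diagonal; columns are ordered (e, g, g^2).
table = [["e",  "g",  "g2"],
         ["g2", "e",  "g"],
         ["g",  "g2", "e"]]

def regular_rep(element):
    """Matrix with a 1 wherever `element` appears in the table, 0 elsewhere."""
    return np.array([[1.0 if entry == element else 0.0 for entry in row]
                     for row in table])

G_e, G_g, G_g2 = regular_rep("e"), regular_rep("g"), regular_rep("g2")

print(np.allclose(G_g @ G_g, G_g2))    # Gamma_reg(g)^2 = Gamma_reg(g^2)
print(np.allclose(G_g @ G_g2, G_e))    # Gamma_reg(g) Gamma_reg(g^2) = Gamma_reg(e)
```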

Mathematicians love group representations because they allow us to understand all the linear actions of a group. And because humans understand matrices very well, we often study representations where the range is \(\mathrm{GL}(n;\mathbb{C})\), the set of \(n\times n\) invertible matrices with complex entries. Once we've represented a group with matrices, our understanding of finite-dimensional invertible linear operators can be used to understand the symmetries of the object.

One problem with matrix representations of groups is that we can present "the same" representation with different matrices using "similarity transforms", i.e., if \(\Gamma_{1}\colon G \to \mathrm{GL}(n; \mathbb{C})\), then for any \(S \in \mathrm{GL}(n; \mathbb{C})\) \begin{align*} \Gamma_{2}(g) := S\Gamma_{1}(g)S^{-1} \end{align*} is also a group representation of \(G\). There are a few ways we get around this non-uniqueness: we bring representations into block diagonal form and keep only the irreducible blocks, we pass to an equivalent unitary representation, and ultimately we work with characters, which do not see the choice of basis at all.

By "block diagonal" we mean that the representation can be written as the direct sum of representations of smaller dimension, as follows: \begin{align*} \Gamma(g) = \begin{pmatrix} \Gamma_1(g) & \mathbf{0} \\ \mathbf{0} & \Gamma_{2}(g) \end{pmatrix} \end{align*} In this case, the representation \(\Gamma\) is said to be reducible, and we write \(\Gamma = \Gamma_{1}\oplus \Gamma_{2}\).

The regular representation of \(C_{3}\) is reducible by the similarity transform \begin{align*} S := \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 & 1 \\ 1 & \omega_{3} & \omega_3^2 \\ 1 & \omega_3^2 & \omega_3 \end{pmatrix} \end{align*} where \(\omega_3 := \exp(2\pi i/3)\). Applying this similarity transform yields \begin{align*} \hat{\Gamma}_{\mathrm{reg}}(g) &= S^{-1}\Gamma_{\mathrm{reg}}(g)S = \begin{pmatrix} 1 & & \\ & \omega_3 & \\ & & \omega_3^2 \end{pmatrix} =: \begin{pmatrix} \Gamma_{0}(g) & & \\ & \Gamma_{1}(g) & \\ & & \Gamma_{2}(g) \end{pmatrix} \\ \hat{\Gamma}_{\mathrm{reg}}(g^2) &= S^{-1}\Gamma_{\mathrm{reg}}(g^2)S = \begin{pmatrix} 1 & & \\ & \omega_3^2 & \\ & & \omega_3 \end{pmatrix} =: \begin{pmatrix} \Gamma_{0}(g^2) & & \\ & \Gamma_{1}(g^2) & \\ & & \Gamma_{2}(g^2) \end{pmatrix} \\ \hat{\Gamma}_{\mathrm{reg}}(e) &= S^{-1}\Gamma_{\mathrm{reg}}(e)S = \begin{pmatrix} 1 & & \\ & 1 & \\ & & 1 \end{pmatrix} =: \begin{pmatrix} \Gamma_{0}(e) & & \\ & \Gamma_{1}(e) & \\ & & \Gamma_{2}(e) \end{pmatrix} \\ \end{align*} This shows that the regular representation of \(C_{3}\) is the direct sum of one dimensional representations, i.e., \(\hat{\Gamma}_{\mathrm{reg}} = \Gamma_{0}\oplus \Gamma_{1}\oplus \Gamma_2\), where \(\Gamma_{j}(g^{k}) = \omega_{3}^{jk}\).
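This reduction is easy to check numerically; a short sketch (NumPy) applying the similarity transform \(S\) to \(\Gamma_{\mathrm{reg}}(g)\):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                      # omega_3
S = np.array([[1, 1,    1   ],
              [1, w,    w**2],
              [1, w**2, w   ]]) / np.sqrt(3)

G_g = np.array([[0, 1, 0],                      # Gamma_reg(g) from above
                [0, 0, 1],
                [1, 0, 0]], dtype=complex)

D = np.linalg.inv(S) @ G_g @ S
print(np.allclose(D, np.diag([1, w, w**2])))    # diag(1, omega_3, omega_3^2)
```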

Just because a representation is irreducible does not mean that the presentation is unique. In fact, a similarity transform can make the presentation arbitrarily ugly. So we are now going to show that any representation is equivalent to a unitary representation. This will not present the representation uniquely, since a unitary change of basis will cause the presentation to change, but that will be taken care of by the characters as we shall soon see.

I find the proof to be quite creative. Let \(\Gamma\colon G \to \mathrm{GL}(n; \mathbb{C})\) be a representation. Let \begin{align*} H := \sum_{g \in G} \Gamma(g)\Gamma(g)^{\dagger} \end{align*} \(H\) is Hermitian, and as such can be diagonalized by a unitary matrix \(U\), and we write \(D = U^{\dagger}HU\). If \(\hat{\Gamma}(g) := U^{\dagger}\Gamma(g)U\), then \begin{align*} D = \sum_{g \in G} \hat{\Gamma}(g)\hat{\Gamma}(g)^{\dagger} \end{align*} Not only is \(D\) diagonal, its diagonal elements are strictly positive: \begin{align*} D_{kk} &= \sum_{g \in G} \sum_{j=1}^{n} \hat{\Gamma}(g)_{kj}\hat{\Gamma}(g)_{jk}^{\dagger}\\ &= \sum_{g \in G} \sum_{j=1}^{n} \hat{\Gamma}(g)_{kj}\hat{\Gamma}(g)_{kj}^{*}\\ &= \sum_{g \in G} \sum_{j=1}^{n} |\hat{\Gamma}(g)_{kj}|^{2} \end{align*} The elements are obviously nonnegative; they are in fact strictly positive, because if \(D_{kk}\) were zero, every \(\hat{\Gamma}(g)\) would have a row of zeros and would not be invertible, hence not an element of \(\mathrm{GL}(n, \mathbb{C})\). Since the elements are strictly positive, we can define \begin{align*} D^{1/2} := \mathrm{diag}(\sqrt{D_{11}}, \ldots, \sqrt{D_{nn}}), \quad D^{-1/2}:= \mathrm{diag}(1/\sqrt{D_{11}}, \ldots, 1/\sqrt{D_{nn}}), \end{align*} and define a new representation \begin{align*} \Pi(g) := D^{-1/2}U^{\dagger}\Gamma(g)UD^{1/2} = D^{-1/2}\hat{\Gamma}(g)D^{1/2} \end{align*} and \begin{align*} \Pi(g)\Pi(g)^{\dagger} &= D^{-1/2}\hat{\Gamma}(g)D\hat{\Gamma}(g)^{\dagger}D^{-1/2} \\ &= D^{-1/2}\hat{\Gamma}(g)\left( \sum_{h \in G} \hat{\Gamma}(h)\hat{\Gamma}(h)^{\dagger} \right)\hat{\Gamma}(g)^{\dagger}D^{-1/2} \\ &= D^{-1/2}\left( \sum_{h \in G} \hat{\Gamma}(gh)\hat{\Gamma}(gh)^{\dagger} \right)D^{-1/2} \\ &= D^{-1/2}\left( \sum_{g \in G} \hat{\Gamma}(g)\hat{\Gamma}(g)^{\dagger} \right)D^{-1/2} \quad \mathrm{(by\,rearrangement \, theorem)}\\ &= \mathrm{I} \end{align*} Hence \(\Pi\) is a unitary representation equivalent to \(\Gamma\).
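Here is a sketch of the averaging trick in action (NumPy; the starting point is the regular representation of \(C_3\) deliberately spoiled by a random similarity transform, an illustrative choice of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from the (already unitary) regular representation of C3 ...
P = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)
group = [np.eye(3, dtype=complex), P, P @ P]

# ... and spoil it with a random similarity transform, so it is no longer unitary.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
gamma = [A @ g @ np.linalg.inv(A) for g in group]

# The averaging trick: H = sum_g Gamma(g) Gamma(g)^dagger = U D U^dagger.
H = sum(g @ g.conj().T for g in gamma)
evals, U = np.linalg.eigh(H)
D_half = np.diag(np.sqrt(evals))
D_inv_half = np.diag(1.0 / np.sqrt(evals))

Pi = [D_inv_half @ U.conj().T @ g @ U @ D_half for g in gamma]

print(all(np.allclose(p @ p.conj().T, np.eye(3)) for p in Pi))  # all unitary
print(np.allclose(Pi[1] @ Pi[1], Pi[2]))                        # still a representation
```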

Even if we restrict our attention to irreducible unitary representations, we still can produce different presentations of the individual irreps by application of a unitary change of basis. So instead of studying group representations directly, it is sometimes worthwhile to study the trace of the group representation, via \begin{align*} \chi(g) := \mathrm{Tr}(\Gamma(g)) \end{align*} The map \(\chi \colon G \to \mathbb{C}\) is called a character, which I understand to be an antiquated synonym for "trace". Due to cyclicity of the trace, if \(\Gamma_2\) is similar to another representation \(\Gamma_1\) via an invertible matrix \(S\), then \begin{align*} \chi_{2}(g) := \mathrm{Tr}(\Gamma_{2}(g)) = \mathrm{Tr}(S\Gamma_{1}(g)S^{-1}) = \mathrm{Tr}(\Gamma_{1}(g)) =: \chi_{1}(g), \quad \forall g \in G \end{align*} so that \(\chi_{1} = \chi_{2}\). This is a great simplification because a similarity transform is just a change in how we describe our space, not a fundamental aspect of the matrix group. Characters also allow us to identify distinct group elements which perform the same action. For example, if we rotate a cube by \(90^{\circ}\) about the \(x\)-axis, this action isn't "fundamentally different" than rotating it \(90^{\circ}\) about the \(y\)-axis, as

Rotate \(90^{\circ}\) counterclockwise about \(x\) = (Rotate by \(90^{\circ}\) about \(z\))(Rotate by \(90^{\circ}\) about \(y\))(Rotate by \(-90^{\circ}\) about \(z\))
Group elements that perform "the same" action live in the same conjugacy class. We say that \(a \in G\) is conjugate to \(b \in G\) and write \(a\sim b\) iff there exists \(g \in G\) such that \(a = gbg^{-1}\). It is easily proved that \(\sim\) is reflexive, symmetric, and transitive so that it is an equivalence relation. We will denote the set of all group elements which are conjugate to an element \(a \in G\) by \(\mathcal{C}_a\), or \(\mathcal{C}_k\) for some \(k \in \mathbb{N}\) if we don't need to specify an element and only need to enumerate the classes. If \(a \sim b\), then a character views them as the same. Check it: \begin{align*} \chi(a) = \chi(gbg^{-1}) = \mathrm{Tr}(\Gamma(g)\Gamma(b)\Gamma(g)^{-1}) = \mathrm{Tr}(\Gamma(b)) = \chi(b). \end{align*} A class function is a map \(\phi \colon G \to \mathbb{C}\) that takes the same value on all elements of a conjugacy class, i.e., \(\phi(gag^{-1}) = \phi(a)\, \forall g \in G\). Characters are class functions, so they can be used to distinguish "fundamentally different" group actions.

If we know all the conjugacy classes and all the irreducible characters of a certain group, we can create a character table. Here's the character table of \(C_{3}\):

\begin{align*} \begin{array}{c|ccc} C_{3} & \mathcal{C}_1 = \{e\} & \mathcal{C}_2 = \{g\} & \mathcal{C}_3 = \{g^2\} \\ \hline \chi_{0} & 1 & 1 & 1 \\ \chi_{1} & 1 & \omega_3 & \omega_3^2 \\ \chi_{2} & 1 & \omega_3^2 & \omega_3 \end{array} \end{align*}
The entries of the character table are the values of the characters computed on the conjugacy classes. Since \(C_3\) is abelian, all conjugacy classes consist of a single element. To see the power of identifying conjugacy classes, working through a less trivial example will help. Consider an equilateral triangle. If we label the vertices with the numbers 1, 2, and 3, we see that a clockwise rotation through \(120^{\circ}\) is equivalent to the permutation \begin{align*} d := \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \end{pmatrix} \end{align*} A counterclockwise rotation by \(120^{\circ}\) moves the vertex labels to \begin{align*} f := \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \end{pmatrix} \end{align*} But an observer below the triangle views \(d\) as a counterclockwise rotation, and \(f\) as clockwise. In addition, if we rotate the triangle out of plane by \(180^{\circ}\) about an axis passing through a vertex and cutting the triangle in half, we fix one vertex and permute the other two. There are three such symmetry operations, given in permutation notation by \begin{align*} a := \begin{pmatrix} 1 & 2 & 3 \\ 1 & 3 & 2 \end{pmatrix} \\ b := \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix} \\ c := \begin{pmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \end{pmatrix} \end{align*} For completeness, \begin{align*} e := \begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix} \end{align*} These definitions give the multiplication table
\begin{align*} \begin{array}{c|cccccc} S_{3} & e & a & b & c & d & f \\ \hline e & e & a & b & c & d & f \\ a & a & e & d & f & b & c \\ b & b & f & e & d & c & a \\ c & c & d & f & e & a & b \\ f & f & b & c & a & e & d \\ d & d & c & a & b & f & e \end{array} \end{align*} where the entry in row \(x\) and column \(y\) is the product \(xy\) (apply \(y\) first, then \(x\)).
From the multiplication table, we can generate the list of conjugacy classes. For example, \begin{align*} \mathcal{C}_{a} = \{a, bab^{-1}, cac^{-1}, dad^{-1}, faf^{-1}\} = \{a, bab, cac, daf, fad\} = \{a, b, c\} \end{align*} which confirms our intuition that the out-of-plane rotations by \(180^{\circ}\) are "the same" symmetries of the triangle. Similarly, we can calculate \begin{align*} \mathcal{C}_{d} = \{d, ada^{-1}, bdb^{-1}, cdc^{-1}, fdf^{-1}\} = \{d,f\} \end{align*} confirming our intuition that \(120^{\circ}\) rotations clockwise and counterclockwise are "the same". Finally, we have (as always) \(\mathcal{C}_e = \{e\}\), the identity always being in a class by itself. We now look for irreducible representations of \(S_{3}\). First, we have the trivial representation \(\Gamma_{0}(e) = \Gamma_{0}(a) = \cdots = \Gamma_{0}(f) = 1\).
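Before hunting for more representations, here is a quick brute-force cross-check of these conjugacy classes (plain Python; the tuple encoding of the permutations is my own):

```python
# Each permutation is a tuple p with p[i] = image of vertex i (0-indexed).
perms = {
    "e": (0, 1, 2),
    "a": (0, 2, 1),   # fixes vertex 1
    "b": (2, 1, 0),   # fixes vertex 2
    "c": (1, 0, 2),   # fixes vertex 3
    "d": (1, 2, 0),   # 1 -> 2 -> 3 -> 1
    "f": (2, 0, 1),   # the inverse rotation
}

def compose(p, q):
    """(p q)(i) = p(q(i)): apply q first, then p, matching the table."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    return tuple(p.index(i) for i in range(3))

def name_of(p):
    return next(name for name, q in perms.items() if q == p)

def conjugacy_class(x):
    return {name_of(compose(compose(g, perms[x]), inverse(g)))
            for g in perms.values()}

print(compose(perms["a"], perms["b"]) == perms["d"])   # ab = d
print(conjugacy_class("a"))                            # {'a', 'b', 'c'}
print(conjugacy_class("d"))                            # {'d', 'f'}
```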

Next we search for a 1D representation we will call \(\Gamma_1\). We note that \(\Gamma_1(e)\) must equal 1, or else the homomorphism \(\Gamma_1(a) = \Gamma_1(ea) = \Gamma_1(e)\Gamma_1(a)\) doesn't hold. Since \(a^2 = e\), then \(1 = \Gamma_1(a^2) =\Gamma_1(a)^2\), so that \(\Gamma_1(a) = \pm 1\). Similarly, we see that \(\Gamma_1(b) = \pm 1\) and \(\Gamma_1(c) = \pm 1\). Since \(d^3=f^3 = e\), then \(\Gamma_1(d) = \exp(2\pi ik_D/3)\) for \(k_D \in \{0,1,2\}\), and \(\Gamma_1(f) = \exp(2\pi ik_F/3)\), for \(k_F \in \{0, 1, 2\}\). But \(\Gamma_1(d) = \Gamma_1(ab) = \Gamma_1(a)\Gamma_1(b) \in \mathbb{R}\), so in fact \(\Gamma_1(d) = 1\). Then \(1 = \Gamma_1(e) = \Gamma_1(df) = \Gamma_1(d)\Gamma_1(f)\) so that \(\Gamma_1(f) = 1\) as well. We can choose \(\Gamma_1(a) = 1\), but then we are back to the trivial representation. So we must choose \(\Gamma_1(a) = \Gamma_1(b) = \Gamma_1(c) = -1\). This exhausts the 1D representations of \(S_3\).

Finally, we search for 2D representations. Since \(d\) corresponds to a clockwise rotation by \(2\pi/3\), it seems reasonable to attempt to represent it by \begin{align*} \Gamma_2(d) = \begin{pmatrix} \cos(-2\pi/3) & -\sin(-2\pi/3)\\ \sin(-2\pi/3) & \cos(-2\pi/3) \end{pmatrix} = \frac{1}{2} \begin{pmatrix} -1 & \sqrt{3}\\ -\sqrt{3} & -1 \end{pmatrix} \end{align*} Since \(f\) is a rotation in the opposite direction, we need \begin{align*} \Gamma_2(f) = \frac{1}{2} \begin{pmatrix} -1 & -\sqrt{3}\\ \sqrt{3} & -1 \end{pmatrix} \end{align*} We need \begin{align*} \Gamma_2(a)^2 = \Gamma_2(b)^2 = \Gamma_2(c)^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{align*} This isn't a whole lot to go on, but since \(a\), \(b\), and \(c\) are reflections we at least have the intuition that \(\mathrm{det}(\Gamma_2(a)) = -1\). Assume that \begin{align*} \Gamma_2(a) = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \end{align*} then we can use the homomorphism requirement (for example, \(b = ad\) and \(c = da\) from the multiplication table) to generate \(\Gamma_2(b)\) and \(\Gamma_2(c)\). This gives \begin{align*} \Gamma_2(b) = \frac{1}{2} \begin{pmatrix} 1 & -\sqrt{3} \\ -\sqrt{3} & -1 \end{pmatrix} \end{align*} and \begin{align*} \Gamma_2(c) = \frac{1}{2} \begin{pmatrix} 1 & \sqrt{3} \\ \sqrt{3} & -1 \end{pmatrix} \end{align*} That these three representations exhaust the irreducible representations of \(S_3\) will be proven later; for now we just accept this as fact and present the character table:

\begin{align*} \begin{array}{c|ccc} S_{3} & \mathcal{C}_{e} = \{e\} & \mathcal{C}_{a} = \{a,b,c\} & \mathcal{C}_{d} = \{d,f\} \\ \hline \chi_{0} & 1 & 1 & 1 \\ \chi_{1} & 1 & -1 & 1 \\ \chi_{2} & 2 & 0 & -1 \end{array} \end{align*}
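A sketch (NumPy; the matrices are the ones constructed above) that checks \(\Gamma_2\) against a few entries of the multiplication table and reproduces the bottom row of the character table:

```python
import numpy as np

c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
G2 = {
    "e": np.eye(2),
    "d": np.array([[c,  s], [-s, c]]),   # rotation by -2*pi/3 (clockwise)
    "f": np.array([[c, -s], [ s, c]]),   # rotation by +2*pi/3
    "a": np.array([[-1, 0], [0, 1]]),    # the assumed reflection
}
G2["b"] = G2["a"] @ G2["d"]              # b = ad, read off the multiplication table
G2["c"] = G2["d"] @ G2["a"]              # c = da

# Spot-check the homomorphism against a few table entries.
print(np.allclose(G2["a"] @ G2["b"], G2["d"]))   # ab = d
print(np.allclose(G2["d"] @ G2["d"], G2["f"]))   # dd = f
print(np.allclose(G2["b"] @ G2["b"], G2["e"]))   # bb = e

# Characters, one per conjugacy class: [2, 0, -1] as in the table.
print([round(np.trace(G2[x]), 10) for x in ("e", "a", "d")])
```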

Before moving on, I want to work out the character table of the Klein 4-group, denoted \(V\). This is because, analogous to the group of translations of a crystal, the Klein 4-group is abelian but generated by more than one element. Let \(a,b\) denote the generators of \(V\). The Klein 4-group is defined by \(a^2 = b^2 = e\) and \(ab = ba\). The multiplication table is given below:

\begin{align*} \begin{array}{c|cccc} V & e & a & b & ab \\ \hline e & e & a & b & ab \\ a & a & e & ab & b \\ b & b & ab & e & a \\ ab & ab & b & a & e \end{array} \end{align*}
Because the group is abelian, each element lies in its own conjugacy class, so we don't need to compute the classes. Next, we need to enumerate the representations. First, we have the trivial representation \(\Gamma_0(e) = \Gamma_0(a) = \Gamma_0(b) = \Gamma_0(ab) = 1\). Second, we look for 1D representations. If \(\Gamma(a) = \lambda \in \mathbb{C}\), then \(\Gamma(e) = \Gamma(a)\Gamma(a) = \lambda^2 = 1\), so that \(\Gamma(a) = \pm 1\). By the same reasoning, \(\Gamma(b) = \pm 1\) and \(\Gamma(ab) = \pm 1\). The only restriction we have is that \(\Gamma(ab) = \Gamma(a)\Gamma(b)\), so we should obtain multiple 1D representations. For \(\Gamma_1\), we take \(\Gamma_1(a) = 1\) and \(\Gamma_1(b) = -1\) so that \(\Gamma_1(ab) = -1\). We can continue in this way to construct the following character table:
\begin{align*} \begin{array}{c|cccc} V & \{e\} & \{a\} & \{b\} & \{ab\} \\ \hline \chi_0 & 1 & 1 & 1 & 1 \\ \chi_1 & 1 & 1 & -1 & -1 \\ \chi_2 & 1 & -1 & 1 & -1 \\ \chi_3 & 1 & -1 & -1 & 1 \end{array} \end{align*}
Again, we will soon see a proof that these 1D representations exhaust all the irreducible representations of the Klein 4-group.

To recap: We study representations because we understand matrices very well. But these matrices can have dimension which can be made arbitrarily large while still preserving the group structure, and hence we choose to study irreducible representations. The presentation of irreducible representations can be made arbitrarily ugly via similarity transformations, so we choose a similarity transformation that makes the representation unitary. But even with a unitary representation, we can still apply another unitary change of basis to construct an equivalent representation. Hence we study the characters of the irreducible unitary representations, because the trace doesn't "see" a change of basis.

We've still got a ways to go, so let's take a look forward. To prove Bloch's theorem, we still need two results: First, we need to prove that "the character table is square", i.e., the number of inequivalent irreducible representations is the same as the number of conjugacy classes in the group. Second, we require the formula \begin{align}\label{dimensionality_of_irreps} \sum_{i=1}^{\# \mathrm{irreps}} \mathrm{dim}(\Gamma_{i})^{2} = |G| \end{align} The lattice translations of a crystal form an abelian group isomorphic to \(\mathbb{Z}_{N_1}\times \mathbb{Z}_{N_2}\times \mathbb{Z}_{N_3}\), where the \(N_{i}\) are defined by the periodic boundary conditions \(\psi(\mathbf{r} + N_{i}\mathbf{a}_i) = \psi(\mathbf{r})\). Each element in an abelian group forms its own conjugacy class, so we have \(N_1N_2N_3\) conjugacy classes and hence \(N_1N_2N_3\) inequivalent irreducible representations. Since \(|\mathbb{Z}_{N_1}\times \mathbb{Z}_{N_2}\times \mathbb{Z}_{N_3}| = N_1 N_2 N_3\), then (\ref{dimensionality_of_irreps}) tells us that each of these representations is one-dimensional. The characters of these one-dimensional representations are roots of unity, which are the phase shifts of the wavefunction under translation.

We begin with the Schur orthogonality relation. Let \(\{\Gamma_{i}\}_{i=1}^{n}\) be a set of inequivalent irreducible unitary representations of a group \(G\). Then \begin{align}\label{schur_orthonality} \sum_{g \in G} \Gamma_{i}(g)_{\mu \nu}\Gamma_{j}(g)_{\mu' \nu'}^{*} = \frac{|G|}{\mathrm{dim}(\Gamma_j)}\delta_{ij}\delta_{\mu \mu'}\delta_{\nu \nu'} \end{align} We will not prove this as it requires messy but standard matrix algebra; the proof can be found in Dresselhaus. From (\ref{schur_orthonality}), we obtain a result for the orthogonality of characters: \begin{align}\label{character_orthogonality} \sum_{g \in G} \chi_{j}(g) \chi_{k}(g)^{*} = |G|\delta_{jk} \end{align} The proof is simple: Since \begin{align*} \chi_j(g) = \mathrm{Tr}(\Gamma_j(g)) = \sum_{\mu=1}^{\mathrm{dim}(\Gamma_j)} \Gamma_j(g)_{\mu \mu} \end{align*} then \begin{align*} \sum_{g \in G} \chi_{j}(g) \chi_{k}(g)^{*} &= \sum_{g \in G} \sum_{\mu = 1}^{\mathrm{dim}(\Gamma_j)} \Gamma_j(g)_{\mu \mu} \sum_{\nu=1}^{\mathrm{dim}(\Gamma_k)} \Gamma_{k}(g)_{\nu \nu}^{*} \\ &= \sum_{\mu = 1}^{\mathrm{dim}(\Gamma_j)} \sum_{\nu=1}^{\mathrm{dim}(\Gamma_k)}\sum_{g \in G} \Gamma_j(g)_{\mu \mu} \Gamma_{k}(g)_{\nu \nu}^{*} \\ &= \sum_{\mu = 1}^{\mathrm{dim}(\Gamma_j)} \sum_{\nu=1}^{\mathrm{dim}(\Gamma_k)} \frac{|G|}{\mathrm{dim}(\Gamma_k)} \delta_{jk}\delta_{\mu \nu} \\ &= |G|\delta_{jk} \end{align*} We now define the inner product of characters by \begin{align*} \left< \chi_{j}, \chi_{k}\right> := \frac{1}{|G|} \sum_{g \in G} \chi_{j}(g) \chi_{k}(g)^{*} = \frac{1}{|G|} \sum_{\ell=1}^{\# \mathrm{conjugacy\, classes}} |\mathcal{C}_{\ell}|\chi_{j}(\mathcal{C}_{\ell}) \chi_{k}(\mathcal{C}_{\ell})^{*} \end{align*} (This abuses notation a bit by using the same symbol for the character as a function of group elements and as a function of conjugacy classes.) Suppose we compute the character of a direct sum of representations. It is easy to show that \(\chi_{\Gamma_1\oplus \Gamma_2} = \chi_{\Gamma_1} + \chi_{\Gamma_2}\). And since a general representation is the direct sum of irreducible representations, then the character of a general representation can be written as \begin{align*} \chi = \sum_{j=1}^{\# \mathrm{irreps}} a_{j}\chi_{j} \end{align*} where \(\chi_{j}\) are the irreducible characters. Due to the orthogonality of irreducible characters, we see that \(a_{j} = \left<\chi, \chi_j\right>\), so the decomposition of a reducible character into irreducible components is unique. In addition, we can regard \(\mathrm{span}\{\chi_1, \chi_2, \ldots, \chi_{n}\}\) as an inner product space, with the irreducible characters forming an orthonormal basis. How many vectors are there in this space? Suppose \(G\) consists of \(m\) conjugacy classes \(\mathcal{C}_{\ell}\), each of cardinality \(|\mathcal{C}_{\ell}|\). Then (\ref{character_orthogonality}) can be written as \begin{align*} \sum_{\ell=1}^{m} \sqrt{\frac{|\mathcal{C}_{\ell}|}{|G|}} \chi_{j}(\mathcal{C}_{\ell}) \sqrt{\frac{|\mathcal{C}_{\ell}|}{|G|}}\chi_{k}(\mathcal{C}_{\ell})^{*} = \delta_{jk} \end{align*} Then define \begin{align*} \mathbf{v}_{k} := \left(\sqrt{\frac{|\mathcal{C}_{1}|}{|G|}}\chi_{k}(\mathcal{C}_{1}), \sqrt{\frac{|\mathcal{C}_{2}|}{|G|} }\chi_{k}(\mathcal{C}_{2}), \ldots, \sqrt{\frac{|\mathcal{C}_{m}|}{|G|}}\chi_{k}(\mathcal{C}_{m})\right) \end{align*} so that \(\left<\mathbf{v}_{j}, \mathbf{v}_{k} \right> = \delta_{jk}\). This shows that the number of irreducible characters cannot be more than the number of classes, or the set \(\{\mathbf{v}_{k}\}\) would be linearly dependent.
If it were less than the number of classes, the set would not span the space, and we have already seen that the irreducible characters span, as any character can be represented as a sum of irreducible characters. This gives us a key result:

The number of irreducible representations = The number of conjugacy classes
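The character orthogonality relation behind this counting is easy to test numerically; here is a sketch using the \(S_3\) character table above, with class sizes 1, 3, and 2:

```python
import numpy as np

# S3 character table: rows chi_0, chi_1, chi_2; columns are the classes
# {e}, {a,b,c}, {d,f}, which have sizes 1, 3, 2.
chi = np.array([[1,  1,  1],
                [1, -1,  1],
                [2,  0, -1]], dtype=complex)
class_size = np.array([1, 3, 2])
order = class_size.sum()                      # |S3| = 6

# <chi_j, chi_k> = (1/|G|) sum_l |C_l| chi_j(C_l) chi_k(C_l)^*
gram = (chi * class_size) @ chi.conj().T / order
print(np.allclose(gram, np.eye(3)))           # the irreducible characters are orthonormal
```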

Let's decompose the character of the regular representation into irreducible characters: \begin{align*} \chi_{\mathrm{reg}}(\mathcal{C}_k) = \sum_{j=1}^{\#\mathrm{irreps}} a_{j}\chi_{j}(\mathcal{C}_{k}) \end{align*} From our inner product we know that \begin{align*} a_j = \frac{1}{|G|} \sum_{k=1}^{\#\mathrm{classes}} |\mathcal{C}_k| \chi_{\mathrm{reg}}(\mathcal{C}_k) \chi_{j}(\mathcal{C}_k)^{*}. \end{align*} But by construction, \begin{align*} \chi_{\mathrm{reg}}(\mathcal{C}_k) &= \begin{cases} |G| &\quad e \in \mathcal{C}_k \\ 0 &\quad e \not \in \mathcal{C}_k \\ \end{cases} \end{align*} so \(a_j = \chi_j(\{e\})^{*}\), where \(\{e\}\) is the single element conjugacy class containing the identity. In addition, \(\chi_j(\{e\}) = \mathrm{dim}(\Gamma_j)\) is real, so \begin{align*} \chi_{\mathrm{reg}}(\mathcal{C}_k) = \sum_{j=1}^{\#\mathrm{irreps}}\mathrm{dim}(\Gamma_j)\chi_{j}(\mathcal{C}_{k}) \end{align*} and replacing \(\mathcal{C}_{k}\) by \(\{e\}\), we obtain the dimension formula \begin{align}\label{dimension_formula} |G| = \sum_{j=1}^{\#\mathrm{irreps}}\mathrm{dim}(\Gamma_j)^2 \end{align}
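Together with the fact that the number of irreducible representations equals the number of conjugacy classes, the dimension formula settles the claims left open earlier: \(C_3\) has exactly three one-dimensional irreps (\(1^2 + 1^2 + 1^2 = 3 = |C_3|\)); \(S_3\) has exactly the three irreps of dimensions 1, 1, and 2 found above, since \(1^2 + 1^2 + 2^2 = 6 = |S_3|\) and no other three squares sum to 6; and the Klein 4-group has exactly four one-dimensional irreps (\(1^2 + 1^2 + 1^2 + 1^2 = 4 = |V|\)).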

Let's use this information to generate the linear characters of the translation group of a crystal. Since the translation group is abelian, each element lies in its own conjugacy class. So the number of inequivalent irreducible representations of this group is equal to the number of elements. Each representation must be one-dimensional, or (\ref{dimension_formula}) cannot be satisfied. Now we look for one-dimensional representations: Because of the periodic boundary conditions, we know that for any representation \(\Pi\), \begin{align*} \Pi( T_{ \mathbf{a}_i }^{N_{i}}) = \Pi(T_{\mathbf{a}_i})^{N_i} = 1 \end{align*} so \(\Pi(T_{\mathbf{a}_i}) = \exp(2\pi i n_i/N_i)\), where \(n_i \in \mathbb{Z}_{N_i}\). We now distinguish between representations by subscripts, i.e., we define each representation on the generators via \begin{align*} \Pi_{(n_1, n_2, n_3)}(T_{\mathbf{a}_1}) &= \exp(2\pi i n_1/N_1),\\ \Pi_{(n_1, n_2, n_3)}(T_{\mathbf{a}_2}) &= \exp(2\pi i n_2/N_2),\\ \Pi_{(n_1, n_2, n_3)}(T_{\mathbf{a}_3}) &= \exp(2\pi i n_3/N_3) \end{align*} Then if \(\mathbf{R} = m_1\mathbf{a}_1 + m_2 \mathbf{a}_2 + m_3 \mathbf{a}_3\), we have \begin{align*} \Pi_{(n_1, n_2, n_3)}(T_{\mathbf{R}}) = \exp\left(2\pi i\left( \frac{n_1m_1}{N_1} + \frac{n_2m_2}{N_2}+ \frac{n_3m_3}{N_3} \right)\right) \end{align*} The three-index notation to label the irreducible representations is closer to how Bloch originally wrote them, but it has become more fashionable to label the irreps by the crystal momentum \(\mathbf{k} := \frac{n_1}{N_1} \mathbf{b}_1 + \frac{n_2}{N_2} \mathbf{b}_2 + \frac{n_3}{N_3} \mathbf{b}_3\), where the \(\mathbf{b}_i\) are the reciprocal lattice vectors defined by \(\mathbf{b}_i \cdot \mathbf{a}_j = 2\pi \delta_{ij}\). We can then write \begin{align*} \Pi_{\mathbf{k}}(T_{\mathbf{R}}) = \exp\left( i \mathbf{k}\cdot \mathbf{R}\right) \end{align*}
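A sketch (NumPy; the primitive vectors, box sizes, and integer labels are arbitrary illustrative choices of mine) confirming that, with \(\mathbf{b}_i \cdot \mathbf{a}_j = 2\pi\delta_{ij}\), the crystal momentum label and the three-index label agree:

```python
import numpy as np

# Arbitrary, non-orthogonal primitive vectors and box sizes (illustrative only).
A = np.array([[1.0, 0.0, 0.0],     # rows are a_1, a_2, a_3
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.5]])
N = np.array([4, 5, 6])

# Reciprocal vectors with b_i . a_j = 2*pi*delta_ij.
B = 2 * np.pi * np.linalg.inv(A).T  # rows are b_1, b_2, b_3
print(np.allclose(B @ A.T, 2 * np.pi * np.eye(3)))

n = np.array([1, 3, 2])             # labels the irrep (the crystal momentum)
m = np.array([2, 4, 5])             # labels the translation R
k = (n / N) @ B                     # k = sum_i (n_i / N_i) b_i
R = m @ A                           # R = sum_i m_i a_i

print(np.allclose(np.exp(1j * k @ R),
                  np.exp(2j * np.pi * np.sum(n * m / N))))
```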

How do representations act on quantum mechanical wavefunctions? Given a symmetry operator \(P_{g}\) and a function \(\psi \in L^2\), we define \begin{align*} P_{g}\psi := \psi\circ g^{-1} \end{align*} which is often written \(P_{g}\psi(\mathbf{x}) := \psi(g^{-1}\mathbf{x})\). (Note that this definition requires that the group elements have some sensible action on the domain of \(\psi\).) The inverse is there because we should think of the group acting on the function, not the coordinates. That this is in fact a group homomorphism is easily seen: \begin{align*} P_{g_1}P_{g_2}\psi = P_{g_1}(\psi\circ g_2^{-1}) = \psi\circ g_2^{-1}\circ g_1^{-1} = \psi\circ(g_1\circ g_2)^{-1} = P_{g_1\circ g_2}\psi \end{align*} The action of a group on an arbitrary function from \(L^2\) is not very interesting. Hence we restrict attention to a finite-dimensional subspace \(W \subset L^2\) that is invariant under the action of the group. By "invariant under the action of the group" we mean \begin{align*} \psi \in W \implies P_{g}\psi \in W \quad \forall g \in G \end{align*} How do we find these subspaces? Consider a Hamiltonian \(\mathcal{H}\) which commutes with all symmetry operators \(P_g\). If \(\lambda\) is an eigenvalue of \(\mathcal{H}\), define the \(\lambda\)-eigenspace \begin{align*} V_{\lambda} := \{ \psi \colon \mathcal{H}\psi = \lambda \psi\}. \end{align*} Since \begin{align*} \mathcal{H}P_{g} \psi = P_{g}\mathcal{H}\psi = \lambda P_{g}\psi \quad \forall \psi \in V_{\lambda} \end{align*} then \(V_{\lambda}\) is an invariant subspace for the action of the group. If \(V_{\lambda}\) is finite-dimensional, then we can choose a basis \(\{\psi_{i}\}_{i=1}^{n}\) for it and the effect of each symmetry operator is equivalent to the action of a matrix. So for some \(\Gamma(g)\in \mathrm{GL}(n; \mathbb{C})\), we write \begin{align*} P_{g}\psi_{i} = \sum_{j}\psi_{j}\Gamma(g)_{ji} \end{align*} That this definition preserves the group action is easily demonstrated: \begin{align*} P_{g_1}P_{g_2}\psi_{i} &= P_{g_1}\sum_{j}\psi_{j}\Gamma(g_2)_{ji} \\ &= \sum_{j}(P_{g_1}\psi_{j})\Gamma(g_2)_{ji} \\ &= \sum_{j,k}\psi_{k}\Gamma(g_1)_{kj}\Gamma(g_2)_{ji} \\ &= \sum_{k}\psi_{k}(\Gamma(g_1)\Gamma(g_2))_{ki} \\ &= P_{g_1\circ g_2}\psi_i \end{align*} where the last step only follows if \(\Gamma\) is a representation. Remark: Those of you who feel like we should've defined \begin{align*} P_{g}\psi_{i} = \sum_{j}\Gamma(g)_{ij}\psi_{j} \quad \mathrm{(bad\, definition!)} \end{align*} should verify that the group law \(P_{g_1}P_{g_2} = P_{g_1\circ g_2}\) is not satisfied.
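The role of the inverse is easy to see in a toy example. Here is a sketch (plain Python; permutations of three points stand in for the symmetry group and a dictionary stands in for the wavefunction, both choices of mine) showing that \(P_{g}\psi := \psi\circ g^{-1}\) satisfies the group law while the "bad definition" does not:

```python
# Permutations of {0, 1, 2} stand in for the symmetry group; a dictionary
# indexed by points stands in for the wavefunction.
g1 = (1, 2, 0)                      # 0 -> 1 -> 2 -> 0
g2 = (0, 2, 1)                      # swaps 1 and 2

def compose(p, q):                  # (p q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    return tuple(p.index(i) for i in range(3))

psi = {0: 1.0, 1: 2.0, 2: 3.0}      # an arbitrary "wavefunction"

def P(g, f):                        # (P_g f)(x) = f(g^{-1} x)
    ginv = inverse(g)
    return {x: f[ginv[x]] for x in range(3)}

def P_bad(g, f):                    # the "bad definition": (P_g f)(x) = f(g x)
    return {x: f[g[x]] for x in range(3)}

print(P(g1, P(g2, psi)) == P(compose(g1, g2), psi))              # True
print(P_bad(g1, P_bad(g2, psi)) == P_bad(compose(g1, g2), psi))  # False
```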

We are now ready to prove Bloch's theorem.

Proof: The operators \(T_{\mathbf{R}}\) form an abelian group isomorphic to \(\mathbb{Z}_{N_1}\times \mathbb{Z}_{N_2}\times \mathbb{Z}_{N_3}\). The irreducible representations of this group are all one dimensional and given by \(\exp\left(2\pi i\left(\frac{n_1m_1}{N_1} + \frac{n_2m_2}{N_2} +\frac{n_3m_3}{N_3} \right) \right)\) for \(n_i, m_i \in \mathbb{Z}_{N_i}\). Therefore, each \(\lambda\)-eigenspace of a Hamiltonian invariant under this group decomposes into these one-dimensional representations, and we can choose a basis of energy eigenstates satisfying \begin{align*} T_{\mathbf{R} }\psi = \exp\left(2\pi i\left(\frac{n_1m_1}{N_1} + \frac{n_2m_2}{N_2} +\frac{n_3m_3}{N_3} \right)\right) \psi \quad \mathrm{whenever\,} \mathcal{H}\psi = \lambda \psi. \end{align*}

So Bloch's theorem is really a consequence of the Hamiltonian being invariant under an abelian group. In particular, there is a "rotational Bloch's theorem" in two dimensions, since 2D rotations form an abelian group as well. Let's use graphene as an example. The graphene lattice is invariant under rotations by \(2\pi/6\), and as such solutions to Schrödinger's equation for graphene carry representations of \(C_{6}\). This is a cyclic abelian group of order 6, so the irreducible representations are given by \(\exp\left(2\pi i n/6 \right)\) for \(n \in \{0, 1, 2, 3, 4,5\}\). We obtain the "rotations \(\implies\) phase shifts" analog of Bloch's theorem in graphene: \begin{align*} R_{2\pi/6}\psi = \exp(i\pi n/3)\psi \end{align*}

References: