
Introduction to Hopf algebras and representations

Kalle Kytölä

Department of Mathematics and Statistics, University of Helsinki

Spring semester 2011


Contents

1 Preliminaries in linear algebra
  1.1 On diagonalization of matrices
  1.2 On tensor products of vector spaces

2 Representations of finite groups
  2.1 Reminders about groups and related concepts
  2.2 Representations: Definition and first examples
  2.3 Subrepresentations, irreducibility and complete reducibility
  2.4 Characters

3 Algebras, coalgebras, bialgebras and Hopf algebras
  3.1 Algebras
  3.2 Representations of algebras
  3.3 Another definition of algebra
  3.4 Coalgebras
  3.5 Bialgebras and Hopf algebras
  3.6 The dual of a coalgebra
  3.7 Convolution algebras
  3.8 Representative forms
  3.9 The restricted dual of algebras and Hopf algebras
  3.10 A semisimplicity criterion for Hopf algebras

4 Quantum groups
  4.1 A building block of quantum groups
  4.2 Braided bialgebras and braided Hopf algebras
  4.3 The Drinfeld double construction
  4.4 A Drinfeld double of $H_q$ and the quantum group $U_q(\mathfrak{sl}_2)$
  4.5 Representations of $D_{q^2}$ and $U_q(\mathfrak{sl}_2)$
  4.6 Solutions to YBE from infinite dimensional Drinfeld doubles
  4.7 On the quantum group $U_q(\mathfrak{sl}_2)$ at roots of unity


Acknowledgements. What I know of Hopf algebras I learned mostly from Rinat Kashaev in the spring of 2010, when he taught the course Algèbres de Hopf at the Université de Genève. His lectures were inspiring, and he kindly spent a lot of time discussing the subject with me, explaining things in ways I can't imagine anyone else doing. I was the teaching assistant of the course, and I thank the students for the good time we had studying Hopf algebras together in the exercise sessions.

I wish to thank especially Mucyo Karemera and Thierry Lévy for the discussions which were instrumental for me to get some understanding of the topics.

The present notes were prepared for the lectures of my course Introduction to Hopf algebras and representations, given at the University of Helsinki in the spring of 2011. A significant part of the material follows closely the notes I took from Rinat's lectures. I thank the participants of the course for making it a very enjoyable experience. Antti Kemppainen was an extraordinary teaching assistant — his solutions to the exercises were invariably better than the solutions I had had in mind, and he gave plenty of valuable feedback on how to improve the course. Ali Zahabi and Eveliina Peltola also discussed the material extensively with me, and their comments led to a large number of corrections, clarifications and other improvements to the present notes.

With several people from the mathematical physics groups of the university, we continued to study related topics in a seminar series on quantum invariants during the summer of 2011. I thank in particular Juha Loikkanen, Antti Harju and Jouko Mickelsson for their contributions to the seminar series.


Chapter 1

Preliminaries in linear algebra

During most parts of this course, vector spaces are over the field $\mathbb{C}$ of complex numbers. Often any other algebraically closed field of characteristic zero could be used instead. In some parts these assumptions are not used, and $\mathbb{K}$ denotes an arbitrary field. We usually omit explicit mention of the ground field, which should be clear from the context.

Definition 1.1. Let $V, W$ be $\mathbb{K}$-vector spaces. The space of linear maps (i.e. the space of homomorphisms of $\mathbb{K}$-vector spaces) from $V$ to $W$ is denoted by
$$\mathrm{Hom}(V, W) = \{\, T : V \to W \mid T \text{ is a } \mathbb{K}\text{-linear map} \,\}.$$
The vector space structure on $\mathrm{Hom}(V, W)$ is given by pointwise addition and scalar multiplication.

The (algebraic) dual of a vector space $V$ is the space of linear maps from $V$ to the ground field,
$$V^* = \mathrm{Hom}(V, \mathbb{K}).$$
We denote the duality pairing by brackets $\langle \cdot, \cdot \rangle$. The value of a dual vector $\varphi \in V^*$ on a vector $v \in V$ is thus usually denoted by $\langle \varphi, v \rangle$.

Definition 1.2. For $T : V \to W$ a linear map, the transpose is the linear map $T^* : W^* \to V^*$ defined by
$$\langle T^*(\varphi), v \rangle = \langle \varphi, T(v) \rangle \qquad \text{for all } \varphi \in W^*,\ v \in V.$$

1.1 On diagonalization of matrices

In this section, vector spaces are over the field $\mathbb{C}$ of complex numbers.

Recall first the following definitions.

Definition 1.3. The characteristic polynomial of a matrix $A \in \mathbb{C}^{n \times n}$ is
$$p_A(x) = \det(xI - A).$$
The minimal polynomial of a matrix $A$ is the polynomial $q_A$ of smallest positive degree such that $q_A(A) = 0$, with the coefficient of the highest degree term equal to 1.

The Cayley-Hamilton theorem states that the characteristic polynomial evaluated at the matrix itself is the zero matrix, that is, $p_A(A) = 0$ for any square matrix $A$. An equivalent statement is that the polynomial $q_A(x)$ divides $p_A(x)$. These facts follow explicitly from the Jordan normal form discussed later in this section.
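For example (a standard illustration), for the block diagonal matrix
$$A = \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{pmatrix} \in \mathbb{C}^{3 \times 3}$$
one has $p_A(x) = (x - \lambda)^3$ but $q_A(x) = (x - \lambda)^2$, since $(A - \lambda I)^2 = 0$ while $A - \lambda I \neq 0$: the minimal polynomial can be a proper divisor of the characteristic polynomial.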


Motivation and definition of generalized eigenvectors

Given a square matrix $A$, it is often convenient to diagonalize $A$. This means finding an invertible matrix $P$ ("a change of basis") such that the conjugated matrix $P A P^{-1}$ is diagonal. If, instead of matrices, we think of a linear operator $A$ from a vector space $V$ to itself, the equivalent question is finding a basis for $V$ consisting of eigenvectors of $A$.

Recall from basic linear algebra that (for example) any real symmetric matrix can be diagonalized.

Unfortunately, this is not the case with all matrices.

Example 1.4. Let $\lambda \in \mathbb{C}$ and
$$A = \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix} \in \mathbb{C}^{3 \times 3}.$$
The characteristic polynomial of $A$ is
$$p_A(x) = \det(xI - A) = (x - \lambda)^3,$$
so we know that $A$ has no other eigenvalues but $\lambda$. It follows from $\det(A - \lambda I) = 0$ that the eigenspace pertaining to the eigenvalue $\lambda$ is nontrivial, $\dim(\mathrm{Ker}(A - \lambda I)) > 0$. Note that
$$A - \lambda I = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},$$
so that the image of $A - \lambda I$ is two dimensional, $\dim(\mathrm{Im}(A - \lambda I)) = 2$. By the rank-nullity theorem,
$$\dim(\mathrm{Im}(A - \lambda I)) + \dim(\mathrm{Ker}(A - \lambda I)) = \dim(\mathbb{C}^3) = 3,$$
so the eigenspace pertaining to $\lambda$ must be one-dimensional. Thus the maximal number of linearly independent eigenvectors of $A$ we can have is one — in particular, there does not exist a basis of $\mathbb{C}^3$ consisting of eigenvectors of $A$.

We still take a look at the action of $A$ in some basis. Let
$$w_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \qquad w_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \qquad w_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$
Then the following "string" indicates how $A - \lambda I$ maps these vectors:
$$w_3 \xmapsto{\;A - \lambda\;} w_2 \xmapsto{\;A - \lambda\;} w_1 \xmapsto{\;A - \lambda\;} 0.$$
In particular we see that $(A - \lambda I)^3 = 0$.

The “string” in the above example illustrates and motivates the following definition.

Definition 1.5. Let $V$ be a vector space and $A : V \to V$ a linear map. A vector $v \in V$ is said to be a generalized eigenvector of eigenvalue $\lambda$ if for some positive integer $p$ we have $(A - \lambda I)^p v = 0$. The set of these generalized eigenvectors is called the generalized eigenspace of $A$ pertaining to the eigenvalue $\lambda$.

With $p = 1$ the above would correspond to the usual eigenvectors.


The Jordan canonical form

Although not every matrix has a basis of eigenvectors, we will see that every complex square matrix has a basis of generalized eigenvectors. More precisely, if $V$ is a finite dimensional complex vector space and $A : V \to V$ is a linear map, then there exist eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$ of $A$ (not necessarily distinct) and a basis $\{ w^{(j)}_m : 1 \le j \le k,\ 1 \le m \le n_j \}$ of $V$ which consists of "strings" as follows:
$$\begin{aligned}
& w^{(1)}_{n_1} \xmapsto{\;A - \lambda_1\;} w^{(1)}_{n_1 - 1} \xmapsto{\;A - \lambda_1\;} \cdots \xmapsto{\;A - \lambda_1\;} w^{(1)}_2 \xmapsto{\;A - \lambda_1\;} w^{(1)}_1 \xmapsto{\;A - \lambda_1\;} 0 \\
& w^{(2)}_{n_2} \xmapsto{\;A - \lambda_2\;} w^{(2)}_{n_2 - 1} \xmapsto{\;A - \lambda_2\;} \cdots \xmapsto{\;A - \lambda_2\;} w^{(2)}_2 \xmapsto{\;A - \lambda_2\;} w^{(2)}_1 \xmapsto{\;A - \lambda_2\;} 0 \\
& \qquad \vdots \\
& w^{(k)}_{n_k} \xmapsto{\;A - \lambda_k\;} w^{(k)}_{n_k - 1} \xmapsto{\;A - \lambda_k\;} \cdots \xmapsto{\;A - \lambda_k\;} w^{(k)}_2 \xmapsto{\;A - \lambda_k\;} w^{(k)}_1 \xmapsto{\;A - \lambda_k\;} 0.
\end{aligned} \tag{1.1}$$

Note that in this basis the matrix of $A$ takes the "block diagonal form"
$$A = \begin{pmatrix}
J_{\lambda_1; n_1} & 0 & 0 & \cdots & 0 \\
0 & J_{\lambda_2; n_2} & 0 & \cdots & 0 \\
0 & 0 & J_{\lambda_3; n_3} & & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & J_{\lambda_k; n_k}
\end{pmatrix}, \tag{1.2}$$

where the blocks correspond to the subspaces spanned by $w^{(j)}_1, w^{(j)}_2, \ldots, w^{(j)}_{n_j}$ and the matrices of the blocks are the following "Jordan blocks":
$$J_{\lambda_j; n_j} = \begin{pmatrix}
\lambda_j & 1 & 0 & \cdots & 0 & 0 \\
0 & \lambda_j & 1 & \cdots & 0 & 0 \\
0 & 0 & \lambda_j & & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & \lambda_j & 1 \\
0 & 0 & 0 & \cdots & 0 & \lambda_j
\end{pmatrix} \in \mathbb{C}^{n_j \times n_j}.$$

Definition 1.6. A matrix of the form (1.2) is said to be in Jordan normal form (or Jordan canonical form).

The characteristic polynomial of a matrix $A$ in Jordan canonical form is
$$p_A(x) = \det(xI - A) = \prod_{j=1}^{k} (x - \lambda_j)^{n_j}.$$

Note also that if we write a block $J_{\lambda; n} = \lambda I + N$ as a sum of the diagonal part $\lambda I$ and the upper triangular part $N$, then the latter is nilpotent: $N^n = 0$. In particular the assertion $p_A(A) = 0$ of the Cayley-Hamilton theorem can be seen immediately for matrices which are in Jordan canonical form.

Definition 1.7. Two $n \times n$ square matrices $A$ and $B$ are said to be similar if $A = P B P^{-1}$ for some invertible matrix $P$.

It is in this sense that any complex square matrix can be put into Jordan canonical form: the matrix $P$ implements a change of basis to a basis consisting of strings of the above type. Below is a short and concrete proof.

Theorem 1.8
Given any complex $n \times n$ matrix $A$, there exists an invertible matrix $P$ such that the conjugated matrix $P A P^{-1}$ is in Jordan normal form.


Proof. In view of the above discussion it is clear that the statement is equivalent to the following: if $V$ is a finite dimensional complex vector space and $A : V \to V$ a linear map, then there exists a basis of $V$ consisting of strings as in (1.1).

We prove the statement by induction on $n = \dim(V)$. The case $n = 1$ is clear. As an induction hypothesis, assume that the statement is true for all linear maps of vector spaces of dimension less than $n$.

Take any eigenvalue $\lambda$ of $A$ (any root of the characteristic polynomial). Note that
$$\dim(\mathrm{Ker}(A - \lambda I)) > 0,$$
and since $n = \dim(\mathrm{Ker}(A - \lambda I)) + \dim(\mathrm{Im}(A - \lambda I))$, the dimension of the image of $A - \lambda I$ is strictly less than $n$. Denote
$$R = \mathrm{Im}(A - \lambda I) \qquad \text{and} \qquad r = \dim(R) < n.$$

Note that $R$ is an invariant subspace for $A$, that is $A R \subset R$ (indeed, $A (A - \lambda I) v = (A - \lambda I) A v$). We can apply the induction hypothesis to the restriction of $A$ to $R$, to find a basis
$$\{ w^{(j)}_m : 1 \le j \le k,\ 1 \le m \le n_j \}$$
of $R$ in which the action of $A$ is described by strings as in (1.1).

Let $q = \dim(R \cap \mathrm{Ker}(A - \lambda I))$. This means that in $R$ there are $q$ linearly independent eigenvectors of $A$ with eigenvalue $\lambda$. The vectors at the right ends of the strings span the eigenspaces of $A$ in $R$, so we assume without loss of generality that the last $q$ strings correspond to the eigenvalue $\lambda$ and the others to different eigenvalues: $\lambda_1, \lambda_2, \ldots, \lambda_{k-q} \neq \lambda$ and $\lambda_{k-q+1} = \lambda_{k-q+2} = \cdots = \lambda_k = \lambda$. For all $j$ such that $k - q < j \le k$ the vector $w^{(j)}_{n_j}$ is in $R$, so we can choose
$$y^{(j)} \in V \qquad \text{such that} \qquad (A - \lambda I)\, y^{(j)} = w^{(j)}_{n_j}.$$
The vectors $y^{(j)}$ extend the last $q$ strings from the left.

Find vectors $z^{(1)}, z^{(2)}, \ldots, z^{(n-r-q)}$ which complete the linearly independent collection
$$w^{(k-q+1)}_1, \ldots, w^{(k-1)}_1, w^{(k)}_1$$
to a basis of $\mathrm{Ker}(A - \lambda I)$. We have now found $n$ vectors in $V$, which form strings as follows:
$$\begin{aligned}
& z^{(1)} \xmapsto{\;A - \lambda\;} 0 \\
& \qquad \vdots \\
& z^{(n-r-q)} \xmapsto{\;A - \lambda\;} 0 \\
& w^{(1)}_{n_1} \xmapsto{\;A - \lambda_1\;} \cdots \xmapsto{\;A - \lambda_1\;} w^{(1)}_1 \xmapsto{\;A - \lambda_1\;} 0 \\
& \qquad \vdots \\
& w^{(k-q)}_{n_{k-q}} \xmapsto{\;A - \lambda_{k-q}\;} \cdots \xmapsto{\;A - \lambda_{k-q}\;} w^{(k-q)}_1 \xmapsto{\;A - \lambda_{k-q}\;} 0 \\
& y^{(k-q+1)} \xmapsto{\;A - \lambda\;} w^{(k-q+1)}_{n_{k-q+1}} \xmapsto{\;A - \lambda\;} \cdots \xmapsto{\;A - \lambda\;} w^{(k-q+1)}_1 \xmapsto{\;A - \lambda\;} 0 \\
& \qquad \vdots \\
& y^{(k)} \xmapsto{\;A - \lambda\;} w^{(k)}_{n_k} \xmapsto{\;A - \lambda\;} \cdots \xmapsto{\;A - \lambda\;} w^{(k)}_1 \xmapsto{\;A - \lambda\;} 0.
\end{aligned}$$

It suffices to show that these vectors are linearly independent. Suppose that a linear combination of them vanishes:
$$\sum_{j = k-q+1}^{k} \alpha_j\, y^{(j)} \;+\; \sum_{j, m} \beta_{j,m}\, w^{(j)}_m \;+\; \sum_{l=1}^{n-r-q} \gamma_l\, z^{(l)} \;=\; 0.$$


From the string diagram we see that the image of this linear combination under $A - \lambda I$ is a linear combination of the vectors $w^{(j)}_m$, which are linearly independent, and since the coefficient of $w^{(j)}_{n_j}$ in it is $\alpha_j$, we get $\alpha_j = 0$ for all $j$. Now recalling that $\{ w^{(j)}_m \}$ is a basis of $R$, that $\{ w^{(j)}_1 : k - q < j \le k \} \cup \{ z^{(l)} \}$ is a basis of $\mathrm{Ker}(A - \lambda I)$, and that $\{ w^{(j)}_1 : k - q < j \le k \}$ is a basis of $R \cap \mathrm{Ker}(A - \lambda I)$, we see that all the coefficients in the linear combination must vanish. This finishes the proof. $\square$
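As a concrete sanity check of Theorem 1.8, Jordan normal forms can also be computed with a computer algebra system. A minimal sketch using the SymPy library (assuming SymPy is available; its jordan_form method returns matrices $P$ and $J$ with $A = P J P^{-1}$):

    import sympy as sp

    # A 2x2 matrix with the single eigenvalue 2 but only a
    # one-dimensional eigenspace, hence not diagonalizable.
    A = sp.Matrix([[3, 1],
                   [-1, 1]])

    P, J = A.jordan_form()   # A == P * J * P**(-1)
    print(J)                 # Matrix([[2, 1], [0, 2]]): a single Jordan block
    assert sp.expand(P * J * P.inv()) == A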

Exercise 1 (Around the Jordan normal form)

(a) Find two matrices $A, B \in \mathbb{C}^{n \times n}$ which have the same minimal polynomial and the same characteristic polynomial, but which are not similar.

(b) Show that the Jordan normal form of a matrix $A \in \mathbb{C}^{n \times n}$ is unique up to permutation of the Jordan blocks. In other words, if $C_1 = P_1 A P_1^{-1}$ and $C_2 = P_2 A P_2^{-1}$ are both in Jordan normal form, $C_1$ with blocks $J_{\lambda_1; n_1}, \ldots, J_{\lambda_k; n_k}$ and $C_2$ with blocks $J_{\lambda'_1; n'_1}, \ldots, J_{\lambda'_l; n'_l}$, then $k = l$ and there is a permutation $\sigma \in S_k$ such that $\lambda_j = \lambda'_{\sigma(j)}$ and $n_j = n'_{\sigma(j)}$ for all $j = 1, 2, \ldots, k$.

(c) Show that any two matrices with the same Jordan normal form up to permutation of blocks are similar.

Let us make some preliminary remarks on the interpretation of the Jordan decomposition from the point of view of representations. We will return to this when we discuss representations of algebras, but a matrix determines a representation of the quotient of the polynomial algebra by the ideal generated by the minimal polynomial of the matrix. Diagonalizable matrices can be thought of as a simple example of completely reducible representations: the vector space $V$ is a direct sum of eigenspaces of the matrix. In particular, if all the roots of the minimal polynomial have multiplicity one, then all representations are completely reducible. Non-diagonalizable matrices are a simple example of a failure of complete reducibility. The Jordan blocks $J_{\lambda_j; n_j}$ correspond to subrepresentations (invariant subspaces) which are indecomposable, but not irreducible if $n_j > 1$.

1.2 On tensor products of vector spaces

A crucial concept in the course is that of a tensor product of vector spaces. Here, vector spaces can be over any field $\mathbb{K}$, but it should be noted that the concept of tensor product depends on the field. In this course we only need tensor products of complex vector spaces.

Definition 1.9. Let $V_1, V_2, W$ be vector spaces. A map $\beta : V_1 \times V_2 \to W$ is called bilinear if for all $v_1 \in V_1$ the map $v_2 \mapsto \beta(v_1, v_2)$ is linear $V_2 \to W$, and for all $v_2 \in V_2$ the map $v_1 \mapsto \beta(v_1, v_2)$ is linear $V_1 \to W$.

Multilinear maps $V_1 \times V_2 \times \cdots \times V_n \to W$ are defined similarly.

The tensor product is a space which allows us to replace some bilinear (more generally multilinear) maps by linear maps.

Definition 1.10. Let $V_1$ and $V_2$ be two vector spaces. A tensor product of $V_1$ and $V_2$ is a vector space $U$ together with a bilinear map $\phi : V_1 \times V_2 \to U$ such that the following universal property holds: for any bilinear map $\beta : V_1 \times V_2 \to W$, there exists a unique linear map $\bar\beta : U \to W$ such that the diagram
$$\begin{array}{ccc} V_1 \times V_2 & \xrightarrow{\;\beta\;} & W \\ {\scriptstyle \phi} \downarrow & \nearrow {\scriptstyle \bar\beta} & \\ U & & \end{array}$$
commutes, that is $\beta = \bar\beta \circ \phi$.

Proving the uniqueness (up to canonical isomorphism) of an object defined by a universal property is a standard exercise in abstract nonsense. Indeed, if we suppose that $U'$ with a bilinear map $\phi' : V_1 \times V_2 \to U'$ is another tensor product, then the universal property of $U$ gives a linear map $\bar\phi' : U \to U'$ such that $\phi' = \bar\phi' \circ \phi$. Likewise, the universal property of $U'$ gives a linear map $\bar\phi : U' \to U$ such that $\phi = \bar\phi \circ \phi'$. Combining these we get
$$\mathrm{id}_U \circ \phi = \phi = \bar\phi \circ \phi' = \bar\phi \circ \bar\phi' \circ \phi.$$
But here are two ways of factorizing the map $\phi$ itself, so by the uniqueness requirement in the universal property we must have the equality $\mathrm{id}_U = \bar\phi \circ \bar\phi'$. By a similar argument we get $\mathrm{id}_{U'} = \bar\phi' \circ \bar\phi$. We conclude that $\bar\phi$ and $\bar\phi'$ are isomorphisms (and inverses of each other).

Now that we know that the tensor product is unique (up to canonical isomorphism), we use the following notations:
$$U = V_1 \otimes V_2 \qquad \text{and} \qquad V_1 \times V_2 \ni (v_1, v_2) \overset{\phi}{\mapsto} v_1 \otimes v_2 \in V_1 \otimes V_2.$$

An explicit construction which shows that tensor products exist is done in Exercise 2. The same exercise establishes two fundamental properties of the tensor product:

• If $(v^{(1)}_i)_{i \in I}$ is a linearly independent collection in $V_1$ and $(v^{(2)}_j)_{j \in J}$ is a linearly independent collection in $V_2$, then the collection $(v^{(1)}_i \otimes v^{(2)}_j)_{(i,j) \in I \times J}$ is linearly independent in $V_1 \otimes V_2$.

• If the collection $(v^{(1)}_i)_{i \in I}$ spans $V_1$ and the collection $(v^{(2)}_j)_{j \in J}$ spans $V_2$, then the collection $(v^{(1)}_i \otimes v^{(2)}_j)_{(i,j) \in I \times J}$ spans the tensor product $V_1 \otimes V_2$.

It follows that if $(v^{(1)}_i)_{i \in I}$ and $(v^{(2)}_j)_{j \in J}$ are bases of $V_1$ and $V_2$, respectively, then $(v^{(1)}_i \otimes v^{(2)}_j)_{(i,j) \in I \times J}$ is a basis of the tensor product $V_1 \otimes V_2$. In particular, if $V_1$ and $V_2$ are finite dimensional, then
$$\dim(V_1 \otimes V_2) = \dim(V_1)\, \dim(V_2).$$

Exercise 2 (A construction of the tensor product)
We saw that the tensor product of vector spaces, defined by the universal property, is unique (up to isomorphism) if it exists. The purpose of this exercise is to show existence by an explicit construction, under the simplifying assumption that $V$ and $W$ are function spaces (it is easy to see that this can be assumed without loss of generality).

For any set $X$, denote by $\mathbb{K}^X$ the vector space of $\mathbb{K}$-valued functions on $X$, with addition and scalar multiplication defined pointwise. Assume that $V \subset \mathbb{K}^X$ and $W \subset \mathbb{K}^Y$ for some sets $X$ and $Y$. For $f \in \mathbb{K}^X$ and $g \in \mathbb{K}^Y$, define $f \otimes g \in \mathbb{K}^{X \times Y}$ by
$$(f \otimes g)(x, y) = f(x)\, g(y).$$
Also set
$$V \otimes W = \mathrm{span}\, \{\, f \otimes g \mid f \in V,\ g \in W \,\},$$
so that the map $(f, g) \mapsto f \otimes g$ is a bilinear map $V \times W \to V \otimes W$.

(a) Show that if $(f_i)_{i \in I}$ is a linearly independent collection in $V$ and $(g_j)_{j \in J}$ is a linearly independent collection in $W$, then the collection $(f_i \otimes g_j)_{(i,j) \in I \times J}$ is linearly independent in $V \otimes W$.

(b) Show that if $(f_i)_{i \in I}$ is a collection that spans $V$ and $(g_j)_{j \in J}$ is a collection that spans $W$, then the collection $(f_i \otimes g_j)_{(i,j) \in I \times J}$ spans $V \otimes W$.

(c) Conclude that if $(f_i)_{i \in I}$ is a basis of $V$ and $(g_j)_{j \in J}$ is a basis of $W$, then $(f_i \otimes g_j)_{(i,j) \in I \times J}$ is a basis of $V \otimes W$. Conclude furthermore that $V \otimes W$, equipped with the bilinear map $\phi(f, g) = f \otimes g$ from $V \times W$ to $V \otimes W$, satisfies the universal property defining the tensor product.

A tensor of the form $v^{(1)} \otimes v^{(2)}$ is called a simple tensor. By part (b) of the above exercise, any $t \in V_1 \otimes V_2$ can be written as a linear combination of simple tensors,
$$t = \sum_{\alpha=1}^{n} v^{(1)}_\alpha \otimes v^{(2)}_\alpha,$$
for some $v^{(1)}_\alpha \in V_1$ and $v^{(2)}_\alpha \in V_2$, $\alpha = 1, 2, \ldots, n$. Note, however, that such an expression is by no means unique! The smallest $n$ for which it is possible to write $t$ as a sum of simple tensors is called the rank of the tensor, denoted by $n = \mathrm{rank}(t)$. An obvious upper bound is $\mathrm{rank}(t) \le \dim(V_1)\, \dim(V_2)$. One can do much better in general, as follows from the following useful observation.

Lemma 1.11
Suppose that
$$t = \sum_{\alpha=1}^{n} v^{(1)}_\alpha \otimes v^{(2)}_\alpha,$$
where $n = \mathrm{rank}(t)$. Then both $(v^{(1)}_\alpha)_{\alpha=1}^{n}$ and $(v^{(2)}_\alpha)_{\alpha=1}^{n}$ are linearly independent collections.

Proof. Suppose, by contraposition, that there is a linear relation
$$\sum_{\alpha=1}^{n} c_\alpha v^{(1)}_\alpha = 0,$$
where not all the coefficients are zero. We may assume that $c_n = 1$. Thus $v^{(1)}_n = -\sum_{\alpha=1}^{n-1} c_\alpha v^{(1)}_\alpha$, and using bilinearity we simplify $t$ as
$$t = \sum_{\alpha=1}^{n-1} v^{(1)}_\alpha \otimes v^{(2)}_\alpha + v^{(1)}_n \otimes v^{(2)}_n = \sum_{\alpha=1}^{n-1} v^{(1)}_\alpha \otimes v^{(2)}_\alpha - \sum_{\alpha=1}^{n-1} c_\alpha v^{(1)}_\alpha \otimes v^{(2)}_n = \sum_{\alpha=1}^{n-1} v^{(1)}_\alpha \otimes \big( v^{(2)}_\alpha - c_\alpha v^{(2)}_n \big),$$
which contradicts the minimality of $n = \mathrm{rank}(t)$. The linear independence of $(v^{(2)}_\alpha)$ is proven similarly. $\square$

As a consequence we get a better upper bound:
$$\mathrm{rank}(t) \le \min\{ \dim(V_1),\ \dim(V_2) \}.$$
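As a simple illustration (a standard example), consider in $\mathbb{C}^2 \otimes \mathbb{C}^2$ the tensor
$$t = e_1 \otimes e_1 + e_2 \otimes e_2.$$
Writing a general tensor as $\sum_{i,j} t_{ij}\, e_i \otimes e_j$, the coefficient matrix $(t_{ij})$ of $t$ is the identity matrix, while a simple tensor $v \otimes w$ has coefficient matrix $(v_i w_j)$, whose determinant vanishes; hence $t$ is not simple and $\mathrm{rank}(t) = 2$, which realizes the bound above.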

Taking tensor products with the one-dimensional vector space $\mathbb{K}$ does basically nothing: for any vector space $V$ we can canonically identify
$$V \otimes \mathbb{K} \cong V \qquad \text{and} \qquad \mathbb{K} \otimes V \cong V,$$
$$v \otimes \lambda \mapsto \lambda v, \qquad \lambda \otimes v \mapsto \lambda v.$$
By the obvious correspondence of bilinear maps $V_1 \times V_2 \to W$ and $V_2 \times V_1 \to W$, one also always gets a canonical identification
$$V_1 \otimes V_2 \cong V_2 \otimes V_1.$$
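To see how such identifications follow from the universal property, consider for example $\mathbb{K} \otimes V \cong V$: the scalar multiplication $(\lambda, v) \mapsto \lambda v$ is a bilinear map $\mathbb{K} \times V \to V$, so it induces a unique linear map $\bar\beta : \mathbb{K} \otimes V \to V$ with $\bar\beta(\lambda \otimes v) = \lambda v$. The linear map $v \mapsto 1 \otimes v$ is inverse to $\bar\beta$, since by bilinearity of $(\lambda, v) \mapsto \lambda \otimes v$ we have $\lambda \otimes v = 1 \otimes (\lambda v)$.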


Almost equally obvious correspondences give the canonical identifications
$$(V_1 \otimes V_2) \otimes V_3 \cong V_1 \otimes (V_2 \otimes V_3)$$
etc., which allow us to omit parentheses in multiple tensor products.

A slightly more interesting property than the above obvious identifications is the existence of an embedding
$$V_2 \otimes V_1^* \hookrightarrow \mathrm{Hom}(V_1, V_2),$$
which is obtained by associating to $v_2 \otimes \varphi$ the linear map
$$v_1 \mapsto \langle \varphi, v_1 \rangle\, v_2$$
(and extending linearly from the simple tensors to all tensors). The following exercise verifies, among other things, that this is indeed an embedding and that in the finite dimensional case the embedding becomes an isomorphism.

Exercise 3 (The relation between $\mathrm{Hom}(V, W)$ and $W \otimes V^*$)

(a) For $w \in W$ and $\varphi \in V^*$, we associate to $w \otimes \varphi$ the following map $V \to W$:
$$v \mapsto \langle \varphi, v \rangle\, w.$$
Show that the linear extension of this defines an injective linear map
$$W \otimes V^* \longrightarrow \mathrm{Hom}(V, W).$$

(b) Show that if both $V$ and $W$ are finite dimensional, then the injective map in (a) is an isomorphism
$$W \otimes V^* \cong \mathrm{Hom}(V, W).$$
Show that under this identification, the rank of a tensor $t \in W \otimes V^*$ is the same as the rank of a matrix of the corresponding linear map $T \in \mathrm{Hom}(V, W)$.
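In coordinates the map of part (a) is simply the outer product: if $V = \mathbb{K}^n$ and $W = \mathbb{K}^m$, then $w \otimes \varphi$ corresponds to the $m \times n$ matrix with entries $w_i \varphi_j$, which has rank one (when $w \neq 0$ and $\varphi \neq 0$), and a sum of $r$ simple tensors corresponds to a sum of $r$ rank one matrices.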

Definition 1.12. When
$$f : V_1 \to W_1 \qquad \text{and} \qquad g : V_2 \to W_2$$
are linear maps, then there is a linear map
$$f \otimes g : V_1 \otimes V_2 \to W_1 \otimes W_2$$
defined by the condition
$$(f \otimes g)(v_1 \otimes v_2) = f(v_1) \otimes g(v_2) \qquad \text{for all } v_1 \in V_1,\ v_2 \in V_2.$$
The above map clearly depends bilinearly on $(f, g)$, so we get a canonical map
$$\mathrm{Hom}(V_1, W_1) \otimes \mathrm{Hom}(V_2, W_2) \hookrightarrow \mathrm{Hom}(V_1 \otimes V_2, W_1 \otimes W_2),$$
which is easily seen to be injective. When all the vector spaces $V_1, W_1, V_2, W_2$ are finite dimensional, then the dimensions of both sides are given by
$$\dim(V_1)\, \dim(V_2)\, \dim(W_1)\, \dim(W_2),$$
so in this case the canonical map is an isomorphism
$$\mathrm{Hom}(V_1, W_1) \otimes \mathrm{Hom}(V_2, W_2) \cong \mathrm{Hom}(V_1 \otimes V_2, W_1 \otimes W_2).$$
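In terms of matrices this map is the Kronecker product: if $F$ is the matrix of $f$ and $G$ the matrix of $g$ in chosen bases, then in the corresponding bases of the tensor products, ordered lexicographically, the matrix of $f \otimes g$ has entries
$$(F \otimes G)_{(i,k),(j,l)} = F_{ij}\, G_{kl},$$
that is, $F \otimes G$ is the block matrix whose $(i, j)$ block is $F_{ij}\, G$.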

As a particular case of the above, interpreting the dual of a vector space $V$ as $V^* = \mathrm{Hom}(V, \mathbb{K})$ and using $\mathbb{K} \otimes \mathbb{K} \cong \mathbb{K}$, we see that the tensor product of duals sits inside the dual of the tensor product.

Explicitly, if $V_1$ and $V_2$ are vector spaces and $\varphi_1 \in V_1^*$, $\varphi_2 \in V_2^*$, then
$$v_1 \otimes v_2 \mapsto \langle \varphi_1, v_1 \rangle\, \langle \varphi_2, v_2 \rangle$$
defines an element of the dual of $V_1 \otimes V_2$. To summarize, we have an embedding
$$V_1^* \otimes V_2^* \hookrightarrow (V_1 \otimes V_2)^*.$$
If $V_1$ and $V_2$ are finite dimensional this becomes an isomorphism
$$V_1^* \otimes V_2^* \cong (V_1 \otimes V_2)^*.$$
As a remark, later in the course we will notice an asymmetry in the dualities between algebras and coalgebras, Theorems 3.34 and 3.45. This asymmetry is essentially due to the fact that in the infinite dimensional case one only has an inclusion $V^* \otimes V^* \subset (V \otimes V)^*$ but not an equality.

The transpose behaves well under the tensor product of linear maps.

Lemma 1.13
When $f : V_1 \to W_1$ and $g : V_2 \to W_2$ are linear maps, then the map $f \otimes g : V_1 \otimes V_2 \to W_1 \otimes W_2$ has a transpose $(f \otimes g)^*$ which makes the following diagram commute:
$$\begin{array}{ccc} (W_1 \otimes W_2)^* & \xrightarrow{\;(f \otimes g)^*\;} & (V_1 \otimes V_2)^* \\ \uparrow & & \uparrow \\ W_1^* \otimes W_2^* & \xrightarrow{\;f^* \otimes g^*\;} & V_1^* \otimes V_2^*, \end{array}$$
where the vertical arrows are the embeddings described above.

Proof. Indeed, for $\varphi \in W_1^*$, $\psi \in W_2^*$ and any simple tensor $v_1 \otimes v_2 \in V_1 \otimes V_2$ we compute
$$\begin{aligned}
\langle (f^* \otimes g^*)(\varphi \otimes \psi), v_1 \otimes v_2 \rangle
&= \langle f^*(\varphi) \otimes g^*(\psi), v_1 \otimes v_2 \rangle \\
&= \langle f^*(\varphi), v_1 \rangle\, \langle g^*(\psi), v_2 \rangle \\
&= \langle \varphi, f(v_1) \rangle\, \langle \psi, g(v_2) \rangle \\
&= \langle \varphi \otimes \psi, f(v_1) \otimes g(v_2) \rangle \\
&= \langle \varphi \otimes \psi, (f \otimes g)(v_1 \otimes v_2) \rangle \\
&= \langle (f \otimes g)^*(\varphi \otimes \psi), v_1 \otimes v_2 \rangle. \qquad \square
\end{aligned}$$


Chapter 2

Representations of finite groups

We begin by taking a brief look at the classical topic of representations of finite groups. Here many things are easier than later in the course when we discuss representations of “quantum groups”. The most important result is that all finite dimensional representations are direct sums of irreducible representations, of which there are only finitely many.

2.1 Reminders about groups and related concepts

Definition 2.1. A group is a pair $(G, *)$, where $G$ is a set and $*$ is a binary operation on $G$,
$$* : G \times G \to G, \qquad (g, h) \mapsto g * h,$$
such that the following hold:

"Associativity": $g_1 * (g_2 * g_3) = (g_1 * g_2) * g_3$ for all $g_1, g_2, g_3 \in G$.

"Neutral element": there exists an element $e \in G$ such that for all $g \in G$ we have $g * e = g = e * g$.

"Inverse": for any $g \in G$, there exists an element $g^{-1} \in G$ such that $g * g^{-1} = e = g^{-1} * g$.

A group $(G, *)$ is said to be finite if its order $|G|$ (that is, the cardinality of $G$) is finite.

We usually omit the notation for the binary operation $*$ and write simply $g h := g * h$. For the binary operation in abelian (i.e. commutative) groups we often, though not always, use the additive symbol $+$.

Example 2.2. The following are abelian groups:

- a vector space $V$ with the binary operation $+$ of vector addition
- the set $\mathbb{K} \setminus \{0\}$ of nonzero numbers in a field, with the binary operation of multiplication
- the infinite cyclic group $\mathbb{Z}$ of integers with the binary operation of addition
- the cyclic group of order $N$ consisting of all $N$th complex roots of unity, $\{ e^{2 \pi i k / N} \mid k = 0, 1, 2, \ldots, N - 1 \}$, with the binary operation of complex multiplication.

We also usually abbreviate and write only $G$ for the group $(G, *)$.

Example 2.3. Let $X$ be a set. Then $S(X) := \{ \sigma : X \to X \mid \sigma \text{ bijective} \}$, with composition of functions, is a group, called the symmetric group of $X$.

In the case $X = \{1, 2, 3, \ldots, n\}$ we denote the symmetric group by $S_n$.


Example 2.4. Let $V$ be a vector space and $\mathrm{GL}(V) = \mathrm{Aut}(V) = \{ A : V \to V \mid A \text{ linear bijection} \}$, with composition of functions as the binary operation. Then $\mathrm{GL}(V)$ is a group, called the general linear group of $V$ (or the automorphism group of $V$). When $V$ is finite dimensional, $\dim(V) = n$, and a basis of $V$ has been chosen, then $\mathrm{GL}(V)$ can be identified with the group of $n \times n$ matrices having nonzero determinant, with matrix product as the group operation.

Let $\mathbb{K}$ be the ground field and $V = \mathbb{K}^n$ the standard $n$-dimensional vector space. In this case we denote $\mathrm{GL}(V) = \mathrm{GL}_n(\mathbb{K})$.

Example 2.5. The group $D_4$ of symmetries of a square, or the dihedral group of order 8, is the group with two generators, $r$ ("rotation by $\pi/2$") and $m$ ("reflection"), and relations
$$r^4 = e, \qquad m^2 = e, \qquad rmrm = e.$$

Definition 2.6. Let $(G_1, *_1)$ and $(G_2, *_2)$ be groups. A mapping $f : G_1 \to G_2$ is said to be a (group) homomorphism if for all $g, h \in G_1$
$$f(g *_1 h) = f(g) *_2 f(h).$$

Example 2.7. The determinant function $A \mapsto \det(A)$, from the matrix group $\mathrm{GL}_n(\mathbb{C})$ to the multiplicative group of non-zero complex numbers, is a homomorphism, since $\det(A B) = \det(A)\, \det(B)$.

The reader should be familiar with the notions of subgroup, normal subgroup, quotient group, canonical projection, kernel, isomorphism etc.

One of the most fundamental recurrent principles in mathematics is the isomorphism theorem.

We recall that in the case of groups it states the following.

Theorem 2.8
Let $G$ and $H$ be groups and $f : G \to H$ a homomorphism. Then:

1) $\mathrm{Im}\, f := f(G) \subset H$ is a subgroup.

2) $\mathrm{Ker}\, f := f^{-1}(\{e_H\}) \subset G$ is a normal subgroup.

3) The quotient group $G / \mathrm{Ker}\, f$ is isomorphic to $\mathrm{Im}\, f$. More precisely, there exists an injective homomorphism $\bar f : G / \mathrm{Ker}\, f \to \mathrm{Im}\, f$ such that the following diagram commutes:
$$\begin{array}{ccc} G & \xrightarrow{\;f\;} & H \\ {\scriptstyle \pi} \downarrow & \nearrow {\scriptstyle \bar f} & \\ G / \mathrm{Ker}\, f & & \end{array}$$
where $\pi : G \to G / \mathrm{Ker}\, f$ is the canonical projection.

The reader has surely encountered isomorphism theorems for several algebraic structures already — the following table summarizes the corresponding concepts in a few familiar cases:


Structure       Morphism $f$            Image $\mathrm{Im}\, f$   Kernel $\mathrm{Ker}\, f$
group           group homomorphism      subgroup                  normal subgroup
vector space    linear map              vector subspace           vector subspace
ring            ring homomorphism       subring                   ideal
...             ...                     ...                       ...

We will encounter isomorphism theorems for many other algebraic structures during this course: representations (modules), algebras, coalgebras, bialgebras, Hopf algebras, .... The idea is always the same, and the proofs vary only slightly, so we will not give full details in all cases.

A word of warning: since kernels, images, quotients etc. of different algebraic structures are philosophically so similar, we use the same notation for all. It should be clear from the context what is meant in each case. Usually, for example, $\mathrm{Ker}\, \rho$ would mean the kernel of a group homomorphism $\rho : G \to \mathrm{GL}(V)$ (a normal subgroup of $G$), whereas $\mathrm{Ker}\, \rho(g)$ would then signify the kernel of the linear map $\rho(g) : V \to V$ (a vector subspace of $V$, which incidentally is $\{0\}$ when $\rho(g) \in \mathrm{GL}(V)$).

2.2 Representations: Definition and first examples

Definition 2.9. Let $G$ be a group and $V$ a vector space. A representation of $G$ in $V$ is a group homomorphism $G \to \mathrm{GL}(V)$.

Suppose $\rho : G \to \mathrm{GL}(V)$ is a representation. For any $g \in G$, the image $\rho(g)$ is a linear map $V \to V$. When the representation $\rho$ is clear from context (and maybe also when it is not), we denote the images of vectors under this linear map simply by $g.v := \rho(g)\, v \in V$, for $v \in V$. With this notation the requirement that $\rho$ is a homomorphism reads $(g h).v = g.(h.v)$. It is convenient to interpret this as a left multiplication of vectors $v \in V$ by elements $g$ of the group $G$. Thus interpreted, we say that $V$ is a (left) $G$-module (although it would be more appropriate to call it a $\mathbb{K}[G]$-module, where $\mathbb{K}[G]$ is the group algebra of $G$).

Example 2.10. Let $V$ be a vector space and set $\rho(g) = \mathrm{id}_V$ for all $g \in G$. This is called the trivial representation of $G$ in $V$. If no other vector space is clear from the context, the trivial representation means the trivial representation in the one dimensional vector space $V = \mathbb{K}$.

Example 2.11. The symmetric group $S_n$ for $n \ge 2$ has another one dimensional representation, called the alternating representation. This is the representation given by $\rho(\sigma) = \mathrm{sign}(\sigma)\, \mathrm{id}_{\mathbb{K}}$, where $\mathrm{sign}(\sigma)$ is minus one when the permutation $\sigma$ is a product of an odd number of transpositions, and plus one when $\sigma$ is a product of an even number of transpositions.

Example 2.12. Let $D_4$ be the dihedral group of order 8, with generators $r, m$ and relations $r^4 = e$, $m^2 = e$, $rmrm = e$. Define the matrices
$$R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \qquad \text{and} \qquad M = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.$$
Since $R^4 = I$, $M^2 = I$, $RMRM = I$, there exists a homomorphism $\rho : D_4 \to \mathrm{GL}_2(\mathbb{R})$ such that $\rho(r) = R$, $\rho(m) = M$. Such a homomorphism is unique since we have given its values on the generators $r, m$ of $D_4$. If we think of the square in the plane $\mathbb{R}^2$ with vertices $A = (1, 0)$, $B = (0, 1)$, $C = (-1, 0)$, $D = (0, -1)$, then the linear maps $\rho(g)$, $g \in D_4$, are precisely the eight isometries of the plane which preserve the square $ABCD$. Thus it is very natural to represent the group $D_4$ in a two dimensional vector space!
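As a quick check of the relations, one computes
$$R^2 = -I \qquad \text{and} \qquad RM = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix},$$
so that $R^4 = (-I)^2 = I$ and $RMRM = (RM)^2 = I$; geometrically, $RM$ is the reflection across the line $y = -x$.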

A representation $\rho$ is said to be faithful if it is injective, i.e. if $\mathrm{Ker}\, \rho = \{e\}$. The representation of the symmetry group of the square in the last example is faithful; it could be taken as a defining representation of $D_4$.

When the ground field is $\mathbb{C}$, we might want to write the linear maps $\rho(g) : V \to V$ in their Jordan canonical form. But we observe immediately that the situation is as good as it could get:

Lemma 2.13
Let $G$ be a finite group, $V$ a finite dimensional (complex) vector space, and $\rho$ a representation of $G$ in $V$. Then, for any $g \in G$, the linear map $\rho(g) : V \to V$ is diagonalizable.

Proof. Observe that $g^n = e$ for some positive integer $n$ (for example the order of the element $g$, or the order of the group $G$). Thus we have $\rho(g)^n = \rho(g^n) = \rho(e) = \mathrm{id}_V$. This says that the minimal polynomial of $\rho(g)$ divides $x^n - 1$, which only has roots of multiplicity one. Therefore the Jordan normal form of $\rho(g)$ can only have blocks of size one. $\square$
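Note that the complex scalars matter here: for example the matrix $R = \rho(r)$ of Example 2.12 satisfies $R^4 = I$ and is diagonalizable over $\mathbb{C}$ with eigenvalues $\pm i$, but it has no eigenvectors at all over $\mathbb{R}$.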

We continue with an example (or definition) of a representation that will serve as a useful tool later.

Example 2.14. Let $\rho_1, \rho_2$ be two representations of a group $G$ in vector spaces $V_1, V_2$, respectively. Then the space of linear maps between the two representations,
$$\mathrm{Hom}(V_1, V_2) = \{ T : V_1 \to V_2 \text{ linear} \},$$
becomes a representation by setting
$$g.T = \rho_2(g) \circ T \circ \rho_1(g^{-1})$$
for all $T \in \mathrm{Hom}(V_1, V_2)$, $g \in G$. As usual, we often omit the explicit notation for the representations $\rho_1, \rho_2$, and write simply
$$(g.T)(v) = g.\big( T(g^{-1}.v) \big)$$
for any $v \in V_1$. To check that this indeed defines a representation, we compute
$$\big( g_1.(g_2.T) \big)(v) = g_1.\Big( (g_2.T)(g_1^{-1}.v) \Big) = g_1.g_2.\Big( T(g_2^{-1}.g_1^{-1}.v) \Big) = g_1 g_2.\Big( T\big( (g_1 g_2)^{-1}.v \big) \Big) = \big( (g_1 g_2).T \big)(v).$$
Definition 2.15. Let $G$ be a group and $V_1, V_2$ two $G$-modules (= representations). A linear map $T : V_1 \to V_2$ is said to be a $G$-module map (sometimes also called a $G$-linear map) if $T(g.v) = g.T(v)$ for all $g \in G$, $v \in V_1$.

Note that $T \in \mathrm{Hom}(V_1, V_2)$ is a $G$-module map if and only if $g.T = T$ for all $g \in G$, when we use the representation of Example 2.14 on $\mathrm{Hom}(V_1, V_2)$. We denote by $\mathrm{Hom}_G(V_1, V_2) \subset \mathrm{Hom}(V_1, V_2)$ the space of $G$-module maps from $V_1$ to $V_2$.

Exercise 4 (Dual representation)
Let $G$ be a finite group and $\rho : G \to \mathrm{GL}(V)$ a representation of $G$ in a finite dimensional (complex) vector space $V$.

(a) Show that any eigenvalue $\lambda$ of $\rho(g)$, for any $g \in G$, satisfies $\lambda^{|G|} = 1$.

(b) Recall that the dual space of $V$ is $V^* = \{ f : V \to \mathbb{C} \text{ linear} \}$. For $g \in G$ and $f \in V^*$ define $\rho'(g).f \in V^*$ by the formula
$$\langle \rho'(g).f, v \rangle = \langle f, \rho(g^{-1}).v \rangle \qquad \text{for all } v \in V.$$
Show that $\rho' : G \to \mathrm{GL}(V^*)$ is a representation.


(c) Show that $\mathrm{Tr}(\rho'(g))$ is the complex conjugate of $\mathrm{Tr}(\rho(g))$.

Exercise 5 (A two dimensional irreducible representation of $S_3$)
Find a two-dimensional irreducible representation of the symmetric group $S_3$.

Hint: Consider the three-cycles, and see what different transpositions would do to the eigenvectors of a three-cycle.

Definition 2.16. For $G$ a group, an element $g_2 \in G$ is said to be conjugate to $g_1 \in G$ if there exists an $h \in G$ such that $g_2 = h g_1 h^{-1}$. Being conjugate is an equivalence relation, and the conjugacy classes of the group $G$ are the equivalence classes of this equivalence relation.

Exercise 6 (Dihedral group of order 8)
The group $D_4$ of symmetries of the square is the group with two generators, $r$ and $m$, and relations $r^4 = e$, $m^2 = e$, $rmrm = e$.

(a) Find the conjugacy classes of $D_4$.

(b) Find four non-isomorphic one dimensional representations of $D_4$.

(c) There exists a unique group homomorphism $\rho : D_4 \to \mathrm{GL}_2(\mathbb{C})$ such that
$$\rho(r) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad \rho(m) = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$$
(here, as usual, we identify linear maps $\mathbb{C}^2 \to \mathbb{C}^2$ with their matrices in the standard basis). Check that this two dimensional representation is irreducible.

2.3 Subrepresentations, irreducibility and complete reducibility

Definition 2.17. Let $\rho$ be a representation of $G$ in $V$. If $V' \subset V$ is a subspace and if $\rho(g) V' \subset V'$ for all $g \in G$ (we say that $V'$ is an invariant subspace), then taking the restriction to the invariant subspace, $g \mapsto \rho(g)|_{V'}$, defines a representation of $G$ in $V'$, called a subrepresentation of $\rho$.

We also call $V'$ a submodule of the $G$-module $V$.

The subspaces $\{0\} \subset V$ and $V \subset V$ are always submodules.

Example 2.18. Let $T : V_1 \to V_2$ be a $G$-module map. The image $\mathrm{Im}(T) = T(V_1) \subset V_2$ is a submodule, since a general vector of the image can be written as $w = T(v)$, and $g.w = g.T(v) = T(g.v) \in \mathrm{Im}(T)$. The kernel $\mathrm{Ker}(T) = T^{-1}(\{0\}) \subset V_1$ is a submodule, too, since if $T(v) = 0$ then $T(g.v) = g.T(v) = g.0 = 0$.

Example 2.19. When we consider $\mathrm{Hom}(V_1, V_2)$ as a representation as in Example 2.14, the subspace $\mathrm{Hom}_G(V_1, V_2) \subset \mathrm{Hom}(V_1, V_2)$ of $G$-module maps is a subrepresentation, which, by the remark after Definition 2.15, is a trivial representation in the sense of Example 2.10.

Definition 2.20. Let $\rho_1 : G \to \mathrm{GL}(V_1)$ and $\rho_2 : G \to \mathrm{GL}(V_2)$ be representations of $G$ in vector spaces $V_1$ and $V_2$, respectively. Let $V = V_1 \oplus V_2$ be the direct sum vector space. The representation $\rho : G \to \mathrm{GL}(V)$ given by
$$\rho(g)(v_1 + v_2) = \rho_1(g)\, v_1 + \rho_2(g)\, v_2 \qquad \text{when } v_1 \in V_1 \subset V,\ v_2 \in V_2 \subset V$$
is called the direct sum representation of $\rho_1$ and $\rho_2$.

Both $V_1$ and $V_2$ are submodules of $V_1 \oplus V_2$.


Definition 2.21. Let $\rho_1 : G \to \mathrm{GL}(V_1)$ and $\rho_2 : G \to \mathrm{GL}(V_2)$ be two representations of $G$. We make the tensor product space $V_1 \otimes V_2$ a representation by setting, for simple tensors,
$$\rho(g)(v_1 \otimes v_2) = (\rho_1(g)\, v_1) \otimes (\rho_2(g)\, v_2),$$
and extending the definition linearly to the whole of $V_1 \otimes V_2$. Clearly for simple tensors we have
$$\rho(h)\, \rho(g)\, (v_1 \otimes v_2) = \big( \rho_1(h) \rho_1(g)\, v_1 \big) \otimes \big( \rho_2(h) \rho_2(g)\, v_2 \big) = \big( \rho_1(hg)\, v_1 \big) \otimes \big( \rho_2(hg)\, v_2 \big) = \rho(hg)\, (v_1 \otimes v_2),$$
and since both sides are linear, we have $\rho(h) \rho(g)\, t = \rho(hg)\, t$ for all $t \in V_1 \otimes V_2$, so that $\rho : G \to \mathrm{GL}(V_1 \otimes V_2)$ is indeed a representation.

A key property of representations of finite groups is that any invariant subspace has a complementary invariant subspace in the following sense. An assumption on the ground field is needed: we need to divide by the order of the group, so the order must not be a multiple of the characteristic of the field. In practice we only work with complex representations, so there is no problem.

Proposition 2.22
Let $G$ be a finite group. If $V'$ is a submodule of a $G$-module $V$, then there is a submodule $V'' \subset V$ such that $V = V' \oplus V''$ as a $G$-module.

Proof. First choose any complementary vector subspace $U$ for $V'$, that is, $U \subset V$ such that $V = V' \oplus U$ as a vector space. Let $\pi_0 : V \to V'$ be the canonical projection corresponding to this direct sum, that is,
$$\pi_0(v' + u) = v' \qquad \text{when } v' \in V',\ u \in U.$$
Define
$$\pi(v) = \frac{1}{|G|} \sum_{g \in G} g.\pi_0(g^{-1}.v).$$

Observe that $\pi|_{V'} = \mathrm{id}_{V'}$ and $\mathrm{Im}(\pi) \subset V'$, in other words $\pi$ is a projection from $V$ to $V'$. If we set $V'' = \mathrm{Ker}(\pi)$, then at least $V = V' \oplus V''$ as a vector space. To show that $V''$ is a subrepresentation, it suffices to show that $\pi$ is a $G$-module map. This is checked by the change of summation variable $\tilde g = h^{-1} g$ in the following computation:
$$\pi(h.v) = \frac{1}{|G|} \sum_{g \in G} g.\pi_0(g^{-1}.h.v) = \frac{1}{|G|} \sum_{g \in G} g.\pi_0\big( (h^{-1} g)^{-1}.v \big) = \frac{1}{|G|} \sum_{\tilde g \in G} h \tilde g.\pi_0\big( \tilde g^{-1}.v \big) = h.\pi(v).$$
We conclude that $V'' = \mathrm{Ker}(\pi) \subset V$ is a subrepresentation and thus $V = V' \oplus V''$ as a representation. $\square$
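A small worked instance of the averaging trick: let $G = S_2 = \{e, s\}$ act on $\mathbb{C}^2$ by permuting the coordinates, and let $V' = \mathrm{span}(e_1 + e_2)$, an invariant subspace. Choosing $U = \mathrm{span}(e_2)$, the projection $\pi_0$ sends $a\, e_1 + b\, e_2 = a(e_1 + e_2) + (b - a)\, e_2$ to $a(e_1 + e_2)$; it is not a $G$-module map, but averaging gives
$$\pi(a\, e_1 + b\, e_2) = \frac{1}{2} \Big( \pi_0(a\, e_1 + b\, e_2) + s.\pi_0\big( s.(a\, e_1 + b\, e_2) \big) \Big) = \frac{a + b}{2}\, (e_1 + e_2),$$
whose kernel is the invariant complement $V'' = \mathrm{span}(e_1 - e_2)$.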

Definition 2.23. Let $\rho : G \to \mathrm{GL}(V)$ be a representation. If there are no other subrepresentations but those corresponding to $\{0\}$ and $V$, then we say that $\rho$ is an irreducible representation, or that $V$ is a simple $G$-module.

Proposition 2.22, with an induction on the dimension of the $G$-module $V$, gives the fundamental result about representations of finite groups called complete reducibility, as stated in the following. We will perform this induction in more detail in Proposition 3.18 when we discuss the complete reducibility and semisimplicity of algebras.

Corollary 2.24
Let $G$ be a finite group and $V$ a finite dimensional $G$-module. Then, as representations, we have
$$V = V_1 \oplus V_2 \oplus \cdots \oplus V_n,$$
where each subrepresentation $V_j \subset V$, $j = 1, 2, \ldots, n$, is an irreducible representation of $G$.


Exercise 7 (An example of tensor products and complete reducibility with $D_4$)
The group $D_4$ is the group with two generators, $r$ and $m$, and relations $r^4 = e$, $m^2 = e$, $rmrm = e$. Recall that we have seen four one-dimensional irreducible representations and one two-dimensional irreducible representation of $D_4$ in Exercise 6. Let $V$ be the two dimensional irreducible representation of $D_4$ given by
$$r \mapsto \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad m \mapsto \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.$$
Consider the four-dimensional representation $V \otimes V$, and show by an explicit choice of basis for $V \otimes V$ that it is isomorphic to a direct sum of the four one-dimensional representations.

We also mention the basic result which says that there is not much freedom in constructing $G$-module maps between irreducible representations. For the second statement below we need the ground field to be algebraically closed: in practice we use it only for complex representations.

Lemma 2.25 (Schur's Lemma)
If $V$ and $W$ are irreducible representations of a group $G$, and $T : V \to W$ is a $G$-module map, then

(i) either $T = 0$ or $T$ is an isomorphism;

(ii) if $V = W$, then $T = \lambda\, \mathrm{id}_V$ for some $\lambda \in \mathbb{C}$.

Proof. If $\mathrm{Ker}(T) \neq \{0\}$, then by irreducibility of $V$ we have $\mathrm{Ker}(T) = V$ and therefore $T = 0$. If $\mathrm{Ker}(T) = \{0\}$, then $T$ is injective, and by irreducibility of $W$ we have $\mathrm{Im}(T) = W$, so $T$ is also surjective. This proves (i). To prove (ii), pick any eigenvalue $\lambda$ of $T$ (here we need the ground field to be algebraically closed). Now consider the $G$-module map $T - \lambda\, \mathrm{id}_V$, which has a nontrivial kernel. The kernel must be the whole space by irreducibility, so $T - \lambda\, \mathrm{id}_V = 0$. $\square$

Exercise 8 (Irreducible representations of abelian groups)

(a) Let $G$ be an abelian (= commutative) group. Show that any irreducible representation of $G$ is one dimensional. Conclude that (isomorphism classes of) irreducible representations can be identified with group homomorphisms $G \to \mathbb{C}^*$.

(b) Let $C_n \cong \mathbb{Z}/n\mathbb{Z}$ be the cyclic group of order $n$, i.e. the group with one generator $c$ and relation $c^n = e$. Find all irreducible representations of $C_n$.

2.4 Characters

In the rest of this section, $G$ is a finite group of order $|G|$, and all representations are assumed to be finite dimensional.

We have already seen the fundamental result of complete reducibility: any representation ofGis a direct sum of irreducible representations. It might nevertheless not be clear yet how to concretely work with the representations. We now introduce a very powerful tool for the representation theory of finite groups: the character theory.

Definition 2.26. For $\rho : G \to \mathrm{GL}(V)$ a representation, the character of the representation is the function $\chi_V : G \to \mathbb{C}$ given by
$$\chi_V(g) = \mathrm{Tr}(\rho(g)).$$

Observe that we have
$$\chi_V(e) = \dim(V),$$
and for two group elements that are conjugate, $g_2 = h g_1 h^{-1}$, we have
$$\chi_V(g_2) = \mathrm{Tr}(\rho(g_2)) = \mathrm{Tr}\big( \rho(h)\, \rho(g_1)\, \rho(h)^{-1} \big) = \mathrm{Tr}(\rho(g_1)) = \chi_V(g_1).$$


Thus the value of a character is constant on each conjugacy class of $G$ (such functions $G \to \mathbb{C}$ are called class functions).

Example 2.27. We have seen three (irreducible) representations of the group $S_3$: the trivial representation $U$ and the alternating representation $U'$, both one dimensional, and the two-dimensional representation $V$ of Exercise 5. The conjugacy classes of symmetric groups correspond to the cycle decompositions of permutations — in particular, for $S_3$ the conjugacy classes are

    identity:        $\{e\}$
    transpositions:  $\{(12), (13), (23)\}$
    3-cycles:        $\{(123), (132)\}$.

We can explicitly compute the trace of, for example, the transposition $(12)$ and the three-cycle $(123)$ to get the characters of these representations:

             $\chi(e)$   $\chi((12))$   $\chi((123))$
    $U$          1            1              1
    $U'$         1           -1              1
    $V$          2            0             -1

Recall that we have seen how to make the dual $V^*$ a representation (Exercise 4), how to make the direct sum $V_1 \oplus V_2$ a representation (Definition 2.20), and also how to make the tensor product a representation (Definition 2.21). Let us now see how these operations affect characters.

Proposition 2.28
Let $V, V_1, V_2$ be representations of $G$. Then we have

(i) $\chi_{V^*}(g) = \overline{\chi_V(g)}$

(ii) $\chi_{V_1 \oplus V_2}(g) = \chi_{V_1}(g) + \chi_{V_2}(g)$

(iii) $\chi_{V_1 \otimes V_2}(g) = \chi_{V_1}(g)\, \chi_{V_2}(g)$.

Proof. Part (i) was done in Exercise 4. For the other two, recall first that if $\rho : G \to \mathrm{GL}(V)$ is a representation, then $\rho(g)$ is diagonalizable by Lemma 2.13. Therefore there are $n = \dim(V)$ linearly independent eigenvectors with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, and the trace is the sum of these: $\chi_V(g) = \sum_{j=1}^{n} \lambda_j$. Consider the representations $\rho_1 : G \to \mathrm{GL}(V_1)$, $\rho_2 : G \to \mathrm{GL}(V_2)$. For $g \in G$, take bases of eigenvectors of $\rho_1(g)$ and $\rho_2(g)$ for $V_1$ and $V_2$, respectively: if $n_1 = \dim(V_1)$ and $n_2 = \dim(V_2)$, let $v^{(1)}_\alpha$, $\alpha = 1, 2, \ldots, n_1$, be eigenvectors of $\rho_1(g)$ with eigenvalues $\lambda^{(1)}_\alpha$, and $v^{(2)}_\beta$, $\beta = 1, 2, \ldots, n_2$, eigenvectors of $\rho_2(g)$ with eigenvalues $\lambda^{(2)}_\beta$. To prove (ii) it suffices to note that $v^{(1)}_\alpha \in V_1 \subset V_1 \oplus V_2$ and $v^{(2)}_\beta \in V_2 \subset V_1 \oplus V_2$ are the $n_1 + n_2 = \dim(V_1 \oplus V_2)$ linearly independent eigenvectors for the direct sum representation, and the eigenvalues are $\lambda^{(1)}_\alpha$ and $\lambda^{(2)}_\beta$. To prove (iii) note that the vectors $v^{(1)}_\alpha \otimes v^{(2)}_\beta$ are the $n_1 n_2 = \dim(V_1 \otimes V_2)$ linearly independent eigenvectors of $V_1 \otimes V_2$, and the eigenvalues are the products $\lambda^{(1)}_\alpha \lambda^{(2)}_\beta$, since
$$g.(v^{(1)}_\alpha \otimes v^{(2)}_\beta) = \big( \rho_1(g).v^{(1)}_\alpha \big) \otimes \big( \rho_2(g).v^{(2)}_\beta \big) = (\lambda^{(1)}_\alpha v^{(1)}_\alpha) \otimes (\lambda^{(2)}_\beta v^{(2)}_\beta) = \lambda^{(1)}_\alpha \lambda^{(2)}_\beta\, (v^{(1)}_\alpha \otimes v^{(2)}_\beta).$$
Therefore the character of the tensor product reads
$$\chi_{V_1 \otimes V_2}(g) = \sum_{\alpha, \beta} \lambda^{(1)}_\alpha \lambda^{(2)}_\beta = \Big( \sum_{\alpha=1}^{n_1} \lambda^{(1)}_\alpha \Big) \Big( \sum_{\beta=1}^{n_2} \lambda^{(2)}_\beta \Big) = \chi_{V_1}(g)\, \chi_{V_2}(g). \qquad \square$$
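As an illustration of (iii) combined with complete reducibility, consider $V \otimes V$ for the two-dimensional representation $V$ of $S_3$ from Example 2.27. By (iii), $\chi_{V \otimes V} = \chi_V^2$ takes the values $4, 0, 1$ on the conjugacy classes of $e$, $(12)$ and $(123)$, which are exactly the values of $\chi_U + \chi_{U'} + \chi_V$. This suggests (and, once one knows that a finite dimensional representation of a finite group is determined up to isomorphism by its character, proves) the decomposition
$$V \otimes V \cong U \oplus U' \oplus V.$$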
