In working with Coxeter groups on a computer,
the most basic questions to be answered are

In any Cartan realization, the elements of $W@
are linear transformations,
and one natural idea is to represent them
by the corresponding matrices. In some circumstances,
for example in low-dimensional graphics
or investigating the geometry of regions
of the Tits cone, this
is a reasonable choice, but usually
it is a very poor one. To avoid roundoff
errors (a potentially serious problem), it is a good
idea to use
a Cartan matrix with integral coordinates if it
exists (as it does for the crystallographic groups),
or the standard realization, in which case you are faced
with arithmetic in cyclotomic fields. Storage (proportional
to $S^{2}@) is one problem,
and the low speed involved in cyclotomic arithmetic is another.
Also, testing
inequalities involving cyclotomic numbers can be tricky.
Why are roundoff problems potentially serious? For finite and affine groups they probably aren't, but for groups where the Tits cone is acute it may well be: as a chamber $wC@ approaches its boundary in projective space, it becomes more and more difficult to separate two distinct chambers. This problem is not avoided with crystallographic groups, either, where integer overflow may occur. In practice, I have seen graphics programs have trouble with single precision calculations (about $7@ decimal digits of accuracy), producing images of chambers that were visibly awry near the boundary of the Tits cone.
A method that is subject to some of the criticisms
in the previous section, but is nonetheless
sometimes convenient, is one which represents each
element of $W@ by a vector in $S@ dimensions.
First fix a vector $u@ in the interior of the
fundamental chamber $C@. Then an element $w@
is characterized by either of its transforms $wu@
or $w^{-1}u@, in effect by the chamber $wC@ itself,
since no $w@ other than the identity will fix $u@. The vector is probably
best expressed in terms of the coordinates
$w_{a} = < a, w^{-1}u >@ where $a@
varies over the basic roots. The amount of
storage involved is minimal, certainly.
Then we have a very simple
and efficient formula for reflection (where $a = a_{s}@):
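From the definition of the coordinates, the formula should read as follows (my reconstruction, using $a^{v}@ for the coroot of $a = a_{s}@, and the reflection $sb = b - < b, a^{v} > a@):

```latex
(ws)_b \;=\; \langle b,\ (ws)^{-1}u\rangle
       \;=\; \langle sb,\ w^{-1}u\rangle
       \;=\; w_b \;-\; \langle b,\ a^{\vee}\rangle\, w_a .
```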
This is especially efficient because the number $< b, a^{v} >@ (an entry in the Cartan matrix) will in practice be $0@ most of the time and can then be ignored. This formula can be used to multiply two elements $x@ and $y@ if we know an expression $y = s_{1} ... s_{n}@ for $y@.
One nice feature of this representation of
elements is that we can use it to
find the reduced expressions to use in multiplication. We know that
$ws < w@ if and only if $C@ and $w^{-1}C@ lie
on opposite sides of the hyperplane $a = 0@ ($a = a_{s}@),
that is to say if $< a, w^{-1}C > = < wa, C > < 0@,
or in other words if $w_{a} < 0@. So to find a reduced
expression for $w@ we let $s@ be such
that $w_{a}@ is negative and calculate $ws@.
Repeat until there are no negative coordinates,
when $w = 1@.
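As a concrete sketch of this representation and of the reduced-word algorithm just described, here is some Python for the crystallographic group $A_{2}@. The Cartan matrix, the choice of $u@, and all the names here are mine:

```python
# Sketch: elements of the Coxeter group A2 represented by the
# coordinate vector w_b = <b, w^{-1}u>.
CARTAN = [[2, -1], [-1, 2]]   # CARTAN[b][a] = <a_b, a_a^v>, type A2
U = (1, 1)                    # a vector in the interior of C

def apply_s(v, s):
    """Coordinates of ws from those of w: (ws)_b = w_b - <b, a_s^v> w_{a_s}."""
    return tuple(v[b] - CARTAN[b][s] * v[s] for b in range(len(v)))

def from_word(word):
    """Coordinates of the product s_1 ... s_n (letters are indices into S)."""
    v = U
    for s in word:
        v = apply_s(v, s)
    return v

def reduced_word(v):
    """Find a reduced expression: while some w_a < 0, pass to ws and record s."""
    word = []
    while any(c < 0 for c in v):
        s = next(i for i, c in enumerate(v) if c < 0)
        v = apply_s(v, s)
        word.append(s)
    return word[::-1]   # the letters were stripped off from the right
```

To multiply $x@ by $y@, apply the letters of a word for $y@ to the vector of $x@; the braid relation shows up as `from_word([0, 1, 0]) == from_word([1, 0, 1])`.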

One natural way to express an element of $W@ is in
terms of one of its expressions as a product of generators.
At first this seems a daunting idea, but in time
it looks better and better. Its principal advantage
is that there are certainly no problems with accuracy.
Apparent disadvantages are the complexity of
calculations (it is not at all clear where to begin) and possibly
having to deal with words of arbitrarily large length.
In particular, neither the problem of multiplication nor that of generation
has an immediately apparent solution.
These, at least, turn out not to be real difficulties,
and for interesting mathematical reasons.
As for the length of words, in practice it does not seem to be a problem
except for elements where calculations are impractical
for other, more serious, reasons.
The first difficulty faced in this approach is to
decide which of the many, many possible reduced expressions
to use to represent an element.
There are two natural choices.
The first is to single out what is called the
More precisely, the
Sometimes it is convenient to use the
In the rest of this lecture, we shall explore this purely combinatorial approach. 
The first interesting combinatorial result was found by Jacques Tits.
In principle, in showing that one word in $S@ is
equivalent to another,
it could be necessary to inflate the sizes
of words as well as shrink them. Tits showed
around 1968 that no inflation was necessary.
Although this result will play no very practical role, it
is useful in hand calculation,
and I will present here a slight modification of Tits' proof.
Changing Tits' terminology slightly, I define
In particular, as the illustration shows,
there is a natural order in which to count such points,
and the proof proceeds by induction on
the index of the pair $
Suppose now that $
Suppose that there exists $z@ with $
Since $
Suppose that $y = s ### y_{*}@. Then $x_{*}@, $y_{*}@ satisfy the induction hypothesis, so that $R(x_{*})@ and $R(y_{*})@ have an element in common. But then so do $s ### R(x_{*})@, $s ### R(y_{*})@, hence $R(x)@ and $R(y)@. So now we may assume that
So now we may assume (a) $
Because of the potentially huge size of $R(x)@, it is usually not
practical to calculate it. Nonetheless, Tits' reductions are
easy enough to carry out by hand in simple cases,
and his Theorem assures that this effort will not be fruitless.
Also, in principle Tits' algorithm tells us
how to calculate products. For this,
choose an ordering of the elements of $S@
(or, equivalently, index $S@). Assign to
each element in $W@ its unique

We are going to be interested in recognizing patterns in
reduced expressions for elements of $W@. The simplest
patterns of importance will turn out to be
Let's look again at the dihedral group of order $8@.
Here is a list of all the
This can also be described by a diagram:
in which paths starting at the dotted disc parametrize
But the real point to be made occurs with the infinite group $A_{1}^{~}@. The words representing elements can be represented by paths in this diagram, where again elements are in bijection with nodes:
but also by paths in this one:
Of course, this is a wonderful compression! It is the first example we have seen of a phenomenon that we shall see much of later on, and one that seems to be ubiquitous in this subject: infinite Coxeter groups are often represented well by finite structures.
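The compressed diagram can be sketched in code. This is my own encoding of the three-node automaton for $A_{1}^{~}@: from any node we may append either generator except the one just used, and every node is accepting.

```python
# Transitions of the finite diagram for reduced words in A1~.
# Nodes record only the last letter used (or "start").
DELTA = {"start": {"s": "s", "t": "t"},
         "s": {"t": "t"},
         "t": {"s": "s"}}

def is_reduced(word):
    """Trace the word through the diagram; it is reduced iff a path exists."""
    node = "start"
    for letter in word:
        if letter not in DELTA[node]:
            return False
        node = DELTA[node][letter]
    return True
```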
Diagrams of the sort we are looking at (roughly speaking, directed labeled graphs
together with a designated starting point) are called
automata or state machines in the theory
of languages. They play an important role
in how computers parse input, among other things.
The sets of strings that are recognized by such diagrams are
called regular languages, and the patterns describing them are called regular expressions.
They are called automata because paths tracing through one follow
a program with a strictly limited number of options  if they
have arrived at a node, then what they can do subsequently depends only
on that node, not on the history of how they got to it. The paths have limited memory,
like a simple mechanical toy.

I believe, for reasons that will be laid out
later on, that regular expressions will prove to be unavoidable
in understanding important phenomena associated to Coxeter groups.
How important remains to be seen. Nonetheless, for this reason
it seems useful to look at them in more detail. There are many
points that haven't been touched upon yet, and a few that haven't
been made very precise.
I should also point out that by no means
all
interesting languages are regular. The simplest example
is that of balanced left and right parenthetical
expressions. The point is that
a regular language has only a limited memory, whereas
whether or not a parenthesis `)' is balanced somewhere in an expression
by a `(' may involve searching back to the beginning
of the expression.
These are not:
Note that we are not restricting ourselves to reduced fractions. We can put together a recipe for expressing a positive rational number: (1) To write a positive integer we write a sequence of digits, the first of which is nonzero. (2) To write a positive rational number, write a positive integer, followed optionally by a slash $/@ followed by a positive integer other than $1@. This recipe is not trivial to read, and for most people it is easier to understand if we encode it in a diagram:
This looks like a finite automaton, but there is a
new feature  the red nodes. The point is
that not all paths in this diagram lead to
an acceptable expression for a fraction. A prefix of a valid expression
for a fraction need not itself be valid.
For example, the empty string is not acceptable.
Also, a single $1@ in the denominator is not acceptable.
In this diagram, only those paths leading to
a yellow node correspond to acceptable
expressions.
The yellow nodes are said to be the
So we have a precise definition:
a
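The recipe for fractions can also be encoded directly as a regular expression; this is a sketch in Python, with names of my own:

```python
import re

# A positive integer is a nonzero digit followed by digits; a
# denominator is any positive integer other than 1 (it either starts
# with 2-9, or starts with 1 and has at least one more digit).
FRACTION = re.compile(r"[1-9][0-9]*(/([2-9][0-9]*|1[0-9]+))?")

def is_fraction(text):
    # fullmatch: prefixes of valid expressions are not themselves accepted
    return FRACTION.fullmatch(text) is not None
```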
Why can't we expect a finite automaton to recognize reduced fractions?
Whether an expression is recognized by a finite automaton depends on
its form, not its content  its syntax, not its semantics.
Deciding whether a fraction is reduced or not involves
thinking about the meaning of fractions, not just their form.
We shouldn't expect anything as simple
as a finite automaton to handle this sort of thing. Syntactical
analysis is often, however, a preliminary to semantic interpretation.
The nodes of the new automaton are the saturated subsets of the nodes of $A@. There is an edge labeled by $s@ from $X@ to $X ### s@. The start state is the saturation of the set of start states of $A@. A node is accepting if it contains an accepting node from $A@. I leave it as an exercise to verify that this is equivalent to $A@. The new automaton may be huge. In practice, most of the new nodes may be thrown away, because they are not attainable from the start state. As an extreme example, if $A@ is itself deterministic then the only attainable nodes are the singletons. The standard algorithm for constructing the new automaton constructs only those new nodes which are attainable.
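Here is a sketch of that construction in Python, in an encoding of my own: transitions are a dict from node to {symbol: set of nodes}, with `None` labeling unlabeled edges, so that saturation is closure under `None`-edges.

```python
from collections import deque

def saturate(states, trans):
    """Close a set of states under unlabeled (None-labeled) transitions."""
    out, queue = set(states), deque(states)
    while queue:
        q = queue.popleft()
        for r in trans.get(q, {}).get(None, ()):
            if r not in out:
                out.add(r)
                queue.append(r)
    return frozenset(out)

def determinize(starts, trans, accepting, symbols):
    """Subset construction: build only the attainable saturated subsets."""
    start = saturate(starts, trans)
    dfa, queue = {}, deque([start])
    while queue:
        X = queue.popleft()
        if X in dfa:
            continue
        dfa[X] = {}
        for s in symbols:
            Y = saturate({q for x in X for q in trans.get(x, {}).get(s, ())},
                         trans)
            dfa[X][s] = Y
            queue.append(Y)
    accept = {X for X in dfa if X & set(accepting)}
    return start, dfa, accept
```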
The potential exponential growth does occasionally happen.
This accounts for the result that
The problem is that there is no way to see how the automata for $X@ and $Y@ are related to that for $X # Y@. The source of the difficulty is that the b in abb is different from that in abc: one comes from $X@ and the other from $Y@. There is apparently no way to build a deterministic automaton directly from the two components, but there is nonetheless a simple solution: instead of trying to build a deterministic automaton recognizing $X # Y@ we shall build a nondeterministic one. The dual source of the letter b will be dealt with (at least implicitly) by allowing two edges from a given source labeled by b. The actual construction for an arbitrary $X@ and $Y@ is this:
Make up a new version of $A@ by adding to a copy of it
a new terminal node $z@, with
unlabeled transitions to it from
each accepting state in $A@. Now add on to this in turn
a copy of $B@, with unlabeled transitions from $z@
to any of the starting states of $B@ (only one, of
course, if $B@ happens to be
deterministic). The accepting nodes of the
new automaton are just the accepting states of $B@.
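In code, the construction might look like this (my own encoding, as before: an automaton is a triple of start states, transitions, and accepting states, with `None` labeling an unlabeled transition; the state sets of $A@ and $B@ are assumed disjoint, and z is a fresh node):

```python
def concatenate(A, B, z="z"):
    """Nondeterministic automaton for the concatenation of two languages."""
    startsA, transA, accA = A
    startsB, transB, accB = B
    trans = {q: {s: set(t) for s, t in d.items()}
             for q, d in list(transA.items()) + list(transB.items())}
    for q in accA:   # unlabeled transitions from A's accepting states to z
        trans.setdefault(q, {}).setdefault(None, set()).add(z)
    trans[z] = {None: set(startsB)}   # ... and from z to B's start states
    return set(startsA), trans, set(accB)
```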
Here's an example we have seen already. The tautologous automaton recognizing reduced words for $A_{1}^{~}@ is
But now some of the nodes are equivalent in the sense that the paths out of these nodes are the same, i.e. the options for continuing a word from that node are the same. In this picture, all the nodes coloured green are equivalent, since every path out of any of them is just a repeated $st ... @, and all those coloured blue are also equivalent:
We can collapse all of the same colour to a single point, to get a new equivalent automaton:
This construction is always valid, but a bit of care is
required. It is not the full set of words leaving a node
which is important, but just those which lead to acceptable
nodes. For example, a few extra edges could always be added to
a dead state without changing things substantially.
If $A@ is any automaton, define two
nodes $n_{1}@ and $n_{2}@ to be equivalent if they have the following property:
a word $w@ leads from $n_{1}@ to
an accepting state in $A@
if and only if it leads from $n_{2}@ to an accepting state.
Define a new automaton whose nodes are the equivalence classes of nodes in $A@.
Its start state is the class containing the start state of $A@. Its accepting states
are those classes containing accepting nodes of $A@. There is an edge
labeled $s@ from a class $N_{1}@ to a class $N_{2}@ if there is
an edge from $n_{1}@ in $N_{1}@ to $n_{2}@ in $N_{2}@ labeled by $s@.
I leave it as an exercise to verify that this is an equivalent deterministic automaton.
Suppose we have already found the blocks $B@ of elements equivalent at order $m@, and suppose there are $k@ input symbols. Then we partition each block $B@ into smaller blocks $B_{C_{1}, ... , C_{k}}@, where $B_{C_{1}, ... , C_{k}}@ is the set of $b@ in $B@ such that $b # s_{i}@ lies in the block $C_{i}@, for each $i = 1@, ... , $k@. If there are $N@ blocks of order $m@ equivalence, then a priori there might be as many as $N^{k+1}@ blocks of order $m+1@, a potentially disastrous superexponential growth. In fact there exists an algorithm with time dependence $O(k n log n)@, where $n@ is the number of states in the original automaton. This works because most of the blocks $B_{C_{1}, ... , C_{k}}@ will be empty, so the idea is to find a way of going from one order to the next by somehow automatically excluding the empty ones. This algorithm solves a slightly more general problem, one which occurs when dealing with Coxeter groups. Suppose we are given a collection of maps $f_{i}@, $i@ varying over a set $S@, and a partition of a set $X@ into blocks $X_{i}@. Two elements $x@ and $y@ are called equivalent if every iteration of the $f@'s applied to them puts them in the same block. We shall call an element dead if all the $f@'s take it to itself. It works like this, in a rough description:
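As a sketch of the passage from order $m@ to order $m+1@, here is the plain refinement in Python (my own names, and not the $O(k n log n)@ algorithm, which avoids touching the empty blocks):

```python
def minimize_partition(n, delta, accepting):
    """States are 0..n-1; delta[q] lists the target of each input symbol.
    Returns a list assigning each state its final equivalence class."""
    # order-0 equivalence: accepting states vs the rest
    block = [0 if q in accepting else 1 for q in range(n)]
    while True:
        # two states stay together only if every input symbol
        # sends them into the same current block
        signature = {q: (block[q],) + tuple(block[t] for t in delta[q])
                     for q in range(n)}
        classes = {sig: i
                   for i, sig in enumerate(sorted(set(signature.values())))}
        new = [classes[signature[q]] for q in range(n)]
        if new == block:          # no block split: the partition is stable
            return block
        block = new
```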

Let's look at the
Certain things are not difficult to see in this image.
Every nonempty
It is significant that each of these regions is convex. Indeed, it appears that all of the regions comprised of a single chamber together with its successors in the tree are convex. We shall see why this is so in a moment.
The question is whether we can recognize in this tree a repetition of
patterns similar to that in the tree for $A_{1}^{~}@. The answer is that we can.
For example, there exist some simple cycles
There are lots of such cycles  in the following picture, each yellow chamber represents the same position in such a cycle:
But there are more interesting repetitions:
Start out at $C@ and move by 3.
You arrive at essentially the same pattern if you follow this by
Two chambers will be called
Note that there are indeed loops
This is not an isolated case. It is a very beautiful result of Brigitte Brink and Bob Howlett that:
We shall look at the proof in subsequent sections.
Here is the finite automaton recognizing reduced words for $A_{2}^{~}@:
Its symmetry arises from the symmetry of the Coxeter diagram.
Embedded in it are three hexagons, which are the finite automata
for the dihedral
groups generated by pairs of the generators, each of order $6@. The minimal finite
automaton recognizing reduced words for a finite
Coxeter group is always the same size
as the group itself  this is essentially because all
words terminate in the longest element.

The key to the proof of the Theorem of Brink and Howlett
is a more fundamental result of theirs
about the roots of a Cartan realization of a Coxeter group.
One positive root $b@ is said to
In this and a few other simple cases the minimal roots all touch the chamber $C@, but that does not always happen. The fundamental result of Brink and Howlett, upon which all their nontrivial results depend, is that:
Bob Howlett remarks somewhere that he thinks this is a fundamental fact about Coxeter groups, even though there seem to be few applications of it to other properties, and I agree with him. I am not going to prove the theorem of Brink and Howlett here, but I shall explain some more elementary properties of minimal roots.
There are two interesting order relations among positive roots.
One is dominance, which we have seen already.
The other is more closely related to
how the positive roots are generated. Every positive root
is equal to the transform of a basic root
by an element of $W@. Define the
1. The minimal roots are minimal in this order.
2. If $p@ is a minimal root and $s@ in $S@,
then exactly one of these holds:
(a) $p = a_{s}@, and $s p < 0@;
(b) $s p@ is another minimal root;
(c) $s p@ dominates $a_{s}@.
Thus we get the
The root hyperplanes $p = 0@ and $q = 0@ intersect in
Lemma. If $p@ dominates $q@ and $wq@ is a positive root then $wp@ is also a positive root and $wp@ dominates $wq@.
If $p Brink's thesis.

For each $w@ in $W@ let $
Then $w # s@ lies in


