
ON COMPARABILITY OF RANDOM PERMUTATIONS

DISSERTATION

Presented in Partial Fulfillment of the Requirements for

the Degree Doctor of Philosophy in the Graduate

School of the Ohio State University

By

Adam Hammett, B.S.

**** *

The Ohio State University

2007

Dissertation Committee:

Approved by

Dr. Boris Pittel, Advisor

Dr. Gerald Edgar


Advisor

Dr. Ákos Seress

Graduate Program in Mathematics

ABSTRACT

Two permutations of [n] :={1, 2, ..., n} are comparable in the Bruhat order if one

can be obtained from the other by a sequence of transpositions decreasing the number

of inversions. We show that the total number of pairs of permutations (π, σ) with

π<σ is of order (n!)^2/n^2 at most. Equivalently, if π, σ are chosen uniformly at

random and independently of each other, then P(π<σ) is of order n^{-2} at most. By

a direct probabilistic argument we prove P(π<σ) is of order (0.708)^n at least, so

that there is currently a wide qualitative gap between the upper and lower bounds.

Next, emboldened by a connection with Ferrers diagrams and plane partitions

implicit in Bressoud's book [13], we return to the Bruhat order upper bound and show

that for r permutations π1, ..., πr selected independently and uniformly at random,

P(π1<...<πr) = O(n^{-r(r-1)}),

thus providing an extension of our result for pairs of permutations to chains of length

r>2.

Turning to the related weak order "⪯" - when only adjacent transpositions are

admissible - we use a non-inversion set criterion to prove that Pn* := P(π ⪯ σ) is

submultiplicative, thus showing existence of ρ = lim (Pn*)^{1/n}. We demonstrate that ρ

is 0.362 at most. Moreover, we prove the lower bound ∏_{i=1}^n (H(i)/i) for Pn*, where

H(i) := Σ_{j=1}^i 1/j. In light of numerical experiments, we conjecture that for each

order the upper bounds for permutation-pairs are qualitatively close to the actual

behavior. We believe that extensions to r-chains similar to that for the Bruhat order

upper bound can be made for our other bounds in each order, and are presently

working in this direction.

Finally, the weak order poset happens to be a lattice, and we study some properties

of its infimums and supremums. Namely, we prove that the number of r-tuples

(π1, ..., πr) of n-permutations with minimal infimum, 12...n, asymptotically equals

-(n!)^r / (hr′(z*)(z*)^{n+1}),  r ≥ 2,  n → ∞.  (1)

Here, z* = z*(r) ∈ (1, 2) is the unique (positive) root of the equation

hr(z) := Σ_{j≥0} (-1)^j z^j/(j!)^r = 0

within the disk |z| < 2. Moreover, (1) is also the asymptotic number of r-tuples with

maximal supremum, n(n-1)...1.


To my wife, Rachael, for her unending support.


ACKNOWLEDGMENTS

First and foremost, I wish to thank my advisor, Boris Pittel. Our collaboration was

quite intensive on this research project, and without his countless suggestions and

endless encouragement, none of these results would have materialized. He is truly an

outstanding example of what a mentor should be.

This work was inspired in large part by a thought-provoking talk Mark Skandera

gave at the MIT Combinatorics Conference honoring Richard Stanley's 60th birthday.

We are grateful to Mark for an enlightening follow-up discussion of comparability

criteria for the Bruhat order. We thank Sergey Fomin for encouragement and for

introducing us to an instrumental notion of the permutation-induced poset, and also

for many insightful suggestions regarding the history of Bruhat order. Without Ed

Overman's invaluable guidance we would not have been able to obtain our numerical

results. Craig Lennon gave us an idea for proving an exponential lower bound in

the case of Bruhat order. We thank Miklós Bóna for his interest in this work and

for drawing our attention to the lower bound for the number of linear extensions in

Richard Stanley's book.


VITA

April 28, 1979. . . . . . . . . . . . . . . . . . . . . . . . . Born - Santa Maria, CA

1997-2001 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Undergraduate,

Westmont College - Santa Barbara, CA

2001 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.S. in Mathematics,

Westmont College

2001-Present . . . . . . . . . . . . . . . . . . . . . . . . . . Graduate Teaching Associate,

The Ohio State University

FIELDS OF STUDY

Major Field: Mathematics

Specialization: Combinatorial Probability


TABLE OF CONTENTS

Abstract ii

Dedication iv

Acknowledgments v

Vita vi

List of Figures x

List of Tables xi

CHAPTER PAGE

1 Introduction 1

1.1 Bruhat Order, Preliminaries 1

1.2 Main Results Related to the Bruhat Order 5

1.3 Weak Order, Preliminaries 7

1.4 Main Results Related to the Weak Order 8

2 The Proof of the Bruhat Order Upper Bound 14

2.1 A Necessary Condition for Bruhat Comparability 14

2.2 A Probabilistic Simplification 21

2.3 Asymptotics 23

3 The Proof of the Bruhat Order Lower Bound 26

3.1 A Sufficient Condition for Bruhat Comparability 26

3.2 A Reduction to Uniforms 29

3.3 An Algorithm to Maximize the Bound 34


4 An Extension to Chains in Bruhat Order 39

4.1 The General Setup . 39

4.2 A Tractable Necessary Condition 41

4.3 The Core Counting Problem 42

4.4 A Probabilistic Simplification 56

4.5 Asymptotics 57

5 Some Properties of the Weak Ordering 62

5.1 A Criterion for Weak Comparability 62

5.2 Submultiplicativity of Pn* 65

6 The Proof of the Weak Order Upper Bound 67

6.1 A Necessary Condition for Weak Comparability 67

6.2 A Reduction to Uniforms 69

6.3 An Algorithm to Minimize the Bound 71

7 The Proof of the Weak Order Lower Bound 75

7.1 A Formula for P(Ei(π) ⊇ Ei(σ)) 75

7.2 Positive Correlation of the Events {Ei(π) ⊇ Ei(σ)} 77

7.3 Linear Extensions of Arbitrary Finite Posets 81

8 Numerics 83

8.1 Bruhat Order Numerics 83

8.2 Weak Order Numerics 85

9 On Infs and Sups in the Weak Order Lattice 89

9.1 A Connection with Complete, Directed, Acyclic Graphs 89

9.2 Computing Infs and Sups in the Weak Order Lattice 93

9.3 Some Equivalent Criteria for inf{π1, ..., πr} = 12...n 96

9.4 Submultiplicativity Again 99

9.5 Sharp Asymptotics of Pn(r) 101

9.5.1 Step 1: An Exact Formula for Pn(r) 101

9.5.2 Step 2: A Generating Function for Pn(r) 105

9.5.3 Step 3: Asymptotics 108


10 Open Problems 114

10.1 The Problems 114

Bibliography 115


LIST OF FIGURES

FIGURE PAGE

1.1 The Bruhat order on S3 and S4. 2

1.2 Superimposing M(π) and M(σ) to form M(π, σ). 5

1.3 The permutation-induced poset P(2143). 10

2.1 Finding a necessary condition for π<σ. 16

2.2 Selection of first m=m1 ( n/2-m resp.) rows (columns resp.) in

corners to support M(π). 21

3.1 G5 and an emboldened subgrid C. 27

3.2 Deletion of 2 largest elements of π, σ, and its effect on C. 28

4.1 Finding a necessary condition for π1<... <πr. 43

4.2 A non-intersecting nest of 7 lattice paths. 46

4.3 An intersecting nest of 7 lattice paths. 47

4.4 "Swapping" the tails in Figure 4.3. 49

8.1 Experimental determination of the exponent -a in the asymptotic

equation Pn ≈ c·n^{-a}. 85

8.2 Experimental determination of the ratio ρ in the asymptotic equation

Pn* ≈ c·ρ^n. 86


LIST OF TABLES

TABLE PAGE

3.1 Exact computation of Nk for smallish k. 37

6.1 Exact computation of Nk* for smallish k. 74

8.1 Computer simulation data for Pn. 84

8.2 Exact computation of Pn for smallish n. 84

8.3 Computer simulation data for Pn*. 87

8.4 Exact computation of Pn* for smallish n. 87

8.5 Our theoretical lower bound for Pn* applied for smallish n. 88


CHAPTER 1

INTRODUCTION

In this chapter, we present the fundamental ideas necessary to understand the ma-

terial in subsequent chapters. We also state the main results to be proved in this

dissertation. It is assumed that the reader has some familiarity with probability the-

ory and combinatorics. A good graduate-level introduction to these subjects can be

found, for instance, in Billingsley [4] and Stanley's volumes on enumerative combina-

torics, [46] and [47].

1.1 Bruhat Order, Preliminaries

Let n>1 be an integer. Two permutations of [n] :={1, ..., n} are comparable in the

Bruhat order if one can be obtained from the other by a sequence of transpositions of

pairs of elements forming an inversion. Here is a precise definition of the Bruhat order

on the set of permutations Sn (see [46, p. 172, ex. 75. a.], Humphreys [30, p. 119]).

If ω = ω(1)...ω(n) ∈ Sn, then a reduction of ω is a permutation obtained from ω

by interchanging some ω (i) with some ω (j) provided i<j and ω (i)>ω (j). We say

that π<σ in the Bruhat order if there is a chain σ = ω1 → ω2 → ... → ωs = π, where

each ωt is a reduction of ωt-1. The number of inversions in ωt strictly decreases with t.



Figure 1.1: The Bruhat order on S3 and S4.

Indeed, one can show that if ω2 is a reduction of ω1 via the interchange ω1(i)ω1(j),

i<j, then

inv(ω1)=inv(ω2)+2N(ω1)+1,

N(ω1):=|{k : i<k<j, ω1(i)>ω1(k)>ω1(j)}|;

here inv(ω1), say, is the number of inversions in ω1 (see Björner and Brenti [6]). Figure

1.1 illustrates this poset on S3 and S4. The Bruhat order notion can be extended

to other Coxeter groups (see Björner [5], Deodhar [20], and [6, p. 63] for historical

background), but we will be dealing with the symmetric group Sn only.

The definition of the Bruhat order is very transparent, and yet deciding for given π, σ


whether π<σ from the definition is computationally difficult, even for smallish n.

Fortunately, there exist efficient algorithms for checking Bruhat comparability, which

can all be traced back to an algorithmic comparability criterion due to Ehresmann

(1934) [22] (see also Knuth [34], Björner and Brenti [6]). The Ehresmann "tableau

criterion" states that π<σ if and only if πi,j ≤ σi,j for all 1 ≤ i ≤ j ≤ n-1, where

πi,j and σi,j are the i-th entry in the increasing rearrangement of π(1), ..., π(j)

and of σ(1), ..., σ(j). These arrangements form two staircase tableaux, hence the

term "tableau criterion". For example, 41523 > 21534 is verified by element-wise

comparisons of the two tableaux

1 2 4 5        1 2 3 5

1 4 5          1 2 5

1 4            1 2

4              2
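In computational terms the tableau criterion is only a few lines. The sketch below (our code; the function name is ours, not from the text) checks the example just given.

```python
def ehresmann_leq(pi, sigma):
    """Tableau criterion: pi <= sigma in the Bruhat order iff, for each j,
    the increasing rearrangement of pi(1..j) is entrywise <= that of sigma(1..j)."""
    n = len(pi)
    for j in range(1, n):
        if any(a > b for a, b in zip(sorted(pi[:j]), sorted(sigma[:j]))):
            return False
    return True

# The worked example above: 21534 < 41523, but not conversely.
print(ehresmann_leq([2, 1, 5, 3, 4], [4, 1, 5, 2, 3]))   # True
print(ehresmann_leq([4, 1, 5, 2, 3], [2, 1, 5, 3, 4]))   # False
```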

Also, it is well-known that Ehresmann's criterion is equivalent to the (0, 1)-matrix

criterion. It involves comparing the number of 1s contained in certain submatrices

of the (0, 1)-permutation matrices representing π and σ (see Bóna [10], [6]). Later,

Björner and Brenti [7] were able to improve on the result of [22], giving a tableau

criterion that requires fewer operations. Very recently, Drake, Gerrish and Skandera

[21] have found two new comparability criteria, involving totally nonnegative poly-

nomials and the Schur functions respectively. We are aware of other criteria (see [5],

Fulton [25, pp. 173-177], Lascoux and Schützenberger [36], [20]), but we found the

(0, 1)-matrix and Ehresmann criteria most amenable to probabilistic study.


The (0, 1)-matrix criterion for Bruhat order on Sn says that for π, σSn, π<σ if

and only if for all i, j<n, the number of π(1) , . . . , π(i) that are at most j exceeds

(or equals) the number of σ(1), ..., σ(i) that are at most j (see [11] for this version).

It is referred to as the (0, 1)-matrix criterion because of the following recasting of this

condition: let M (π), M(σ) be the permutation matrices corresponding to π, σ, so

that for instance the (i, j)-entry of M(π) is 1 if π(j)=i and 0 otherwise. Here, we

are labeling columns 1, 2, ..., n when reading from left to right, and rows are labeled

1, 2, ..., n when reading from bottom to top, so that this interpretation is like placing

ones at points (i, π(i)) of the n ×n integer lattice and zeroes elsewhere. Denoting

submatrices of M(·) corresponding to rows I and columns J by M(·)I,J, this criterion

says that π<σ if and only if for all i, j<n, the number of ones in M (π)[i],[j] is at

least the number of ones in M (σ)[i],[j] (see [21] for this version).

An effective way of visualizing this criterion is to imagine the matrices M (π) and

M (σ) as being superimposed on one another into a single matrix, M (π, σ), with the

ones for M (π) represented by ×'s ("crosses"), the ones for M (σ) by ○'s ("balls"),

and the zeroes for both by empty entries. Note that some entries of M (π, σ) may

be occupied by both a cross and a ball. Then the (0, 1)-matrix criterion says that

π<σ if and only if every southwest submatrix of M (π, σ) contains at least as many

crosses as balls. Here, in the notation above, a southwest submatrix is a submatrix

M (π, σ)[i],[j] of M (π, σ) for some i, j<n. It is clear that we could also check

π<σ by checking that crosses are at least as numerous as balls in every northeast

submatrix of M (π, σ). Likewise, π<σ if and only if balls are at least as numerous

as crosses in every northwest submatrix of M (π, σ), or similarly balls are at least



Figure 1.2: Superimposing M(π) and M(σ) to form M(π, σ).

as numerous as crosses in every southeast submatrix of M (π, σ). Parts of all four

of these equivalent conditions will be used in our proofs. As a quick example, with

π= 21534 and σ=41523, π<σ is checked by examining southwest submatrices of

M (π, σ) in Figure 1.2. Also, the superimposing of M (π) with M (σ) to form M (π, σ)

is illustrated in this figure.
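The counting form of the (0, 1)-matrix criterion translates directly into code. A sketch (our naming), maintaining cumulative counts instead of building the matrices:

```python
def bruhat_leq(pi, sigma):
    """(0,1)-matrix criterion: pi <= sigma in the Bruhat order iff, for all i, j,
    #{k <= i : pi(k) <= j} >= #{k <= i : sigma(k) <= j}."""
    n = len(pi)
    cp = [0] * (n + 1)   # cp[j] = #{k <= i : pi(k) <= j} for the current i
    cs = [0] * (n + 1)
    for i in range(n):
        for j in range(pi[i], n + 1):
            cp[j] += 1
        for j in range(sigma[i], n + 1):
            cs[j] += 1
        if any(cp[j] < cs[j] for j in range(1, n + 1)):
            return False
    return True

# The example of Figure 1.2: pi = 21534, sigma = 41523.
print(bruhat_leq([2, 1, 5, 3, 4], [4, 1, 5, 2, 3]))   # True
```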

1.2 Main Results Related to the Bruhat Order

In this dissertation, we use the (0, 1)-matrix and the Ehresmann criteria to obtain

upper and lower bounds for the number of pairs (π, σ) with π<σ.


Theorem 1.2.1. Let n>1 be an integer, and let π, σ ∈ Sn be selected independently

and uniformly at random. Then there exist universal constants c1, c2>0 such that

c1(0.708)^n ≤ P(π<σ) ≤ c2/n^2.

Equivalently, the number of pairs (π, σ) with π<σ is sandwiched between the

counts c1(0.708)^n(n!)^2 and c2n^{-2}(n!)^2. The lower bound follows from a sufficient

condition derived from the (0, 1)-matrix criterion, and a computer-aided tabulation

of an attendant function of a smallish integer argument. Empirical estimates based on

generating pairs of random permutations suggest that P(π<σ) is of order n^{-(2+δ)},

for δ close to 0.5 from above. So apparently it is the upper bound which comes close

to the true proportion P(π<σ). It is certain that the constant 0.708 can be further

improved, but we do not know if our method could be extended to deliver a lower

bound (1 - o(1))^n. A lower bound n^{-a}, a qualitative match of the upper bound, seems

out of sight presently.

A deeper insight reveals a more general result, related to chains of length r in Bruhat

order, once we realize some connections with MacMahon's formula [13] for counting

plane partitions contained in an r×s×t box. Without going into much unnecessary

detail here, one can visualize a plane partition as stacks of unit cubes pushed into a

corner. The k-th Ehresmann condition contains a clear connection between Bruhat

order on permutations and counting combinatorial objects related to plane partitions,

namely non-intersecting lattice paths, a notion we will make precise later on. A closer

look at our methods for permutation-pairs in the spirit of Gessel and Viennot's work


[26] implies an extension of Theorem 1.2.1, upper bound, from pairs of permutations

to r-tuples:

Theorem 1.2.2. Let π1, ..., πr ∈ Sn be selected independently and uniformly at

random. Then there exists a uniform constant c >0 such that

P(π1<...<πr) ≤ c/n^{r(r-1)}.

Note that this result implies that there are at most cn^{-r(r-1)}(n!)^r length-r chains in

the Bruhat order poset.

1.3 Weak Order, Preliminaries

We then turn to the modified order on Sn, the weak order "⪯". Here π ⪯ σ if there

is a chain σ = ω1 → ω2 → ... → ωs = π, where each ωt is a simple reduction of ωt-1,

i.e. obtained from ωt-1 by transposing two adjacent elements ωt-1(i), ωt-1(i+1) with

ωt-1(i)>ωt-1(i+1). Since at each step the number of inversions decreases by 1,

all chains connecting σ and π have the same length. Alternatively, there is a simple

non-inversion (resp. inversion) set criterion, contained in Berge [3], we can use to

check π ⪯ σ. Indeed, given ω ∈ Sn, introduce the set of non-inversions of ω:

E(ω) = {(i, j) : i < j, ω^{-1}(i) < ω^{-1}(j)}.

Similarly, for ω ∈ Sn we introduce the set of inversions of ω:


E*(ω) = {(i, j) : i > j, ω^{-1}(i) < ω^{-1}(j)}.

Then, for given π, σ ∈ Sn, we have π ⪯ σ if and only if E(π) ⊇ E(σ) (equivalently,

E*(π) ⊆ E*(σ)). Note that ω ∈ Sn is uniquely determined by its E(ω) (resp. its

E*(ω)).
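The non-inversion set criterion is equally mechanical; a sketch (the helper names are ours):

```python
def noninversions(w):
    """E(w): pairs (i, j) with i < j such that i precedes j in w."""
    pos = {v: k for k, v in enumerate(w)}
    n = len(w)
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if pos[i] < pos[j]}

def weak_leq(pi, sigma):
    # pi precedes sigma in the weak order iff E(pi) contains E(sigma)
    return noninversions(pi) >= noninversions(sigma)

print(weak_leq([2, 1, 3], [2, 3, 1]))   # True: 213 is a simple reduction of 231
print(weak_leq([2, 1, 3], [3, 1, 2]))   # False: the two are incomparable
```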

It turns out that the poset (Sn, ⪯) is a lattice (see [3]). Indeed, given π1, ..., πr ∈ Sn,

there is an efficient way to compute E(inf{π1, ..., πr}) (resp. E*(sup{π1, ..., πr}))

from the set ∪_{i=1}^r E(πi) (resp. ∪_{i=1}^r E*(πi)). We will see precisely how to do this later.

1.4 Main Results Related to the Weak Order

We prove the following probabilistic result for weak order comparability:

Theorem 1.4.1. Let π, σ ∈ Sn be selected independently and uniformly at random,

and let Pn* := P(π ⪯ σ). Then Pn* is submultiplicative, i.e. Pn1+n2* ≤ Pn1*·Pn2*. Con-

sequently there exists ρ = lim (Pn*)^{1/n}. Furthermore, there exists an absolute constant

c>0 such that

∏_{i=1}^n (H(i)/i) ≤ Pn* ≤ c(0.362)^n,

where H(i) := Σ_{j=1}^i 1/j. Consequently, ρ ≤ 0.362.

The proof of the upper bound is parallel to that of Theorem 1.2.1, lower bound,

while the lower bound follows from the non-inversion (resp. inversion) set criterion


described in the last section. Empirical estimates indicate that ρ is close to 0.3. So here too,

as in Theorem 1.2.1, the upper bound seems to be qualitatively close to the actual

probability Pn*. And our lower bound, though superior to the trivial bound 1/n!, is

decreasing superexponentially fast with n, which makes us believe that there ought

to be a way to vastly improve it.
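The lower bound of Theorem 1.4.1 is explicit, so it can be tabulated in exact rational arithmetic (this is how a table such as Table 8.5 can be produced). A minimal sketch, with a helper name of our choosing:

```python
from fractions import Fraction

def weak_lower_bound(n):
    """prod_{i=1}^{n} H(i)/i, where H(i) = 1 + 1/2 + ... + 1/i."""
    H, prod = Fraction(0), Fraction(1)
    for i in range(1, n + 1):
        H += Fraction(1, i)
        prod *= H / i
    return prod

print(weak_lower_bound(2))   # 3/4, which is exactly P_2^*
```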

Paradoxically, it is the lower bound that required a deeper combinatorial insight.

Clearly the number of π's below (or equal to) σ equals e(P), the total number of

linear extensions of P = P(σ), the poset induced by σ. (The important notion of

P(σ) was brought to our attention by Sergey Fomin [24].) We prove that for any

poset P of cardinality n,

e(P) ≥ n! / ∏_{i∈P} d(i),  (1.1)

where d(i) := |{j ∈ P : j ≤ i in P}|. (This bound is an exact value of e(P) if the

of such trees, Björner and Wachs [8].) The bound (1.1) for e(P(σ)) together with the

independence of sequential ranks in the uniform permutation were the key ingredients

in the proof of Theorem 1.4.1, lower bound.

Mikl6s B6na[12] has informed us that this general lower bound for e(P) had been

stated by Richard Stanley as a level 2 exercise in [46, p. 312, ex. 1] without a

solution. We have decided to keep the proof in the dissertation, since we could

not find a published proof anywhere either. The classic hook formula provides an

example of a poset P for which (1.1) is markedly below e(P). It remains to be seen

whether (1.1) can be strengthened in general, or at least for P(σ). As an illustration,



Figure 1.3: The permutation-induced poset P(2143).

P = P(2143) has the Hasse diagram appearing in Figure 1.3. Then e(P) = 4, but

(1.1) delivers only

e(P) ≥ 24/9, i.e. e(P) ≥ 3.
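Both numbers in this example are easy to confirm by brute force. In the sketch below (our code), P(σ) is built from the non-inversion pairs of σ and its linear extensions are counted directly.

```python
from itertools import permutations

def induced_relations(sigma):
    """Relations of P(sigma): i < j in the poset iff i < j as integers
    and i precedes j in sigma."""
    pos = {v: k for k, v in enumerate(sigma)}
    n = len(sigma)
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if pos[i] < pos[j]}

def num_linear_extensions(rel, n):
    return sum(all(w.index(i) < w.index(j) for i, j in rel)
               for w in permutations(range(1, n + 1)))

rel = induced_relations([2, 1, 4, 3])
print(num_linear_extensions(rel, 4))   # 4 = e(P(2143))
# d(1) = d(2) = 1 and d(3) = d(4) = 3, so the bound (1.1) gives 24/9
```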

Regarding the lattice properties of (Sn, ⪯), note that the identity permutation 12 ... n

is the unique minimum, and n(n-1)...1 is the unique maximum. Let π1, ..., πr ∈ Sn

be selected independently and uniformly at random. It is natural to ask: "How

likely is it that the infimum (resp. supremum) of {π1, ..., πr} is the unique mini-

mal (resp. maximal) element in the weak order lattice?" Equivalently, what is the

asymptotic number of r-tuples (π1, ..., πr) such that inf{π1, ..., πr} = 12...n (resp.

sup{π1, ..., πr} = n(n-1)...1), n → ∞? It turns out that the answer is the same

whether we consider infs or sups, which allows us to focus only on infimums. We

prove the following:

Theorem 1.4.2. Let Pn(r)=P (inf{π1, ..., πr}=12 ...n). Then

1. As a function of n, Pn(r) is submultiplicative. Hence, there exists


p(r) = lim (Pn(r))^{1/n} = inf_n (Pn(r))^{1/n}, r > 1.

2. For each r>1, put hr(z) = Σ_{j≥0} (-1)^j z^j/(j!)^r and Hr(z) = (hr(z))^{-1}. Then, letting

P0(r) = 1, we have

Hr(z) = Σ_{n≥0} Pn(r) z^n,

from which we obtain (Darboux theorem [2])

Pn(r) ~ -1/(z* hr′(z*)) · (z*)^{-n}, n → ∞.

Here, z* = z*(r) ∈ (1, 2) is the unique simple root of hr(z) = 0 in the disc

|z| < 2. Consequently, p(r) = 1/z*.
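The root z*(r) is easy to approximate numerically: the series for hr converges rapidly on any bounded region, and hr(1) > 0 > hr(2) for r > 1, so bisection applies. A sketch (the truncation length is our choice):

```python
from math import factorial

def h(z, r, terms=40):
    # truncated h_r(z) = sum_{j>=0} (-1)^j z^j / (j!)^r
    return sum((-1) ** j * z ** j / factorial(j) ** r for j in range(terms))

def z_star(r):
    """Bisection for the root of h_r in (1, 2)."""
    lo, hi = 1.0, 2.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid, r) > 0 else (lo, mid)
    return (lo + hi) / 2

print(round(z_star(2), 4))   # about 1.4458, so p(2) = 1/z* is about 0.6917
```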

Unlike our results about comparability in the Bruhat and weak orderings, here we

have a sharp asymptotic formula. The key to the proof of this theorem is establishing

the exact formula

Pn(r) = Σ_{k=0}^{n-1} (-1)^k Σ_{b1+···+b_{n-k}=n, bi≥1} \frac{1}{(b1!)^r ··· (b_{n-k}!)^r},

which follows from the principle of inclusion-exclusion. This formula for Pn(r) is, in

some sense, an r-analog of that for the Eulerian numbers (Bóna [11], Knuth [34]).

Indeed, it turns out that Pn(r) is the probability that the uniform, independent permu-

tations π1^{-1}, ..., πr^{-1} have no common descents. Introduce the random variable Sn(r),


the number of these common descents, so that Pn(r)=P(Sn(r)=0). Another natural

question here is:

Problem. What is the limiting distribution of Sn(r)?

We believe that the answer here is "Gaussian", as it is in the case of the number

of descents in a single uniformly random permutation (Sachkov [44]). Our feeling

is that the proof will involve use of the bivariate generating function Fr(x, y)=

Σ_{n≥1} x^n E[(1+y)^{Sn(r)}], which we prove has the simple form

Fr(x, y) = \frac{x fr(xy)}{1 - x fr(xy)},  fr(z) := Σ_{j≥0} \frac{z^j}{((j+1)!)^r}.

Interestingly, this generating function is a special case of a more general result proved

by Richard Stanley [45], although he was probably unaware of the connections his

work had with the weak ordering.
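The inclusion-exclusion formula and the common-descents interpretation can be cross-checked mechanically for small n and r; a sketch (our code; by the symmetry w ↔ w^{-1}, the brute force may use the permutations themselves rather than their inverses):

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

def compositions(total, parts):
    # ordered tuples of `parts` positive integers summing to `total`
    if parts == 1:
        yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def P_exact(n, r):
    s = Fraction(0)
    for k in range(n):
        for b in compositions(n, n - k):
            term = Fraction(1)
            for bi in b:
                term /= factorial(bi) ** r
            s += (-1) ** k * term
    return s

def P_brute(n, r):
    # fraction of r-tuples of n-permutations having no common descent
    perms = list(permutations(range(1, n + 1)))
    descents = [{i for i in range(n - 1) if w[i] > w[i + 1]} for w in perms]
    good = sum(1 for ds in product(descents, repeat=r)
               if not set.intersection(*ds))
    return Fraction(good, len(perms) ** r)

print(P_exact(3, 2), P_exact(3, 2) == P_brute(3, 2))   # 19/36 True
```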

In conclusion we mention several papers that are in the same spirit as this dissertation.

First, [39] and [40] (both by Boris Pittel), where the "probability-of-comparability"

problems were solved for the poset of integer partitions of n under dominance order,

and for the poset of set partitions of [n] ordered by refinement. Also, [41] (again

by B. Pittel), where the "infimum/supremum" problem was solved for the lattice

of set partitions of [n] ordered by refinement. In [16], E. R. Canfield presents an

enlightening extension of the inf/sup work done in [41]. Very recently, in [1], R. M.

Adin and Y. Roichman explore the valency properties of a typical permutation in the

Bruhat order Hasse diagram.


This work is in large part the result of an intensive collaborative effort with my

doctoral advisor, Boris Pittel. Portions of this dissertation have been accepted for

publication (2006) in the journal Transactions of the American Mathematical Society

(see [28] for availability).


CHAPTER 2

THE PROOF OF THE BRUHAT ORDER UPPER BOUND

In this chapter, we focus on the proof of Theorem 1.2.1, upper bound. The proof

divides naturally into three steps, hence the divisions of the sections that follow.

We need to show that

P(π<σ) = O(n^{-2}).

The argument is based on the (0, 1)-matrix criterion. We assume that n is even. Only

minor modifications are necessary for n odd.

2.1 A Necessary Condition for Bruhat Comparability

The (0, 1)-matrix criterion requires that a set of n^2 conditions is met. The challenge

is to select a subset of those conditions which meets two conflicting demands. It has

to be sufficiently simple so that we can compute (estimate) the probability that the

random pair (π, σ) satisfies all the chosen conditions. On the other hand, collectively

these conditions need to be quite stringent for this probability to be o(1). In our first

advance we were able (via Ehresmann's criterion) to get a bound O(n^{-1/2}) by using

about 2n^{1/2} conditions. We are about to describe a set of 2n conditions that does the

job.


Let us split the matrices M (π, σ), M (π) and M (σ) into 4 submatrices of equal size

n/2 × n/2 - the southwest, northeast, northwest and southeast corners, denoting them

Msw(·), Mne(·), Mnw(·) and Mse(·) respectively. In the southwest corner Msw(π, σ),

we restrict our attention to southwest submatrices of the form i×n/2, i=1, ..., n/2.

If π<σ, then as we read off rows of Msw(π, σ) from bottom to top keeping track

of the total number of balls and crosses encountered thus far, at any intermediate

point we must have at least as many crosses as balls. Let us denote the set of pairs

(π, σ) such that this occurs by Esw. We draw analogous conclusions for the northeast

corner, reading rows from top to bottom, and we denote by Ene the set of pairs (π, σ)

satisfying this condition.

Similarly, we can read columns from left to right in the northwest corner, and here we

must always have at least as many balls as crosses. Denote the set of these pairs (π, σ)

by Enw. The same condition holds for the southeast corner when we read columns

from right to left. Denote the set of these pairs (π, σ) by Ese. Letting E denote the

set of pairs (π, σ) satisfying all four of the conditions above, we get

{π<σ} ⊆ E := Esw ∩ Ene ∩ Enw ∩ Ese.

Pairs of permutations in E satisfy 2n of the n^2 conditions required by the (0, 1)-matrix

criterion. And unlike the set {π<σ}, we are able to compute |E|, and to show that

P(E) = (n!)^{-2}|E| = O(n^{-2}). Figure 2.1 is a graphical visualization of the reading-off

process that generates the restrictions defining the set E.
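Membership in E is checkable in linear time. The sketch below (our code) uses the coordinate convention above: the cross of column i sits in row π(i), with rows numbered from the bottom.

```python
def in_E(pi, sigma):
    """The 2n corner conditions defining E (n even).
    pi, sigma are sequences with pi[i-1] = pi(i)."""
    n = len(pi)
    N = n // 2
    pos_pi = {v: i for i, v in enumerate(pi)}     # 0-based position of each value
    pos_sig = {v: i for i, v in enumerate(sigma)}

    def dominates(pairs):
        # the running (first count) - (second count) must never go negative
        d = 0
        for a, b in pairs:
            d += a - b
            if d < 0:
                return False
        return True

    sw = dominates((pos_pi[r] < N, pos_sig[r] < N) for r in range(1, N + 1))
    ne = dominates((pos_pi[r] >= N, pos_sig[r] >= N) for r in range(n, N, -1))
    nw = dominates((sigma[c] > N, pi[c] > N) for c in range(N))
    se = dominates((sigma[c] <= N, pi[c] <= N) for c in range(n - 1, N - 1, -1))
    return sw and ne and nw and se

print(in_E((1, 2, 3, 4), (4, 3, 2, 1)), in_E((4, 3, 2, 1), (1, 2, 3, 4)))   # True False
```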

If a row (column) of a submatrix M(π)I,J ( M(σ)I,J resp.) contains a marked entry, we

say that it supports the submatrix. Clearly the number of supporting rows (columns)



Figure 2.1: Finding a necessary condition for π<σ.

equals the number of marked entries in M(π)I,J ( M(σ)I,J resp.). Now, given π, σ,

let M1=M1(π), M2=M2(σ) denote the total number of rows that support Msw(π)

and Msw(σ) respectively. Then Mnw(π) , Mnw(σ) are supported by M3=n/2-M1

columns and by M4=n/2-M2 columns respectively. The same holds for the

southeastern corners of M(π) and M(σ). Obviously the northeastern submatrices of

M(π) and M(σ) are supported by M1 rows and M2 rows respectively. Then we have


P(E) = Σ_{m1,m2} P(E ∩ A(m1, m2)),  (2.1)

A(m1, m2) := {(π, σ) : M1 = m1, M2 = m2}.

Clearly E ∩ A(m1, m2) = ∅ if m1 < m2. We claim that, for m1 ≥ m2,

P(E ∩ A(m1, m2)) = [\frac{(m1-m2+1)(n/2+1)}{(n/2-m2+1)(m1+1)}]^4 · \frac{∏_{i=1}^4 \binom{n/2}{mi}}{\binom{n}{n/2}^2}.  (T)

Here and below m3:=n/2-m1 and m4:=n/2-m2 stand for generic values of M3

and M4 in the event A (m1, m2).

To prove (T), let us count the number of pairs (π, σ) in EA(m1, m2). First consider

the southwest corner, Msw(π, σ). Introduce L1=L1(π, σ), the number of rows

supporting both Msw(π) and Msw(σ). So L1 is the number of rows in the southwest

corner Msw (π, σ) containing both a cross and a ball. Suppose that we are on the

event {L1=P1}. We choose p1 rows to support both Msw(π) and Msw(σ) from the

n/2 first rows. Then, we choose (m1-P1+m2-P1) more rows from the remaining

(n/2-P1) rows. Each of these secondary rows is to support either Msw(π) or Msw(σ) ,

but not both. This step can be done in

ways. Next, we partition the set of (m1-P1+m2-P1) secondary rows into two row

subsets of cardinality (m1-l1) (rows to contain crosses) and (m2-l1) (rows to contain

balls) that will support Msw(π) and Msw(σ) , accompanying the p1 primary rows


supporting both submatrices. We can visualize each of the resulting row selections as

a subsequence of (1, ..., n/2) which is a disjoint union of two subsequences, one with

l1 elements labeled by a ball and a cross, and another with (m1-l1)+(m2-l1) elements,

(m1-l1) labeled by crosses and the remaining (m2-l1) elements labeled by balls.

The condition Esw is equivalent to the restriction: moving along the subsequence from

left to right, at each point the number of crosses is not to fall below the number of

balls. Obviously, no double-marked element can cause violation of this condition.

Thus, our task is reduced to determination of the number of ((m1-l1)+(m2-l1))-long

sequences of m1-l1 crosses and m2-l1 balls such that at no point the number of

crosses is strictly less than the number of balls. By the classic ballot theorem (see

Takács [49, pp. 2-7]), the total number of such sequences equals

\frac{(m1-l1+1)-(m2-l1)}{(m1-l1+1)+(m2-l1)} \binom{(m1-l1+1)+(m2-l1)}{m2-l1} = \frac{m1-m2+1}{m1-l1+1} \binom{(m1-l1)+(m2-l1)}{m2-l1}.

The second binomial coefficient is the total number of ((m1-l1)+(m2-l1))-long

sequences of (m1-l1) crosses and (m2-l1) balls. So the second fraction is the

probability that the sequence chosen uniformly at random among all such sequences

meets the ballot theorem condition. The total number of ways to designate the rows

supporting Msw(π) and Msw(σ) , subject to the condition Esw, is the product of two

counts, namely


\binom{n/2}{l1} \binom{n/2-l1}{(m1-l1)+(m2-l1)} \binom{(m1-l1)+(m2-l1)}{m1-l1} · \frac{m1-m2+1}{m1-l1+1}

= \frac{m1-m2+1}{n/2-m2+1} \binom{n/2}{m2} \binom{m2}{l1} \binom{n/2-m2+1}{m1-l1+1}.

Summing this last expression over all l1 ≤ m2, we obtain

\frac{m1-m2+1}{n/2-m2+1} \binom{n/2}{m2} Σ_{l1} \binom{m2}{l1} \binom{n/2-m2+1}{m1-l1+1} = \frac{m1-m2+1}{n/2-m2+1} \binom{n/2}{m2} \binom{n/2+1}{m1+1}  (2.2)

= \frac{(m1-m2+1)(n/2+1)}{(n/2-m2+1)(m1+1)} \binom{n/2}{m1} \binom{n/2}{m2}.

Here, in the first equality, we have used the Vandermonde convolution. The product of the

two binomial coefficients in the final count (2.2) is the total number of row selections

from the first n/2 rows, m1 to contain crosses and m2 to contain balls. So the

fraction preceding these two binomial factors is the probability that a particular row

selection chosen uniformly at random from all such row selections satisfies our ballot

condition "crosses never fall below balls". Equivalently, by the very derivation, the

expression (2.2) is the total number of paths (X(t), Y(t))_{0≤t≤n/2} on the square lattice

connecting (0, 0) and (m1, m2) such that X(t+1)-X(t), Y(t+1)-Y(t) ∈ {0, 1}, and

X(t) ≥ Y(t) for every t. (To be sure, if X(t+1)-X(t)=1 and Y(t+1)-Y(t)=1,

the corresponding move is a combination of horizontal and vertical unit moves.)
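The ballot count used above is classical and easy to sanity-check by enumeration; a short sketch:

```python
from itertools import combinations
from math import comb

def ballot_count(p, q):
    """Sequences of p crosses and q balls (p >= q) in which every prefix
    contains at least as many crosses as balls."""
    return (p - q + 1) * comb(p + q, q) // (p + 1)

def ballot_brute(p, q):
    good = 0
    for balls in combinations(range(p + q), q):
        height, ok = 0, True
        for t in range(p + q):
            height += -1 if t in balls else 1
            if height < 0:
                ok = False
                break
        good += ok
    return good

assert all(ballot_count(p, q) == ballot_brute(p, q)
           for q in range(5) for p in range(q, 7))
```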

Likewise, we consider the northeast corner, Mne(π, σ). We introduce L2=L2(π, σ),

the number of rows in Mne(π, σ) containing both a cross and a ball. By initially

restricting to the event {L2 = l2}, then later summing over all l2 ≤ m2, we obtain


another factor (2.2). Analogously, a third and fourth factor (2.2) comes from con-

sidering columns in the northwest and southeast corners, Mnw(π, σ) and Mse(π, σ).

Importantly, the row selections for the southwest and the northeast submatrices do

not interfere with the column selections for the northwest and the southeast corners.

So by multiplying these four factors (2.2) we obtain the total number of row and

column selections on the event A(m1, m2) subject to all four restrictions defining E!

Once such a row-column selection has been made, we have determined which rows

and columns support the four submatrices of M(π) and M(σ). Consider, for instance,

the southwest corner of M(π). We have selected m1 rows (from the first n/2 rows)

supporting Msw(π) , and we have selected m3 columns (from the first n/2 columns)

supporting Mnw(π). Then it is the remaining n/2-m3=m1 columns that support

Msw(π). The number of ways to match these m1 rows and m1 columns, thus to

determine Msw(π) completely, is m1!. The northeast corner contributes another m1!,

while each of the two other corners contributes m3!, whence the overall matching

factor is (m1!m3!)^2. The matching factor for σ is (m2!m4!)^2. Multiplying the number

of admissible row-column selections by the resulting ∏_{i=1}^4 (mi!)^2 and dividing by (n!)^2,

we obtain

P(E ∩ A(m1, m2)) = [\frac{(m1-m2+1)(n/2+1)}{(n/2-m2+1)(m1+1)}]^4 · \frac{∏_{i=1}^4 \binom{n/2}{mi}^2 (mi!)^2}{(n!)^2},

which is equivalent to (T). Figure 2.2 is a graphical explanation of this matching

factor. In it, we show the matrix M(π) in a case when in the southwest and the



Figure 2.2: Selection of first m=m1 ( n/2-m resp.) rows (columns resp.) in corners

to support M(π).

northeast squares π is supported by the bottom m(=m1) and the top m rows re-

spectively; likewise, in the northwest and the southeast squares π is supported by the

n/2-m leftmost and the n/2-m rightmost columns respectively.

2.2 A Probabilistic Simplification

Let us show that (2.1) and (T) imply


P(E) ≤ E[\frac{(M1-M2+1)^4 (n/2+1)^4}{(n/2-M2+1)^4 (M1+1)^4}].  (‡)

First, M1 and M2 are independent with

P(Mi = mi) = \binom{n/2}{mi}^2 / \binom{n}{n/2},  i = 1, 2.

Indeed, Mi obviously equals the cardinality of the intersection with [n/2] of a uni-

formly random subset of size n/2 from [n], which directly implies these formulas.

Thus, each Mi has the hypergeometric distribution with parameters n/2, n/2, n/2; in

other words, Mi has the same distribution as the number of red balls in a uniformly

random sample of n/2 balls from an urn containing n/2 red balls and n/2 white balls.

By the independence of M1 and M2 , we obtain

\[
P(M_1 = m_1,\, M_2 = m_2) = \binom{n/2}{m_1}^2 \binom{n/2}{m_2}^2 \Big/ \binom{n}{n/2}^2.
\]

It remains to observe that (2.1) and (†) imply

\[
P(E) = \sum_{m_1 \ge m_2} \frac{(m_1 - m_2 + 1)^4 (n/2 + 1)^4}{(n/2 - m_2 + 1)^4 (m_1 + 1)^4}\, P(M_1 = m_1, M_2 = m_2)
\]
\[
\le \sum_{m_1, m_2} \frac{(m_1 - m_2 + 1)^4 (n/2 + 1)^4}{(n/2 - m_2 + 1)^4 (m_1 + 1)^4}\, P(M_1 = m_1, M_2 = m_2)
= \mathbb{E}\left[\frac{(M_1 - M_2 + 1)^4 (n/2 + 1)^4}{(n/2 - M_2 + 1)^4 (M_1 + 1)^4}\right],
\]

and (‡) is proved.


2.3 Asymptotics

The advantage of (‡) is that it allows us to use probabilistic tools exploiting the

independence of the random variables M1 and M2. Typically the Mi's are close to

n/4, while |M1-M2| is of order n1/2 at most. So, in view of (‡) we expect that

\[
P(E) = O(n^{-2}).
\]

We now make this argument rigorous. First of all, by the "sample-from-urn" inter-

pretation of Mi,

\[
\mathbb{E}[M_i] = \frac{n}{2}\binom{n-1}{n/2-1} \Big/ \binom{n}{n/2} = n/4. \tag{2.3}
\]

Then (see Janson et al. [31, p. 29]) the probability generating function of Mi is

dominated by that of Bin(n, 1/4), and consequently for each t>0 we have

\[
P\big(|M_i - n/4| > t\big) = O\big(e^{-4t^2/n}\big).
\]

Hence, setting $t = n^{2/3}$ we see that

\[
P\big(n/4 - n^{2/3} < M_i < n/4 + n^{2/3}\big) \ge 1 - e^{-c n^{1/3}},
\]

for some absolute constant c>0. Introduce the event

\[
A_n = \bigcap_{i=1}^{2} \big\{ n/4 - n^{2/3} < M_i < n/4 + n^{2/3} \big\}.
\]

Combining the estimates for Mi, we see that for some absolute constant c1>0,

\[
P(A_n) \ge 1 - e^{-c_1 n^{1/3}}.
\]
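The exponential concentration can be spot-checked numerically. The sketch below (an illustration only, not part of the proof) computes the exact two-sided tail of the hypergeometric $M_i$ and compares it with the explicit Hoeffding-style bound $2e^{-4t^2/n}$, one concrete constant realizing the $O(\cdot)$ above:

```python
from fractions import Fraction
from math import comb, exp

def tail_prob(n, t):
    """Exact P(|M - n/4| > t) for M hypergeometric with parameters n/2, n/2, n/2."""
    h = n // 2
    total = comb(n, h)
    return sum(Fraction(comb(h, m) ** 2, total)
               for m in range(h + 1) if abs(m - n / 4) > t)

# the exact tail sits below the explicit sub-Gaussian bound 2*exp(-4*t^2/n)
for n in (8, 16, 32):
    for t in (1, 2, 3):
        assert float(tail_prob(n, t)) <= 2 * exp(-4 * t * t / n)
```

The constant 2 in the bound is our choice (Hoeffding's inequality for sampling without replacement); the document's $O(\cdot)$ statement does not pin it down.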

Now the random variable in (‡), call it Xn, is bounded by 1, and on the event An,

within a factor of 1+O(n-1/3),

\[
X_n = (4/n)^8 (M_1 - M_2 + 1)^4 (n/2 + 1)^4.
\]

Therefore

\[
P(E) \le (5/n)^8 (n/2 + 1)^4\, \mathbb{E}\big[(M_1 - M_2 + 1)^4\big] + O\big(e^{-c_1 n^{1/3}}\big).
\]

It remains to prove that this expected value is $O(n^2)$. Introduce $\bar M_i = M_i - \mathbb{E}[M_i]$, $i = 1, 2$. Then

\[
(M_1 - M_2 + 1)^4 = (\bar M_1 - \bar M_2 + 1)^4 \le 27\big(\bar M_1^4 + \bar M_2^4 + 1\big),
\]

as $(a + b + c)^2 \le 3(a^2 + b^2 + c^2)$, applied twice.

We now demonstrate that $\mathbb{E}[\bar M_i^4] = O(n^2)$. To this end, notice first that $\mathbb{E}[\bar M_i^2]$ is of

order n exactly. Indeed, extending the computation in (2.3),

\[
\mathbb{E}[M_i(M_i - 1)] = \frac{n}{2}\Big(\frac{n}{2} - 1\Big)\binom{n-2}{n/2-2} \Big/ \binom{n}{n/2} = \frac{n(n-2)^2}{16(n-1)}.
\]

Therefore


\[
\mathbb{E}[\bar M_i^2] = \mathrm{Var}[M_i] = \mathbb{E}[M_i(M_i - 1)] + \mathbb{E}[M_i] - \mathbb{E}^2[M_i]
= \frac{n(n-2)^2}{16(n-1)} + \frac{n}{4} - \frac{n^2}{16} = \frac{n}{16} + O(1). \tag{2.4}
\]
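Formulas (2.3) and (2.4) are easy to confirm exactly for small even n. The following sketch (our verification, not the text's) recomputes the mean and variance of $M_i$ directly from the hypergeometric probabilities:

```python
from fractions import Fraction
from math import comb

def mean_var(n):
    """Exact mean and variance of M ~ hypergeometric(n/2, n/2, n/2)."""
    h = n // 2
    total = comb(n, h)
    pmf = {m: Fraction(comb(h, m) ** 2, total) for m in range(h + 1)}
    assert sum(pmf.values()) == 1                  # the pmf from Section 2.2
    mean = sum(m * p for m, p in pmf.items())
    var = sum((m - mean) ** 2 * p for m, p in pmf.items())
    return mean, var

for n in (8, 12, 20):
    mean, var = mean_var(n)
    assert mean == Fraction(n, 4)                  # formula (2.3)
    # formula (2.4): Var = n(n-2)^2 / (16(n-1)) + n/4 - n^2/16
    assert var == (Fraction(n * (n - 2) ** 2, 16 * (n - 1))
                   + Fraction(n, 4) - Fraction(n * n, 16))
```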

Furthermore, as a special instance of the hypergeometrically distributed random vari-

able, Mi has the same distribution as the sum of n/2 independent Bernoulli variables

$Y_j \in \{0, 1\}$ (see Vatutin and Mikhailov [38], alternatively [31, p. 30]). Therefore,

(2.4) and the Lindeberg-Feller Central Limit Theorem imply

\[
\frac{\bar M_i}{\sqrt{n/16}} \Longrightarrow N(0, 1), \tag{2.5}
\]

where N(0,1) is the standard normal random variable. In fact, since

\[
\frac{Y_j - \mathbb{E}[Y_j]}{\sqrt{n/16}} \to 0, \qquad n \to \infty,
\]

we can say more. Indeed, we have (2.5) together with convergence of all the moments

(see Billingsley [4, p. 391]). Therefore, in particular

\[
\frac{\mathbb{E}[\bar M_i^4]}{(n/16)^2} \to \mathbb{E}\big[N(0,1)^4\big], \qquad n \to \infty,
\]

i.e. $\mathbb{E}[\bar M_i^4] = O(n^2)$. This completes the proof of Theorem 1.2.1 (upper bound).

CHAPTER 3

THE PROOF OF THE BRUHAT ORDER LOWER BOUND

In this chapter, we prove Theorem 1.2.1, lower bound. We will actually prove some-

thing better than what was stated there, showing that for each ε >0

\[
P(\pi < \sigma) = \Omega\big((\alpha - \varepsilon)^n\big),
\]

where

\[
\alpha = Q_{11}^{1/11} = \big(N_{11}/22!\big)^{1/11} = 0.70879\ldots,
\]
with $N_{11}$ as computed in Section 3.3.

First, some preliminaries.

3.1 A Sufficient Condition for Bruhat Comparability

Introduce π* ( σ* resp.), the permutation π ( σ resp.) with the element n deleted.

More generally, for k<n, πk* ( σk* resp.) is the permutation of [n -k] obtained from

π ( σ resp.) by deletion of the k largest elements, n, n -1, ..., n -k+1. The key to

the proof is the following:


Figure 3.1: G5 and an emboldened subgrid C.

Lemma 3.1.1. Let k ∈ [n]. If every northeastern submatrix of M(π, σ) with at most

k rows contains at least as many crosses as balls, and πk*<σk*, then π<σ.

Before proceeding with the proof, we introduce one more bit of notation. Let Gn be

the empty n ×n grid depicted in the M(ċ)'s of Figure 1.2. Figure 3.1 is a depiction

of G5 and an emboldened northeastern-corner 3 ×4 subgrid of it, denoted by C. If

C is any subgrid of Gn, then M(ċ|C) denotes the submatrix of M(ċ) that "sits" on

C. To repeat, the (0, 1)-matrix criterion says that π<σ if and only if for each

northeastern-corner subgrid C of Gn, we have at least as many crosses as balls in

M(π, σ|C).
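For concreteness, here is a small sketch of a comparability test based on the counting form of the (0,1)-matrix criterion (the function name and the particular corner convention are ours; the counting form is equivalent to the corner-subgrid statement above):

```python
def bruhat_leq(p, q):
    """Test p <= q in the Bruhat order via the (0,1)-matrix criterion, in the
    equivalent counting form: #{a <= i : p(a) >= j} <= #{a <= i : q(a) >= j}
    for all i, j (crosses must not outnumber balls in any corner submatrix)."""
    n = len(p)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if sum(v >= j for v in p[:i]) > sum(v >= j for v in q[:i]):
                return False
    return True

assert bruhat_leq((2, 1, 3), (2, 3, 1))               # one transposition apart
assert not bruhat_leq((2, 3, 1), (2, 1, 3))
assert bruhat_leq((1, 2, 5, 3, 4), (4, 5, 1, 3, 2))   # the pair of Figure 3.2
```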

Proof. By the assumption, it suffices to show that the balls do not outnumber crosses

in M(π, σ|C) for every such subgrid C with strictly more than k rows. Consider any

such C. Let C(k) denote the subgrid formed by the top k rows of C. Given a submatrix

A of M(π) (of M(σ) resp.), let |A| denote the number of columns in A with a cross

(a ball resp.). We need to show


Figure 3.2: Deletion of the 2 largest elements of π, σ, and its effect on C.

\[
|M(\pi|C)| \ge |M(\sigma|C)|.
\]

By the assumption, we have $|M(\pi|C^{(k)})| \ge |M(\sigma|C^{(k)})|$. Write $|M(\pi|C^{(k)})| = |M(\sigma|C^{(k)})| + \lambda$, $\lambda \ge 0$. We now delete the top k rows from M(π), M(σ) together

with the k columns that contain the top k crosses in the case of M(π) and the k

columns that contain the top k balls in the case of M(σ). This produces the matrices

M(πk*) and M(σk*). In either case, we obtain the grid Gn-k together with a new

northeastern subgrid: C(πk*) in the case of M(π) and C(σk*) in the case of M(σ).

Figure 3.2 is a graphical visualization of this deletion process in the special case

π=12534, σ=45132, k=2 and C the 3 ×4 northeastern subgrid of G5. We have

emboldened C in M(π) , M(σ) , and the resulting C(π2*), C(σ2*) in M(π2*), M(σ2*)

respectively.

Since we delete more columns in the case of π than σ, note that C(πk*)C(σk*) as


northeastern subgrids of $G_{n-k}$. In fact, these grids have the same number of rows, but $C(\pi_k^*)$ has λ fewer columns. Hence, as $\pi_k^* < \sigma_k^*$, we have

\[
|M(\pi_k^*|C(\pi_k^*))| \ge |M(\sigma_k^*|C(\pi_k^*))| \ge |M(\sigma_k^*|C(\sigma_k^*))| - \lambda.
\]

So

\[
\begin{aligned}
|M(\pi|C)| &= |M(\pi|C^{(k)})| + |M(\pi_k^*|C(\pi_k^*))| \\
&= |M(\sigma|C^{(k)})| + \lambda + |M(\pi_k^*|C(\pi_k^*))| \\
&\ge |M(\sigma|C^{(k)})| + |M(\sigma_k^*|C(\sigma_k^*))| \\
&= |M(\sigma|C)|,
\end{aligned}
\]

which proves the lemma.

3.2 A Reduction to Uniforms

For each k ≤ n, let $E_{n,k}$ denote the event "every northeastern submatrix within the top k rows has at least as many crosses as balls." Then by Lemma 3.1.1,

\[
\{\pi < \sigma\} \supseteq E_{n,k} \cap \{\pi_k^* < \sigma_k^*\}.
\]

Now the events En,k and {πk*<σk*} are independent! So we get

\[
P(\pi < \sigma) \ge P(E_{n,k})\, P(\pi_k^* < \sigma_k^*). \tag{3.1}
\]

For the permutation π (σ resp.) introduce $l_i(\pi) = \pi^{-1}(i)$ ($l_i(\sigma) = \sigma^{-1}(i)$ resp.),

the index of a column that contains a cross (a ball resp.) at the intersection with

29

row i. In terms of the $l_i(\cdot)$'s, $E_{n,k}$ is the event: for each integer j ≤ k and m ≤ n, the number of $l_n(\pi), l_{n-1}(\pi), \ldots, l_{n-j+1}(\pi)$ that are at least m is greater than or equal to the number of $l_n(\sigma), l_{n-1}(\sigma), \ldots, l_{n-j+1}(\sigma)$ that are at least m. We could have replaced the integer m ≤ n with a real number, which means that

\[
E_{n,k} = \{(\pi, \sigma) : (l(\pi), l(\sigma)) \in C_k\},
\]

for some cone-shaped (Borel) set $C_k \subset \mathbb{R}^{2k}$; here $l(\pi) = \{l_{n-i+1}(\pi)\}_{1 \le i \le k}$, $l(\sigma) = \{l_{n-i+1}(\sigma)\}_{1 \le i \le k}$.

Our task is to estimate sharply $P(E_{n,k})$ for fixed k as $n \to \infty$. Observe first that $l(\pi)$ and $l(\sigma)$ are independent, and each is uniformly distributed. For instance,

\[
P\big(l_n(\pi) = j_1, \ldots, l_{n-k+1}(\pi) = j_k\big) = \frac{1}{(n)_k}
\]

for any distinct $j_1, \ldots, j_k \in [n]$,

where $(n)_k = n(n-1)\cdots(n-k+1)$. Since $(n)_k \sim n^k$ as $n \to \infty$, the variables $l_n(\pi), \ldots, l_{n-k+1}(\pi)$ are almost independent [n]-uniforms for large n and fixed k. Let us make

this asymptotic reduction rigorous. Let U be a uniform-[0, 1] random variable, and

let $U_1, \ldots, U_k$ be independent copies of U. Then each $\lceil nU_i \rceil$ is uniform on [n], and it is easy to show that

\[
P\big(\lceil nU_1 \rceil = i_1, \ldots, \lceil nU_k \rceil = i_k \,\big|\, \lceil nU_1 \rceil, \ldots, \lceil nU_k \rceil \text{ all distinct}\big) = \frac{1}{(n)_k}
\]

for any distinct $i_1, \ldots, i_k \in [n]$.

In other words, $\{l_{n-i+1}(\pi)\}_{1 \le i \le k}$ has the same distribution as the random vector $\lceil nU \rceil := \{\lceil nU_i \rceil\}_{1 \le i \le k}$ conditioned on the event $A_{n,k} = \{\lceil nU_1 \rceil, \ldots, \lceil nU_k \rceil \text{ all distinct}\}$.

Analogously $\{l_{n-i+1}(\sigma)\}_{1 \le i \le k}$ is distributed as $\lceil nV \rceil := \{\lceil nV_i \rceil\}_{1 \le i \le k}$ conditioned on $B_{n,k} = \{\lceil nV_1 \rceil, \ldots, \lceil nV_k \rceil \text{ all distinct}\}$, where $V_1, \ldots, V_k$ are independent [0, 1]-uniforms, independent of $U_1, \ldots, U_k$. We will need yet another event $D_{n,k}$ on which

\[
\min\Big\{ \min_{i \ne j} |U_i - U_j|,\; \min_{i \ne j} |V_i - V_j|,\; \min_{i, j} |U_i - V_j| \Big\} > 1/n.
\]

Clearly on Dn,k

\[
(\lceil nU \rceil, \lceil nV \rceil) \in C_k \iff (U, V) \in C_k;
\]

here $U := \{U_i\}_{1 \le i \le k}$, $V := \{V_i\}_{1 \le i \le k}$. In addition $D_{n,k} \subseteq A_{n,k} \cap B_{n,k}$, and

\[
P(D_{n,k}^c) \le 2k^2\, P\big(|U_1 - U_2| \le 1/n\big) \le 4k^2/n.
\]

Therefore

\[
\begin{aligned}
P(E_{n,k}) &= P\big((l(\pi), l(\sigma)) \in C_k\big) \\
&= \frac{P\big(\{(\lceil nU \rceil, \lceil nV \rceil) \in C_k\} \cap A_{n,k} \cap B_{n,k}\big)}{P(A_{n,k} \cap B_{n,k})} \\
&= \frac{P\big(\{(U, V) \in C_k\} \cap D_{n,k}\big) + O(P(D_{n,k}^c))}{1 - O(P(D_{n,k}^c))} \\
&= \frac{P\big((U, V) \in C_k\big) + O(k^2/n)}{1 - O(k^2/n)} \\
&= Q_k + O(k^2/n),
\end{aligned}
\]

where $Q_k = P\big((U, V) \in C_k\big)$. Let us write $P_n = P(\pi < \sigma)$. Using (3.1) and the last estimate we then obtain

\[
P_n \ge Q_k P_{n-k}\big(1 - O(k^2/n)\big) = Q_k P_{n-k}\, e^{-O(k^2/n)}, \qquad n > k.
\]


Iterating this inequality $\lfloor n/k \rfloor$ times gives

\[
P_n \ge Q_k^{\lfloor n/k \rfloor}\, P_{n - \lfloor n/k \rfloor k}\, \exp\left[-\sum_{j=0}^{\lfloor n/k \rfloor - 1} O\Big(\frac{k^2}{n - jk}\Big)\right].
\]

Since the sum in the exponent is of order $O(k^2 \log n)$, we get

\[
\liminf_{n \to \infty} P_n^{1/n} \ge Q_k^{1/k}, \qquad \forall\, k \ge 1.
\]

Thus

\[
\liminf_{n \to \infty} P_n^{1/n} \ge \sup_{k} Q_k^{1/k}.
\]

Therefore, for each k and $\varepsilon \in (0, Q_k^{1/k})$, we have

\[
P_n = \Omega\big((Q_k^{1/k} - \varepsilon)^n\big). \tag{3.2}
\]

Next

Lemma 3.2.1. As a function of k, $Q_k$ is supermultiplicative, i.e. $Q_{k_1 + k_2} \ge Q_{k_1} Q_{k_2}$ for all $k_1, k_2 \ge 1$. Consequently there exists $\lim_{k \to \infty} Q_k^{1/k}$, and moreover

\[
\lim_{k \to \infty} Q_k^{1/k} = \sup_{k \ge 1} Q_k^{1/k}.
\]

Thus we expect that our lower bound would probably improve as k increases.

Proof. $Q_k$ is the probability of the event $E_k = \{(U^{(k)}, V^{(k)}) \in C_k\}$; here $U^{(k)} := \{U_i\}_{1 \le i \le k}$, $V^{(k)} := \{V_i\}_{1 \le i \le k}$. Explicitly, for each j ≤ k and each c ∈ [0, 1], the


number of U1, ..., Uj not exceeding c is at most the number of V1, ..., Vj not exceeding

c. So $Q_{k_1+k_2} = P(E_{k_1+k_2})$, $Q_{k_1} = P(E_{k_1})$, while $Q_{k_2} = P(E_{k_2}) = P(E_{k_2}^*)$. Here the event $E_{k_2}^*$ means that for each $j^* \le k_2$ and each c ∈ [0, 1], the number of $U_i$, $i = k_1+1, \ldots, k_1+j^*$, not exceeding c is at most the number of $V_i$, $i = k_1+1, \ldots, k_1+j^*$,

not exceeding c. The events Ek1 and Ek2* are independent. Consider the intersection

of Ek1 and Ek2* . There are two cases:

1) $j \le k_1$. Then the number of $U_i$, $i \le j$, not exceeding c is at most the number of $V_i$, $i \le j$, not exceeding c, as $E_{k_1}$ holds.

2) $k_1 < j \le k_1 + k_2$. Then the number of $U_i$, $i \le j$, not exceeding c is at most the number of $V_i$, $i \le k_1$, not exceeding c (as $E_{k_1}$ holds), plus the number of $V_i$, $k_1 < i \le j$, not exceeding c (as $E_{k_2}^*$ holds). The total of these is the number of all $V_i$, $i \le j$, that are at most c, c ∈ [0, 1].

So $E_{k_1} \cap E_{k_2}^* \subseteq E_{k_1+k_2}$, and we get $Q_{k_1+k_2} \ge Q_{k_1} Q_{k_2}$. The rest of the statement follows from a well-known result about super(sub)multiplicative sequences (see Pólya and Szegő [43, p. 23, ex. 98]).

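Using the values $N_1 = 1$, $N_2 = 7$, $N_3 = 135$ computed in Section 3.3, the supermultiplicativity $Q_{k_1+k_2} \ge Q_{k_1} Q_{k_2}$ can be checked directly for the few cases these values allow (a numerical illustration only):

```python
from fractions import Fraction
from math import factorial

# Q_k = N_k / (2k)!  with N_1, N_2, N_3 taken from Section 3.3
Q = {k: Fraction(N, factorial(2 * k)) for k, N in {1: 1, 2: 7, 3: 135}.items()}

assert Q[2] >= Q[1] * Q[1]      # Q_2 >= Q_1^2
assert Q[3] >= Q[1] * Q[2]      # Q_3 >= Q_1 Q_2
assert Q[3] >= Q[1] ** 3
```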

Given $1 \le j \le i \le k$, let $U_{i,j}$ ($V_{i,j}$ resp.) denote the j-th element in the increasing rearrangement of $U_1, \ldots, U_i$ ($V_1, \ldots, V_i$ resp.). Then, to put it another way, $Q_k$ is the probability that the Ehresmann-type conditions are met by the independent k-dimensional random vectors U and V, both of which have independent entries. That is, we check $U_{i,j} \ge V_{i,j}$ for each $1 \le j \le i \le k$ by performing element-wise comparisons in the following tableaux:

in the following tableaux:

U_{k,1}  U_{k,2}  U_{k,3}  ...  U_{k,k}        V_{k,1}  V_{k,2}  V_{k,3}  ...  V_{k,k}
  ...      ...      ...                          ...      ...      ...
U_{3,1}  U_{3,2}  U_{3,3}                      V_{3,1}  V_{3,2}  V_{3,3}
U_{2,1}  U_{2,2}                               V_{2,1}  V_{2,2}
U_{1,1}                                        V_{1,1}

3.3 An Algorithm to Maximize the Bound

What's left is to explain how we determined α = 0.70879....

It should be clear that whether or not (U(k), V(k)) is in Ck depends only on the

size ordering of U1, ..., Uk, V1, ..., Vk. There are (2k)! possible orderings, all being

equally likely. Thus Qk=Nk/(2k)!, where Nk is the number of these linear orderings

satisfying this modified Ehresmann criterion. Since the best constant in the lower

exponential bound is probably $\lim_{k \to \infty} Q_k^{1/k}$, our task was to compute $N_k$ for k as large as our computer could handle. ("Probably", because we do not know for certain that $Q_k^{1/k}$ increases with k.)
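Since membership in $C_k$ depends only on the size ordering, $N_k$ can be recovered for tiny k by brute force over all (2k)! orderings of ranks; the sketch below checks the modified Ehresmann criterion directly (our own verification, feasible only for small k):

```python
from itertools import permutations

def in_Ck(U, V):
    """Modified Ehresmann criterion: for each j, the increasing rearrangement
    of U_1..U_j dominates that of V_1..V_j elementwise (U_{j,t} >= V_{j,t})."""
    for j in range(1, len(U) + 1):
        u, v = sorted(U[:j]), sorted(V[:j])
        if any(ut < vt for ut, vt in zip(u, v)):
            return False
    return True

def N(k):
    # orderings of 2k distinct reals <-> permutations of their ranks
    return sum(in_Ck(w[:k], w[k:]) for w in permutations(range(2 * k)))

assert [N(k) for k in (1, 2, 3)] == [1, 7, 135]
```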

Here is how Nk was tabulated. Recursively, suppose we have determined all Nk-1

orderings of $x_1, \ldots, x_{k-1}, y_1, \ldots, y_{k-1}$ such that $(x^{(k-1)}, y^{(k-1)}) \in C_{k-1}$. Each such ordering can be assigned a 2(k-1)-long sequence of 0's and 1's: 0's for the $x_i$'s and 1's for the $y_j$'s, $1 \le i, j \le k-1$. Each such sequence meets the ballot-theorem condition: as we read it from left to right, the number of 1's never falls below the number of 0's. We also record the multiplicity of each sequence, which is the number of times it is encountered in the list of all $N_{k-1}$ orderings. The knowledge of all 2(k-1)-long


ballot-sequences together with their multiplicities is all we need to compile the list of

all 2k-long ballot-sequences with their respective multiplicities.

For k=1, there is only one ballot-sequence to consider, namely 10, and its multiplicity

is 1. So N1=1, and

Q1=1/2!.

Passing to k = 2, we must count the number of ways to insert 1 and 0 into 10 so that we get a 4-long ballot-sequence of two 0's and two 1's. Inserting 1 at the beginning,

giving 110, we can insert 0 into positions 2, 3 or 4, producing three ballot-sequences

1010, 1100, 1100,

respectively. (Inserting 0 into position 1 would have resulted in 0110 which is not a

ballot-sequence.) Similarly, inserting 1 into position 2, we get 110, and inserting 0

under the ballot condition gives three ballot-sequences

1010, 1100, 1100.

Finally, inserting 1 at the end, giving 101, we can only insert 0 at the end, obtaining

one ballot-sequence

1010.

Hence, starting from the ballot-sequence 10 of multiplicity 1, we have obtained two


4-long ballot-sequences, 1010 of multiplicity 3 and 1100 of multiplicity 4. Therefore

N2=3+4=7, and

Q2=7/4!.

Pass to k=3. Sequentially we insert 1 in each of 5 positions in the ballot-sequence

1010, and then determine all positions for the new 0 which would result in a 6-long

ballot-sequence. While doing this we keep track of how many times each 6-long

ballot-sequence is encountered. Multiplying these numbers by 3, the multiplicity of

1010, we obtain a list of 6-long ballot-sequences spawned by 1010 with the number

of their occurrences. We do the same with the second sequence 1100. Adding the

numbers of occurrences of each 6-long ballot-sequence for 1010 and 1tOO we arrive at

the following list of five 6-long ballot-sequences with their respective multiplicities:

111000 : 36,

110100 : 32,

110010 : 24,

101100 : 24,

101010 : 19.

Therefore N3=36+32+24+24+19 =135, and

Q3=135/6!.
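The insertion procedure just described is easy to mechanize. The sketch below carries the list of ballot-sequences and multiplicities from one k to the next exactly as in the worked examples above (function names are ours):

```python
def is_ballot(s):
    """Prefix condition: the number of 1's never falls below the number of 0's."""
    bal = 0
    for ch in s:
        bal += 1 if ch == '1' else -1
        if bal < 0:
            return False
    return True

def next_level(level):
    """Insert a new 1, then a new 0, in all positions; keep ballot-sequences,
    accumulating multiplicities."""
    out = {}
    for s, mult in level.items():
        for i in range(len(s) + 1):
            t = s[:i] + '1' + s[i:]
            for j in range(len(t) + 1):
                u = t[:j] + '0' + t[j:]
                if is_ballot(u):
                    out[u] = out.get(u, 0) + mult
    return out

level = {'10': 1}                  # k = 1: one ballot-sequence, multiplicity 1
counts = [1]
for _ in range(2):                 # advance to k = 2, then k = 3
    level = next_level(level)
    counts.append(sum(level.values()))

assert counts == [1, 7, 135]       # N_1, N_2, N_3
assert level == {'111000': 36, '110100': 32, '110010': 24,
                 '101100': 24, '101010': 19}
```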


Table 3.1: Exact computation of Nk for smallish k.

We wrote a computer program for this algorithm. Pushed to its limit, the computer

delivered table 3.1.

Combining (3.2) and the value of $Q_{11}^{1/11}$ in this table, we see that for each ε > 0,

\[
P_n = \Omega\big((Q_{11}^{1/11} - \varepsilon)^n\big) = \Omega\big((0.708\ldots - \varepsilon)^n\big).
\]

The numbers $Q_k^{1/k}$ increase steadily for k ≤ 11, so at this moment we would not rule out the tantalizing possibility that $Q_k^{1/k} \to 1$ as $k \to \infty$. Determination of the actual

37

limi t is a challenging open problem. The proof just given only involves mention of

the (0, 1)-matrix criterion, but it was the Ehresmann criterion that actually inspired

our initial insights.

CHAPTER 4

AN EXTENSION TO CHAINS IN BRUHAT ORDER

Our goal in this chapter is to prove Theorem 1.2.2, which extends our upper bound

result on Bruhat-comparability of permutation-pairs. Namely, we will show that for

$\pi_1, \ldots, \pi_r \in S_n$ selected independently and uniformly at random, we have

\[
P(\pi_1 < \cdots < \pi_r) = O\big(n^{-r(r-1)}\big).
\]

So the number of length-r chains in the Bruhat poset is of order at most $n^{-r(r-1)}(n!)^r$.

Our basic approach will be at its core, the same as it was for permutation-pairs, but

our enumerative techniques will mimic those established by Gessel and Viennot to

count various classes of non-intersecting lattice paths (see [26]). These same tech-

niques are also highlighted in Bressoud's book [13], which recounts the proof of the

Alternating-Sign Matrix Conjecture.

4.1 The General Setup

First, some preliminaries. Recall the "superimposed" matrix M(π, σ) of ×'s and ◦'s we introduced earlier (×'s for π and ◦'s for σ). Let's introduce the analogous, more general matrix $M(\pi_1, \ldots, \pi_r)$, where we place $\times_j$'s at positions $(i, \pi_j(i))$, $1 \le i \le n$,


$1 \le j \le r$. Here, we read rows bottom to top and columns left to right. For instance,

if π1=21543, π2=25143 and π3= 54213, we have

M(π1, π2, π3) =

      5 |   ·      ·    ×1×2×3    ·      ·
      4 |  ×3      ·      ·     ×1×2     ·
      3 |  ×2     ×3      ·       ·     ×1
      2 |  ×1      ·      ·      ×3     ×2
      1 |   ·    ×1×2     ·       ·     ×3
            1      2      3       4      5

Given a set of rows I ⊆ [n] and columns J ⊆ [n], let $M(\cdot)_{I,J}$ denote the submatrix

of M(·) corresponding to rows I and columns J. Again, rows are labeled 1, 2, ..., n from bottom to top, and columns are labeled 1, 2, ..., n from left to right. The (0, 1)-matrix criterion says that $\pi_1 < \cdots < \pi_r$ if and only if in each southwest submatrix $M(\pi_1, \ldots, \pi_r)_{[\mu],[\nu]}$, μ, ν ∈ [n], we have

\[
\#\{\times_1\text{'s}\} \ge \#\{\times_2\text{'s}\} \ge \cdots \ge \#\{\times_r\text{'s}\}.
\]

Note that this is the case in M(π1, π2, π3) above, so that we have π1<π2<π3. One

more bit of notation: let $M_j(\mu, \nu)$ denote the number of $\times_j$'s in $M(\pi_j)_{[\mu],[\nu]}$. In this notation,

\[
\pi_1 < \cdots < \pi_r \iff M_1(\mu, \nu) \ge \cdots \ge M_r(\mu, \nu) \quad \forall\, \mu, \nu \in [n].
\]
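As a sketch (with our own function name, and permutations flattened to tuples), the chain criterion can be checked mechanically; the asserted example is the triple π1 = 21543, π2 = 25143, π3 = 54213 displayed above:

```python
def bruhat_chain(*perms):
    """Chain test via the (0,1)-matrix criterion: pi_1 < ... < pi_r requires
    M_1(mu, nu) >= ... >= M_r(mu, nu) for all mu, nu, where
    M_j(mu, nu) = #{i <= mu : pi_j(i) <= nu} (the southwest counts)."""
    n = len(perms[0])
    for mu in range(1, n + 1):
        for nu in range(1, n + 1):
            counts = [sum(v <= nu for v in p[:mu]) for p in perms]
            if any(a < b for a, b in zip(counts, counts[1:])):
                return False
    return True

assert bruhat_chain((2, 1, 5, 4, 3), (2, 5, 1, 4, 3), (5, 4, 2, 1, 3))
assert not bruhat_chain((5, 4, 2, 1, 3), (2, 5, 1, 4, 3), (2, 1, 5, 4, 3))
```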


4.2 A Tractable Necessary Condition

Now, as in the case of permutation-pairs, we need to find an event which contains

{π1<...<πr}

that is more amenable to enumerative techniques. For simplicity, let's assume n is even and fix μ = ν = n/2 (in what follows, only minor modifications are necessary for

n odd). In our computations, we will primarily concentrate on the single southwest

submatrix

Msw(π1, ..., πr):=M(π1, ..., πr)[n/2],[n/2].

We similarly denote by

Mnw(π1, ..., πr), Mne(π1, ..., πr), Mse(π1, ..., πr)

the northwest, northeast and southeast n/2×n/2 subsquares of M(π1 , . . . , πr), re-

spectively. If π1<...<πr, it is necessary that

\[
M_1(i, n/2) \ge \cdots \ge M_r(i, n/2), \qquad i = 1, \ldots, n/2. \tag{4.1}
\]

This is analogous to the necessary condition we considered for permutation-pairs: we

read off rows of $M_{sw}(\pi_1, \ldots, \pi_r)$ one at a time, keeping track of the total number of $\times_j$'s encountered thus far, $1 \le j \le r$. If $\pi_1 < \cdots < \pi_r$, then at any intermediate point in this "reading-off" process we must never have encountered more $\times_j$'s than $\times_{j-1}$'s, $1 < j \le r$.


Let Esw denote the event described in (4.1). Recalling our work with permutation-

pairs, we extract similar events necessary for {π1<...<πr} by considering columns

in the northwest n/2×n/2 square (denote this event Enw), rows in the northeast

square (Ene) and columns in the southeast square (Ese). Then

\[
\{\pi_1 < \cdots < \pi_r\} \subseteq E := E_{sw} \cap E_{nw} \cap E_{ne} \cap E_{se},
\]

and unlike the set {π1<...<πr}, we can compute |E|! Namely, our task is reduced

to showing

\[
P(E) = \frac{|E|}{(n!)^r} = O\big(n^{-r(r-1)}\big).
\]

We have seen Figure 4.1 before, but we show it again here to aid in visualizing the

"reading-off" process used to generate the restrictions defining the event E.

4.3 The Core Counting Problem

If a row (column) of a submatrix M(πj)I,J contains a marked entry, we say that it

supports the submatrix. Clearly the number of supporting rows (columns) equals

the number of marked entries in M(πj)I,J. Now, given πj, 1<j<r, let Mj:=

$M_j(n/2, n/2)$, the total number of rows that support $M_{sw}(\pi_j)$. Then $M_{nw}(\pi_j)$ is supported by $n/2 - M_j$ columns. The same holds for the southeastern corner, $M_{se}(\pi_j)$. Obviously the northeastern submatrix $M_{ne}(\pi_j)$ is supported by $M_j$ rows. Then we

have


Figure 4.1: Finding a necessary condition for π1<...<πr.

\[
P(E) = \sum_{m_1, \ldots, m_r} P\big(E(m_1, \ldots, m_r)\big), \tag{4.2}
\]

\[
E(m_1, \ldots, m_r) := E \cap \{(\pi_1, \ldots, \pi_r) : M_1 = m_1, \ldots, M_r = m_r\}.
\]

Clearly, by the (0, 1)-matrix criterion, if there is a pair $1 \le i < j \le r$ such that $m_i < m_j$, then $E(m_1, \ldots, m_r) = \emptyset$. Otherwise, we claim:

Theorem 4.3.1. For $m_1 \ge \cdots \ge m_r$,

\[
P\big(E(m_1, \ldots, m_r)\big) = \left[\prod_{1 \le i < j \le r} \frac{(m_i - m_j + j - i)(n/2 + j - i)}{(m_i + j - i)(n/2 - m_j + j - i)}\right]^4 \cdot \prod_{i=1}^{r} \frac{\binom{n/2}{m_i}\binom{n/2}{n/2 - m_i}}{\binom{n}{n/2}}.
\]

This result is really the crux of our argument. The proof, however, will take a little

work. We present it in three steps.

Proof. Let us assume we are on the event $E(m_1, \ldots, m_r)$ and that $m_1 \ge \cdots \ge m_r$.

STEP 1. As promised, we first concentrate on the southwest $n/2 \times n/2$ subsquare $M_{sw}(\pi_1, \ldots, \pi_r)$. For each $1 \le j \le r$, we need to choose $m_j$ rows to support $M_{sw}(\pi_j)$ in such a way that $E_{sw}$ is satisfied. We claim that the number of ways to choose these supporting rows, which we denote $N_{n/2}(m_1, \ldots, m_r)$, is given by the determinant-type formula

\[
N_{n/2}(m_1, \ldots, m_r) = \det\left[\binom{n/2}{m_i - i + j}\right]_{i,j=1}^{r}; \tag{4.3}
\]

here i is the row index, and j the column index. To prove (4.3), we will exploit a

connection with non-intersecting lattice paths implicit in the restrictions defining Esw.

As we have already mentioned, these enumerative techniques were first introduced

by Gessel and Viennot [26], and were highlighted by Bressoud [13].

For each j[r], we construct a lattice path associated with Msw(πj) as follows: start

at the point (j, -j) in the plane. If the first row of Msw(πj) contains a marked entry,

execute the move (0, 1). Otherwise, execute the move (1, 0). In general, when looking

at the i-th row of Msw(πj), 1<i<n/2, we move from our current position up 1 unit


if the i-th row contains a marked entry, and move right 1 unit otherwise. This way,

Msw(πj) generates a lattice path consisting of unit moves (0, 1) or (1, 0), connecting

the point (j, -j) with (n/2-mj+j, mj-j).

Now, by considering all r of these paths together in the plane, we get what is known

as a nest of lattice paths. The restrictions defining Esw imply that the nest of r

lattice paths we have just constructed is non-intersecting , i.e. no two paths touch

each other. So to prove the formula (4.3) we need only show that this determinant

counts the total number of these nests of r non-intersecting lattice paths, with moves

(0, 1) and (1, 0), joining the points $S := \{(j, -j) : j \in [r]\}$ to the points $F := \{(n/2 - m_i + i,\, m_i - i) : i \in [r]\}$. Figure 4.2 is an illustration of one such nest of

r=7 paths.

To count the number of these non-intersecting nests, we instead consider the collection

of all nests of r lattice paths, with moves (0, 1) and (1, 0), joining the points of S to

the points of F. We require only that no two of the r paths in this nest begin at the

same point or end at the same point, with no further restrictions. In particular, such

a nest of r lattice paths uses every point from both S and F. This allows for some

very tangled nests, like the one shown in Figure 4.3.

To weed out the intersecting nests from the non-intersecting ones, we will employ a

special inclusion-exclusion type argument which gives rise to a sum over permutations

of [r]. Each nest gives rise to a permutation of [r] as follows: define the i-th path

to be the one that ends at $(n/2 - m_i + i,\, m_i - i)$, i ∈ [r]. If the i-th path starts at $(j, -j)$, j ∈ [r], then we define σ(i) = j. For instance, the tangled nest in Figure 4.3

corresponds to the permutation σ= 3512476.


Figure 4.2: A non-intersecting nest of 7 lattice paths.


Figure 4.3: An intersecting nest of 7 lattice paths.

On the other hand, given $\sigma \in S_r$, in order that a nest give rise to σ in this correspondence, the i-th lattice path must end at $(n/2 - m_i + i,\, m_i - i)$ and begin at $(\sigma(i), -\sigma(i))$, and so takes a total of $m_i - i + \sigma(i)$ steps northward and $n/2 - m_i + i - \sigma(i)$ steps eastward, i ∈ [r]. Hence, the total number of nests corresponding to the permutation σ equals

\[
\prod_{i=1}^{r} \binom{n/2}{m_i - i + \sigma(i)}.
\]

Introduce

\[
I(\sigma) := \big|\{(i, j) : 1 \le i < j \le r,\; \sigma^{-1}(i) > \sigma^{-1}(j)\}\big|,
\]

the inversion number of σ. We claim that the number of nests of r non-intersecting

lattice paths joining S to F equals

\[
\sum_{\sigma \in S_r} (-1)^{I(\sigma)} \prod_{i=1}^{r} \binom{n/2}{m_i - i + \sigma(i)} = \det\left[\binom{n/2}{m_i - i + j}\right]_{i,j=1}^{r}, \tag{4.4}
\]

which proves the formula (4.3).

To prove (4.4), notice that this determinant sums over all possible nests, both intersecting and non-intersecting, where each nest is counted as +1 if the inversion number of the corresponding σ is even, and as -1 otherwise. If a nest happens to be non-intersecting, then the corresponding permutation is the identity, 12...r, which has inversion number 0, and so these nests are counted as +1. We need to show that

has inversion number 0, and so these nests are counted as +1. We need to show that

everything else in this sum cancels. To do this, we will pair intersecting nests up,

one corresponding to a permutation with an even inversion number the other an odd

inversion number.

Let a nest N with at least one intersection point be given, and let $\sigma \in S_r$ be its

corresponding permutation. Consider the intersection point (x, y) furthest to the

right in N. If there is more than one intersection point in this column, let (x, y) be

the one that is highest. In Figure 4.3, this is the point (13, 2). We now "swap tails"


Figure 4.4: "Swapping" the tails in Figure 4.3.

at (x, y). Specifically, if the paths cross each other at (x, y), we swap the tails so

that they just meet, and vice versa in the other situation. Figure 4.4 is a graphical

visualization of this "swapping" process in the case of our running example Figure

4.3.

Doing this, we get a new intersecting nest N′ that differs from N only at this "swapping" point (x, y). Let $\sigma' \in S_r$ denote the permutation corresponding to N′. By our choice of intersection point (x, y), it is clear that σ′ differs from σ only by a single adjacent swap of entries. For instance, in our Figure 4.3 example, we have σ = 3512476 and σ′ = 3512746. In general we will have $I(\sigma') = I(\sigma) \pm 1$, and so this

pair of intersecting nests cancel each other out in the sum (4.4). Therefore, what we

claimed in (4.4) (and hence (4.3) also) is proved.

STEP 2. Next, we claim that formula (4.3) implies

\[
P\big(E(m_1, \ldots, m_r)\big) = \left(\det\left[\binom{n/2}{m_i - i + j}\right]_{i,j=1}^{r}\right)^4 \cdot \frac{\prod_{i=1}^{r}\big[(m_i!)^2\,((n/2 - m_i)!)^2\big]}{(n!)^r}. \tag{4.5}
\]

As a first step to the proof of (4.5), we notice that we have shown something more


general regarding the count $N_{n/2}(m_1, \ldots, m_r)$. Consider the following ballot-counting problem: suppose we have r candidates, $C_1, \ldots, C_r$, running for election, receiving a total of $\mu_1 \ge \cdots \ge \mu_r$ votes respectively. Suppose we count the votes in a rather peculiar way: we have a total of ν ballot boxes arranged in a row. Each box is

allowed to have at most one vote for each candidate with no further restrictions. In

particular, a given box could possibly be empty, and may have at most r ballots in it,

one cast for each candidate. We open the ballot boxes one at a time, keeping track

of the cumulative total votes cast for each candidate at every intermediate point. We

wish to know the total number of allocations of ballots in boxes so that at each of

these intermediate points, we have C1 with at least as many votes as C2, who in turn

has at least as many votes as C3, and so on. By our very derivation above this count

is given by $N_\nu(\mu_1, \ldots, \mu_r)$. Namely, we have proved:

Lemma 4.3.2. For the ballot-counting problem above, we have

\[
N_\nu(\mu_1, \ldots, \mu_r) = \det\left[\binom{\nu}{\mu_i - i + j}\right]_{i,j=1}^{r}.
\]
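Lemma 4.3.2 can be spot-checked by brute force for tiny parameters. The sketch below enumerates all allocations of ballots to ν boxes (each box holding at most one vote per candidate), imposes the running-domination condition, and compares the count with the determinant; the case ν = 4, μ = (2, 1) is small enough to enumerate completely:

```python
from itertools import product, permutations
from math import comb

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0

def det(M):
    # Leibniz expansion; fine for small r
    r = len(M)
    total = 0
    for p in permutations(range(r)):
        inv = sum(p[a] > p[b] for a in range(r) for b in range(a + 1, r))
        term = (-1) ** inv
        for i in range(r):
            term *= M[i][p[i]]
        total += term
    return total

def ballot_count(nu, mu):
    """Brute-force count of admissible ballot allocations for Lemma 4.3.2."""
    r = len(mu)
    count = 0
    for boxes in product(range(2 ** r), repeat=nu):   # each box: subset of candidates
        votes = [[(b >> c) & 1 for c in range(r)] for b in boxes]
        if [sum(v[c] for v in votes) for c in range(r)] != list(mu):
            continue
        run, ok = [0] * r, True
        for v in votes:
            run = [run[c] + v[c] for c in range(r)]
            if any(run[c] < run[c + 1] for c in range(r - 1)):
                ok = False
                break
        if ok:
            count += 1
    return count

nu, mu = 4, (2, 1)
D = det([[binom(nu, mu[i] - i + j) for j in range(len(mu))] for i in range(len(mu))])
assert D == ballot_count(nu, mu) == 20
```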

What's more we claim that

\[
N_\nu(\nu - \mu_r, \ldots, \nu - \mu_1) = N_\nu(\mu_1, \ldots, \mu_r). \tag{4.6}
\]

Indeed, by Lemma 4.3.2, the left-hand side is given by

\[
N_\nu(\nu - \mu_r, \ldots, \nu - \mu_1) = \det\left[\binom{\nu}{\nu - \mu_{r-i+1} - i + j}\right]_{i,j=1}^{r} = \det\left[\binom{\nu}{\mu_{r-i+1} + i - j}\right]_{i,j=1}^{r}.
\]

We now switch row i with row r - i + 1, and column j with column r - j + 1, i, j ∈ [r]. This has no effect on the determinant. Hence

\[
N_\nu(\nu - \mu_r, \ldots, \nu - \mu_1) = \det\left[\binom{\nu}{\mu_i - i + j}\right]_{i,j=1}^{r} = N_\nu(\mu_1, \ldots, \mu_r),
\]

and formula (4.6) is proved.

We now prove (4.5). First of all, we have already seen that the number of allow-

able supporting-row selections in the southwest subsquare, subject to the restrictions

defining Esw is given by the count in (4.3). A second factor (4.3) comes from choosing

supporting-rows subject to the restrictions defining Ene in the northeast subsquare.

By considering supporting-column selections in the northwest subsquare, subject to

Enw, Lemma 4.3.2 together with equation (4.6) tell us that the total number of al-

lowable supporting-column selections equals

\[
N_{n/2}(n/2 - m_r, \ldots, n/2 - m_1) = N_{n/2}(m_1, \ldots, m_r) = \det\left[\binom{n/2}{m_i - i + j}\right]_{i,j=1}^{r}
\]

also, thus giving a third factor. Analogously, a fourth factor comes from considering

supporting-column selections in the southeast subsquare, subject to the restrictions

defining Ese. So by multiplying these four factors (4.3) we obtain the total number of


row and column selections on the event E (m1, . . . , mr) subject to all four restrictions

defining E!

Once such a row-column selection has been made, we have determined which rows

and columns support the four submatrices of $M(\pi_i)$, i ∈ [r]. Consider, for instance,

the southwest corner of M(π1). We have selected m1 rows (from the first n/2 rows)

supporting Msw (π1), and we have selected n/2-m1 columns (from the first n/2

columns) supporting Mnw(π1). Then it is the remaining n/2-(n/2-m1)=m1

columns that support Msw (π1). The number of ways to match these m1 rows and

m1 columns, thus to determine Msw (π1) completely, is m1!. The northeast corner

contributes another $m_1!$, while each of the two other corners contributes $(n/2 - m_1)!$, whence the overall matching factor is $(m_1!)^2\,((n/2 - m_1)!)^2$. In general, the matching factor for $\pi_i$ is $(m_i!)^2\,((n/2 - m_i)!)^2$, i ∈ [r]. Multiplying the number of admissible row-column selections by the resulting total matching factor $\prod_{i=1}^{r} (m_i!)^2\,((n/2 - m_i)!)^2$ and dividing by $(n!)^r$, we obtain the formula (4.5).

STEP 3. As a final step in the proof of Theorem 4.3.1, we show that

\[
\det\left[\binom{n/2}{m_i - i + j}\right]_{i,j=1}^{r} = \prod_{i=1}^{r} \binom{n/2}{m_i} \cdot \prod_{1 \le i < j \le r} \frac{(m_i - m_j + j - i)(n/2 + j - i)}{(m_i + j - i)(n/2 - m_j + j - i)}. \tag{4.7}
\]

By putting (4.7) into equation (4.5), we leave it to the interested reader to verify that

we get the formula stated in the theorem.

First of all, we note that, for j > i,

\[
\binom{n/2}{m_i - i + j} = \frac{(n/2)!}{(m_i + j - i)\cdots(m_i + 1)\; m_i!\; (n/2 - m_i + i - j)!}
= \binom{n/2}{m_i}\, \frac{(n/2 - m_i)(n/2 - m_i - 1)\cdots(n/2 - m_i + i + 1 - j)}{(m_i + j - i)(m_i + j - i - 1)\cdots(m_i + 1)}
\]
\[
= \frac{\binom{n/2}{m_i}}{(m_i + r - i)\cdots(m_i + 1)\,(n/2 - m_i + i - 1)\cdots(n/2 - m_i + 1)}
\]
\[
\times\, (n/2 - m_i + i - 1)(n/2 - m_i + i - 2)\cdots(n/2 - m_i + i + 1 - j)
\times\, (m_i + j - i + 1)(m_i + j - i + 2)\cdots(m_i + r - i)
\]
\[
= \frac{\binom{n/2}{m_i}}{(m_i + r - i)\cdots(m_i + 1)\,(n/2 - m_i + i - 1)\cdots(n/2 - m_i + 1)}
\times \big[-(x_i + b_2)\big]\big[-(x_i + b_3)\big]\cdots\big[-(x_i + b_j)\big]
\times (x_i + a_{j+1})(x_i + a_{j+2})\cdots(x_i + a_r), \tag{4.8}
\]

where $x_s := m_s - (s - 1)$, $1 \le s \le r$, $a_t := t - 1$ and $b_t := -n/2 + t - 2$, $2 \le t \le r$.

Similarly, for j < i,

\[
\binom{n/2}{m_i - i + j} = \frac{(n/2)!}{(m_i + j - i)!\;(n/2 - m_i + i - j)\cdots(n/2 - m_i + 1)\;(n/2 - m_i)!}
= \binom{n/2}{m_i}\, \frac{m_i (m_i - 1)\cdots(m_i + j - i + 1)}{(n/2 - m_i + i - j)(n/2 - m_i + i - j - 1)\cdots(n/2 - m_i + 1)}
\]
\[
= \frac{\binom{n/2}{m_i}}{(m_i + r - i)\cdots(m_i + 1)\,(n/2 - m_i + i - 1)\cdots(n/2 - m_i + 1)}
\times \big[-(x_i + b_2)\big]\big[-(x_i + b_3)\big]\cdots\big[-(x_i + b_j)\big]
\times (x_i + a_{j+1})(x_i + a_{j+2})\cdots(x_i + a_r). \tag{4.9}
\]

Obviously, for j=i the identities (4.8) and (4.9) hold also. Hence, we obtain

\[
\det\left[\binom{n/2}{m_i - i + j}\right]_{i,j=1}^{r}
= \prod_{i=1}^{r} \frac{\binom{n/2}{m_i}}{(m_i + r - i)\cdots(m_i + 1)\,(n/2 - m_i + i - 1)\cdots(n/2 - m_i + 1)}
\]
\[
\times \det\Big[\big[-(x_i + b_2)\big]\cdots\big[-(x_i + b_j)\big](x_i + a_{j+1})\cdots(x_i + a_r)\Big]_{i,j=1}^{r}
\]
\[
= \prod_{i=1}^{r} \binom{n/2}{m_i} \prod_{1 \le i < j \le r} \frac{1}{(m_i + j - i)(n/2 - m_j + j - i)}
\times \det\Big[\big[-(x_i + b_2)\big]\cdots\big[-(x_i + b_j)\big](x_i + a_{j+1})\cdots(x_i + a_r)\Big]_{i,j=1}^{r}, \tag{4.10}
\]

so our task is reduced to computing the last determinant in (4.10). For this, we


apply the following result of Krattenthaler [35], which extends the Vandermonde

determinant :

Theorem 4.3.3. (Krattenthaler's formula) Given arbitrary values for $x_1, \ldots, x_r$, $a_2, \ldots, a_r$, and $b_2, \ldots, b_r$, we have

\[
\det\Big[(x_i + b_2)\cdots(x_i + b_j)(x_i + a_{j+1})\cdots(x_i + a_r)\Big]_{i,j=1}^{r}
= \prod_{1 \le i < j \le r} (x_i - x_j) \prod_{2 \le i \le j \le r} (b_i - a_j).
\]

In order to use this result, we must factor $(-1)^{j-1}$ out of column j, $1 \le j \le r$, in the last determinant in (4.10). Doing this, we obtain

\[
\det\Big[\big[-(x_i + b_2)\big]\cdots\big[-(x_i + b_j)\big](x_i + a_{j+1})\cdots(x_i + a_r)\Big]_{i,j=1}^{r}
= (-1)^{\binom{r}{2}} \det\Big[(x_i + b_2)\cdots(x_i + b_j)(x_i + a_{j+1})\cdots(x_i + a_r)\Big]_{i,j=1}^{r}
\]
\[
= (-1)^{\binom{r}{2}} \prod_{1 \le i < j \le r} (x_i - x_j) \prod_{2 \le i \le j \le r} (b_i - a_j)
= \prod_{1 \le i < j \le r} (x_i - x_j) \prod_{2 \le i \le j \le r} (a_j - b_i), \tag{4.11}
\]

where the second to last equality follows from Krattenthaler's formula. By our defi-

nition of xs, at and bt, (4.11) implies

\[
\det\Big[\big[-(x_i + b_2)\big]\cdots\big[-(x_i + b_j)\big](x_i + a_{j+1})\cdots(x_i + a_r)\Big]_{i,j=1}^{r}
= \prod_{1 \le i < j \le r} (x_i - x_j) \prod_{2 \le i \le j \le r} (a_j - b_i)
\]
\[
= \prod_{1 \le i < j \le r} (m_i - m_j + j - i) \prod_{2 \le i \le j \le r} (n/2 + j - i + 1)
= \prod_{1 \le i < j \le r} \big[(m_i - m_j + j - i)(n/2 + j - i)\big]. \tag{4.12}
\]

Combining (4.10) with (4.12), formula (4.7) is proved and hence so is Theorem 4.3.1.
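Formula (4.7) — including the prefactor $\prod_i \binom{n/2}{m_i}$, which the copy above reconstructs — can be verified exactly for small parameters; for instance, with n/2 = 4 and (m1, m2, m3) = (3, 2, 1) both sides equal 64 (our own spot-check, not part of the proof):

```python
from fractions import Fraction
from itertools import permutations
from math import comb

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0

def det(M):
    # Leibniz expansion; fine for small r
    r = len(M)
    total = 0
    for p in permutations(range(r)):
        inv = sum(p[a] > p[b] for a in range(r) for b in range(a + 1, r))
        term = (-1) ** inv
        for i in range(r):
            term *= M[i][p[i]]
        total += term
    return total

def rhs_47(half, m):
    """Right-hand side of (4.7), with nu = n/2 = half."""
    r = len(m)
    val = Fraction(1)
    for i in range(r):
        val *= binom(half, m[i])
    for i in range(r):
        for j in range(i + 1, r):
            d = j - i                       # the difference j - i
            val *= Fraction((m[i] - m[j] + d) * (half + d),
                            (m[i] + d) * (half - m[j] + d))
    return val

half, m = 4, (3, 2, 1)
lhs = det([[binom(half, m[i] - i + j) for j in range(len(m))]
           for i in range(len(m))])
assert lhs == rhs_47(half, m) == 64
```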

4.4 A Probabilistic Simplification

Armed with Theorem 4.3.1, and with the combinatorial part behind us, the rest is

relatively straightforward. First, we claim that

\[
P(E) \le \mathbb{E}\left[\prod_{1 \le i < j \le r} \frac{(M_i - M_j + j - i)^4 (n/2 + j - i)^4}{(M_i + j - i)^4 (n/2 - M_j + j - i)^4}\right], \tag{4.13}
\]

where, to repeat, $M_i := M_i(n/2, n/2)$ is the total number of rows that support $M_{sw}(\pi_i)$.

It should be clear that the $M_i$ are independent, with $M_i =_D M$, a hypergeometric random variable with parameters n/2, n/2, n/2. That is, the $M_i$ are independent

copies of M, which in turn equals the number of red balls in a uniformly random

sample of size n/2 from an urn containing a total of n balls, n/2 of them red and n/2

white. In particular

\[
P(M_i = m_i,\; i \in [r]) = \prod_{i=1}^{r} P(M_i = m_i) = \prod_{i=1}^{r} \frac{\binom{n/2}{m_i}\binom{n/2}{n/2 - m_i}}{\binom{n}{n/2}}. \tag{4.14}
\]

Now (4.13) follows easily from (4.2), Theorem 4.3.1 and (4.14):

\[
P(E) = \sum_{m_1 \ge \cdots \ge m_r} \prod_{1 \le i < j \le r} \frac{(m_i - m_j + j - i)^4 (n/2 + j - i)^4}{(m_i + j - i)^4 (n/2 - m_j + j - i)^4} \cdot \prod_{i=1}^{r} \frac{\binom{n/2}{m_i}\binom{n/2}{n/2 - m_i}}{\binom{n}{n/2}}
\]
\[
\le \sum_{m_1, \ldots, m_r} \prod_{1 \le i < j \le r} \frac{(m_i - m_j + j - i)^4 (n/2 + j - i)^4}{(m_i + j - i)^4 (n/2 - m_j + j - i)^4} \cdot P(M_i = m_i,\; i \in [r])
\]
\[
= \mathbb{E}\left[\prod_{1 \le i < j \le r} \frac{(M_i - M_j + j - i)^4 (n/2 + j - i)^4}{(M_i + j - i)^4 (n/2 - M_j + j - i)^4}\right].
\]

Of course, this runs parallel to what we did for permutation-pairs.

4.5 Asymptotics

Next, as we did in the case r=2 also, we finish the argument by using known prop-

erties of the random variables Mi. Namely, it remains to prove that this expectation

is O(n-r(r-1)). To avoid unnecessarily repeating things we did for the case r=2,

suffice it to say that the Mi's are close to their expectation, n/4, with exponentially

high probability (see Janson et al. [31, p. 29]). In particular, there is an absolute

constant c>0 such that

E[∏_{1≤i<j≤r} (Mi−Mj+j−i)^4 (n/2+j−i)^4 / ((Mi+j−i)^4 (n/2−Mj+j−i)^4)]

= O(E[∏_{1≤i<j≤r} (Mi−Mj+j−i)^4 (n/2+j−i)^4 / ((n/4+j−i)^4 (n/2−n/4+j−i)^4)] + e^{−cn^{1/3}})

= O(n^{−4C(r,2)} E[∏_{1≤i<j≤r} (Mi−Mj+j−i)^4] + e^{−cn^{1/3}}),

so we will be done if we can prove that

E[∏_{1≤i<j≤r} (Mi−Mj+j−i)^4] = O(n^{2C(r,2)}).   (4.15)

This will not be difficult, given our careful approach to the similar problem for

permutation-pairs. As we did there, introduce μi := Mi − E[Mi], i∈[r]. Then

∏_{1≤i<j≤r}(Mi−Mj+j−i)^4 = ∏_{1≤i<j≤r}(μi−μj+j−i)^4

≤ ∏_{1≤i<j≤r}[27(μi^4+μj^4+(j−i)^4)]

≤ 27^C(r,2) ∏_{1≤i<j≤r}(μi^4+μj^4+r^4)

= 27^C(r,2) Σ r^{4e0} μ1^{4e1}···μr^{4er},   (4.16)

where the first inequality follows from

(a+b+c)^2 ≤ 3(a^2+b^2+c^2),

applied twice.

Here, the sum ranges over some set of exponents e0, e1, ..., er ∈ {0, 1, ..., C(r,2)} with e0+···+er = C(r,2). Removing the dependencies among these exponents implied by the product range only increases this sum. Therefore, from (4.16) it follows that

∏_{1≤i<j≤r}(Mi−Mj+j−i)^4 ≤ 27^C(r,2) Σ r^{4e0} μ1^{4e1}···μr^{4er}

≤ 27^C(r,2) Σ_{e0,...,er≥0; e0+···+er=C(r,2)} r^{4e0} μ1^{4e1}···μr^{4er}

≤ (27r^4)^C(r,2) Σ_{e1,...,er≥0; e1+···+er≤C(r,2)} μ1^{4e1}···μr^{4er}.

Hence, as the Mi (hence the μi) are independent,

E[∏_{1≤i<j≤r}(Mi−Mj+j−i)^4] ≤ (27r^4)^C(r,2) Σ_{e1,...,er≥0; e1+···+er≤C(r,2)} E[μ1^{4e1}]···E[μr^{4er}].

So, since the total number of terms in this sum is C(r+C(r,2), r), (4.15) will be proved if we

demonstrate that

E[μ1^{4e1}] ··· E[μr^{4er}] = O(n^{2C(r,2)})   (4.17)

for each fixed one of these r-tuples (e1, ..., er). To this end, notice first that E[μi^2] is of order n exactly. Indeed, recall that

E[Mi(Mi−1)] = (n/2)(n/2−1) C(n−2, n/2−2)/C(n, n/2) = n(n−2)^2/(16(n−1)).

Therefore

E[μi^2] = Var[Mi]

= E[Mi(Mi−1)] + E[Mi] − E^2[Mi]

= n(n−2)^2/(16(n−1)) + n/4 − n^2/16   (4.18)

= n/16 + O(1).
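The O(1) closeness of Var[Mi] to n/16 is easy to confirm numerically. The following Python sketch (ours, purely illustrative; the function name is an assumption, not thesis code) computes the exact hypergeometric variance from the pmf and compares it with (4.18).

```python
from math import comb

def hyper_var(n):
    """Exact Var[M], where M = number of red balls in a uniform sample of
    size n/2 from an urn with n/2 red and n/2 white balls (n even)."""
    half = n // 2
    total = comb(n, half)
    pmf = [comb(half, m) * comb(half, half - m) / total for m in range(half + 1)]
    mean = sum(m * p for m, p in enumerate(pmf))
    return sum((m - mean) ** 2 * p for m, p in enumerate(pmf))

for n in (20, 100, 400):
    # closed form (4.18): E[M(M-1)] + E[M] - E[M]^2
    closed_form = n * (n - 2) ** 2 / (16 * (n - 1)) + n / 4 - n ** 2 / 16
    assert abs(hyper_var(n) - closed_form) < 1e-6
    assert abs(hyper_var(n) - n / 16) < 0.2   # the O(1) error term
```

The pmf used here is exactly (4.14) for a single Mi.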

Furthermore, as a special instance of a hypergeometrically distributed random variable, Mi has the same distribution as the sum of n/2 independent Bernoulli variables Yj∈{0,1} (see Vatutin and Mikhailov [38], alternatively [31, p. 30]). Therefore, (4.18) and the Lindeberg-Feller Central Limit Theorem imply

μi/(n/16)^{1/2} → N(0,1) in distribution,   (4.19)

where N(0,1) is the standard normal random variable. In fact, since

(Yj − E[Yj])/(n/16)^{1/2} → 0, n→∞,

we can say more. Indeed, we have (4.19) together with convergence of all the moments

(see Billingsley [4, p. 391]). Therefore, in particular


E[μi^{4ei}] ∼ (n/16)^{2ei} E[N(0,1)^{4ei}], n→∞,

i.e.

E[μ1^{4e1}] ··· E[μr^{4er}] = O(n^{2(e1+···+er)}) = O(n^{2C(r,2)}),

as e1+···+er ≤ C(r,2). This completes the proof of (4.17), and thus of Theorem 1.2.2.

CHAPTER 5

SOME PROPERTIES OF THE WEAK ORDERING

We now move away from the Bruhat order to focus on its more restrictive counterpart,

namely the weak ordering. In anticipation of the proof of Theorem 1.4.1, this chapter

is devoted to various properties of this order.

5.1 A Criterion for Weak Comparability

Recall that π precedes σ in the weak order (π ⪯ σ) if and only if there is a chain σ=ω1, ..., ωs=π where each ωt is a simple reduction of ωt−1, i.e. obtained by transposing two adjacent elements ωt−1(i), ωt−1(i+1) such that ωt−1(i) > ωt−1(i+1).

Clearly the weak order is more restrictive than the Bruhat order, so that π ⪯ σ implies π<σ. In particular, P(π⪯σ) ≤ P(π<σ), hence (Theorem 1.2.1) P(π⪯σ) = O(n^{−2}). We will show that, in fact, this probability is exponentially small. The proof is based on a non-inversion set criterion for π⪯σ implicit in [3, pp. 135-139].

Lemma 5.1.1. Given ω∈Sn, recall the set of non-inversions of ω:

E(ω) := {(i, j) : i<j, ω^{−1}(i) < ω^{−1}(j)}.

Then π ⪯ σ if and only if E(π) ⊇ E(σ).


Proof. Assume π ⪯ σ. Then there exists a chain of simple reductions ωt, 1≤t≤s, connecting σ=ω1 and π=ωs. By the definition of a simple reduction, for each t>1 there is i=it<n such that E(ωt) = E(ωt−1) ∪ {(ωt(i), ωt(i+1))}, where ωt(i)=ωt−1(i+1), ωt(i+1)=ωt−1(i), and ωt−1(i) > ωt−1(i+1). So the set E(ωt) increases with t, hence E(π) ⊇ E(σ).

Conversely, suppose E(π) ⊇ E(σ). Since a permutation ω is uniquely determined by its E(ω), we may assume E(π) ⊋ E(σ).

Claim. If E(π) ⊋ E(σ), then there exists u<v≤n such that (v, u) is an adjacent inversion of σ, but (u, v)∈E(π).

Assuming validity of the claim, we ascertain existence of an adjacent inversion (v, u) in σ with (u, v)∈E(π). Interchanging the adjacent elements u and v in σ=ω1, we obtain a simple reduction ω2, with E(ω1) ⊊ E(ω2) ⊆ E(π). If E(ω2)=E(π) then ω2=π, and we stop. Otherwise we determine ω3, a simple reduction of ω2, with E(ω2) ⊊ E(ω3) ⊆ E(π), and so on. Eventually we determine a chain of simple reductions connecting σ and π, which proves that π ⪯ σ.

Proof of Claim. The claim is obvious for n=1, 2. Assume inductively that the claim holds for permutations of length n−1 ≥ 2. Let π, σ∈Sn and E(π) ⊋ E(σ). As in the proof of Theorem 1.2.1, let ln(π)=π^{−1}(n), ln(σ)=σ^{−1}(n), and let π*, σ* be obtained by deletion of n from π and σ. Since E(π) ⊇ E(σ), we have E(π*) ⊇ E(σ*).

Suppose first that E(π*)=E(σ*). Then π*=σ*, and as E(π) ⊋ E(σ), we must have ln(π) > ln(σ), i.e. ln(σ) < n. Setting v=n and u=σ(ln(σ)+1), we obtain an adjacent inversion (v, u) in σ with (u, v)∈E(π).

Alternatively, E(π*) ⊋ E(σ*). By the inductive hypothesis, there exists u<v≤n−1 such that (v, u) is an adjacent inversion of σ*, but (u, v)∈E(π*). Now insert n back into π*, σ*, recovering π and σ. If n sits to the right of u or to the left of v in σ, then (v, u) is still an adjacent inversion of σ. Otherwise n is sandwiched between v on the left and u on the right. Therefore (n, u) is an adjacent inversion in σ. On the other hand (v, n)∈E(σ), so since E(π) ⊇ E(σ), we have (v, n)∈E(π) also. Since (u, v)∈E(π*) implies (u, v)∈E(π), the elements u, v, n appear in exactly this order (not necessarily adjacently) in π. Therefore the adjacent inversion (n, u) in σ is such that (u, n)∈E(π), and this proves the inductive step.

Denote by ω¯ the permutation ω reversed in rank. For example, with ω=13254 we have ω¯=53412. Then it is easy to see that

E(π) ⊇ E(σ) ⟺ E(π¯) ⊆ E(σ¯).

By Lemma 5.1.1, these statements are equivalent to

π ⪯ σ ⟺ σ¯ ⪯ π¯.

We immediately obtain the following corollary to Lemma 5.1.1:

Corollary 5.1.2. For ω∈Sn, define

Ei(ω) := {j<i : (j, i)∈E(ω)}, 1≤i≤n.

Then

E(ω) = ⋃_{i=1}^{n} {(j, i) : j∈Ei(ω)},

and consequently

π ⪯ σ ⟺ E(π) ⊇ E(σ) ⟺ Ei(π) ⊇ Ei(σ), ∀ i≤n.
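Lemma 5.1.1 and Corollary 5.1.2 are easy to test by machine. The Python sketch below (ours, purely illustrative; not part of the dissertation's computations) compares the non-inversion set criterion against the raw chain-of-simple-reductions definition on S3.

```python
from itertools import permutations

def non_inversions(w):
    """E(w) = {(i, j) : i < j and i appears before j in w}; values 1-based."""
    pos = {v: p for p, v in enumerate(w)}
    n = len(w)
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if pos[i] < pos[j]}

def weak_leq(pi, sigma):
    """Lemma 5.1.1: pi precedes sigma in the weak order iff E(pi) contains E(sigma)."""
    return non_inversions(pi) >= non_inversions(sigma)

def reachable(sigma, pi):
    """Chain definition: can pi be reached from sigma by simple reductions?"""
    stack, seen = [sigma], {sigma}
    while stack:
        w = stack.pop()
        if w == pi:
            return True
        for i in range(len(w) - 1):
            if w[i] > w[i + 1]:  # adjacent inversion -> one simple reduction
                v = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return False

S3 = list(permutations((1, 2, 3)))
assert all(weak_leq(p, s) == reachable(s, p) for p in S3 for s in S3)
```

The identity permutation, whose E-set contains every pair (i, j), i<j, is weakly below every permutation, in agreement with its role as the minimum of the lattice discussed in Chapter 9.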

5.2 Submultiplicativity of Pn*

Next, we establish one of the claims of Theorem 1.4.1, namely that Pn* := P(π⪯σ) is submultiplicative. Of course ([43, p. 23, ex. 98] again) this implies that there exists lim (Pn*)^{1/n}.

Lemma 5.2.1. Let π, σ∈Sn be selected independently and uniformly at random. As a function of n, Pn* is submultiplicative, i.e. for all n1, n2 ≥ 1,

P*_{n1+n2} ≤ P*_{n1} P*_{n2}.

Consequently there exists lim_{n→∞} (Pn*)^{1/n} = inf_{n≥1} (Pn*)^{1/n}.

Proof. Let π, σ be two permutations of [n1+n2]. Then π ⪯ σ if and only if

Ei(π) ⊇ Ei(σ), 1≤i≤n1+n2.

Using these conditions for i≤n1, we see that

π[1, 2, ..., n1] ⪯ σ[1, 2, ..., n1].

Here π[1, 2, ..., n1], say, is what is left of the permutation π when the elements n1+1, ..., n1+n2 are deleted.

Likewise, π ⪯ σ if and only if

Ei(π¯) ⊆ Ei(σ¯), 1≤i≤n1+n2.

Using these conditions for i≤n2, we see that

π[n1+1, ..., n1+n2] ⪯ σ[n1+1, ..., n1+n2].

Now, since π and σ are uniformly random and mutually independent, so are the four permutations

π[1, ..., n1], π[n1+1, ..., n1+n2], σ[1, ..., n1], σ[n1+1, ..., n1+n2].

Hence,

P(π⪯σ) ≤ P(π[1, ..., n1] ⪯ σ[1, ..., n1]) × P(π[n1+1, ..., n1+n2] ⪯ σ[n1+1, ..., n1+n2]),

so that

P*_{n1+n2} ≤ P*_{n1} P*_{n2}.
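For small n the probabilities Pn* can be computed exhaustively, and the submultiplicativity of Lemma 5.2.1 observed directly. The Python sketch below is ours, for illustration only.

```python
from itertools import permutations, combinations
from fractions import Fraction

def E(w):
    """Non-inversion set of w, values 1-based."""
    pos = {v: p for p, v in enumerate(w)}
    return frozenset((i, j) for i, j in combinations(range(1, len(w) + 1), 2)
                     if pos[i] < pos[j])

def P_star(n):
    """P*_n = P(pi weakly below sigma) for independent uniform pi, sigma."""
    sets = [E(p) for p in permutations(range(1, n + 1))]
    hits = sum(1 for a in sets for b in sets if a >= b)
    return Fraction(hits, len(sets) ** 2)

assert P_star(1) == 1
assert P_star(2) == Fraction(3, 4)
assert P_star(3) <= P_star(1) * P_star(2)   # submultiplicativity, n1=1, n2=2
assert P_star(4) <= P_star(2) * P_star(2)
assert P_star(4) <= P_star(1) * P_star(3)
```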

CHAPTER 6

THE PROOF OF THE WEAK ORDER UPPER BOUND

We now present the proof of Theorem 1.4.1, upper bound. We will prove something better than what was stated there, showing that for each ε>0,

Pn* = O((β+ε)^n),

where

β = (1065317/12!)^{1/6} = 0.36129... .

6.1 A Necessary Condition for Weak Comparability

The proof of this upper bound for Pn* parallels the proof of the lower bound for Pn in Theorem 1.2.1. As in that proof, given k≥1, let πk* and σk* be obtained by deletion of the elements n, ..., n−k+1 from π and σ, and let li(π)=π^{−1}(i), li(σ)=σ^{−1}(i), n−k+1 ≤ i ≤ n. In the notation of the proof of Lemma 5.2.1, πk* = π[1, ..., n−k] and σk* = σ[1, ..., n−k], and we saw that πk* ⪯ σk* if π ⪯ σ. Our task is to find the conditions these li(·)'s must satisfy if π ⪯ σ holds.

To start, notice that


π ⪯ σ ⟹ |En(π)| ≥ |En(σ)| ⟹ ln(π) ≥ ln(σ).

Next

π ⪯ σ ⟹ π* ⪯ σ* ⟹ ln−1(π) ≥ ln−1(π*) ≥ ln−1(σ*) ≥ ln−1(σ)−1,

as deletion of n from π, σ decreases the location of n−1 in each permutation by at most one. In general, for 0≤j<k we get

π ⪯ σ ⟹ πj* ⪯ σj* ⟹ ln−j(π) ≥ ln−j(σ)−j.

So, introducing l(π)={ln−i+1(π)}1≤i≤k and l(σ)={ln−i+1(σ)}1≤i≤k,

{π⪯σ} ⊆ {(l(π), l(σ))∈Sk},   (6.1)

Sk := {(x, y)∈R^{2k} : xj ≥ yj−j+1, 1≤j≤k}.

In addition, on {π⪯σ} every pair of elements which forms an inversion in π also forms an inversion in σ. Applying this to the elements n−k+1, ..., n, we then have

{π⪯σ} ⊆ {(l(π), l(σ))∈Tk},   (6.2)

Tk := {(x, y)∈R^{2k} : ∀ 1≤i<j≤k, xi<xj ⟹ yi<yj}.

Combining (6.1) and (6.2), we get

{π⪯σ} ⊆ {(l(π), l(σ))∈Sk∩Tk} ∩ {πk* ⪯ σk*}.

So, since the two events on the right are independent,

Pn* ≤ P((l(π), l(σ))∈Sk∩Tk) · P*_{n−k}.   (6.3)

6.2 A Reduction to Uniforms

It remains to estimate

P((l(π), l(σ))∈Sk∩Tk).

As in the proof of Theorem 1.2.1 (lower bound), we observe that (l(π), l(σ)) has the same distribution as (nU, nV), conditioned on the event

Cn,k := An,k ∩ Bn,k, An,k := {nU1, ..., nUk distinct}, Bn,k := {nV1, ..., nVk distinct}.

Here U1, ..., Uk, V1, ..., Vk are independent [0, 1]-uniforms. Then

P((l(π), l(σ))∈Sk∩Tk) = P({(nU, nV)∈Sk∩Tk} ∩ Cn,k) / P(Cn,k).

Introduce the event D̃n,k on which

min{ min_{i≠j}|Ui−Uj|, min_{i≠j}|Vi−Vj|, min_{i≠j}|Ui−Vj|, k^{−1} min_j|Uj−Vj| } > 1/n.

Certainly D̃n,k ⊆ Cn,k and, thanks to the factor 1/k by min_j|Uj−Vj|, on D̃n,k

nUj ≥ nVj−j+1 ⟸ Uj ≥ Vj−k/n ⟺ Uj > Vj.

Therefore, on D̃n,k,

(nU, nV)∈Sk∩Tk ⟺ (U, V)∈S̃k∩Tk,

S̃k := {(x, y)∈R^{2k} : xj > yj, 1≤j≤k}.

Clearly S̃k∩Tk is a cone-shaped subset of R^{2k}. In addition, P(D̃n,k^c) = O(k^2/n).

Hence

P((l(π), l(σ))∈Sk∩Tk) ≤ (P((U, V)∈S̃k∩Tk) + O(P(D̃n,k^c))) / (1 − O(P(D̃n,k^c)))

= Qk*(1+O(k^2/n)), Qk* := P((U, V)∈S̃k∩Tk).

This and (6.3) imply

Pn* ≤ Qk* P*_{n−k} exp(O(k^2/n)).

Hence, as in the proof of Theorem 1.2.1 (lower bound),

limsup_{n→∞} (Pn*)^{1/n} ≤ (Qk*)^{1/k}, ∀ k≥1,

and so

Pn* = O(((Qk*)^{1/k} + ε)^n), ∀ k≥1, ε>0.   (6.4)

Furthermore from the definition of Qk*, it follows directly that Qk* is submultiplicative,

i.e.

Q*_{k1+k2} ≤ Q*_{k1} Q*_{k2}, k1, k2 ≥ 1.

Therefore ([43, p. 23, ex. 98] again)

lim_{k→∞} (Qk*)^{1/k} = inf_{k≥1} (Qk*)^{1/k}.

So the further we can push tabulation of Qk*, the better our exponential upper bound

for Pn* would probably be. ("Probably", because we do not have a proof that (Qk*)^{1/k} decreases with k.)

6.3 An Algorithm to Minimize the Bound

As in the case of Qk, Qk*=Nk*/(2k)!. Here, by the definition of the sets S˜k and Tk,

Nk* is the total number of ways to order x1, ..., xk, y1, ..., yk so that two conditions

are met: (1) for each j, xj is to the right of yj ; (2) for all i<j, if xi is to the left of

xj then yi is to the left of yj.

It is instructive first to evaluate Nk* by hand for k=1,2. N1*=1 as there is only

one sequence, y1x1, meeting the conditions (1), (2). Passing to N2*, we must decide

how to insert y2 and x2 into the sequence y1x1 in compliance with conditions (1),

(2). First of all, y2 has to precede x2. If we insert x2 at the beginning of y1x1, giving

x2y1x1, then we can only insert y2 at the beginning of this triple, giving

y2x2y1x1.


Alternatively, inserting x2 in the middle of y1x1, we have 2 possibilities for insertion

of y2, and we get two admissible orderings,

y2y1x2x1, y1y2x2x1.

Finally, insertion of x2 at the end of y1x1 brings the condition (2) into play as we now

have x1 preceding x2, and so y1 must precede y2. Consequently, we get two admissible

orderings,

y1y2x1x2, y1x1y2x2.

Hence N2* = 1+2+2 = 5. Easy so far! However, passing to k=3 is considerably more time-consuming than it was for the computation of N3 in the proof of the lower bound in Theorem 1.2.1. There, once we had determined the N2 admissible orderings, we could afford not to keep track of the relative orderings of x1, ..., xk−1, and of y1, ..., yk−1, whence the coding by 1's and 0's. All we needed for passing from k−1 to k was the list of all binary ballot-sequences of length 2(k−1) together with their multiplicities. Here the nature of the conditions (1), (2) does not allow lumping various sequences together, and we have to preserve the information of the relative orderings of the x's, and the relative orderings of the y's. This substantial complication seriously inhibits the computer's ability to compute Nk* for k as large as in the case of Nk.

To get a feeling for how sharply the amount of computation increases for k=3, let

us consider one of the N2*=5 admissible sequences, namely y2x2y1x1. As above, we


write down all possible ways to insert y3 and x3 into this sequence so that (1) and

(2) hold. Doing this, we produce the 10 sequences:

y3x3y2x2y1x1, y3y2x3x2y1x1,

y2y3x3x2y1x1, y2y3x2x3y1x1,

y2x2y3x3y1x1, y2y3x2y1x3x1,

y2x2y3y1x3x1, y2x2y1y3x3x1,

y2x2y1y3x1x3, y2x2y1x1y3x3.

We treat similarly the other four sequences from the k=2 case, eventually arriving

at N3*=55. We wouldn't even think of computing N4* by hand.
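The hand computation above is easily mechanized. The following brute-force Python sketch (ours; far cruder than the incremental insertion scheme the thesis program used) simply checks conditions (1) and (2) over all (2k)! orderings, and reproduces N1*=1, N2*=5, N3*=55.

```python
from itertools import permutations

def N_star(k):
    """Count orderings of y1..yk, x1..xk with (1) each yj left of xj and
    (2) for i < j: xi left of xj forces yi left of yj."""
    symbols = [('y', j) for j in range(k)] + [('x', j) for j in range(k)]
    count = 0
    for seq in permutations(symbols):
        pos = {s: p for p, s in enumerate(seq)}
        if any(pos[('y', j)] > pos[('x', j)] for j in range(k)):
            continue  # violates (1)
        if any(pos[('x', i)] < pos[('x', j)] and pos[('y', i)] > pos[('y', j)]
               for i in range(k) for j in range(i + 1, k)):
            continue  # violates (2)
        count += 1
    return count

assert [N_star(k) for k in (1, 2, 3)] == [1, 5, 55]
```

This brute force is feasible only for very small k (already 12! orderings at k=6); the tabulation in Table 6.1 required the faster insertion procedure described above.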

Once again, computer programming comes to the rescue! Table 6.1 was produced by the computer after a substantial running time.

Using (6.4) with the value k=6 from this table, we get for each ε>0

Pn* = O(((Q6*)^{1/6} + ε)^n) = O((0.361... + ε)^n).


Table 6.1: Exact computation of Nk* for smallish k.

CHAPTER 7

THE PROOF OF THE WEAK ORDER LOWER BOUND

We now prove the lower bound stated in Theorem 1.4.1. Despite its sharp qualitative

contrast to the upper bound in this same theorem, its proof requires a much deeper

combinatorial insight. In particular, we will get as a consequence a lower bound

(which is known [46, p. 312, ex. 1]) for the number of linear extensions of an

arbitrary poset P of cardinality n.

7.1 A Formula for P(Ei(π) ⊇ Ei(σ))

To bound P(π⪯σ) from below we will use the criterion (Corollary 5.1.2)

π ⪯ σ ⟺ Ei(π) ⊇ Ei(σ), ∀ i≤n.

First of all,

Lemma 7.1.1. Let i∈[n], B⊆[i−1] (with [0]:=∅). If π∈Sn is chosen uniformly at random, then

P(Ei(π) ⊇ B) = 1/(|B|+1).

Proof. By the definition of Ei(π),

{Ei(π) ⊇ B} = {π^{−1}(j) < π^{−1}(i), ∀ j∈B}.

It remains to observe that π-1 is also uniformly random.

Lemma 7.1.1 implies the following key statement:

Lemma 7.1.2. Let π, σ∈Sn be selected independently and uniformly at random. Then, for i∈[n],

P(Ei(π) ⊇ Ei(σ)) = H(i)/i, H(i) := Σ_{j=1}^{i} 1/j.

Proof. By Lemma 7.1.1,

P(Ei(π) ⊇ Ei(σ)) = Σ_{B⊆[i−1]} P(Ei(π) ⊇ B) P(Ei(σ)=B)

= Σ_{B⊆[i−1]} P(Ei(σ)=B)/(|B|+1)

= E[1/(|Ei(σ)|+1)]

= Σ_{j=0}^{i−1} 1/(i(j+1)) = H(i)/i.
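Lemma 7.1.2 can be verified exhaustively for small n. The Python sketch below (ours, for illustration) recovers H(i)/i exactly over all pairs in S4.

```python
from itertools import permutations
from fractions import Fraction

def E_i(w, i):
    """E_i(w) = {j < i : j appears before i in w}; values 1-based."""
    pos = {v: p for p, v in enumerate(w)}
    return frozenset(j for j in range(1, i) if pos[j] < pos[i])

n = 4
perms = list(permutations(range(1, n + 1)))
H = lambda i: sum(Fraction(1, j) for j in range(1, i + 1))
for i in range(1, n + 1):
    hits = sum(1 for p in perms for s in perms if E_i(p, i) >= E_i(s, i))
    # exact agreement with the lemma: P(E_i(pi) contains E_i(sigma)) = H(i)/i
    assert Fraction(hits, len(perms) ** 2) == H(i) / i
```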

Note. In the second to last equality, we have used the fact that |Ei(σ)| is distributed

uniformly on {0, 1, ..., i-1}. In addition, |E1(σ)|,..., |En(σ)| are independent, a

property we will use later. For completeness, here is a bijective proof of these facts.

By induction, the numbers |Ei(σ)|, i≤t, uniquely determine the relative ordering of the elements 1, ..., t in the permutation σ. Hence the numbers |Ei(σ)|, i∈[n], determine σ uniquely. Since the range of |Ei(σ)| is the set {0, ..., i−1} of cardinality i, and |Sn| = n!, it follows that the numbers |Ei(σ)|, i∈[n], are uniformly distributed, and independent of each other.

7.2 Positive Correlation of the Events {Ei(π) ⊇ Ei(σ)}

Needless to say, we are interested in P(π⪯σ) = P(⋂_{i=1}^{n} {Ei(π) ⊇ Ei(σ)}). Fortunately, the events {Ei(π) ⊇ Ei(σ)} turn out to be positively correlated, and the product of the marginals P(Ei(π) ⊇ Ei(σ)) bounds that probability from below.

Theorem 7.2.1. Let π, σSn be selected independently and uniformly at random.

Then

P(π⪯σ) ≥ ∏_{i=1}^{n} P(Ei(π) ⊇ Ei(σ)) = ∏_{i=1}^{n} H(i)/i.

Proof. First notice that, conditioning on σ and using the independence of π and σ,

P(Ei(π) ⊇ Ei(σ), ∀ i≤n)

= E[P(Ei(π) ⊇ Ei(σ), ∀ i≤n | σ)]

= E[P(Ei(π) ⊇ Bi, ∀ i≤n) | Bi=Ei(σ)].

So our task is to bound P(Ei(π) ⊇ Bi, ∀ i≤n), where these Bi's inherit the following property from the Ei(σ)'s:

i∈Ej(σ) and j∈Ek(σ) ⟹ i∈Ek(σ).

Lemma 7.2.2. Let n≥1 be an integer, and let Bi ⊆ [n], i=1, ..., n, be such that

i∉Bi and (i∈Bj, j∈Bk ⟹ i∈Bk), ∀ i, j, k∈[n].

Then, for π∈Sn selected uniformly at random,

P(Ei(π) ⊇ Bi, ∀ i≤n) ≥ ∏_{i=1}^{n} 1/(|Bi|+1).

Proof of Lemma 7.2.2. Notice upfront that ⋃_i Bi ≠ [n]. Otherwise there would exist indices i1, ..., is with i_t∈B_{i_{t+1}} for 1≤t≤s (where i_{s+1}:=i1), and, using repeatedly the property of the sets Bi, we would get that, say, i1∈B_{i2} and i2∈B_{i1}, hence i2∈B_{i2}; contradiction.

Let U1, ..., Un be independent uniform-[0, 1] random variables, and define a random permutation ω by

ω(i)=k ⟺ Ui is kth smallest amongst U1, ..., Un.

Clearly ω is distributed uniformly, and then so is π := ω^{−1}. With π so defined, we obtain

{Ei(π) ⊇ Bi, ∀ i≤n} = {π^{−1}(i) > π^{−1}(j), ∀ j∈Bi, i≤n}

= {Ui > Uj, ∀ j∈Bi, i≤n}.


Hence, the probability in question equals

P(Ui > Uj, ∀ j∈Bi, i≤n).

We write this probability as the n-dimensional integral

P(Ui > Uj, ∀ j∈Bi, i≤n) = ∫···∫_D dx1 ··· dxn,

D = {(x1, ..., xn)∈[0,1]^n : xi > xj, ∀ j∈Bi, i≤n}.

Since ⋃_i Bi ≠ [n], we can choose an index k∈[n] such that k∉Bi for all i. Then we may rewrite the integral above as

∫_0^1 ( ∫_{D(xk)} dx1 ··· dxk−1 dxk+1 ··· dxn ) dxk,

D(xk) = {(x1, ..., xk−1, xk+1, ..., xn)∈[0,1]^{n−1} : xi > xj, ∀ j∈Bi, i≤n}.

On D(xk), the only inequalities involving xk are of the form xk > xj, j∈Bk. This suggests scaling those xj by xk, i.e. introducing new variables tj := xj/xk, so that tj∈[0,1], j∈Bk. To keep notation uniform, let us also replace the remaining xi, i∉Bk∪{k}, with ti. Let D′(xk) denote the integration region for the new variables ti, i≠k. Explicitly, the constraints xj < xk, j∈Bk, become tj < 1, j∈Bk.

Obviously each listed constraint xa < xb (a, b∈Bk) is replaced, upon scaling, with ta < tb. We only rename the other variables, so every constraint xa < xb (a, b∉Bk) similarly becomes ta < tb. By the property of the sets Bi, there are no inequalities xa > xb with a∈Bk, b∉Bk (the presence of this inequality would mean b∈Ba, which together with a∈Bk would force b∈Bk). The

only remaining inequalities are all of the type xa < xb, a∈Bk, b∉Bk. In the new variables, such a constraint becomes xk·ta < tb, and it is certainly satisfied if ta < tb, as xk ≤ 1. Hence, D′(xk) ⊇ D*, where

D* := {(t1, ..., tk−1, tk+1, ..., tn)∈[0,1]^{n−1} : ti > tj, ∀ j∈Bi, i≠k},

and D* does not depend on xk! Observing that the constraints that determine D* are those for D with the constraints xj < xk, j∈Bk, removed, we conclude that the innermost integral over D(xk) is bounded below by

xk^{|Bk|} P(Ui > Uj, ∀ j∈Bi, i≠k).

(xk^{|Bk|} is the Jacobian of the linear transformation {xi}_{i≠k} → {ti}_{i≠k}.) Integrating with respect to xk, we arrive at

P(Ui > Uj, ∀ j∈Bi, i≤n) ≥ (1/(|Bk|+1)) · P(Ui > Uj, ∀ j∈Bi, i≠k).   (7.1)

By induction on the number of sets Bi, with Lemma 7.1.1 providing the basis of induction and (7.1) the inductive step, we get

P(Ui > Uj, ∀ j∈Bi, i≤n) ≥ ∏_{i=1}^{n} 1/(|Bi|+1).

The rest is short. First, by Lemma 7.2.2,

P(Ei(π) ⊇ Ei(σ), ∀ i≤n) = E[P(Ei(π) ⊇ Bi, ∀ i≤n) | Bi=Ei(σ)]

≥ E[∏_{i=1}^{n} 1/(|Ei(σ)|+1)].

Since the cardinalities |Ei(σ)| are independent, the last expected value equals

∏_{i=1}^{n} E[1/(|Ei(σ)|+1)] = ∏_{i=1}^{n} (1/i · Σ_{j=0}^{i−1} 1/(j+1)) = ∏_{i=1}^{n} H(i)/i;

for the second to last equality see the proof of Lemma 7.1.2.

□

7.3 Linear Extensions of Arbitrary Finite Posets

Note. Let P be a poset on [n], and put Bi := {j∈P : j<i in P}. Bi∪{i} is called the order ideal at i. By the properties of P, the Bi's satisfy the hypotheses of Lemma 7.2.2, so letting e(P) denote the number of linear extensions of P we get

P(Ei(π) ⊇ Bi, ∀ i≤n) = |{ω : ω(i) > ω(j), ∀ j∈Bi, i≤n}| / n!

= e(P)/n! ≥ ∏_{i=1}^{n} 1/(|Bi|+1).

Thus we have proved

Corollary 7.3.1. For a poset P with n elements,

e(P) ≥ n!/∏_{i=1}^{n} d(i), d(i) := |{j∈P : j ≤ i in P}|.
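To illustrate Corollary 7.3.1, here is a small Python check (ours). The 4-element poset, its relations, and all names are our own illustrative choices, not drawn from the thesis.

```python
from itertools import permutations
from math import factorial, prod

less = {(1, 3), (2, 3), (2, 4)}   # hypothetical poset relations on [4]
n = 4

def is_linear_extension(w):
    pos = {v: p for p, v in enumerate(w)}
    return all(pos[a] < pos[b] for a, b in less)

e = sum(map(is_linear_extension, permutations(range(1, n + 1))))
# d(i) = |{j : j <= i in P}| counts i itself plus its strict predecessors
d = {i: 1 + sum(1 for a, b in less if b == i) for i in range(1, n + 1)}
bound = factorial(n) / prod(d.values())

assert e >= bound   # Corollary 7.3.1: e(P) >= n!/prod d(i)
```

For this poset the bound gives 24/6 = 4, while the true count of linear extensions is 5.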


In a very special case of P, whose Hasse diagram is a forest of rooted trees with edges

directed away from the roots, this simple bound is actually the value of e(P) ([46, p.

312, ex. 1], [34, sect. 5.1.4, ex. 20], [8] ). There exist better bounds for the number

of linear extensions in the case of the Boolean lattice (see Brightwell and Tetali [14],

Kleitman and Sha [33] ), but nothing has been done in the way of improving this

bound for P = P(σ), the permutation-induced poset. Indeed, our proof of the lower bound for Pn* used only the universal bound of Corollary 7.3.1, and not one specific to this special poset. This raises the question of whether we might improve the bound in this case, and consequently improve on our lower estimate for Pn*. We are presently working in this direction.

CHAPTER 8

NUMERICS

We now present some numerical results we have generated in hopes of determining

how close our present bounds are to being sharp. These computer simulations were

only done for comparability of permutation-pairs.

8.1 Bruhat Order Numerics

From the computer-generated data we have collected, it appears that our O(n^{−2}) upper bound given in Theorem 1.2.1 correctly predicts the qualitative behavior of P(π<σ). The data suggests that P(π<σ) is of exact order n^{−(2+δ)} for some δ∈[0.5, 1], which raises the question of how to improve on our current bound. Writing Pn = P(π<σ), Figure 8.1 is a graph (based on this numerical experimentation) exhibiting convergence to the exponent −a in the asymptotic relation Pn ∼ c n^{−a}, c>0 a constant, and −a appears to be near −2.5. In Table 8.1, we also provide a portion of the accompanying data used to generate this graph.

In this table, Rn represents the number of pairs (π, σ) out of 10 randomly-generated

pairs such that we had π<σ. We have also utilized the computer to find the actual

probability Pn for n =1, 2, ..., 9. Table 8.2 lists these true proportions.
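A simulation in this spirit is easy to reproduce. The Python sketch below is ours: it tests Bruhat comparability with the standard rank-matrix ("dot") criterion, which is not derived in this chapter, and the function names and trial counts are our own choices.

```python
import random
from itertools import permutations

def rank_matrix(w):
    """r(i, j) = #{a <= i : w(a) <= j}, the standard 'dot criterion' matrix."""
    n = len(w)
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            r[i][j] = r[i - 1][j] + (1 if w[i - 1] <= j else 0)
    return r

def bruhat_leq(p, s):
    """p <= s in the Bruhat order iff r_p(i, j) >= r_s(i, j) for all i, j."""
    rp, rs = rank_matrix(p), rank_matrix(s)
    n = len(p)
    return all(rp[i][j] >= rs[i][j]
               for i in range(1, n + 1) for j in range(1, n + 1))

# Exhaustive check on S_3: 19 ordered pairs p <= s (including the 6 equal pairs).
S3 = list(permutations((1, 2, 3)))
assert sum(bruhat_leq(p, s) for p in S3 for s in S3) == 19

def estimate_Pn(n, trials=2000, seed=1):
    """Monte Carlo estimate of P(pi <= sigma), in the spirit of Table 8.1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        p = rng.sample(range(1, n + 1), n)
        s = rng.sample(range(1, n + 1), n)
        hits += bruhat_leq(p, s)
    return hits / trials
```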


Table 8.1: Computer simulation data for Pn.

Table 8.2: Exact computation of Pn for smallish n.



Figure 8.1: Experimental determination of the exponent −a in the asymptotic relation Pn ∼ c n^{−a}.

8.2 Weak Order Numerics

Concerning the weak order, computer-generated data suggests that P(π⪯σ) is of exact order (0.3)^n. So our current upper bound O((0.362)^n) is a qualitative match for P(π⪯σ), but it appears that improvements are possible here also. Writing Pn* = P(π⪯σ), Figure 8.2 is a graph (based on our numerical experiments) exhibiting convergence to the ratio ρ in the asymptotic relation Pn* ∼ c ρ^n, c>0 a constant, and ρ appears to be near 0.3. In Table 8.3 we also provide a portion of the accompanying data used to generate this graph.



Figure 8.2: Experimental determination of the ratio ρ in the asymptotic relation Pn* ∼ c ρ^n.

In this table, Rn* is defined analogously to Rn above. Table 8.4 lists the true proportions Pn* for n=1, 2, ..., 9.

Surprisingly, our Theorem 1.4.1 lower bound for Pn* is quite good for these smallish values of n, as is seen in Table 8.5.
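The comparison underlying Table 8.5 can be re-created directly. The Python sketch below (ours, illustrative) computes the exact Pn* by exhaustive enumeration and checks it against the Theorem 1.4.1 lower bound ∏ H(i)/i.

```python
from itertools import permutations, combinations
from fractions import Fraction

def E(w):
    """Non-inversion set of w, values 1-based."""
    pos = {v: p for p, v in enumerate(w)}
    return frozenset((i, j) for i, j in combinations(range(1, len(w) + 1), 2)
                     if pos[i] < pos[j])

def P_star(n):
    """Exact P*_n = P(pi weakly below sigma), by exhaustive enumeration."""
    sets = [E(p) for p in permutations(range(1, n + 1))]
    return Fraction(sum(1 for a in sets for b in sets if a >= b), len(sets) ** 2)

def lower_bound(n):
    """Theorem 1.4.1 lower bound: prod_{i=1}^n H(i)/i, computed exactly."""
    H, out = Fraction(0), Fraction(1)
    for i in range(1, n + 1):
        H += Fraction(1, i)
        out *= H / i
    return out

for n in range(1, 6):
    assert lower_bound(n) <= P_star(n)
```

For n=2 the bound is attained exactly (both sides equal 3/4), consistent with the closeness visible in Table 8.5 for small n.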


Table 8.3: Computer simulation data for Pn*.

Table 8.4: Exact computation of Pn* for smallish n.


Table 8.5: Our theoretical lower bound for Pn* applied for smallish n.

CHAPTER 9

ON INFS AND SUPS IN THE WEAK ORDER LATTICE

Finally, we focus on the proof of Theorem 1.4.2. Before we prove what was stated

there, we have a good deal in the way of preliminaries to take care of. The discussion

below is inspired almost exclusively by material contained in the work [3].

9.1 A Connection with Complete, Directed, Acyclic Graphs

Given ω∈Sn, recall the set of non-inversions of ω,

E(ω):={(i, j) : i<j, ω-1(i)<ω-1(j)},

and the set of inversions of ω,

E*(ω):={(i, j) : i>j, ω-1(i)<ω-1(j)}.

Note that ω is uniquely determined by its E(ω) (equivalently, by its E*(ω)). We have

seen that, given permutations π, σ∈Sn, we have π below σ in the weak order (written π ⪯ σ) if and only if E(π) ⊇ E(σ) (equivalently E*(π) ⊆ E*(σ)). It is beneficial to

consider the sets E(ω) and E*(ω) as directed edges in a complete, simple, labelled

digraph. Namely, we define


G(ω) = ([n], E(ω) ∪ E*(ω))

by joining i and j with an edge directed from i to j if (i, j)∈E(ω) ((i, j)∈E*(ω), resp.). Note that G(ω) is acyclic, where we are considering paths (hence cycles) in the sense of directed graphs, always moving in the direction specified by edges.

Now consider an arbitrary complete, simple, labelled digraph G = ([n], E ∪ E*), where E consists of those directed edges (i, j) of G with i<j, and E* of those with i>j.

Given a subset A ⊆ E ∪ E* of edges, we define the transitive closure A¯ of A in G to be the set of ordered pairs (i, j) of vertices which are joined by a path consisting of A-edges in G directed from i to j. The transitive part of this closure A¯ is defined to be

T(A) := A¯ ∖ A,

so that

A¯ = A ∪ T(A).

In particular, E and E* are subsets of edges of G, so we may consider their transitive closures in G. Note that E and E* (equivalently G) coming from a permutation are unchanged by this transitive closure operation, i.e. in this case we have T(E) = ∅ = T(E*). The following is a trivial, but important, observation about taking transitive closures:

Lemma 9.1.1. Given a subset A of edges of G, we have (A¯)¯ = A¯. Equivalently, T(A¯) = ∅.

Proof. Evidently (A¯)¯ ⊇ A¯. For the opposite containment, let (i, j)∈(A¯)¯. This means there is a path P consisting of edges e1, ..., ek∈A¯ directed from i to j (if k=1, this means (i, j)=e1∈A¯). Here, we have indexed the edges e1, ..., ek in the order they appear in P. Namely, e1 has initial vertex i and terminal vertex equal to the initial vertex of e2, and so on. Of course, ek has terminal vertex j.

Note that each ei is either an original edge of A, or else comes from a directed path Pi consisting of edges from A directed from the initial end to the terminal end of ei. Hence, we can construct from P a path P′ consisting only of A-edges in the following way: if ei∈A, keep it; otherwise, replace ei with the directed path Pi. Then P′ is a directed path of A-edges from i to j, so (i, j)∈A¯.

In other words, Lemma 9.1.1 says that taking the transitive closure of a set of edges

produces a set of edges which is transitively closed. We are ready to give some

equivalent criteria which guarantee that G is induced by a permutation:

Lemma 9.1.2. The following are equivalent:

(i) G = G(ω) for some unique permutation ω∈Sn.

(ii) G is acyclic.

(iii) E = E¯ and E* = E*¯ (equivalently, T(E) = ∅ = T(E*)).

Proof. (i)⟹(ii). This is obvious, as all edges of G(ω) are directed from ω(i) to ω(j) for each 1≤i<j≤n.

(ii)⟹(i). Suppose G is acyclic. We claim that there exists a unique vertex v1∈[n] such that all edges incident there are inwardly-directed. Indeed, if there were no such vertex then we could enter and leave every vertex, eventually constructing a cycle as G is finite; contradiction. We get uniqueness of v1 since, for any other vertex v≠v1, completeness of G implies there is an edge directed from v to v1 (v1 has all incident edges inwardly-directed), so that v has an outwardly-directed incident edge.

Define ω(n) = v1, and delete v1 from G, giving a new labelled, complete, simple digraph G−{v1} with vertex set [n]∖{v1}. Of course G−{v1} is still acyclic, so we may repeat the above argument on this new digraph, giving a unique vertex v2∈[n]∖{v1} such that all edges incident there are inwardly-directed. We put ω(n−1) = v2 and continue in this way, finally arriving at a unique permutation ω∈Sn such that G = G(ω).

(ii)⟹(iii). Suppose, say, E ≠ E¯. Then there exists (i, j)∈E¯∖E. Hence, we can find edges e1, ..., ek∈E, k>1, that form a directed path from i to j in G (i.e., the terminal end of et is the initial end of et+1 for each 1≤t≤k−1). Since (i, j)∉E and G is complete, we have (j, i)∈E*. Therefore C := (e1, ..., ek, (j, i)) forms a cycle in G. By a similar argument we can show that E* ≠ E*¯ implies G contains a cycle.

(iii)⟹(ii). Suppose G contains a cycle. Since G is both antisymmetric and complete, it contains a cycle of length 3. Let a, b and c be the distinct vertices in [n] that form this cycle. Re-labelling if necessary, we may assume a<b<c. If the cycle is (a, b, c), then

(a, b), (b, c)∈E; (c, a)∈E*,

so that (a, c)∈E¯∖E, i.e., E ≠ E¯. On the other hand, if (a, c, b) is the cycle, then

(a, c)∈E; (c, b), (b, a)∈E*,

so that (c, a)∈E*¯∖E*, i.e., E* ≠ E*¯. This completes the proof of Lemma 9.1.2.

9.2 Computing Infs and Sups in the Weak Order Lattice

With this machinery, we now show that the poset (Sn, ⪯) is a lattice. What's more, we can say precisely how to compute inf{π1, ..., πr} (sup{π1, ..., πr}, resp.), where π1, ..., πr∈Sn.

Lemma 9.2.1. (Sn, ⪯) is a lattice, with

E(inf{π1, ..., πr}) = (⋃_{i=1}^{r} E(πi))¯

and

E*(sup{π1, ..., πr}) = (⋃_{i=1}^{r} E*(πi))¯.


Proof. We will prove this only for infima; the proof for suprema is completely analogous. By Lemma 9.1.2 it is sufficient to prove that the complete, simple, labelled digraph G = ([n], E ∪ E*), where E := (⋃_{i=1}^{r} E(πi))¯ and E* := {(j, i) : i<j, (i, j)∉E}, contains no cycle.

Suppose G does contain a cycle. Then, since G is both antisymmetric and complete, it contains a cycle of length 3, passing through the vertices a, b and c, say. We may assume a<b<c; otherwise just re-label the vertices. If the cycle is (a, b, c), then

(a, b), (b, c)∈E; (c, a)∈E*,

which violates the transitivity of E (note that E is transitively closed by Lemma 9.1.1). So this is impossible.

On the other hand, suppose the cycle is (a, c, b). Then

(a, c)∈E; (c, b), (b, a)∈E*.

Therefore (a, b), (b, c)∉⋃_{i=1}^{r} E(πi), and hence

(c, b), (b, a)∈⋂_{i=1}^{r} E*(πi).

From transitivity, (c, a)∈⋂_{i=1}^{r} E*(πi), and therefore

(a, c)∉⋃_{i=1}^{r} E(πi).

So, as (a, c)∈E, there exist indices i1, ..., ik and vertices a=x1 < x2 < ··· < xk+1=c with xj≠b and

(xj, xj+1)∈E(π_{ij}), j≤k.

Let 1≤p≤k be the index such that xp < b < xp+1. If it happens that (b, xp)∈E*(π_{ip}), then as (xp, xp+1)∈E(π_{ip}) we must have (b, xp+1)∈E(π_{ip}) by transitivity of the permutation π_{ip}. Hence (b, xp+1)∈E, and since (xp+1, xp+2)∈E we get (b, xp+2)∈E by transitivity of E. Using repeatedly the transitivity of E in this way, we eventually obtain (b, c)∈E, contradicting (c, b)∈E*.

Hence, it must be that (xp, b)∈E(π_{ip}). So (xp, b)∈E, and by the transitivity of E we have (a, xp)∈E. Therefore, using transitivity once more, (a, b)∈E, contradicting (b, a)∈E*. Therefore G must be acyclic, and hence (Lemma 9.1.2) G = G(π) for some unique permutation π∈Sn. Finally, any permutation ω∈Sn that is a lower bound for all of π1, ..., πr will have

E(ω) ⊇ ⋃_{i=1}^{r} E(πi)

by definition of the weak order. Hence, since E(ω) is transitively closed, we have E(ω) ⊇ E. We have just shown E = E(π), and hence

E(ω) ⊇ E(π) ⊇ ⋃_{i=1}^{r} E(πi),

so that ω ⪯ π ⪯ πi, 1≤i≤r. That is, π = inf{π1, ..., πr} and we are done.
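Lemma 9.2.1, together with the peeling construction of Lemma 9.1.2, gives a concrete recipe for computing infima. The Python sketch below (ours, for illustration; the naive closure routine is an assumption, adequate only for tiny n) verifies the greatest-lower-bound property exhaustively on S3.

```python
from itertools import permutations, combinations

def E(w):
    """Non-inversion set of w, values 1-based."""
    pos = {v: p for p, v in enumerate(w)}
    return frozenset((i, j) for i, j in combinations(range(1, len(w) + 1), 2)
                     if pos[i] < pos[j])

def closure(arcs):
    """Transitive closure of a set of directed edges (naive iteration)."""
    arcs = set(arcs)
    grew = True
    while grew:
        grew = False
        for a, b in list(arcs):
            for c, d in list(arcs):
                if b == c and (a, d) not in arcs:
                    arcs.add((a, d))
                    grew = True
    return arcs

def weak_inf(perms):
    """Lemma 9.2.1: E(inf) = transitive closure of the union of the E(pi_i);
    the permutation is rebuilt by peeling off the all-edges-inward vertex
    (the construction in Lemma 9.1.2)."""
    n = len(perms[0])
    Einf = closure(set().union(*(E(p) for p in perms)))
    before = lambda a, b: (a, b) in Einf if a < b else (b, a) not in Einf
    verts, out = list(range(1, n + 1)), []
    while verts:
        # the vertex preceded by all remaining vertices occupies the last free slot
        v = next(v for v in verts if all(before(u, v) for u in verts if u != v))
        out.append(v)
        verts.remove(v)
    return tuple(reversed(out))

leq = lambda p, s: E(p) >= E(s)          # weak order via Lemma 5.1.1
S3 = list(permutations((1, 2, 3)))
for p in S3:
    for s in S3:
        m = weak_inf((p, s))
        assert leq(m, p) and leq(m, s)   # a lower bound...
        assert all(leq(w, m) for w in S3 if leq(w, p) and leq(w, s))  # ...and greatest
```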

9.3 Some Equivalent Criteria for inf{π1, ..., πr} = 12···n

Let T(Er) denote the transitive part of the closure of Er := ⋃_{ℓ=1}^{r} E(πℓ). Note that any pair (i, k)∈T(Er) has k ≥ i+2, since we must be able to find j with i<j<k. Hence, no pair (i, i+1), 1≤i≤n−1, could possibly belong to T(Er). By Lemma 9.2.1,

E(inf{π1, ..., πr}) = Er¯ = Er ∪ T(Er).

So, if inf{π1, ..., πr} = 12···n, the unique minimum in this lattice, then every pair (i, j) with i<j belongs to E(inf{π1, ..., πr}), and hence every pair (i, i+1), 1≤i≤n−1, must belong to Er. Thus, choosing π1, ..., πr∈Sn independently and uniformly at random, we have proved the containment of events

{inf{π1, ..., πr} = 12···n} ⊆ ⋂_{i=1}^{n−1} {(i, i+1)∈⋃_{ℓ=1}^{r} E(πℓ)}.

But the event on the right is also sufficient for {inf{π1, ..., πr} = 12···n}! Indeed, if every pair (i, i+1), 1≤i≤n−1, belongs to Er, then taking the transitive closure of this set gives us every pair (i, j) with i<j! We have therefore proved

{inf{π1, ..., πr} = 12···n} = ⋂_{i=1}^{n−1} {(i, i+1)∈⋃_{ℓ=1}^{r} E(πℓ)}.   (9.1)

We can take this a step further. Given ω∈Sn, introduce the set of descents of ω:

D(ω) := {i : ω(i) > ω(i+1)}.

Consider the event on the right-hand side of (9.1). We have

(i, i+1)∈⋃_{ℓ=1}^{r} E(πℓ), ∀ i∈[n−1] ⟺ ∀ i∈[n−1], ∃ ℓ∈[r] : (i, i+1)∈E(πℓ)

⟺ ∀ i∈[n−1], ∃ ℓ∈[r] : i∉D(πℓ^{−1})

⟺ ⋂_{ℓ=1}^{r} D(πℓ^{−1}) = ∅.   (9.2)

Moreover, observe that

i∈D(inf{π1, ..., πr}^{−1}) ⟺ (i+1, i)∈E*(inf{π1, ..., πr})

⟺ (i, i+1)∉E(inf{π1, ..., πr})

⟺ (i, i+1)∉E(πj) ∀ j

⟺ (i+1, i)∈E*(πj) ∀ j

⟺ i∈D(πj^{−1}) ∀ j.

This shows that D(inf{π1, ..., πr}^{−1}) = ⋂_{ℓ=1}^{r} D(πℓ^{−1}). Combining this with (9.1) and

(9.2), we have therefore proved:

Lemma 9.3.1. Let π1, ..., πr∈Sn be selected independently and uniformly at random, and let Pn(r) := P(inf{π1, ..., πr} = 12···n). Then

Pn(r) = P(⋂_{i=1}^{n−1} {(i, i+1)∈⋃_{ℓ=1}^{r} E(πℓ)})   (a)

= P(⋂_{ℓ=1}^{r} D(πℓ^{−1}) = ∅).   (b)
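The equivalence of criteria (a) and (b) is easy to confirm by machine. The Python sketch below (ours, illustrative) checks them against each other over all pairs from S4.

```python
from itertools import permutations, combinations

def E(w):
    """Non-inversion set of w, values 1-based."""
    pos = {v: p for p, v in enumerate(w)}
    return {(i, j) for i, j in combinations(range(1, len(w) + 1), 2)
            if pos[i] < pos[j]}

def descents_of_inverse(w):
    """D(w^{-1}) = {i : i+1 appears before i in w}."""
    pos = {v: p for p, v in enumerate(w)}
    return {i for i in range(1, len(w)) if pos[i + 1] < pos[i]}

def inf_is_identity(perms):
    """Criterion (a): every pair (i, i+1) lies in some E(pi_l)."""
    union = set().union(*(E(p) for p in perms))
    n = len(perms[0])
    return all((i, i + 1) in union for i in range(1, n))

S4 = list(permutations(range(1, 5)))
for p in S4:
    for s in S4:
        # criterion (b): the descent sets of the inverses have empty intersection
        b = not (descents_of_inverse(p) & descents_of_inverse(s))
        assert inf_is_identity((p, s)) == b
```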


This allows us to instead study the probabilities (a) and (b), whichever happens to

be convenient for us.

Given ω∈Sn, let ω̃ denote ω = ω(1)···ω(n) reversed in order, so that ω̃ = ω(n)···ω(1), i.e. ω̃(j) = ω(n−j+1), 1≤j≤n. For example, if ω = 45123 then ω̃ = 32154. It is trivial to check that

inf{π1, ..., πr} = τ ⟺ sup{π̃1, ..., π̃r} = τ̃.

Indeed, this only requires the observation
\[ \bigcup_{\ell=1}^{r} E^*(\overleftarrow{\pi_\ell}) = \Big\{(j, i) : (i, j) \in \bigcup_{\ell=1}^{r} E(\pi_\ell)\Big\}, \]

followed by an application of Lemma 9.2.1. So we have

Lemma 9.3.2. Let $\pi_1, \dots, \pi_r \in S_n$ be selected independently and uniformly at random. Then
\[ P_n^{(r)} = \mathbb{P}(\inf\{\pi_1, \dots, \pi_r\} = 12 \cdots n) = \mathbb{P}(\sup\{\pi_1, \dots, \pi_r\} = n(n-1) \cdots 1). \]

Proof. We need only observe that $\pi_1, \dots, \pi_r \in S_n$ independent and uniformly random implies that the permutations $\overleftarrow{\pi_1}, \dots, \overleftarrow{\pi_r}$ are as well. $\square$

Hence, when answering the question "How likely is it that $r$ independent and uniformly random permutations have infimum (resp. supremum) equal to the unique minimum (resp. maximum)?", Lemma 9.3.2 allows us to restrict our attention to infima. We are now in a position to prove Theorem 1.4.2, part 1.


9.4 Submultiplicativity Again

We wish to prove the submultiplicativity of $P_n^{(r)}$ as a function of $n$, thus proving existence of
\[ \lim_{n \to \infty} \big(P_n^{(r)}\big)^{1/n} = \inf_{n \ge 1} \big(P_n^{(r)}\big)^{1/n} \]
([43, p. 23, ex. 98] again). For this, we make use of Lemma 9.3.1.

Let $\pi_1, \dots, \pi_r$ be independent and uniformly random permutations of $[n_1 + n_2]$. Introduce
\[ \pi_i[1, 2, \dots, n_1], \qquad 1 \le i \le r, \]
the permutation of $[n_1]$ left after deletion of the elements $n_1+1, n_1+2, \dots, n_1+n_2$ from $\pi_i$. Similarly,
\[ \pi_i[n_1+1, n_1+2, \dots, n_1+n_2], \qquad 1 \le i \le r, \]
is the permutation of $\{n_1+1, n_1+2, \dots, n_1+n_2\}$ left after deletion of the elements $1, 2, \dots, n_1$ from $\pi_i$. Then the permutations
\[ \pi_1[1, \dots, n_1],\ \dots,\ \pi_r[1, \dots, n_1], \qquad \pi_1[n_1+1, \dots, n_1+n_2],\ \dots,\ \pi_r[n_1+1, \dots, n_1+n_2] \]
are all uniform on their respective sets of permutations, and are mutually independent.

By Lemma 9.3.1,


\[ \inf\{\pi_1, \dots, \pi_r\} = 12 \cdots (n_1 + n_2) \iff (i, i+1) \in \bigcup_{\ell=1}^{r} E(\pi_\ell), \quad 1 \le i \le n_1 + n_2 - 1, \]
and hence
\[ \inf\{\pi_1, \dots, \pi_r\} = 12 \cdots (n_1 + n_2) \implies (i, i+1) \in \bigcup_{\ell=1}^{r} E(\pi_\ell[1, \dots, n_1]), \quad 1 \le i \le n_1 - 1 \]
\[ \iff \inf\{\pi_1[1, \dots, n_1], \dots, \pi_r[1, \dots, n_1]\} = 12 \cdots n_1. \]

Denote this first event by $\mathcal{E}_{n_1+n_2}$, and the last by $\mathcal{E}_{n_1}$. Thus we have proved the containment of events $\mathcal{E}_{n_1+n_2} \subseteq \mathcal{E}_{n_1}$. Similarly, we have
\[ \inf\{\pi_1, \dots, \pi_r\} = 12 \cdots (n_1 + n_2) \implies (i, i+1) \in \bigcup_{\ell=1}^{r} E(\pi_\ell[n_1+1, \dots, n_1+n_2]), \quad n_1 + 1 \le i \le n_1 + n_2 - 1 \]
\[ \iff \inf\{\pi_1[n_1+1, \dots, n_1+n_2], \dots, \pi_r[n_1+1, \dots, n_1+n_2]\} = (n_1+1)(n_1+2) \cdots (n_1+n_2). \]
Denote the last event by $\mathcal{E}^*_{n_2}$, so that we have the containment $\mathcal{E}_{n_1+n_2} \subseteq \mathcal{E}^*_{n_2}$. Consequently
\[ \mathcal{E}_{n_1+n_2} \subseteq \mathcal{E}_{n_1} \cap \mathcal{E}^*_{n_2}, \]
and since the events on the right are independent, this implies $P^{(r)}_{n_1+n_2} \le P^{(r)}_{n_1} P^{(r)}_{n_2}$. Of course, the rest of the statement follows from the (by now familiar) classical Fekete lemma on sub(super)multiplicative sequences [43, p. 23, ex. 98]. $\square$

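The submultiplicative inequality just proved can be spot-checked by exhaustive enumeration for small $n$, computing $P_n^{(r)}$ exactly through criterion (b) of Lemma 9.3.1 (no common descent among $r$ uniform permutations, the inverses being again uniform). A sketch, with function names of our own choosing:

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

def P(n, r):
    """Exact P_n^(r) by enumeration: the probability that r uniform
    permutations of [n] have no common descent (Lemma 9.3.1(b))."""
    def D(w):
        # descent positions of w (0-based positions are fine for intersecting)
        return frozenset(i for i in range(n - 1) if w[i] > w[i + 1])
    perms = list(permutations(range(n)))
    hits = sum(1 for tup in product(perms, repeat=r)
               if not frozenset.intersection(*map(D, tup)))
    return Fraction(hits, factorial(n) ** r)

# submultiplicativity check (r = 2): P_{n1+n2} <= P_{n1} * P_{n2}
```

Running this for $r = 2$ confirms, e.g., $P_4^{(2)} \le (P_2^{(2)})^2$ and $P_5^{(2)} \le P_2^{(2)} P_3^{(2)}$.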

9.5 Sharp Asymptotics of $P_n^{(r)}$

We are now ready to finish the proof of Theorem 1.4.2. The proof divides naturally into three steps. First, we will establish the exact formula
\[ P_n^{(r)} = \sum_{k=0}^{n-1} (-1)^k \sum_{\substack{b_1 + \cdots + b_{n-k} = n \\ b_1, \dots, b_{n-k} \ge 1}} \frac{1}{(b_1!)^r \cdots (b_{n-k}!)^r}, \tag{9.3} \]

which in turn facilitates computation of a bivariate generating function related to $P_n^{(r)}$. Finally, analytical techniques applied to a special case of this generating function yield the asymptotic result stated in Theorem 1.4.2:
\[ P_n^{(r)} \sim \frac{-1}{z^* h_r'(z^*)} \left(\frac{1}{z^*}\right)^n, \qquad r \ge 2,\ n \to \infty, \]
where $z^* = z^*(r) \in (1, 2)$ is the unique (positive) root of the equation
\[ h_r(z) := \sum_{j \ge 0} \frac{(-1)^j}{(j!)^r}\, z^j = 0 \]
within the disk $|z| \le 2$.

Specifically, we will use this exact formula for $P_n^{(r)}$ to show that
\[ P_n^{(r)} = [z^n]\, \frac{1}{h_r(z)}, \qquad r \ge 1, \tag{9.4} \]
followed by some asymptotic analysis. As a partial check, for $r = 1$ we obtain
\[ P_n^{(1)} = [z^n]\, \frac{1}{e^{-z}} = [z^n]\, e^z = \frac{1}{n!}, \]


as we should! Also, we immediately see that for $r \ge 2$ the limit $\lim_{n \to \infty} (P_n^{(r)})^{1/n}$, whose existence we established in the last section, equals $1/z^*$.
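For concreteness, $z^*$ can be located numerically by bisection on a truncated series (a sketch; the truncation depth and the bracketing interval $(1, 2)$ are our choices, the latter justified by the intermediate-value argument of Step 3 below):

```python
def h(r, z, terms=60):
    """Truncated h_r(z) = sum_{j>=0} (-1)^j z^j/(j!)^r; for |z| <= 2 the
    tail beyond 60 terms is negligible in double precision."""
    s, t = 0.0, 1.0
    for j in range(terms):
        s += t
        t *= -z / (j + 1) ** r   # next term: (-1)^{j+1} z^{j+1}/((j+1)!)^r
    return s

def z_star(r, lo=1.0, hi=2.0):
    """Bisection for the root of h_r in (1, 2)."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if h(r, lo) * h(r, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For $r = 2$ one finds $z^*(2) \approx 1.4458$; indeed $h_2(z) = J_0(2\sqrt{z})$, so $z^*(2)$ is determined by the first zero of the Bessel function $J_0$, and $\lim_n (P_n^{(2)})^{1/n} = 1/z^*(2) \approx 0.6917$.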

9.5.1 Step 1: An Exact Formula for $P_n^{(r)}$

Here we establish formula (9.3). Notice that if $\pi_1, \dots, \pi_r \in S_n$ are independent and uniformly random, then so are the $n$-permutations $\pi_1^{-1}, \dots, \pi_r^{-1}$. Hence, the probability (b) in Lemma 9.3.1 is the same as
\[ \mathbb{P}\Big(\bigcap_{i=1}^{r} D(\omega_i) = \emptyset\Big), \]

where $\omega_1, \dots, \omega_r \in S_n$ are independent and uniformly random. That is, we need to compute the probability that $r$ independent and uniformly random permutations have no common descents.

Now, given $I \subseteq [n-1]$, let $E_I$ denote the event "$I \subseteq D(\omega_j)$, $1 \le j \le r$." So $E_I$ is the event that $I$ is common to all of the $D(\omega_j)$'s. Then, by Lemma 9.3.1,
\[ 1 - P_n^{(r)} = \mathbb{P}\Big(\bigcup_{i \in [n-1]} E_{\{i\}}\Big). \]
By the principle of inclusion-exclusion,
\[ \mathbb{P}\Big(\bigcup_{i \in [n-1]} E_{\{i\}}\Big) = \sum_{k=1}^{n-1} (-1)^{k-1} \sum_{\substack{I \subseteq [n-1] \\ |I| = k}} \mathbb{P}\Big(\bigcap_{i \in I} E_{\{i\}}\Big). \tag{9.5} \]

But notice that, given $I \subseteq [n-1]$,
\[ \bigcap_{i \in I} E_{\{i\}} = E_I. \]
Hence, (9.5) becomes
\[ \mathbb{P}\Big(\bigcup_{i \in [n-1]} E_{\{i\}}\Big) = \sum_{k=1}^{n-1} (-1)^{k-1} \sum_{\substack{I \subseteq [n-1] \\ |I| = k}} \mathbb{P}(E_I). \tag{9.6} \]

So it only remains to compute $\mathbb{P}(E_I)$ for a fixed $I \subseteq [n-1]$, $|I| = k$, $k \in [n-1]$. This computation is an $r$-analog of the formula in Bóna's book [11, p. 4]. We present a modification of his argument.

Observe that
\[ \mathbb{P}(E_I) = \frac{|E_I|}{(n!)^r}, \]
so we need to count the number of $r$-tuples $(\omega_1, \dots, \omega_r) \in E_I$.

Write $I = \{i_1 < \cdots < i_k\}$, and $J := [n-1] \setminus I = \{j_1 < \cdots < j_{(n-1)-k}\}$. For $\omega \in S_n$, let $\bar\omega$ denote $\omega$ reversed in rank. So if $\omega = 45123$, then $\bar\omega = 21543$. Formally, $\bar\omega(j) = n - \omega(j) + 1$, $1 \le j \le n$. Notice that $D(\bar\omega) = [n-1] \setminus D(\omega)$. Hence
\[ D(\omega_j) \supseteq I,\ 1 \le j \le r \iff D(\bar\omega_j) \subseteq J,\ 1 \le j \le r. \]

Again, $\omega_1, \dots, \omega_r$ independent and uniformly random implies that so are the permutations $\bar\omega_1, \dots, \bar\omega_r$, so our task becomes to count the number of $r$-tuples of permutations $(\tau_1, \dots, \tau_r)$ such that $D(\tau_j) \subseteq J$ for every $j$. As the $\tau_j$ are independent, this count is just
\[ |\{\omega \in S_n : D(\omega) \subseteq J\}|^r. \]

To count $|\{\omega \in S_n : D(\omega) \subseteq J\}|$, we arrange the $n$ entries of $\omega$ into $n-k$ segments so that the first $i$ segments together have $j_i$ entries for each $i$. Then, within each segment, we put the entries into increasing order. Then the only places where the resulting $\omega$ could possibly have a descent are where two segments meet, i.e., at entries $j_1, \dots, j_{(n-1)-k}$, and hence $D(\omega) \subseteq J$.

The first segment of $\omega$ has to have length $j_1$, and therefore can be chosen in $\binom{n}{j_1}$ ways. The second segment has to be of length $j_2 - j_1$, and must be disjoint from the first one, so may be chosen in $\binom{n - j_1}{j_2 - j_1}$ ways. In general, segment $i$ must have length $j_i - j_{i-1}$ if $1 < i < n-k$, and has to be chosen from the remaining $n - j_{i-1}$ entries, in $\binom{n - j_{i-1}}{j_i - j_{i-1}}$ ways. There is only one choice for the last segment, as all remaining $n - j_{(n-1)-k}$ entries must go there. Therefore
\[ |\{\omega \in S_n : D(\omega) \subseteq J\}| = \prod_{i=1}^{(n-1)-k} \binom{n - j_{i-1}}{j_i - j_{i-1}} = \frac{n!}{j_1!\,(j_2 - j_1)! \cdots (n - j_{(n-1)-k})!} \qquad (j_0 := 0), \]
and consequently
\[ \mathbb{P}(E_I) = \frac{|E_I|}{(n!)^r} = \frac{1}{j_1!^r\,(j_2 - j_1)!^r \cdots (n - j_{(n-1)-k})!^r}. \]
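The segment count above is easy to confirm by brute force for small $n$; a sketch (the helper names are ours):

```python
from itertools import permutations
from math import factorial

def count_descents_within(n, J):
    """Brute force |{w in S_n : D(w) subset of J}| for J a subset of [n-1]."""
    def D(w):
        return {i + 1 for i in range(n - 1) if w[i] > w[i + 1]}
    return sum(1 for w in permutations(range(1, n + 1)) if D(w) <= set(J))

def multinomial_count(n, J):
    """n! / (j1! (j2-j1)! ... (n - j_last)!) for J = {j1 < j2 < ...}."""
    js = sorted(J)
    if js:
        sizes = [js[0]] + [b - a for a, b in zip(js, js[1:])] + [n - js[-1]]
    else:
        sizes = [n]          # empty J forces the identity permutation
    out = factorial(n)
    for s in sizes:
        out //= factorial(s)
    return out
```

For example, with $n = 5$ and $J = \{2, 3\}$ both functions return $5!/(2!\,1!\,2!) = 30$.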

Putting this into (9.6), we obtain
\begin{align*}
1 - P_n^{(r)} &= \mathbb{P}\Big(\bigcup_{i \in [n-1]} E_{\{i\}}\Big) \\
&= \sum_{k=1}^{n-1} (-1)^{k-1} \sum_{\substack{I \subseteq [n-1] \\ |I| = k}} \frac{1}{j_1!^r\,(j_2 - j_1)!^r \cdots (n - j_{(n-1)-k})!^r} \\
&= \sum_{k=1}^{n-1} (-1)^{k-1} \sum_{\substack{b_1 + \cdots + b_{n-k} = n \\ b_1, \dots, b_{n-k} \ge 1}} \frac{1}{(b_1!)^r \cdots (b_{n-k}!)^r},
\end{align*}
where $b_1 = j_1$, $b_i = j_i - j_{i-1}$ for $1 < i < n-k$, and $b_{n-k} = n - j_{(n-1)-k}$. This is clearly equivalent to (9.3).
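Formula (9.3) can be implemented verbatim with exact rational arithmetic and checked against small cases, including the $r = 1$ sanity value $1/n!$; a sketch (names ours):

```python
from fractions import Fraction
from math import factorial

def compositions(total, parts):
    """All compositions of `total` into `parts` positive parts."""
    if parts == 1:
        yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def P(n, r):
    """P_n^(r) via the exact formula (9.3)."""
    result = Fraction(0)
    for k in range(n):                        # k = 0, ..., n-1
        sign = (-1) ** k
        for c in compositions(n, n - k):
            term = Fraction(1)
            for b in c:
                term /= factorial(b) ** r
            result += sign * term
    return result
```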

9.5.2 Step 2: A Generating Function for $P_n^{(r)}$

Let us next use the formula (9.3) to establish the relation (9.4). Recall that we have defined $E_{\{i\}}$ as the event "$i$ belongs to every $D(\pi_j^{-1})$, $1 \le j \le r$," and that
\[ 1 - P_n^{(r)} = \mathbb{P}\Big(\bigcup_{i=1}^{n-1} E_{\{i\}}\Big). \]
Introduce the random variable $S_n^{(r)} = S_n^{(r)}(\pi_1, \dots, \pi_r)$, the number of the events $E_{\{i\}}$ that occur. As we have seen (Lemma 9.3.1), $S_n^{(r)}$ is also the number of descents of $\inf\{\pi_1, \dots, \pi_r\}^{-1}$. Formally, $S_n^{(r)}$ is the sum of indicators
\[ S_n^{(r)} = \sum_{i=1}^{n-1} \mathbb{I}_{E_{\{i\}}}. \]

Observe that


\[ P_n^{(r)} = \mathbb{P}\big(S_n^{(r)} = 0\big), \]
so the formula (9.3) gives the probability $\mathbb{P}(S_n^{(r)} = 0)$. But, in fact, this formula tells us even more about $S_n^{(r)}$. Indeed, consider the $k$-th (unsigned) term in this expression:
\[ \sum_{\substack{I \subseteq [n-1] \\ |I| = k}} \mathbb{P}\Big(\bigcap_{i \in I} E_{\{i\}}\Big) = \sum_{\substack{b_1 + \cdots + b_{n-k} = n \\ b_1, \dots, b_{n-k} \ge 1}} \frac{1}{(b_1!)^r \cdots (b_{n-k}!)^r}. \]
This is the expected number of $k$-sets of the events $E_{\{i\}}$ that occur simultaneously. That is,
\[ \mathbb{E}\left[\binom{S_n^{(r)}}{k}\right] = \sum_{\substack{b_1 + \cdots + b_{n-k} = n \\ b_1, \dots, b_{n-k} \ge 1}} \frac{1}{(b_1!)^r \cdots (b_{n-k}!)^r}, \qquad 0 \le k \le n-1. \tag{9.7} \]

This produces the simple expression
\[ P_n^{(r)} = \sum_{k=0}^{n-1} (-1)^k\, \mathbb{E}\left[\binom{S_n^{(r)}}{k}\right]. \]
We could have seen this another way, by observing that
\[ P_n^{(r)} = \mathbb{P}\big(S_n^{(r)} = 0\big) = \mathbb{E}\big[(1-1)^{S_n^{(r)}}\big] = \mathbb{E}\left[\sum_{k=0}^{n-1} (-1)^k \binom{S_n^{(r)}}{k}\right], \]
and using the linearity of expectation.

We will use these observations about $S_n^{(r)}$ to get a compact generating function related to this random variable, which happens to be amenable to asymptotic analysis. Introduce the bivariate generating function
\[ F_r(x, y) := \sum_{n \ge 1} x^n\, \mathbb{E}\big[(1+y)^{S_n^{(r)}}\big], \]
and let
\[ f_r(z) := \sum_{\beta \ge 0} \frac{z^\beta}{((\beta + 1)!)^r}. \]

Using what we know about $S_n^{(r)}$, we can simplify $F_r(x, y)$:
\begin{align*}
F_r(x, y) &= \sum_{n \ge 1} x^n\, \mathbb{E}\big[(1+y)^{S_n^{(r)}}\big]
= \sum_{n \ge 1} x^n \sum_{k=0}^{n-1} y^k\, \mathbb{E}\left[\binom{S_n^{(r)}}{k}\right] \\
&= \sum_{n \ge 1} x^n \sum_{k=0}^{n-1} y^k \sum_{\substack{b_1 + \cdots + b_{n-k} = n \\ b_1, \dots, b_{n-k} \ge 1}} \frac{1}{(b_1!)^r \cdots (b_{n-k}!)^r} \\
&= \sum_{k \ge 0} (xy)^k \sum_{n > k} x^{n-k} \sum_{\substack{b_1 + \cdots + b_{n-k} = n \\ b_1, \dots, b_{n-k} \ge 1}} \frac{1}{(b_1!)^r \cdots (b_{n-k}!)^r} \\
&= \sum_{k \ge 0} (xy)^k \sum_{\nu \ge 1} x^{\nu} \sum_{\substack{b_1 + \cdots + b_\nu = \nu + k \\ b_1, \dots, b_\nu \ge 1}} \frac{1}{(b_1!)^r \cdots (b_\nu!)^r} \qquad (\nu := n - k) \\
&= \sum_{k \ge 0} (xy)^k \sum_{\nu \ge 1} x^{\nu} \sum_{\substack{\beta_1 + \cdots + \beta_\nu = k \\ \beta_1, \dots, \beta_\nu \ge 0}} \frac{1}{((\beta_1 + 1)!)^r \cdots ((\beta_\nu + 1)!)^r} \qquad (\beta_i := b_i - 1) \\
&= \sum_{k \ge 0} (xy)^k\, [z^k] \sum_{\nu \ge 1} \big(x f_r(z)\big)^{\nu}
= \sum_{k \ge 0} (xy)^k\, [z^k]\, \frac{x f_r(z)}{1 - x f_r(z)} \\
&= \frac{x f_r(xy)}{1 - x f_r(xy)} = \frac{1}{1 - x f_r(xy)} - 1.
\end{align*}

Therefore
\[ \mathbb{E}\big[(1+y)^{S_n^{(r)}}\big] = [x^n]\, \frac{1}{1 - x f_r(xy)}, \qquad n \ge 1. \tag{9.8} \]
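Identity (9.8) can be stress-tested for tiny $n$ by comparing both sides exactly; the sketch below (names ours) evaluates the left side by brute-force enumeration and the right side via the coefficient recurrence for $1/(1 - x f_r(xy))$, keeping $y$ an integer so all arithmetic is exact:

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

def lhs(n, r, y):
    """Brute-force E[(1+y)^S], S = number of common descents of r uniform
    permutations of [n] (the distribution of S_n^(r), by Lemma 9.3.1)."""
    perms = list(permutations(range(1, n + 1)))
    def D(w):
        return frozenset(i + 1 for i in range(n - 1) if w[i] > w[i + 1])
    total = Fraction(0)
    for tup in product(perms, repeat=r):
        common = frozenset.intersection(*map(D, tup))
        total += (1 + y) ** len(common)
    return total / factorial(n) ** r

def rhs(n, r, y):
    """[x^n] 1/(1 - x f_r(xy)) via q_m = sum_j u_j q_{m-j},
    where u_j = y^(j-1)/(j!)^r is the x^j coefficient of x f_r(xy)."""
    u = [Fraction(0)] + [Fraction(y ** (j - 1), factorial(j) ** r)
                         for j in range(1, n + 1)]
    q = [Fraction(1)]
    for m in range(1, n + 1):
        q.append(sum(u[j] * q[m - j] for j in range(1, m + 1)))
    return q[n]
```

Setting $y = -1$ in `rhs` reproduces the values $P_n^{(r)}$, anticipating (9.9).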

Setting $y = -1$ in (9.8), we obtain
\[ P_n^{(r)} = \mathbb{P}\big(S_n^{(r)} = 0\big) = \mathbb{E}\big[(1-1)^{S_n^{(r)}}\big] = [x^n]\, \frac{1}{1 - x f_r(-x)} = [x^n]\, \frac{1}{h_r(x)}, \qquad n \ge 1, \tag{9.9} \]
where $h_r(x) = \sum_{j \ge 0} ((-1)^j/(j!)^r)\, x^j$; indeed,
\[ 1 - x f_r(-x) = 1 - \sum_{\beta \ge 0} \frac{(-1)^\beta\, x^{\beta+1}}{((\beta+1)!)^r} = \sum_{j \ge 0} \frac{(-1)^j}{(j!)^r}\, x^j = h_r(x), \]
and this is (9.4). It should be duly noted that this generating function is a special case of one found by Richard Stanley [45], but it is probably safe to say that he was unaware of any connection with the weak ordering.
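Equation (9.9) also yields a fast exact tabulation of $P_n^{(r)}$: invert the power series $h_r$ coefficient by coefficient. A sketch (names ours):

```python
from fractions import Fraction

def series_inverse_coeffs(r, N):
    """First N+1 coefficients q_0..q_N of 1/h_r(x), where
    h_r(x) = sum_j (-1)^j x^j/(j!)^r, so q_n = P_n^(r) by (9.9)."""
    fact = [1]
    for j in range(1, N + 1):
        fact.append(fact[-1] * j)
    h = [Fraction((-1) ** j, fact[j] ** r) for j in range(N + 1)]
    q = [Fraction(1)]                    # q_0 = 1/h_0 = 1
    for n in range(1, N + 1):
        q.append(-sum(h[j] * q[n - j] for j in range(1, n + 1)))
    return q
```

For $r \ge 2$, the ratios $q_n/q_{n+1}$ converge to $z^*$, which gives a quick numerical handle on the asymptotics of Step 3.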

9.5.3 Step 3: Asymptotics

We are about to finish the proof; all of the combinatorial insights are behind us, and only some asymptotic analysis remains. Armed with formula (9.9), our goal is to use Darboux's theorem [2] to estimate
\[ [z^n]\, \frac{1}{h_r(z)}, \qquad h_r(z) = \sum_{j \ge 0} \frac{(-1)^j}{(j!)^r}\, z^j, \quad r \ge 2. \]

First of all, notice that for $z \in (0, 2]$ we have the alternating-series bounds
\[ 1 - z \le h_r(z) \le 1 - z + \frac{z^2}{(2!)^r}. \]
Hence we get
\[ 0 = 1 - 1 \le h_r(1); \qquad h_r(2) < 1 - 2 + \frac{2^2}{(2!)^r} \le 0, \quad r \ge 2, \]
the first of the two last inequalities being strict because the alternating tail beginning with $-2^3/(3!)^r$ is strictly negative. So $h_r(z) = 0$ has a root in $(1, 2)$ by the intermediate value theorem.

Now consider the circle $|z| = u$, where $u > 1$ will be specified later. Let
\[ g(z) = 1 - z, \qquad G(z) = \sum_{j \ge 2} \frac{(-1)^j}{(j!)^r}\, z^j. \]
The equation $g(z) = 0$ has a single root, of multiplicity 1, within the circle $|z| = u$. For $|z| = u$,
\[ |g(z)| \ge \min_{t \in [0, 2\pi)} |1 - u e^{it}| = u - 1, \]
and
\[ |G(z)| \le \frac{u^2}{2^r}\Big(1 + \frac{u}{3^r} + \frac{u}{3^r} \cdot \frac{u}{4^r} + \cdots\Big) \le \frac{u^2}{2^r} \cdot \frac{1}{1 - u/3^r}, \qquad u < 3^r. \]

If we can find $u \in (1, 3^r)$ such that
\[ u - 1 > \frac{u^2}{2^r} \cdot \frac{1}{1 - u/3^r}, \tag{9.10} \]
then, by Rouché's theorem [48], $h_r(z) = g(z) + G(z)$ also has a unique, whence real positive, root $z^*$ within the circle $|z| = u$. The inequality (9.10) is equivalent to
\[ F(u) := u^2 (2^{-r} + 3^{-r}) - u (1 + 3^{-r}) + 1 < 0. \]

$F(u)$ attains its minimum at
\[ \bar u = \frac{1 + 3^{-r}}{2(2^{-r} + 3^{-r})} \in (1, 3^r), \]
and
\[ F(\bar u) = 1 - \frac{(1 + 3^{-r})^2}{4(2^{-r} + 3^{-r})}. \]
For $r \ge 3$,
\[ 4(2^{-r} + 3^{-r}) < 8 \cdot 2^{-3} = 1, \]
and so $F(\bar u) < 0$ in this case, and we are done. Actually, notice that our choice of circle radius satisfies
\[ \bar u = \frac{1 + 3^{-r}}{2(2^{-r} + 3^{-r})} \in (2, 3^r), \qquad r \ge 3. \]
So we have proved that $h_r(z) = 0$ has a unique (positive) root $z^* = z^*(r) \in (1, 2)$ within the disk $|z| \le 2$ for every $r \ge 3$, which is what we wanted.


On the other hand, for $r = 2$,
\[ F(\bar u) = 1 - \frac{(1 + 1/9)^2}{1 + 4/9} > 0, \]
so this case requires a bit more attention. Instead, consider
\[ g(z) = 1 - z + \frac{z^2}{(2!)^2} - \frac{z^3}{(3!)^2}, \qquad G(z) = \sum_{j \ge 4} \frac{(-1)^j}{(j!)^2}\, z^j, \]

and our strategy will be analogous to the above. First,
\[ g'(z) = -1 + \frac{z}{2} - \frac{z^2}{12} = -\frac{(z - 3)^2 + 3}{12} < 0, \qquad z \in \mathbb{R}, \]
so $g(z) = 0$ has exactly one real root, $z_1$. Since $g(1) = 2/9 > 0$ and $g(2) = -2/9 < 0$, we have $z_1 \in (1, 2)$.

Let $z_2 = a + ib$, $\bar z_2 = a - ib$ denote the two complex roots of $g(z) = 0$. Then (Vieta's relations [48])
\[ 2a + z_1 = 9, \qquad (a^2 + b^2)\, z_1 = 36. \]
In particular,
\[ a = \frac{9 - z_1}{2} > 3.5, \]
hence $|z_2| = |\bar z_2| > 3.5$. So, if we can find $u \in (z_1, 3.5)$ with
\[ |g(z)| > |G(z)|, \qquad |z| = u, \]
we will be done once again by Rouché's theorem. For $|z| = u$,

\[ |G(z)| \le \frac{u^4}{(4!)^2}\Big(1 + \frac{u}{5^2} + \frac{u}{5^2} \cdot \frac{u}{6^2} + \cdots\Big) \le \frac{u^4}{(4!)^2} \cdot \frac{1}{1 - u/5^2}, \qquad u < 5^2. \tag{9.11} \]

Take $u = 2$. Let us show that
\[ \min_{|z| = 2} |g(z)| = |g(2)| = \frac{2}{9}. \]
To this end, we bound
\[ |g(z)| = \frac{1}{36}\, |(z - z_1)(z - z_2)(z - \bar z_2)| \ge \frac{1}{36}\,(2 - z_1) \min_{|z| = 2} |z - z_2|\,|z - \bar z_2|. \]

Setting $z = 2e^{it}$, we obtain
\begin{align*}
|z - z_2|^2 |z - \bar z_2|^2 &= \big[(2\cos t - a)^2 + (2\sin t - b)^2\big]\big[(2\cos t - a)^2 + (2\sin t + b)^2\big] \\
&= (4 - 4a\cos t + a^2 + b^2 - 4b\sin t)(4 - 4a\cos t + a^2 + b^2 + 4b\sin t) \\
&= (4 - 4a\cos t + a^2 + b^2)^2 - 16 b^2 \sin^2 t =: F(t).
\end{align*}

Then
\[ F'(t) = 8a \sin t\, (4 - 4a\cos t + a^2 + b^2) - 32 b^2 \sin t \cos t = 8 \sin t\, \big[a(4 + a^2 + b^2) - 4(a^2 + b^2) \cos t\big]. \]

So $F'(t) = 0$ if and only if $t = 0, \pi$, since
\[ \frac{a(4 + a^2 + b^2)}{4(a^2 + b^2)} = \frac{a}{4} + \frac{a}{a^2 + b^2} = \frac{9 - z_1}{8} + \frac{z_1 (9 - z_1)}{72} = \frac{81 - z_1^2}{72} > \frac{77}{72} > 1. \]

This inequality also shows that $F'(t)$ always has the same sign as $\sin t$; hence $F'(t) > 0$ for $t \in (0, \pi)$ and $F'(t) < 0$ for $t \in (\pi, 2\pi)$. So $F(t)$ attains its minimum at $t = 0$, and consequently, on $|z| = 2$,

\[ |g(z)| \ge \frac{1}{36}\,(2 - z_1)\sqrt{F(0)} = \frac{1}{36}\,(2 - z_1)(4 - 4a + a^2 + b^2) = \frac{1}{36}\,(2 - z_1)(2 - z_2)(2 - \bar z_2) = |g(2)| = \frac{2}{9}. \]
Combining this with (9.11), we are done, since
\[ |g(z)| \ge \frac{2}{9} > \frac{2^4}{(4!)^2} \cdot \frac{1}{1 - 2/5^2} \ge |G(z)|, \qquad |z| = 2. \]
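As a purely numerical sanity check of the last two displays (not part of the proof; the sampling resolution and truncation level are our choices), one can sample the circle $|z| = 2$:

```python
import cmath
from math import factorial, pi

def g(z):
    """The cubic part 1 - z + z^2/(2!)^2 - z^3/(3!)^2."""
    return 1 - z + z**2 / 4 - z**3 / 36

def G(z, terms=40):
    """Truncated tail sum_{j>=4} (-1)^j z^j/(j!)^2; for |z| = 2 the
    terms beyond j = 40 are negligible."""
    return sum((-1) ** j * z**j / factorial(j) ** 2 for j in range(4, terms))

# sample 1000 equally spaced points on |z| = 2 (including z = 2 itself)
samples = [2 * cmath.exp(2j * pi * k / 1000) for k in range(1000)]
min_g = min(abs(g(z)) for z in samples)
max_G = max(abs(G(z)) for z in samples)
```

The sampled minimum of $|g|$ sits at $z = 2$ with value $2/9$, as proved above, while the sampled maximum of $|G|$ stays below the geometric-series bound $\frac{2^4}{(4!)^2} \cdot \frac{1}{1 - 2/25}$.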


CHAPTER 10

OPEN PROBLEMS

In this final chapter, we present some problems that we find important and/or inter-

esting, and which we intend to pursue in future research.

10.1 The Problems

Problem 10.1.1. Compute exactly the limit $\lim_{n \to \infty} Q_n^{1/n}$ in the proof of Theorem 1.2.1, lower bound.

Problem 10.1.2. Compute exactly the limit $\lim_{n \to \infty} (Q_n^*)^{1/n}$ in the proof of Theorem 1.4.1, upper bound.

Problem 10.1.3. Find an argument that improves the lower bound in Theorem 1.2.1 to something on the order of $n^{-a}$ for some $a > 0$.

Problem 10.1.4. Find an argument that improves the lower bound in Theorem 1.4.1

to something of exponentially small order.

Problem 10.1.5. Find extensions to chains of length r for the other bounds in both

orders, similar to that in the case of Bruhat order (upper bound).


BIBLIOGRAPHY

[1] R. M. ADIN, Y. ROICHMAN, On Degrees in the Hasse Diagram of the
Strong Bruhat Order, Séminaire Lotharingien de Combinatoire 53 (2004),

12 pp.

[2] E. BENDER, Asymptotic Methods in Enumeration, SIAM Review 16 (1974),

no. 4, pp. 485-515.

[3] C. BERGE, Principles of Combinatorics, Vol. 72 of Mathematics in Science

and Engineering: A Series of Monographs and Textbooks, Academic Press,

Inc., New York, 1971.

[4] P. BILLINGSLEY, Probability and Measure, 3rd ed., John Wiley & Sons, Inc., New York, 1995.

[5] A. BJÖRNER, Orderings of Coxeter Groups, in Combinatorics and Algebra

(C. GREENE, ed.), Vol. 34 of Contemp. Math., American Mathematical

Society, Providence, RI, 1984, pp. 175-195.

[6] A. BJÖRNER, F. BRENTI, An Improved Tableau Criterion for Bruhat Or-

der, Electron. J. Combin. 3 (1996), #R22, 5 pp. (electronic).

[7] A. BJÖRNER, F. BRENTI, Combinatorics of Coxeter Groups, Springer, New

York, 2005.

[8] A. BJÖRNER, M. WACHS, q-Hook Length Formulas for Forests, Journal of

Combinatorial Theory, Series A 52 (1989), pp. 165-187.

[9] M. BÓNA, The Permutation Classes Equinumerous to the Smooth Class, Electron. J. Combin. 5 (1998), #R31, 12 pp. (electronic).

[10] M. BÓNA, The Solution of a Conjecture of Stanley and Wilf for all Layered Patterns, Journal of Combinatorial Theory, Series A 85 (1999), pp. 96-104.

[11] M. BÓNA, Combinatorics of Permutations, Discrete Mathematics and its Applications, Chapman & Hall/CRC, Boca Raton, 2004.


[12] M. BÓNA, Personal communication about an early preprint, August 2006.

[13] D. BRESSOUD, Proofs and Confirmations: The Story of the Alternating

Sign Matrix Conjecture, Cambridge University Press, Cambridge, 1999.

[14] G. BRIGHTWELL, P. TETALI, The Number of Linear Extensions of the

Boolean Lattice, Order 20 (2003), pp. 333-345.

[15] E. R. CANFIELD, The Size of the Largest Antichain in the Partition Lattice,

Journal of Combinatorial Theory, Series A 83 (1998), no. 2, pp. 188-201.

[16] E. R. CANFIELD, Meet and Join within the Lattice of Set Partitions, Electron. J. Combin. 8 (2001), #R15, 8 pp. (electronic).

[17] E. R. CANFIELD, Integer Partitions and the Sperner Property: Selected

Papers in Honor of Lawrence Harper, Theoret. Comput. Sci. 307 (2003),

no. 3, pp. 515-529.

[18] E. R. CANFIELD, K. ENGEL, An Upper Bound for the Size of the Largest

Antichain in the Poset of Partitions of an Integer, Proceedings of the Con-

ference on Optimal Discrete Structures and Algorithms-ODSA '97 (Ros-

tock), Discrete Appl. Math. 95 (1999), no. 1-3, pp. 169-180.

[19] M.-P. DELEST, G. VIENNOT, Algebraic Languages and Polyominoes Enumeration, Theoret. Comput. Sci. 34 (1984), no. 1-2, pp. 169-206.

[20] V. DEODHAR, Some Characterizations of Bruhat Ordering on a Coxeter

Group and Determination of the Relative Möbius Function, Inventiones

Math. 39 (1977), pp. 187-198.

[21] B. DRAKE, S. GERRISH, M. SKANDERA, Two New Criteria for Comparison in the Bruhat Order, Electron. J. Combin. 11 (2004), #N6, 4 pp. (electronic).

[22] C. EHRESMANN, Sur la Topologie de Certains Espaces Homogènes, Ann.

Math. 35 (1934), pp. 396-443.

[23] K. ENGEL, Sperner Theory, Encyclopedia of Mathematics and its Applica-

tions 65, Cambridge University Press, Cambridge, New York, 1997.

[24] S. FOMIN, Personal communication at the University of Michigan Combinatorics Seminar, April 2005.


[25] W. FULTON, Young Tableaux: With Applications to Representation Theory and Geometry, Vol. 35 of London Mathematical Society Student Texts, Cambridge University Press, New York, 1997.

[26] I. GESSEL, G. VIENNOT, Binomial Determinants, Paths, and Hook Length

Formulae, Advances in Mathematics 58 (1985), no. 3, pp. 300-321.

[27] A. HAMMETT, On the Random Permutation-induced Poset, in preparation.

[28] A. HAMMETT, B. PITTEL, How Often are Two Permutations Comparable?,

to appear in Transactions of the American Mathematical Society, accepted

2006; arXiv:math.PR/0605490.

[29] A. HAMMETT, B. PITTEL, Meet and Join in the Weak Order Lattice,

preprint, 2006; available at http://www.math.ohio-state.edu/ hammett.

[30] J. E. HUMPHREYS, Reflection Groups and Coxeter Groups, Cambridge

Studies in Advanced Mathematics, No. 29, Cambridge University Press,

Cambridge, 1990.

[31] S. JANSON, T. ŁUCZAK, A. RUCIŃSKI, Random Graphs, John Wiley &

Sons, Inc., New York, 2000.

[32] J. H. KIM, B. PITTEL, On Tail Distribution of Interpost Distance, Journal

of Combinatorial Theory, Series B 80 (2000), no. 1, pp. 49-56.

[33] D. KLEITMAN, J. SHA, The Number of Linear Extensions of Subset Ordering, Discrete Math. 63 (1987), pp. 271-279.

[34] D. KNUTH, Sorting and Searching: The Art of Computer Programming,

Vol. III, Addison-Wesley Publishing Company, Inc., Reading, 1973.

[35] C. KRATTENTHALER, Generating Functions for Plane Partitions of a Given

Shape, Manuscripta Mathematica 69 (1990), pp. 173-201.

[36] A. LASCOUX, M. P. SCHÜTZENBERGER, Treillis et bases des groupes de

Coxeter, Electron. J. Combin. 3 (1996), #R27, 35 pp. (electronic).

[37] A. MARCUS, G. TARDOS, Excluded Permutation Matrices and the Stanley-Wilf Conjecture, Journal of Combinatorial Theory, Series A 107 (2004), no.

1, pp. 153-160.


[38] V. G. MIKHAILOV, V. A. VATUTIN, Limit Theorems for the Number of

Empty Cells in an Equiprobable Scheme for Group Allocation of Particles,

Teor. Veroyatnost. i Primenen. 27 (1982), pp. 684-692 (Russian); Theor.

Probab. Appl. 27, pp. 734-743 (English transl.).

[39] B. PITTEL, Random Set Partitions: Asymptotics of Subset Counts, Journal

of Combinatorial Theory, Series A 79 (1997), pp. 326-359.

[40] B. PITTEL, Confirming Trvo Conjectures About the Integer Partitions, Jour-

nal of Combinatorial Theory, Series A 88 (1999), pp. 123-135.

[41] B. PITTEL, Where the Typical Set Partitions Meet and Join, Electron. J.

Combin. 7 (2000), #R5, 15 pp. (electronic).

[42] B. PITTEL, R. TUNGOL, A Phase Transition Phenomenon in a Random

Directed Acyclic Graph, Random Structures and Algorithms 18 (2001), no.

2, pp. 164-184.

[43] G. PÓLYA, G. SZEGŐ, Problems and Theorems in Analysis, Springer, New

York, 1976.

[44] V. N. SACHKOV, Probabilistic Methods in Combinatorial Analysis, Encyclopedia of Mathematics and its Applications, Vol. 56, Cambridge University

Press, Cambridge, 1997.

[45] R. STANLEY, Binomial Posets, Möbius Inversion, and Permutation Enu-

meration, Journal of Combinatorial Theory, Series A 20 (1976), pp. 336-

356.

[46] R. STANLEY, Enumerative Combinatorics, Vol. I, Cambridge University

Press, Cambridge, 1997.

[47] R. STANLEY, Enumerative Combinatorics, Vol. II, Cambridge University

Press, Cambridge, 1999.

[48] E. STEIN, R. SHAKARCHI, Princeton Lectures in Analysis II: Complex

Analysis, Princeton University Press, Princeton, 2003.

[49] L. TAKÁCS, Combinatorial Methods in the Theory of Stochastic Processes,

John Wiley & Sons, Inc., New York, 1967.


[50] R. WARLIMONT, Permutations Avoiding Consecutive Patterns, Ann. Univ.

Sci. Budapest. Sect. Comput. 22 (2003), pp. 373-393.

[51] R. WARLIMONT, Permutations Avoiding Consecutive Patterns, II, Arch.

Math. (Basel) 84 (2005), no. 6, pp. 496-502.
