The present thesis is concerned with the development and practical implementation of robust a-posteriori error estimators for discontinuous Galerkin (DG) methods for convection-diffusion problems. It is well known that solutions to convection-diffusion problems may have boundary and internal layers of small width where their gradients change rapidly. A powerful approach to numerically resolving these layers is hp-adaptive finite element methods, which control and minimize the discretization errors by locally adapting the mesh sizes and the approximation orders to the features of the problem. In this work, we choose DG methods to realize adaptive algorithms. Compared to standard finite element discretizations, DG methods make use of approximating spaces that are discontinuous over inter-elemental boundaries. As a result, these methods yield stable and robust discretization schemes for convection-dominated problems, and are naturally suited for hp-adaptive algorithms.

At the heart of adaptive finite element methods are a-posteriori error estimators. They provide information on the errors on each element and indicate where local refinement/derefinement should be applied. A good error estimator should yield both an upper and a lower bound on the discretization error in a suitable norm. For convection-diffusion problems, it is desirable that the estimator also be robust, meaning that the upper and lower bounds differ by a factor that is independent of the Péclet number of the problem.

We develop a new approach to obtain robust a-posteriori error estimates for convection-diffusion problems. As a starting point, we consider the h-version DG method. We then extend our techniques to hp-version methods, on both isotropically and anisotropically refined meshes.
The main technical tools in our analysis are new hp-version approximation results of an averaging operator, which are derived for irregular hexahedral meshes in three dimensions, as well as for irregular anisotropic rectangular meshes in two dimensions. Our numerical results indicate that the error estimator is effective in resolving layers. For the hp-adaptive algorithms, once the local mesh size is of the same order as the width of layers, both the energy error and the error estimator are observed to converge exponentially fast in the number of degrees of freedom.
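As a concrete illustration of the robustness notion above (the model problem and all symbols here are our own choices for exposition, not taken from the thesis):

```latex
% A standard convection-diffusion model problem (assumed for
% illustration): diffusion eps > 0, convection field b, source f.
\[
  -\varepsilon\,\Delta u + \mathbf{b}\cdot\nabla u = f
  \quad\text{in }\Omega,
  \qquad u = 0 \quad\text{on }\partial\Omega .
\]
% An estimator eta is robust if it bounds the error in a suitable
% (energy-type) norm two-sidedly, with constants independent of eps
% (equivalently, of the Peclet number):
\[
  c\,\eta \;\le\; \| u - u_h \|_{E} \;\le\; C\,\eta ,
  \qquad c,\,C \ \text{independent of}\ \varepsilon .
\]
```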

Motivated by geometry, we consider a `less discrete' way of counting lattice points in polytopes, in which one assigns a certain `weight' to each lattice point. On the combinatorial side, this approach reveals some `hidden symmetry' which improves upon and makes transparent some classical results in Ehrhart theory. On the geometric side, the combinatorial invariants count orbifold Betti numbers of toric stacks. If time permits, we will discuss a generalization involving motivic integration.
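For readers new to Ehrhart theory, the unweighted count is easy to experiment with. The sketch below is our own illustration of the classical (unweighted) setting, not the weighted count of the talk: it tabulates lattice points in dilates of the unit square, recovering its Ehrhart polynomial L(t) = (t + 1)^2.

```python
# Count lattice points in the t-th dilate of the unit square [0,1]^2.
# Ehrhart's theorem: for a lattice polytope P, this count is a
# polynomial L_P(t) in t; for the unit square, L(t) = (t + 1)^2.

def lattice_points_in_dilated_square(t):
    """Number of integer points (x, y) with 0 <= x, y <= t."""
    return sum(1 for x in range(t + 1) for y in range(t + 1))

for t in range(1, 6):
    assert lattice_points_in_dilated_square(t) == (t + 1) ** 2
```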

We will discuss our motivation for understanding what the analogue of de Rham
cohomology should be for E_{\infty}-algebras, in connection with algebraic
K-theory and topological cyclic homology.

The Schur functions $s_\lambda$ and the ubiquitous
Littlewood-Richardson coefficients $c_{\mu \nu}^{\lambda}$ are
instrumental in describing representation theory, symmetric functions,
and even certain areas of algebraic topology.
Determining when two skew diagrams $D_1$, $D_2$ have the same skew
Schur function or determining when the difference of two such skew
Schur functions $s_{D_1}-s_{D_2}$ is Schur-positive reveals
information about the structures corresponding to these functions.

By defining a set of staircase diagrams that we can augment with other
diagrams, we discover collections of skew diagrams for which the
question of Schur-positivity among each difference can be resolved.
Furthermore, for certain Schur-positive differences we give explicit
formulas for computing the coefficients of the Schur functions in the
difference.

We extend from simple staircases to fat staircases, and carry on to
diagrams called sums of fat staircases. These sums of fat staircases
can also be augmented with other diagrams to obtain many instances of
Schur-positivity.

We establish explicit criteria of solvability for the quasilinear Riccati type equation $-\Delta_p u =|\nabla u|^q + \omega$ in a bounded $\mathcal{C}^1$ domain $\Omega\subset\mathbb{R}^n$, $n\geq 2$. Here $\Delta_p$, $p>1$, is the $p$-Laplacian, $q$ is critical ($q=p$) or supercritical ($q>p$), and the datum $\omega$ is a measure. Our existence criteria are given in the form of potential theoretic or geometric (capacitary) estimates that are sharp when $\omega$ is compactly supported in the ground domain $\Omega$. A key ingredient in our approach is a set of capacitary inequalities for certain nonlinear singular operators arising from the $p$-Laplacian.

We give an explicit description of the representations of a finite group acting on the cohomology of a `general' invariant hypersurface in a toric variety. We show how this naturally leads to an equivariant generalization of Ehrhart theory (the study of lattice points in dilations of lattice polytopes), and we prove several equivariant versions of classical results. As an example, we show how the representations of a Weyl group acting on the cohomology of the toric variety associated to a root system naturally appear.

The next talk in the TAAP seminar series is by Fok-Shuen Leung. Graduate students will have their attendance credited toward their eventual accreditation.

Title: What's next for 599?

MATH 599 is the department's main tool for helping graduate instructors get off the ground. In this seminar, we'll discuss how well it does that. What did you expect, and what did it deliver? What works, and what doesn't? And most importantly, where does the course go from here? The goal is to produce a road map for future iterations.

The Schur functions and the ubiquitous Littlewood-Richardson coefficients are instrumental in describing representation theory, symmetric functions, and even certain areas of algebraic topology. Determining when two skew diagrams have the same skew Schur function, or determining when the difference of two such skew Schur functions is Schur-positive, reveals information about the structures corresponding to these functions. By defining a set of staircase diagrams that we can augment with other diagrams, we discover collections of skew diagrams for which the question of Schur-positivity among each difference can be resolved. Furthermore, for certain Schur-positive differences we give explicit formulas for computing the coefficients of the Schur functions in the difference. We extend from simple staircases to fat staircases, and carry on to diagrams called sums of fat staircases. These sums of fat staircases can also be augmented with other diagrams to obtain many instances of Schur-positivity. We note that several of our Schur-positive differences become equalities of skew Schur functions when the number of variables is reduced. Finally, we give a factoring identity which allows one to obtain many of the non-trivial finite-variable equalities of skew Schur functions.

The calculus of functors provides a framework for analyzing
functors of spaces, and more generally, model categories, in
terms of polynomial-like approximations to the functors. In this
talk, I will give an introduction to the calculus of functors
and discuss recent work with Kristine Bauer and Randy McCarthy
aimed at using calculus to better understand de Rham cohomology
in new contexts.

From knowledge of the conservation law multipliers of a given PDE system, one can determine whether it can be mapped invertibly to a linear PDE system and explicitly find such a mapping when it exists. This method will be compared with the symmetry approach to this problem, and it will be explained why the conservation law approach should be simpler computationally. Several examples comparing these approaches will be given. A systematic way of extending to non-invertible mappings of nonlinear PDEs to linear PDEs will be presented. If time permits, it will also be shown how to systematically find non-invertible mappings of linear PDEs with variable coefficients to linear PDEs with constant coefficients.

Tree polymers are simplifications of 1+1 dimensional lattice polymers made up of polygonal paths of a (nonrecombining) binary tree having random path probabilities. As in the case of lattice polymers, the path probabilities are (normalized) products of i.i.d. positive random weights. The a.s. probability laws of these paths are of interest under weak and strong types of disorder. The case of no disorder provides a benchmark since the polymers are simple symmetric random walk paths where all of the probability laws are known. We will discuss some recent results, speculation and open problems for this class of models. This is largely based on joint work with Stan Williams.

Basics of Mathematical Finance will be presented using concepts from probability. Risk measures and option replication strategies will be introduced, which lead to the fundamental theorems of asset pricing. Brief results from the author's research concerning regularity of profits under technical trading rules will also be presented.

I will present this open problem in complex geometry via the example of a Hopf surface. This is (roughly) the only compact Hermitian surface that admits infinitesimal isometries which are not holomorphic, and it exhibits some interesting behaviour which is indicative of some known results in higher dimensions. This talk will not require very much knowledge of differential geometry, with most of the discussion on the more intuitive level of group actions by Lie groups.

A celebrated result of Halasz characterizes the multiplicative functions taking values in the complex unit disc which have a non-zero mean value; recent work of Granville and Soundararajan characterizes the Dirichlet characters which have large character sums. I'll describe how one can prove a hybrid of these two, and show how this leads to improvements over Granville and Soundararajan's bounds. In particular, on the assumption of the Generalized Riemann Hypothesis the method yields a sharp bound on cubic character sums.

Note for Attendees

Refreshments will be served between the two talks.

I will talk about work in progress on certain adelic period integrals of modular forms on SL_2 and GL_2. It turns out that the situation for SL_2 is quite different from that of GL_2, and we'll try to explain what some of the differences mean for non-vanishing of L-functions.

Research on mathematical cognition indicates that learning the basics – that is, mastering the facts and procedures of the discipline – is only a small part of what learning to think mathematically is all about. Other vitally important aspects of mathematical thinking and problem solving are:

· heuristic problem solving strategies (rules of thumb for making progress when you're “stuck”);

· “control” skills (having a degree of self-awareness during problem solving that keeps you on the right track, and keeps you from squandering problem solving resources on wild goose chases); and

· “having a sense of what mathematics is all about” – developing a mathematician's point of view and being able to engage in mathematics rather than merely knowing about it.

There is nothing special about mathematics, at least in this regard: I argue that the same is true of all problem-solving domains, including the physical sciences, engineering, and even writing! A mistaken focus on subject matter mastery alone can have some disastrous consequences. This talk outlines the story in mathematics, with a few examples from other fields. I describe what can go wrong, and provide a few examples of what can go right if we attend to the broad spectrum of problem solving competencies in all of our instruction.

Biography: Alan Schoenfeld is the Elizabeth and Edward Conner Professor of Education and Affiliated Professor of Mathematics at the University of California at Berkeley. He is a Fellow of the American Association for the Advancement of Science, an inaugural Fellow of the American Educational Research Association, and a Laureate of the education honor society Kappa Delta Pi. He has served as President of the American Educational Research Association and as vice president of the National Academy of Education. In 2008 he was given the Senior Scholar Award by AERA’s Special Interest Group for Research in Mathematics Education.

After obtaining his Ph.D. in mathematics from Stanford in 1973, Schoenfeld turned his attention to issues of mathematical thinking, teaching, and learning. His work has focused on problem solving (what makes people good problem solvers, and how can people get better at it?), assessment, teachers’ decision-making, and issues of equity and diversity, with the goal of making meaningful mathematics truly accessible to all students.

Schoenfeld was lead author for grades 9-12 of the National Council of Teachers of Mathematics’ Principles and Standards for School Mathematics. He was one of the founding editors of Research in Collegiate Mathematics Education, and has served as associate editor of Cognition and Instruction. He has written, edited, or co-edited twenty-two books and nearly two hundred articles on thinking and learning. He has an ongoing interest in the development of productive mechanisms for systemic change and for deepening the connections between educational research and practice. His most recent book, How We Think, provides detailed models of human decision making in complex situations such as teaching.

The recently developed graph limit theory is part of a bigger picture in which limits of axiomatizable structures are studied. We present new results about the case when the underlying structures are functions on groups. The corresponding limit theory is deeply connected to a theory called "higher order Fourier analysis" which was founded by Gowers to generalize Roth's approach to three term arithmetic progressions in integer sets.

Rigid cohomology is one flavor of Weil cohomology. This entails for instance that one can associate to a scheme X over F_p a collection of finite dimensional Q_p-vector spaces H^i(X) (and variants with supports in a closed subscheme or compact support), which enjoy lots and lots of nice properties (e.g. functoriality, excision, Gysin, duality, a trace formula -- basically everything one needs to give a proof of the Weil conjectures).

Classically, the construction of rigid cohomology is a bit complicated and requires many choices, so that facts like functoriality (or even well-definedness) are theorems in their own right. An important recent advance is the construction by Le Stum of an `overconvergent site' which computes the rigid cohomology of X. This site involves no choices and so is trivially well defined, and many things (like functoriality) become transparent.

In this talk I'll explain a bit about classical rigid cohomology and the overconvergent site, and explain some new work generalizing rigid cohomology to algebraic stacks (as well as why one would want to do such a thing).

We will describe the large scale behaviour that is conjectured to be universal for a large class of one dimensional systems, and recent progress on the continuum versions of these models.

A complex semi-simple Lie algebra has a basis made up of a basis of a Cartan subalgebra and one root vector x_r for each root r.
Thus [x_{r}, x_{s}] = N_{r,s} x_{r+s} for some constant N_{r,s} whenever r+s is also a root. One of Chevalley's remarkable results from about 1950-1955 was that a basis can be chosen so that N_{r,s} is an integer, and more particularly |N_{r,s}| = p_{r,s} + 1, where p_{r,s} is the greatest p such that s - pr is a root. This was the first, crucial step in constructing reductive groups over arbitrary fields in terms of root data. The derivation of this equation remains mildly mysterious even after fifty years, as does the derivation of the signs of the constants. In 1966 Jacques Tits discussed these matters in a paper published in the `Publications de l'IHES', but this work has the reputation of being extremely difficult, and I imagine few have understood it. I hope to make Tits' presentation digestible.

The omega transformation takes a Schur function indexed by a partition to the Schur function indexed by the partition's transpose. In this joint work with Jeff Remmel, we explore a refinement of the omega transformation defined on the quasisymmetric Schur functions. The resulting polynomials are called row-strict quasisymmetric Schur functions since they are described combinatorially as generating functions for row-strict composition tableaux. The interaction between row-strict quasisymmetric Schur functions and quasisymmetric Schur functions provides a natural method for interpolating between the compositions that rearrange a given partition and those that rearrange the partition's transpose. This allows us to define an operation on compositions which is similar to the transposition operation on partitions.
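The transposition operation on partitions mentioned at the end is easy to compute directly. The sketch below is our own illustration (the `conjugate` helper is a name we introduce, not from the talk): the transpose of a partition is the list of column lengths of its Young diagram.

```python
def conjugate(partition):
    """Transpose (conjugate) of a partition, given as a weakly
    decreasing list of positive integers: the i-th part of the
    conjugate is the number of parts exceeding i."""
    if not partition:
        return []
    return [sum(1 for part in partition if part > i)
            for i in range(partition[0])]

# The diagram of (4, 2, 1) has columns of lengths 3, 2, 1, 1.
assert conjugate([4, 2, 1]) == [3, 2, 1, 1]
# Conjugation is an involution.
assert conjugate(conjugate([5, 3, 3, 1])) == [5, 3, 3, 1]
```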

The last UBC/UMC talk of the year is by Alex Duncan.

Title: Solving equations with Groebner bases

The theory of Groebner bases is fundamental to modern computational algebra. One of their most useful applications is the solution of systems of polynomials in multiple variables. I will give a gentle introduction to Groebner bases emphasizing this application.

A systematic way of obtaining non-invertible mappings of nonlinear PDEs to linear PDEs will be presented. It will also be shown how to systematically find non-invertible mappings of linear PDEs with variable coefficients to linear PDEs with constant coefficients. In particular, an extension to non-invertible mappings of the problem posed by Kolmogorov in his original 1931 paper will be shown (i.e., for which coefficients can a Kolmogorov equation be mapped into the heat equation).

If time permits, Raouf Dridi will further discuss the Cartan equivalence problem.

Let T be the return time to the origin of a simple random walk on an infinite recurrent graph. We show that T is heavy tailed and non-concentrated. More precisely, we have

i) P(T>t) > c/sqrt(t)
ii) P(T=t|T>=t) < C log(t)/t

Inequality i) is attained on Z, and we construct an example demonstrating the sharpness of ii). We use this example to answer negatively a question of Peres and Krishnapur about recurrent graphs with the finite collision property (that is, two independent SRWs on them collide only finitely many times, almost surely).
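On Z the distribution of T is classical, so inequality i) can be checked exactly. The sketch below is our own illustration, not part of the abstract; it uses the classical identity P(T > 2n) = C(2n, n)/4^n for simple random walk started at the origin.

```python
from math import comb, sqrt

def p_return_time_exceeds(t):
    """P(T > t) for the first return time T of SRW on Z.
    T is even, and P(T > 2n) = C(2n, n) / 4^n."""
    n = t // 2
    return comb(2 * n, n) / 4 ** n

# Step 2 must reverse step 1, which happens with probability 1/2.
assert p_return_time_exceeds(2) == 0.5
assert p_return_time_exceeds(4) == 0.375

# Stirling gives P(T > 2n) ~ 1/sqrt(pi*n), so sqrt(t) * P(T > t)
# stays bounded below -- inequality i), with c close to sqrt(2/pi).
for t in (10, 100, 1000):
    assert sqrt(t) * p_return_time_exceeds(t) > 0.75
```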

In the 1990s, Greg Arone gave a description of the Snaith splitting of spaces of the form $\Omega^m \Sigma^m X$. His method extended to give a kind of functorial filtration of any space of the form Maps(K, X), where K is a finite complex. In good cases, this leads to a stable splitting of these mapping spaces. We will describe Arone's result with an eye toward applying this to other mapping space functors.

In this talk we will present our recent work on “Closed geodesics in Alexandrov spaces of curvature bounded from above”. This is an extension of Colding and Minicozzi’s width-sweepout construction of closed geodesics on closed Riemannian manifolds to the Alexandrov setting, which provides a generalized version of the Birkhoff-Lyusternik theorem on the existence of non-trivial closed geodesics. We will explain how the width-sweepout construction works and discuss some future work in this direction.

This is the last talk this term in the teaching seminar associated with the TA Accreditation Program. All are welcome. Graduate students will have their attendance credited toward their eventual accreditation.

Title: The TAAP meta-seminar

The goal of this last session is to address what direction we would like the seminar to take in the Fall. What are the concerns, wishes for improvement or techniques related to your activity as a TA that you would like to discuss? What formats would best serve these themes? Take this opportunity to help shape your seminar.

Let $a,b,c \geq 2$ be integers satisfying $1/a + 1/b + 1/c > 1$. Darmon and Granville proved that the generalized Fermat equation $x^a + y^b = z^c$ has only finitely many coprime integer solutions; conjecturally something stronger is true: for $a,b,c \geq 3$ there are no non-trivial solutions and for $(a,b,c) = (2,3,n)$ with $n \geq 10$ the only solutions are the trivial solutions and $(\pm 3,-2,1)$ (or $(\pm 3,-2,\pm 1)$ when n is even).

I'll explain how the modular method used to prove Fermat's last theorem adapts to generalized Fermat equations and use it to solve the equation $x^2 + y^3 = z^{10}$. One new ingredient is the use of number field enumeration techniques to classify Galois representations associated to hypothetical solutions; classically one uses Ribet's level lowering theorem, but here the representations are wildly ramified and his method does not apply.
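The exceptional solution mentioned above for (a, b, c) = (2, 3, n) is easy to verify, and a tiny brute-force search (our own sketch; the search bounds are arbitrary and we only record x >= 0) finds nothing besides the trivial solutions and (3, -2, 1).

```python
from math import gcd

# The exceptional solution: 3^2 + (-2)^3 = 1 = 1^n for every n.
assert 3**2 + (-2)**3 == 1**10

# Brute-force coprime solutions of x^2 + y^3 = z^10 with x >= 0,
# |y| <= 50, 1 <= z <= 2.  By the classical Mordell equation
# x^2 = y'^3 + 1, only trivial solutions and (3, -2, 1) appear.
found = []
for z in range(1, 3):
    for y in range(-50, 51):
        rhs = z**10 - y**3
        if rhs < 0:
            continue
        x = round(rhs ** 0.5)
        if x * x == rhs and gcd(gcd(abs(x), abs(y)), z) == 1:
            found.append((x, y, z))

assert set(found) == {(3, -2, 1), (1, 0, 1), (0, 1, 1)}
```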

A fundamental problem in the area of quantum chaos is to understand the distribution of high eigenvalue eigenfunctions of the Laplacian on certain Riemannian manifolds. A particular case which is of interest to number theorists concerns hyperbolic manifolds arising as a quotient of the upper half-plane by a discrete ``arithmetic" subgroup of SL_2(R) (for example, SL_2(Z), and in this case the corresponding eigenfunctions are called Maass cusp forms). In this case, Rudnick and Sarnak have conjectured that the high energy eigenfunctions become equi-distributed. I will discuss some recent progress which has led to a resolution of this conjecture, and also on a holomorphic analog for classical modular forms. I will not assume any familiarity with these topics, and the talk should be accessible to graduate students.

This talk will discuss mathematical problems which are challenging because they involve functions of a very large number of variables. Such problems arise naturally in learning theory, partial differential equations, and numerical models depending on parametric or stochastic variables. They typically result in numerical difficulties due to the so-called “curse of dimensionality”. We shall explain how these difficulties may be handled in various contexts, based on two important concepts: (i) variable reduction and (ii) sparse approximation.

Tropicalization is a technique that transforms algebraic geometric objects into combinatorial objects. Specifically, it associates a polyhedral complex to each subvariety of an algebraic torus. One may ask which polyhedral complexes arise in this fashion. We focus on curves, which are transformed by tropicalization to immersed graphs. By applying toric geometry and Baker's specialization of linear systems from curves to graphs, we give a new necessary condition for a graph to come from an algebraic curve. In genus 1, and in certain geometric situations, this condition specializes to the well-spacedness condition discovered by Speyer and generalized by Nishinou and Brugalle-Mikhalkin. The techniques in this talk give a combinatorial way of thinking about deformation theory which we hope will have further applications.

This thesis consists of four research papers and one expository note that study factors of point processes in the contexts of thinning and matching.

In “Poisson Splitting by Factors,” we prove that, given a Poisson point process on R^{d} with intensity l, as a deterministic function of the process we can colour the points red and blue so that each colour class forms a Poisson point process on R^{d}, with any given pair of intensities summing to l; furthermore, the function can be chosen as an isometry-equivariant finitary factor (that is, if an isometry is applied to the points of the original process, the points are still coloured the same way). Thus, using only local information, without a central authority or additional randomization, the points of a Poisson process can be split into two groups, each of which is still Poisson.

In “Deterministic Thinning of Finite Poisson Processes,” we investigate similar questions for Poisson point processes on a finite volume. In this setting we find that, even without considerations of equivariance, thinning cannot always be achieved as a deterministic function of the Poisson process, and the existence of such a function depends on the intensities of the original and resulting Poisson processes.

In “Insertion and Deletion Tolerance of Point Processes,” we define for point processes a version of the concept of finite energy. This simple concept has many interesting consequences, which we explore in the contexts of the Boolean continuum percolation model, Palm theory, and stable matchings of point processes.

In “Translation-Equivariant Matchings of Coin-Flips on Z^{d},” as a factor of i.i.d. fair coin-flips on Z^{d}, we construct perfect matchings of heads and tails and prove power-law upper bounds on the expected distance between matched pairs.
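For contrast with the deterministic factors constructed in the thesis, the classical randomized splitting is a short simulation. The sketch below is our own illustration (all names and the 1-dimensional setting are our choices): colouring each point red independently with probability l1/l yields Poisson processes of intensities l1 and l - l1.

```python
import math
import random

random.seed(0)

def poisson_count(mean):
    """Knuth's method: count uniforms until their running product
    drops below exp(-mean); the count minus one is Poisson(mean)."""
    limit, k, prod = math.exp(-mean), 0, 1.0
    while prod > limit:
        prod *= random.random()
        k += 1
    return k - 1

def split(points, p_red):
    """Classical randomized splitting: colour each point red
    independently with probability p_red."""
    red, blue = [], []
    for pt in points:
        (red if random.random() < p_red else blue).append(pt)
    return red, blue

# A Poisson process of intensity 2 on [0, 100], via the
# conditional-uniform description.
volume, rate = 100.0, 2.0
n = poisson_count(rate * volume)
points = sorted(random.uniform(0, volume) for _ in range(n))

red, blue = split(points, 0.25)  # intensities 0.5 and 1.5

# The two colour classes partition the original process.
assert sorted(red + blue) == points
```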

In the expository note “A Nonmeasurable Set from Coin-Flips,” using the notion of an equivariant function, we give an example of a nonmeasurable set in the probability space for an infinite sequence of coin-flips.

The Hitchin fibration is a map M ---> A, where M is the moduli space of Higgs bundles on a curve over C and A = Spec(C[M]). In this talk, I will define this fibration for G = GL_n bundles and hint at its extensions to SL_n and PGL_n bundles. Time permitting, I will discuss the fibres of the Hitchin map and their appearance in Ngo's proof of the Fundamental Lemma.

I aim to explain a recent paper of my collaborator Bai-Ling Wang in which he proves that there is a generalisation of the Baum-Douglas geometric cycles which realise ordinary K-homology classes to the case of twisted K-homology. We propose that these twisted geometric cycles are D-branes in string theory. There is an analogous picture for manifolds that are not string.

We classify bifurcations of the asymmetric states from a family of symmetric states in the focusing (attractive) Gross-Pitaevskii equation with a symmetric double-well potential. Depending on the shape of the potential, both supercritical and subcritical pitchfork bifurcations may occur. We also consider the limit of large energies and show that the asymmetric states always exist near a non-degenerate extremum of the symmetric potential. These states are stable (unstable) in the case of subcritical nonlinearity if the extremum is a minimum (a maximum). All states are unstable for large energy in the case of supercritical nonlinearity. This is a joint work with E. Kirr and P. Kevrekidis.
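For orientation, a common form of the setting is sketched below; the notation and normalization are our own assumptions for illustration, not taken from the abstract.

```latex
% Stationary focusing Gross-Pitaevskii equation with a symmetric
% double-well potential V (1-d form assumed for illustration):
\[
  -u''(x) + V(x)\,u(x) - |u(x)|^{2}\,u(x) = \mu\,u(x),
  \qquad V(-x) = V(x),
\]
% where mu is the chemical potential.  Symmetric states satisfy
% u(-x) = u(x); at a pitchfork bifurcation a branch of asymmetric
% states with u(-x) \ne u(x) appears.
```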

Abstract: This will be a mostly expository talk on the new 3-manifold invariant, Heegaard-Floer homology, developed in the last decade in a series of remarkable papers by Ozsvath and Szabo. Emphasis will be on the striking applications of the theory.

We discuss channels for which the input is constrained to be from a given set of D-dimensional arrays over a finite alphabet. Such a set is called a constraint. An encoder for such a channel transforms arbitrary arrays over the alphabet into constrained arrays in a decipherable manner. The rate of the encoder is the ratio of the size of its input to the size of its output. The capacity of the channel or constraint is the highest achievable rate of any encoder for the channel.
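In one dimension this capacity is classically computable as the base-2 logarithm of the spectral radius of a transfer matrix. The sketch below is our own illustration (not one of the thesis's results), for the (1, infinity) runlength-limited constraint: binary sequences with no two adjacent 1s.

```python
from math import log2, sqrt

def dominant_eigenvalue(matrix, iterations=200):
    """Power iteration for the spectral radius of a small
    irreducible nonnegative matrix, given as a list of rows."""
    n = len(matrix)
    v = [1.0] * n
    value = 1.0
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n))
             for i in range(n)]
        value = max(w)
        v = [entry / value for entry in w]
    return value

# Transfer matrix for "no two adjacent 1s": states = last bit
# emitted; from state 0 we may emit 0 or 1, from state 1 only 0.
A = [[1, 1],
     [1, 0]]

# Capacity = log2 of the spectral radius = log2 of the golden ratio.
capacity = log2(dominant_eigenvalue(A))
assert abs(capacity - log2((1 + sqrt(5)) / 2)) < 1e-9
```

The same transfer-matrix recipe applies to any 1-dimensional constraint presented by a finite labelled graph; the multidimensional capacities discussed in the abstract are far harder, which is part of the point of the thesis.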

Given a binary D-dimensional constraint, a D-dimensional array with entries in {0, 1, *} is called “valid”, for the purpose of this abstract, if every “filling” of the ‘*’s in the array with ‘0’s and ‘1’s, independently, results in an array that belongs to the constraint. The density of ‘*’s in the array is called the insertion rate. The largest achievable insertion rate in arbitrarily large arrays is called the maximum insertion rate. An unconstrained encoder for a given insertion rate transforms arbitrary binary arrays into valid arrays having the specified insertion rate. The tradeoff function essentially specifies, for a given insertion rate, the maximum rate of an unconstrained encoder for that insertion rate.

Given a 1-dimensional constraint, one can consider the D-dimensional constraint formed by collecting all the D-dimensional arrays for which the original 1-dimensional constraint is satisfied on every row in a direction along an axis. The sequence of capacities of these D-dimensional constraints has a limit as D approaches infinity, sometimes called the infinite-dimensional capacity.

As time permits, we will present some of our results: we computed the exact capacity of two families of multidimensional constraints; we generalized a known method for obtaining lower bounds on the capacity for a certain class of 2-dimensional constraints, and improved the best known bounds for a few constraints of this class; we determined the tradeoff function for a certain family of 1-dimensional constraints; and finally, we partially answered a question of Poo et al. by proving that for a large class of 1-dimensional constraints with maximum insertion rate 0, the infinite-dimensional capacity equals 0 as well.

Note for Attendees

Tea & cookies afterwards! [Rescheduled from March 18.]