Department of Statistics Seminars


Fast goodness-of-fit tests for copulas, Dr. Ivan Kojadinovic
3rd Generation Bioinformatics - it's up to you guys, Prof. Allen Rodrigo
Near Matches and Applications, Prof. Sreenivasa Jammalamadaka
Some principles of flows and steps in designing tertiary statistics curricula for learning, Prof. Helen MacGillivray
Spatial-temporal Poisson cluster models of rainfall: Applications and further developments, Dr. Paul Cowpertwait
Estimating Diagnostic Test Likelihood Ratios, Prof. David Matthews
On vector generalized linear and additive models and all that, Dr. Thomas Yee
Developing Statistical Perception, Prof. Cliff Konold
Mixtures, modes, and clusters, Prof. Bruce Lindsay
Resampling methods in change-point analysis, Prof. Claudia Kirch
Exploring student histories, Dr. Paul Murrell
Viewing R Objects, Dr. Paul Murrell
Monitoring Survival Time Data, Assoc. Prof. Stefan Steiner
A hierarchical state-space model of introduced rat population dynamics, Dr. James Russell
Some adaptive MCMC algorithms, A. Prof. Renate Meyer
(STATISTICAL PHYSICS/ANALYSIS SEMINAR) The Critical Temperature of Dilute Bose Gases, Dr. Daniel Ueltschi
plyr: a design pattern for data analysis, Dr. Hadley Wickham
Bayesian inference in population genetics and phylogenetics, A. Prof. Alexei Drummond
An Analysis of the Impact of the Availability of NCEA Standards Upon Success at Achieving UE, Dr. Rolf Turner
Generating synthetic micro data from published marginal tables and confidentialised files, Professor Alan Lee
Impartial-culture asymptotics: a central limit theorem for manipulation of elections, Geoffrey Pritchard
What's in a (Kiwi) name?, A. Prof. David O'Sullivan
Visualisation toolkit, GUI architectures, and their use in statistical graphics, Derek Law
The climate history of New Zealand reconstructed from speleothems and kauri tree rings: enhancements through the application of advanced statistics, Maryann Pirie
Insurance and Modern Finance, Prof. Tom Salisbury
A multi-site stochastic rainfall model, Dr Xiaogu Zheng
An Institute for Social Science Research? Queensland experience, Auckland prospects, Professor Mark Western
Cyclic Cellular Automata : A Tool for Self-organizing Sleep-Wake Protocols in Sensor Networks, Professor Ed G. Coffman, Jr.
Fast goodness-of-fit tests for copulas
Dr. Ivan Kojadinovic

Speaker: Dr. Ivan Kojadinovic

Affiliation: U. Auckland

When: Thursday, 26 November 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

Copulas are increasingly used to model multivariate distributions with continuous margins. The first half of the talk will be devoted to a non-technical presentation of the principles behind this very general approach and to the different steps involved in the modeling process. In the second half, recent results on goodness-of-fit testing for copulas will be presented. This is joint work with Jun Yan (University of Connecticut) and Mark Holmes.

3rd Generation Bioinformatics - it's up to you guys
Prof. Allen Rodrigo

Speaker: Prof. Allen Rodrigo

Affiliation: U. Auckland

When: Thursday, 12 November 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

Near Matches and Applications
Prof. Sreenivasa Jammalamadaka

Speaker: Prof. Sreenivasa Jammalamadaka

Affiliation: U.C. Santa Barbara

When: Thursday, 5 November 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

When two judges rank the same n objects, we say a ``near match of order k'' occurs on the i-th object if their ranks for this object are within k of each other. Of interest is the number of near matches in such a context, and its large-sample distribution. Applications to a nonparametric test in randomized block designs and to a new measure of association will be presented, along with some efficiency comparisons.

Some principles of flows and steps in designing tertiary statistics curricula for learning
Prof. Helen MacGillivray

Speaker: Prof. Helen MacGillivray

Affiliation: Queensland University of Technology

When: Thursday, 29 October 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 279, Science Centre


We propose some principles of flow and steps in progression in teaching tertiary statistics, and discuss their roles in facilitating student learning. Many excellent principles for teaching statistics have been advocated and developed over the past two decades. These include data- and context-driven approaches using real data and situations with which students can identify; emphasis on concepts and statistical thinking; use of technology and graphics; incorporation of active and experiential learning; and development of rich and alternative assessment methods. Calls for tertiary educators to identify learning objectives, and to align assessment with those objectives, appear in both general and discipline-specific higher education literature. Although there is strong awareness of the importance of the `story' in learning statistics, less explicit attention has been given to course progression and the alignment of progression, of both content and learning stages, with objectives. Such attention is particularly important in statistics, with its conceptual, interpretative and communication demands, underpinned by sound quantitative models, techniques and problem-solving. Combining our proposed additional principles with those described above, we show how the same principles can give rise to distinct `first' courses in tertiary statistics through alignment with slightly different learning objectives. We also discuss the advantages of overt awareness of these principles in constructing later courses for optimal student learning. We advocate that greater explicit attention to principles of flow and appropriately-spaced learning stages will assist in the next steps in statistical education reform, in considering progression of content, of development of student learning and of advancement beyond the first course.

Spatial-temporal Poisson cluster models of rainfall: Applications and further developments
Dr. Paul Cowpertwait

Speaker: Dr. Paul Cowpertwait

Affiliation: Massey U. (Albany)

When: Thursday, 22 October 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

Spatial-temporal models of rainfall based on Poisson cluster processes are discussed. The application of the models in large urban drainage engineering projects (e.g. Auckland City, Glasgow, and the Thames, London) is described. Further developments based on superposing multiple Poisson processes to represent different types of precipitation (e.g. convective and stratiform rain) are then given. Ways of reducing the number of model parameters for multiple types of storms are considered, including the use of a continuous probability distribution for storm type z and functional relationships between key parameters based on z. Using a uniform distribution for z, statistical properties up to third order are derived and used to fit a Neyman-Scott Poisson cluster model to a 60-year record of hourly rainfall data taken from a site near Wellington. The performance of the fitted model is assessed by comparing observed and simulated extreme values over a range of time scales.
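The generative mechanism of a temporal Neyman-Scott process can be sketched in a few lines. This is an illustrative simulation only, not the speaker's fitted model; the function names and all parameter values are assumptions, not the estimates obtained for the Wellington record:

```python
import math
import random

def poisson_draw(mu):
    # Knuth's method for a Poisson(mu) variate (stdlib-only)
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def neyman_scott(T, storm_rate=0.02, cells_mean=5.0, beta=0.1,
                 eta=0.5, mean_depth=2.0):
    """Simulate rain cells on [0, T] hours: storm origins form a Poisson
    process; each storm spawns a Poisson number of cells whose start times
    lag the origin exponentially; cells have exponential durations and
    exponential intensities. Returns (start, end, intensity) triples."""
    cells, t = [], 0.0
    while True:
        t += random.expovariate(storm_rate)   # next storm origin
        if t > T:
            break
        for _ in range(poisson_draw(cells_mean)):
            start = t + random.expovariate(beta)
            duration = random.expovariate(eta)
            cells.append((start, start + duration,
                          random.expovariate(1.0 / mean_depth)))
    return cells

def intensity_at(cells, time):
    # total intensity = sum of intensities of the cells active at `time`
    return sum(x for (s, e, x) in cells if s <= time < e)
```

Aggregating `intensity_at` over hourly windows gives the kind of simulated series whose extreme values can be compared with observations.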

Estimating Diagnostic Test Likelihood Ratios
Prof. David Matthews

Speaker: Prof. David Matthews

Affiliation: U. Waterloo

When: Thursday, 8 October 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

Let p1 and p2 represent the individual probabilities of response to a particular diagnostic test in two subpopulations consisting of diseased and disease-free individuals, respectively. In the terminology of diagnostic testing, p1 is called the sensitivity of the given test, and p2 is the probability of a false-positive error, i.e., the complement of the test specificity, 1-p2.

Since 1975, the ratios r+ = p1/p2 and r- = (1-p1)/(1-p2) have been of particular interest to advocates of evidence-based medicine. These functions of sensitivity and specificity have been called the ``likelihood ratio of a positive test result'' and the ``likelihood ratio of a negative test result'', respectively.

We describe methods of deriving individual interval estimates of r+ and r-, and a simultaneous confidence region for both ratios. Using various performance characteristics of these confidence intervals, we compare our estimates with methods of interval estimation in common use. Via examples from various studies of diagnostic tests, we illustrate the merits of our computationally simple methods of deriving interval estimates of these medically relevant characteristics of diagnostic tests. As time permits, various extensions of the simplest version of this problem will also be discussed and illustrated with examples.
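For reference, the standard log-scale (delta-method) intervals that new proposals in this area are usually compared against can be written down directly from a 2x2 study table. This is a generic sketch of those conventional intervals, not the method proposed in the talk; the function name and layout are assumptions:

```python
import math

def likelihood_ratios(tp, fn, fp, tn, z=1.96):
    """Point estimates and conventional log-scale Wald intervals for
    r+ = p1/p2 and r- = (1-p1)/(1-p2), from a 2x2 table with
    tp/fn counts among diseased and fp/tn counts among disease-free."""
    sens = tp / (tp + fn)   # p1: P(positive | diseased)
    fpr = fp / (fp + tn)    # p2: P(positive | disease-free)
    r_pos = sens / fpr
    r_neg = (1 - sens) / (1 - fpr)
    # Delta-method standard errors of log r+ and log r-
    se_pos = math.sqrt((1 - sens) / tp + (1 - fpr) / fp)
    se_neg = math.sqrt(sens / fn + fpr / tn)
    ci = lambda r, se: (r * math.exp(-z * se), r * math.exp(z * se))
    return (r_pos, ci(r_pos, se_pos)), (r_neg, ci(r_neg, se_neg))
```

For example, a study with 90/10 diseased and 20/80 disease-free test results gives r+ = 4.5 and r- = 0.125, with intervals on either side of each estimate.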

On vector generalized linear and additive models and all that
Dr. Thomas Yee

Speaker: Dr. Thomas Yee

Affiliation: U. Auckland

When: Thursday, 1 October 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

The first half of this talk will give a broad overview of the project I have been working on over the last decade or so. It will mainly survey the classes of vector generalized linear and additive models (VGLMs/VGAMs), which are very large and contain many statistical models: for example, univariate and multivariate distributions, categorical data analysis, time series, survival analysis, extreme value analysis, mixture models, correlated binary data, and nonlinear regression. The framework will be tied in with my VGAM package for R. There are some natural extensions, e.g., reduced-rank ideas that perform ordination (a useful technique in ecology). The second half of this talk will focus on two sub-topics: the xij problem, and quantile/expectile regression. The former is useful for fitting a multinomial logit model where there are covariates specific to each alternative. Applications of the latter are becoming widespread in many fields.

Developing Statistical Perception
Prof. Cliff Konold

Speaker: Prof. Cliff Konold

Affiliation: University of Massachusetts

When: Wednesday, 23 September 2009, 3:00 pm to 4:00 pm

Where: Statistics Seminar Room 222, Science Centre

Statistics is typically portrayed as a set of methods for collecting and analyzing data. But more fundamentally, statistics is a way of seeing the world. I describe our efforts to understand the perceptions and ideas young students bring to bear on data, and how we work to shape students' perceptions to make them more expert-like.

To facilitate learning, we have built into the data visualization software, TinkerPlots, tools that support and then build on novice perceptions. More recently, we have added modeling capabilities which students use to create and explore ``worlds'' of their own making. In this way, we hope to develop the crucial understanding that statistics is not only about seeing, but also about questioning what we see.

Mixtures, modes, and clusters
Prof. Bruce Lindsay

Speaker: Prof. Bruce Lindsay

Affiliation: Penn. State U.

When: Thursday, 17 September 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

This talk will have three main sections. The first part of the talk will be concerned with the mathematical properties of a mixture of normal densities, focusing on the modes of the mixture. The second part will then describe a method of clustering data hierarchically based on the modes of a kernel density estimator.

The final section will be a sweep across a range of inferential issues that arise when the clustering method is used in higher dimensions, where the role of bandwidth parameters becomes more crucial. This last part is ongoing research, and so will have multiple open questions.

Resampling methods in change-point analysis
Prof. Claudia Kirch

Speaker: Prof. Claudia Kirch

Affiliation: Technical University Kaiserslautern

When: Thursday, 27 August 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

Real-life data series are frequently not stable but exhibit changes in parameters at unknown time points. We encounter changes (or the possibility thereof) every day in such diverse fields as economics, finance, medicine, geology, physics and so on. Therefore the detection, location and investigation of changes is of special interest. Change-point analysis provides the statistical tools (tests, estimators, confidence intervals). Most of the procedures are based on distributional asymptotics; however, convergence is often slow, or the asymptotics do not sufficiently reflect dependence. Using resampling procedures, we obtain better approximations for small samples which take possible dependency structures more efficiently into account.

In this talk we give a short introduction into change-point analysis. Then we investigate more closely how resampling procedures can be applied in this context. We have a closer look at a classic location model with dependent data as well as a sequential location test, which has become of special interest in recent years.
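As a concrete illustration of the resampling idea in the simplest i.i.d. setting, a CUSUM-type test for a change in mean can be calibrated by permutation. This sketch is valid only under exchangeability; the dependent-data and sequential settings discussed in the talk need block-type resampling instead. The function names are illustrative:

```python
import random

def cusum_stat(x):
    # max |partial sum of centred data| / sqrt(n); large values suggest
    # a change in the mean at some unknown time point
    n = len(x)
    mean = sum(x) / n
    s, best = 0.0, 0.0
    for xi in x[:-1]:
        s += xi - mean
        best = max(best, abs(s))
    return best / n ** 0.5

def permutation_pvalue(x, B=199, seed=0):
    # approximate the null distribution of the CUSUM statistic by
    # permuting the observations (no change => order is irrelevant)
    rng = random.Random(seed)
    obs = cusum_stat(x)
    y = list(x)
    count = sum(1 for _ in range(B)
                if (rng.shuffle(y) or cusum_stat(y)) >= obs)
    return (count + 1) / (B + 1)
```

A series with a clear mean shift (e.g. 30 zeros followed by 30 threes) yields a small p-value, while the permuted replicates mimic the no-change null.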

Exploring student histories
Dr. Paul Murrell

Speaker: Dr. Paul Murrell

Affiliation: U. Auckland

When: Thursday, 13 August 2009, 12:00 pm to 1:00 pm

Where: Statistics Seminar Room 222, Science Centre

The prerequisites for the paper STATS 220 are EITHER one stage 1 Statistics paper OR one stage 1 Computer Science paper. Because STATS 220 contains several computing topics, there has been some anxiety that final results would reveal a bimodal distribution, with the Comp Sci students doing well and the Stats students doing not so well, but this doomsday scenario has never actually eventuated.

Does this mean that the quality of the teaching in STATS 220 is so high that the students' past history counts for naught? Or is it possible that the students' backgrounds cannot be so neatly classified as the Computer Scientists versus the Statisticians? Without data, it is very hard to tell.

This year, it became possible to obtain reports on students' histories - which papers they have taken in the past - which provided an opportunity to explore this question in a rational manner.

This talk will outline a simple exploration of the backgrounds of the students in the 2009 STATS 220 class. The focus will be on problems (and solutions) with data preparation and on graphical displays of the data.

Viewing R Objects
Dr. Paul Murrell

Speaker: Dr. Paul Murrell

Affiliation: U. Auckland

When: Thursday, 30 July 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

When working with a data set in R, it is only practical to print a small number of data values at one time. If a data set is large, this means that it is only practical to view a small subset of the data.

Alternatively, we are forced to view only numerical summaries of the data. This talk describes a prototype data viewer for R that provides several features that facilitate the viewing of raw data values for even moderately large data sets: interactive zooming, which allows more values to be viewed at once; a thumbnail view, which shows the overall "shape" of the data set; and the ability to load into memory only the values that are being viewed, which allows larger data sets to be viewed.

Monitoring Survival Time Data
Assoc. Prof. Stefan Steiner

Speaker: Assoc. Prof. Stefan Steiner

Affiliation: Department of Statistics and Actuarial Science, U. Waterloo

When: Friday, 24 July 2009, 12:00 pm to 1:00 pm

Where: Statistics Seminar Room 222, Science Centre

Monitoring medical outcomes is desirable to help quickly detect performance changes. Previous applications have focused mostly on binary outcomes, such as 30-day mortality after surgery. However, in many applications survival time data are routinely collected. In this talk we propose an updating exponentially weighted moving average (EWMA) control chart to monitor risk-adjusted survival times. The updating EWMA (uEWMA) operates in continuous time so scores for each patient always reflect the most up-to-date information. The uEWMA can be implemented based on a variety of survival time models and can be set up to provide an ongoing estimate of a clinically interpretable average patient score. The performance of the uEWMA is shown to compare favorably to competing methods.
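The classic discrete-time EWMA chart that the uEWMA generalises can be sketched briefly. This is the textbook chart, not the continuous-time updating version proposed in the talk; the function name and parameter defaults are illustrative assumptions:

```python
import math

def ewma_chart(scores, lam=0.1, target=0.0, sigma=1.0, L=3.0):
    """Classic discrete-time EWMA chart on a stream of (risk-adjusted)
    patient scores. Returns (statistic, lower limit, upper limit)
    after each observation."""
    z, out = target, []
    for t, x in enumerate(scores, start=1):
        z = lam * x + (1 - lam) * z   # exponentially weighted update
        # exact time-varying control limits for the EWMA statistic
        half = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        out.append((z, target - half, target + half))
    return out
```

On-target scores keep the statistic inside the limits, while a sustained shift in the patient scores eventually pushes it past the upper limit and signals.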

A hierarchical state-space model of introduced rat population dynamics
Dr. James Russell

Speaker: Dr. James Russell

Affiliation: University of California, Berkeley

When: Thursday, 18 June 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

Black rats on Bagaud Island (45ha; Port-Cros National Park, France, Mediterranean Sea) play a keystone ecological role as an introduced species. Rats benefit from the presence of two habitat-dependent seasonal allochthonous resource inputs; invasive fig plants at one site and super-abundant gulls at another. Capture-mark-recapture data of 395 rats spanning two years (14 sessions) across three habitats (81 traps) are used to construct a multi-strata hierarchical state-space model of population dynamics (survival) among the three habitats. Age, habitat and rainfall are treated as covariates on survival and capture probability, as well as individual-based random effects. Whereas age structure can be treated as a fully observed covariate using independent biological data, habitat must be treated as a partially observed covariate. Model construction and analysis within a Bayesian framework are presented and discussed, including intricacies related to model fitting when time between sessions is not constant and covariate attribution is not clear.

Some adaptive MCMC algorithms
A. Prof. Renate Meyer

Speaker: A. Prof. Renate Meyer

Affiliation: Statistics dept. UoA

When: Thursday, 4 June 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

Different strategies have been proposed to improve mixing of Markov Chain Monte Carlo algorithms. These are mainly concerned with customizing the proposal density in the Metropolis-Hastings algorithm to the specific target density. Various Monte Carlo algorithms have been suggested that make use of previously sampled states in defining a proposal density and adapt as they run, hence called 'adaptive' Monte Carlo.

In the first part of this talk, we look at the crucial problem in applications of the Gibbs sampler: sampling efficiently from an arbitrary univariate full conditional distribution. We propose an alternative algorithm, called ARMS2, to the widely used adaptive rejection sampling technique ARS by Gilks and Wild (1992, JRSSC 42, 337-48) for generating a sample from univariate log-concave densities. Whereas ARS is based on sampling from piecewise exponentials, the new algorithm uses truncated normal distributions and makes use of a clever auxiliary variable technique (Damien and Walker, 2001, JCGS 10, 206-15).

Next we propose a general class of adaptive Metropolis-Hastings algorithms based on Metropolis-Hastings-within-Gibbs sampling. For the case of a one-dimensional target distribution, we present two novel algorithms using mixtures of triangular and trapezoidal densities. These can also be seen as improved versions of the all-purpose adaptive rejection Metropolis sampling algorithm (Gilks et al., 1995, JRSSC 44, 455-72) to sample from non-logconcave univariate densities. Using various different examples, we demonstrate their properties and efficiencies and point out their advantages over ARMS and other adaptive alternatives such as the Normal Kernel Coupler.

(Joint work with Francois Perron and Bo Cai)

(STATISTICAL PHYSICS/ANALYSIS SEMINAR) The Critical Temperature of Dilute Bose Gases
Dr. Daniel Ueltschi

Speaker: Dr. Daniel Ueltschi

Affiliation: Warwick University

When: Thursday, 28 May 2009, 11:00 am to 12:00 pm

Where: Statistics Seminar Room 222, Science Centre

The description of interacting systems of quantum bosons is a challenge to both physicists and mathematicians, especially analysts and probabilists. I will explain what Bose-Einstein condensation is, and I will briefly review the literature about the effects of interactions on the critical temperature. Then I will present rigorous bounds. The proofs are based on the Feynman-Kac representation and on estimates of certain integral kernels of Schroedinger operators.

(This is joint work with R. Seiringer.)

plyr: a design pattern for data analysis
Dr. Hadley Wickham

Speaker: Dr. Hadley Wickham

Affiliation: Rice University

When: Thursday, 21 May 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

plyr is a set of tools for a common set of problems: you need to break down a big data structure into manageable pieces, operate on each piece and then put all the pieces back together. For example, you might want to:

* fit the same model to subsets of a data frame

* quickly calculate summary statistics for each group

* perform group-wise transformations like scaling or standardising

* eliminate for-loops in your code

It's already possible to do this with built-in functions (like split and the apply functions), but plyr just makes it all a bit easier with:

* absolutely consistent names, arguments and outputs

* input from and output to data.frames, matrices and lists

* progress bars to keep track of long running operations

* built-in error recovery, and informative error messages

plyr is a codification of a strategy for data analysis: split up the data into individual pieces, apply a summary function to each piece and then join all the pieces back together. I'll show how recognising this simple pattern makes many problems much simpler and much less dependent on the type of data structure.
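The split-apply-combine pattern itself is language-independent. A minimal Python sketch, loosely mimicking plyr's ddply (the helper name is hypothetical, and this captures only the core pattern, none of plyr's consistency or error-recovery features):

```python
from collections import defaultdict

def ddply_like(rows, key, summarise):
    """Split rows (dicts) by rows[key], apply summarise to each piece,
    then combine the per-group results into one list."""
    pieces = defaultdict(list)
    for row in rows:                                  # split
        pieces[row[key]].append(row)
    return [dict({key: k}, **summarise(group))        # apply + combine
            for k, group in sorted(pieces.items())]

rows = [{"g": "a", "x": 1}, {"g": "a", "x": 3}, {"g": "b", "x": 10}]
result = ddply_like(rows, "g",
                    lambda grp: {"mean_x": sum(r["x"] for r in grp) / len(grp)})
# result: [{'g': 'a', 'mean_x': 2.0}, {'g': 'b', 'mean_x': 10.0}]
```

The same three steps cover group-wise model fits, summary statistics and transformations; only the `summarise` function changes.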

Bayesian inference in population genetics and phylogenetics
A. Prof. Alexei Drummond

Speaker: A. Prof. Alexei Drummond

Affiliation: Computational Evolution Group, University of Auckland

When: Thursday, 7 May 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

In the last decade Bayesian inference and Markov chain Monte Carlo have descended on the study of molecular evolution with the promise of a new era of rich statistical models and inference. Our research group has been focused on genealogy-based population genetics (the coalescent; which models the genetic relationships within a population) and molecular phylogenetics (which models genetic relationships between species). These two fields have a number of similarities in their statistical treatment. In recent years there has been an energetic pursuit of inference strategies at the intersection of these two fields. The multi-species coalescent is one such model. We have recently developed a fully Bayesian MCMC sampler for the multi-species coalescent, but several open questions and challenges still remain. I will cover the background of Bayesian coalescent and phylogenetic inference as well as outline our recent work on the multi-species coalescent and some outstanding questions.


An Analysis of the Impact of the Availability of NCEA Standards Upon Success at Achieving UE
Dr. Rolf Turner

Speaker: Dr. Rolf Turner

Affiliation: Starpath Project

When: Friday, 1 May 2009, 1:00 pm to 2:00 pm

Where: Pacific Fale 273-108 (Arts)

The study that I will describe was undertaken by the Starpath Project at the University of Auckland. I will commence my talk by giving a brief introduction to the Starpath Project; what it is and why and when it was set up. I will also describe my role in Starpath and some of the problems, difficulties, and traps for young players that have confronted me as I have found my feet in this new (to me) milieu. Having set the scene I will discuss what I consider to be the main piece of work that I have produced since joining Starpath.

In the process of trolling through the national NCEA data shortly after commencing work with Starpath, I discovered that the number of Level 3 standards attempted by Maori and Pacific students is on average substantially lower than the number attempted by Pakeha and Asian students. This is particularly true of standards from the Approved List of subjects, and appears to have a substantial adverse impact on the chances of Maori and Pacific students' achieving University Entrance. My colleagues and I undertook a study to investigate whether the deficit in the number of standards attempted by Maori and Pacific students is due, at least in part, to the availability (or lack thereof) of standards at schools attended by the majority of these students.

Analysis of the data obtained indicates that there is minimal impact of the number of standards available upon the number of standards attempted. However there is striking evidence that student performance is influenced by (or at least, is related to) the availability of standards. Logistic models for success at achieving UE in terms of various possible predictors reveal intricate relationships between performance, availability of standards, and ethnicity. These relationships will be depicted graphically. The main conclusion of this work is that there appears to be convincing evidence that Maori and Pacific students who, by a certain measure have moderately high academic potential, would be more likely to achieve success if more standards were available to them.

Generating synthetic micro data from published marginal tables and confidentialised files
Professor Alan Lee

Speaker: Professor Alan Lee

Affiliation: Dept. of Statistics, U. Auckland

When: Thursday, 23 April 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

This seminar will focus on work which arose out of a Statistics NZ Official Statistics Research project.

In this talk I will describe several methods for generating synthetic data sets, using a combination of publicly available marginal tables, and micro-data samples. The methods are based on fitting parsimonious statistical models to high-dimensional tables of relative frequencies, and then generating synthetic data from these models.

I will describe a set of R functions which implement the methods under study, and apply the methods to data from the 2001 Census of Population and Dwellings.

Impartial-culture asymptotics: a central limit theorem for manipulation of elections
Geoffrey Pritchard

Speaker: Geoffrey Pritchard

Affiliation: Department of Statistics, The University of Auckland

When: Monday, 30 March 2009, 4:00 pm to 5:00 pm

Where: Maths Seminar Room 401, Science Centre

We consider the problem of manipulation of elections using positional voting rules under Impartial Culture voter behaviour.

The minimum number of voters required to form a manipulating coalition can be expressed as the solution of an integer linear program. In the limiting case of a large electorate, the problem simplifies enough that a central limit theorem can be derived. It is seen that the manipulation resistance of positional rules with 5 or 6 (or more) candidates is quite different from the more commonly analyzed 3- and 4-candidate cases.

(This is part of the Mathematical Social Sciences seminars.)

What's in a (Kiwi) name?
A. Prof. David O'Sullivan

Speaker: A. Prof. David O'Sullivan

Affiliation: Dept. of Geography, SGES, UoA

When: Thursday, 26 March 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

I will report on progress made in extending a names-based method for ethnic-cultural classification to handle the names in the New Zealand electoral roll. This work is in collaboration with the University College London based World Names project, and seeks to apply methods developed on that project to a New Zealand context. Maori and Pacific Island names are rare in the UK setting where this work has been developed, and so this project has presented some challenges which will be discussed. Nevertheless preliminary results looking at spatial patterns of Scottish names and also at Eastern European names in New Zealand show some promise.

Visualisation toolkit, GUI architectures, and their use in statistical graphics
Derek Law

Speaker: Derek Law

Affiliation: Dept. of Statistics, UoA

When: Thursday, 19 March 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

This talk will be separated into two main themes. The first half is an introduction to a 3D data visualisation toolkit prototype. Before the emergence of RGL and GiR, 3D data visualisation was mainly restricted to static images, 3D point-cloud rotation or the grand tour. These two R packages provide a big step forward, allowing users to explore new ways of presenting data in 3D. However, both of these tools suffer from their own limitations.

This data visualisation toolkit prototype, while not in a position to replace the two packages, will offer a way to fill in some of the gaps left by these tools. If time permits, the second half will be an introduction to the MVC (Model-View-Controller) family of GUI architectures, with more emphasis on one called Taligent MVP (Model-View-Presenter). With only a mouse and a keyboard, the amount of interaction we can have with a plot can be very limited. The introduction of GUIs will provide us with a much richer set of interactions, or even the ability to connect with other software like spreadsheets. However, at the same time, GUIs also add a whole new level of complexity to the development of applications. Without the use of an architecture to keep this complexity under control, applications will be neither scalable nor reusable. In particular, plot linking is arguably the most important aspect of highly interactive statistical graphics, and we will look at how these architectures address the issue of data linkage.

The climate history of New Zealand reconstructed from speleothems and kauri tree rings: enhancements through the application of advanced statistics

Speaker: Maryann Pirie

Affiliation: PhD candidate, SGGES and Department of Statistics, University of Auckland

When: Friday, 13 March 2009, 4:00 pm to 5:00 pm

Where: HSB 429

There is a need for increased knowledge of past climates to allow a coherent picture of climate change to be developed and to increase current knowledge of past climate systems. Proxies can help improve our knowledge of past climates. There is a wealth of proxy material in New Zealand, with two of these sources being kauri tree rings and speleothems. Climate reconstructions from these two sources can be enhanced by applying advanced statistical methods to known roadblocks. This seminar will address some of these known roadblocks and outline possible steps for resolving them.

Insurance and Modern Finance
Prof. Tom Salisbury

Speaker: Prof. Tom Salisbury

Affiliation: York University

When: Thursday, 12 March 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

Insurance firms are used to managing risk through the law of large numbers. But in response to changes in the prevalence of traditional pension plans, North American insurers have increasingly offered annuity products that carry significant market risk. The law of large numbers no longer applies in that context, so the hedging techniques of modern finance (based on Brownian motion and stochastic calculus) have entered the actuarial world. I will describe the changing demographics of retirement planning, and then discuss hedging in general, before applying those ideas to embedded options such as Guaranteed Minimum Withdrawal Benefits (GMWBs).


A multi-site stochastic rainfall model
Dr Xiaogu Zheng

Speaker: Dr Xiaogu Zheng

Affiliation: NIWA

When: Tuesday, 3 March 2009, 4:00 pm to 5:00 pm

Where: Statistics Seminar Room 222, Science Centre

In meteorology, it is often of interest, yet known to be difficult, to accurately estimate the conditional probability distribution function of rainfall at multiple sites from meteorological data, and to generate a multi-site rainfall sequence from the estimated model using Monte Carlo methods. Current challenges include the accurate representation of extremal behaviour, the generation of multi-site sequences with realistic spatial dependence, the need to represent realistic levels of inter-annual variability in the generated sequences, and the representation of complex dynamical meteorological structures within a relatively cheap computational framework. No existing rainfall model has been able to meet all of these requirements.

In this talk, I will discuss these challenges in detail, present a rainfall model recently developed at NIWA, New Zealand, and demonstrate through a case study how the proposed model can successfully meet the above requirements in one framework. Issues of future extensions and mathematical rigour will also be briefly discussed. Comments are extremely welcome.

[Dr. Zheng was awarded the Edward Kidson Medal in 2007 by the Meteorological Society of New Zealand for outstanding contributions to studying patterns for seasonal forecasting of rainfall in New Zealand.]

An Institute for Social Science Research? Queensland experience, Auckland prospects
Professor Mark Western

Speaker: Professor Mark Western

Affiliation: ISSR, University of Queensland

When: Monday, 16 February 2009, 10:30 am to 11:30 am

Where: Business School, 12 Grafton Road, Case Room 3 (260-055), Level 0 (Wynyard Street level, one down from Grafton; follow the signs)

An Institute for Social Science Research (ISSR) has recently been established at the University of Queensland, and one is being actively canvassed at Melbourne. There are also examples elsewhere. What is the rationale for an ISSR? How has it worked at Queensland? Is there an argument for establishing one at Auckland? Professor Mark Western, Director of the ISSR at UQ, will outline the rationale and experience at UQ, and this will open the issue for us to consider.

Cyclic Cellular Automata : A Tool for Self-organizing Sleep-Wake Protocols in Sensor Networks
Professor Ed G. Coffman, Jr.

Speaker: Professor Ed G. Coffman, Jr.

Affiliation: Department of Electrical Engineering, Columbia University

When: Monday, 2 February 2009, 3:00 pm to 4:00 pm

Where: Statistics Seminar Room 222, Science Centre

The real-time scheduling of energy-conserving periods of sensor networks presents problems vital to the maximization of system lifetimes. We will describe a scalable, easily implemented, self-organizing sleep-wake protocol which generalizes concepts from cellular automata theory. (The presentation is tutorial and entails no sophisticated mathematics and no prior knowledge of sensor networks.)

The system is fault tolerant, can be operated in an asynchronous mode, works seamlessly around obstacles in the sensor field, and is highly effective even in the case of intelligent intruders. System performance as it varies with parameters of the protocol is a focus of the talk, and is brought out by detailed experimental studies.
