Department of Statistics


2013 Seminars


» Spatially explicit capture-recapture using acoustic detection data, Ben Stevenson
» Changepoint inference for network data, Dr. Elena Yudovina
» Stochastic control of Markov chains based on complete and incomplete information, Prof. Boris Miller
» User Equilibria in systems of Processor Sharing queues, Niffe Hermansson
» Incorporating time in line transect and capture-recapture models, Prof. David Borchers
» The Orthogonalised Multivariate Test: when it works and when it don't., A. Prof. Brian McArdle
» A small matter of scale: Translating research from the laboratory to application, Prof. Margaret Hyland
» On nested case control and countermatched designs efficiency., Claudia Rodriguez
» Allelic drop-out in forensic genetics - importance and estimation, Dr Torben Tvedebrink
» Multiple Behaviour Statistical Implicative Analysis, Thomas Delacroix
» Working with Synthetic Data, Dr. Barry Milne and Jessica McLay
» Inference with Counterfactuals, Prof. Peter Davis and Roy Lay-Yee
» [Note change of date and room] Bayes In Space: Cosmology meets Statistics, Prof. Richard Easther
» Bayesian data assimilation for epidemics -- making the most of the available data, Dr Chris Jewell
» Random Trajectory Models, A. Prof. Jeff Picka
» (**note change of title) Backward Coupling and Perfect Simulation for Markov Chains and Stochastic Recursions, Prof. Sergey Foss
» A Piece of the Web Graphics Puzzle, Simon Potter
» Visualizing Large Datasets, Prof. Antony Unwin
» Research at NIWA's Bream Bay Aquaculture Park - the challenge of new species, Dr. Jane Symonds
» The design of the PoPART trial: a community randomized trial of the population effect of universal HIV testing and immediate antiretroviral therapy to reduce HIV Transmission., Dr Deborah Donnell
» A class of distribution free tests for discrete distributions, Prof. Estate Khmaladze
» Accounting for uncertainties when placing fossils on a tree, Sergii Krasnozhon
» A Remark on "Exact" Monte Carlo p-values., Dr. Rolf Turner
» Mosaic Mind Games - Visualising Categorical Data, Prof. Antony Unwin
» Generalized Volatility-Stabilized Processes, Radka Pickova
» Joint Statistics/Mathematics Colloquium: Large deviations with applications to random graphs, Prof. S.R.Srinivasa Varadhan
» Analysis of DNA Mixtures with Artefacts, Prof. Robert Cowell
» Dynamic Thresholds and a Summary ROC Curve: Assessing the Prognostic Accuracy of Longitudinal Markers, Dr. Paramita Chaudhuri
» Shape-restricted density estimation with applications to financial data, Yu Liu
» Assessing price of anarchy in a market for wind power, Dr. Elena Yudovina
» Ordinal data and 'ordinal' analyses, Prof. Thomas Lumley
» Open Data in New Zealand + Interactive Web-based Graphics (NOTE CHANGE OF TIME), Jimmy Oh
» Scaling problems in MCMC, Prof. Gareth Roberts
» Optimal Threshold Strategies for Queues in Series, Dr. Bernardo D'Auria

Spatially explicit capture-recapture using acoustic detection data

Speaker: Ben Stevenson

Affiliation: U. St. Andrews

When: Wednesday, 18 December 2013, 4:00 pm to 5:00 pm

Where: 303-B11

Traditional capture-recapture (CR) approaches to estimating animal abundance ignore an obvious spatial component of capture probability; individuals that are based close to traps are more likely to be captured than those that live far away. Spatially explicit capture-recapture (SECR) methods have recently been developed, and these account for the locations of detected animals. One particular advantage is that they allow for acoustic animal detection (using human observers or microphones) rather than detection by physical capture, and estimates of density or abundance can then be obtained from a single sampling session. This can result in surveys that are both cheaper and far more time-efficient than those that collect traditional CR data. The acoustic nature of the resulting data poses new statistical challenges, and this talk will focus on the associated recent developments in SECR methodology.

http://creem2.st-andrews.ac.uk/staffProfile.aspx?sunID=bcs5

Changepoint inference for network data

Speaker: Dr. Elena Yudovina

Affiliation: U. Michigan

When: Thursday, 12 December 2013, 4:00 pm to 5:00 pm

Where: 303.B11

Folk political science says that the US Congress abruptly became more bipartisan at some point in the past; can we construct a model to make this precise? Consider a model of a network where the edges can be present or absent, and their status evolves according to some mechanism. Suppose the mechanism changed at some point; looking back, how well can we estimate this point? I will discuss the various scalings involved, and will probably weasel out of showing data.

http://www-personal.umich.edu/~yudovina/

Stochastic control of Markov chains based on complete and incomplete information

Speaker: Prof. Boris Miller

Affiliation: Monash U.

When: Thursday, 28 November 2013, 4:00 pm to 5:00 pm

Where: 303.B11

In recent years, Controlled Markov Chains (CMC) have become a useful mathematical tool for the solution of stochastic optimal control problems with constraints. Models of CMC arise in stock and production management, internet congestion control, renewable resource management and many other areas. The most attractive feature of this class of models is that it reduces a complex stochastic optimization problem to the solution of the dynamic programming equation (DPE), which in this case takes the form of a system of ordinary differential equations. This system may in turn be solved numerically on existing computers, avoiding the usual difficulties inherent in the dynamic programming equation for controlled systems described by stochastic differential equations.

In this talk the following aspects of CMC will be covered:

* Martingale representation of CMC

* Tensor form representation of Connected CMC

* Dynamic programming equation and existence of the solution for the optimal control problem with constraints

* Approaches to control problems with incomplete information

* Adaptive control of MC

* Applications
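As a minimal sketch of the reduction described above, consider a hypothetical two-state chain with two actions; the finite-horizon dynamic programming equation is then a pair of ordinary differential equations that can be integrated backwards in time, maximising over actions at each step. All names and numbers below are invented for illustration and are not taken from the talk.

```python
import numpy as np

# Hypothetical two-state, two-action controlled Markov chain.  Each action
# selects a generator matrix (rows sum to zero) and a vector of running rewards.
Q = {
    "slow": np.array([[-1.0, 1.0], [2.0, -2.0]]),
    "fast": np.array([[-4.0, 4.0], [3.0, -3.0]]),
}
reward = {"slow": np.array([1.0, 0.2]), "fast": np.array([0.6, 0.5])}

T, n_steps = 5.0, 5000
dt = T / n_steps
V = np.zeros(2)          # terminal condition V(T, s) = 0
policy = None

# Integrate the dynamic programming (Bellman) ODE backwards in time:
#   -dV/dt(s) = max over actions a of [ reward_a(s) + (Q_a V)(s) ]
for _ in range(n_steps):
    candidates = {a: reward[a] + Q[a] @ V for a in Q}
    V = V + dt * np.maximum(candidates["slow"], candidates["fast"])
    policy = {s: max(Q, key=lambda a: candidates[a][s]) for s in (0, 1)}

print("Value function at time 0:", V)
print("Optimal action in each state near time 0:", policy)
```

The same backward integration extends directly to larger state spaces and action sets; only the sizes of the generator matrices and reward vectors change.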

http://www.cmss.monash.edu.au/Members/Boris.php

User Equilibria in systems of Processor Sharing queues

Speaker: Niffe Hermansson

Affiliation: U. Auckland

When: Monday, 25 November 2013, 11:00 am to 12:00 pm

Where: 303.B11

(PhD provisional talk)

That a lack of both central control and cooperation can lead to suboptimal as well as unstable system behaviour is well known, and intuitively sensible. This talk details the study of this situation, selfish routing, in the context of two parallel processor sharing (-/M/1-PS) queues.

In addition to the static properties of the system, such as the service rates at the two queues, the users also base their decisions on the current state of the system. This means that each state of the system is associated with a decision rule. The central object of interest in this study, the Decision Policy, is created by aggregating the decision rules for all possible states of a system. Different formulations of Decision Policies have been studied with interesting, and sometimes quite counter-intuitive results.

https://www.stat.auckland.ac.nz/showperson?firstname=Niffe&surname=Hermansson

Incorporating time in line transect and capture-recapture models

Speaker: Prof. David Borchers

Affiliation: U. St. Andrews

When: Tuesday, 19 November 2013, 4:00 pm to 5:00 pm

Where: 303.B11

Line transect (LT) and capture-recapture (CR) are two of the most widely used methods of estimating animal abundance. Both involve sampling in space, although the spatial component of CR surveys has only very recently been incorporated in CR models. LT surveys involve sampling continuously in time, as do CR surveys with passive detectors (like camera traps). However, standard LT models treat the survey as if it were instantaneous, and standard CR models treat survey data as if they arose from a series of instantaneous samples. Incorporating detection time in LT models leads to a kind of survival model in which the detection function plays the role of the survival function, and in which the number of censored subjects is unknown. Incorporating capture time in CR models leads to a spatial arrival process model with unknown intensity that varies spatially and temporally. Modelling the temporal component of LT and CR surveys makes the methods more powerful. In the case of LT surveys it provides a means of dealing with stochastic animal availability and uncertain detection at perpendicular distance zero, while in the case of CR surveys it removes the subjectivity associated with cutting the survey period into discrete occasions when sampling is actually continuous, provides a general means of modelling catchability that varies with time, and yields a likelihood for single-catch trap data.

http://www.creem.st-and.ac.uk/dlb/dlb.html

The Orthogonalised Multivariate Test: when it works and when it don't.

Speaker: A. Prof. Brian McArdle

Affiliation:

When: Thursday, 7 November 2013, 4:00 pm to 5:00 pm

Where: 303.310

I will introduce the orthogonalised multivariate test, a way of applying univariate tests to multivariate data. I will illustrate its potential using ANOVA and the Kruskal-Wallis non-parametric ANOVA. I will show the conditions under which it performs well or badly, with an explanation, and then show that it can also be applied to linear mixed models for heterogeneous data and nested designs. Other extensions of the method will be outlined, and a case study used as an example of its potential.

https://www.stat.auckland.ac.nz/showperson?firstname=Brian&surname=McArdle

A small matter of scale: Translating research from the laboratory to application

Speaker: Prof. Margaret Hyland

Affiliation: U. Auckland

When: Thursday, 17 October 2013, 4:00 pm to 5:00 pm

Where: 303.B11

Translating engineering research from the fundamental sphere into practice brings many challenges. Not the least of these is how to extrapolate what is observed at the nano-scale under controlled conditions using gram-sized quantities to an industrial environment, and often tonnes of production, where the conditions may not even be measured, let alone minutely controlled. Successful translation is rarely achieved in a linear step-by-step transition from fundamental to applied. There is frequently a leap based on a critical insight and courage on the part of the adapter, even in the face of less than complete knowledge of how a system will work. Drawing from my own experience and that of colleagues in the Light Metals Research Centre, I will look at models of working across the fundamental-applied boundary.

http://www.ecm.auckland.ac.nz/uoa/margaret-hyland

On nested case control and countermatched designs efficiency.

Speaker: Claudia Rodriguez

Affiliation: U. Auckland

When: Thursday, 26 September 2013, 4:00 pm to 5:00 pm

Where: 303.B11

(PhD provisional talk)

When studying risk factors for disease incidence in a cohort, some covariates can be costly to collect; examples include laboratory analysis of blood and urine samples or records of diet and exercise exposure. Additionally, if the exposure is rare, it would be wasteful to sample many subjects. Countermatching and nested case-control designs have been proposed as techniques for sampling a group of controls from the cohort. Expressing these designs as two-phase sampling designs is of interest when fitting the Cox model because it allows controls to be reused and efficiency to be gained. The resulting sampling weights are used in a pseudolikelihood approach to find the estimated coefficients for the model, and by adjusting the weights with calibration further efficiency can be gained. We aim to implement these methods and to establish asymptotic properties of these estimators.

https://www.stat.auckland.ac.nz/showperson?firstname=Claudia&surname=Rivera%20Rodriguez

Allelic drop-out in forensic genetics - importance and estimation

Speaker: Dr Torben Tvedebrink

Affiliation: Aalborg University

When: Tuesday, 24 September 2013, 4:00 pm to 5:00 pm

Where: 303-B09

In this talk, I will discuss the concept of allelic drop-out, a phenomenon that occurs in forensic genetics when an allele of a donor fails to be detected in, for example, a crime scene DNA profile.

Unless drop-out is an allowable possibility in the interpretation, this implies that a true donor to a biological stain could be excluded as a possible contributor due to a mismatch between the reference and crime scene sample. In order to account for allelic drop-out in a probabilistic framework it is necessary to quantify the drop-out probability. I will discuss methods for estimating this probability based on simple, standard statistical models and compare the results to other approaches.
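The "simple, standard statistical models" mentioned above could, for instance, take the form of a logistic regression of a drop-out indicator on signal strength (peak height). The sketch below uses entirely simulated data and an assumed true model; it illustrates the general idea only and is not Dr Tvedebrink's actual estimation procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical simulated data: peak heights (signal strength) and a drop-out
# indicator that becomes more likely as the signal weakens.  The "true" model
# below is assumed purely for illustration.
heights = rng.uniform(50, 1000, size=2000)
p_true = 1 / (1 + np.exp(-(4.0 - 0.01 * heights)))
dropout = rng.random(2000) < p_true

# Fit a logistic regression of the drop-out indicator on peak height.
model = LogisticRegression(max_iter=1000).fit(heights.reshape(-1, 1), dropout)

# Estimated drop-out probability at a few peak heights.
for h in (100.0, 300.0, 600.0):
    p_hat = model.predict_proba(np.array([[h]]))[0, 1]
    print(f"estimated P(drop-out | height={h:.0f}) = {p_hat:.3f}")
```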

http://people.math.aau.dk/~tvede/

Multiple Behaviour Statistical Implicative Analysis

Speaker: Thomas Delacroix

Affiliation: U. Auckland

When: Thursday, 19 September 2013, 4:00 pm to 5:00 pm

Where: 303.B11

Statistical Implicative Analysis (SIA) was first introduced by Regis Gras in the late 1970s and early 1980s, and has gradually been developed by a small community of researchers since. This statistical analysis focuses on relations of "pseudo-implication" between variables. After an introduction to the general concept behind SIA, I will present my contribution to SIA, which aims to incorporate multiple behaviour hypotheses on individuals into SIA analyses. This talk will be followed the next day by a Mathematics and Statistics Education Seminar on Friday the 20th at 3pm. In this second talk, I will present an application of Multiple Behaviour SIA to Math Education studies.

All welcome!

https://plus.google.com/115249068243354680939/about

Working with Synthetic Data

Speaker: Dr. Barry Milne and Jessica McLay

Affiliation: COMPASS group, U. Auckland

When: Thursday, 12 September 2013, 4:00 pm to 5:00 pm

Where: 303.B11

This is the second of two seminars featuring work being conducted at the COMPASS Research Centre, Faculty of Arts.

Much work in the social sciences relies on the use of synthetic data, often because of confidentiality and other restrictions on access to empirical data. However, another use is in creating computer-generated replicates for model building and counterfactual testing. This seminar will present two examples of work in this tradition from research at COMPASS. Barry Milne, Research Fellow, will outline an innovative method for creating synthetic data from the Census by constructing composites that are empirically plausible and meet SNZ requirements, and then Jessica McLay, Consultant Statistician, will show how the process of synthetic data generation for a dynamic micro-simulation model can be used to test the performance of different statistical techniques.

Each speaker will present for about 15 minutes, and take questions for up to ten.

http://www.arts.auckland.ac.nz/uoa/centre-of-methods-and-policy-application-in-the-social-sciences-compass

Inference with Counterfactuals

Speaker: Prof. Peter Davis and Roy Lay-Yee

Affiliation: COMPASS group, U. Auckland

When: Thursday, 5 September 2013, 4:00 pm to 5:00 pm

Where: 303.B11

This is the first of two seminars featuring work being conducted at the COMPASS Research Centre, Faculty of Arts.

The current approach in social science research is to regard causal analysis as an exercise in empirically testing and drawing credible conclusions about scientific propositions couched as counterfactuals (or potential outcomes). This seminar will present two examples of work in this tradition from work at COMPASS. Peter Davis, Centre Director, will first provide a brief summary of key issues in causal inference in observational settings, as covered in a four-volume reader he has edited, due to be published by Sage in December, and then Roy Lay-Yee, Senior Research Fellow, will give an example of how counterfactual reasoning can be fruitfully applied to policy analysis using a micro-simulation approach.

Each speaker will present for about 15 minutes, and take questions for up to ten.

http://www.arts.auckland.ac.nz/uoa/centre-of-methods-and-policy-application-in-the-social-sciences-compass

[Note change of date and room] Bayes In Space: Cosmology meets Statistics

Speaker: Prof. Richard Easther

Affiliation: U. Auckland

When: Monday, 26 August 2013, 4:00 pm to 5:00 pm

Where: 303.310

Traditionally, cosmology has been viewed as something of a playground for theorists, given that ideas about the origin, global structure and evolution of the universe were more plentiful than observational data that could provide insight into these questions. In recent years, however, cosmology has become an increasingly data-driven and empirical science. As a consequence, cosmologists have had to learn statistics. I will discuss the statistical challenges associated with cosmology, and my own work on the parameter estimation and model selection problems associated with competing models of the very early universe.

http://cosmology.auckland.ac.nz/

Bayesian data assimilation for epidemics -- making the most of the available data

Speaker: Dr Chris Jewell

Affiliation: Massey U

When: Thursday, 8 August 2013, 4:00 pm to 5:00 pm

Where: 303.B11



Recent advances in Bayesian inference have provided a powerful solution for harnessing epidemic models for decision support during livestock epidemics. The approach quantifies the statistical uncertainty associated with the unpredictable nature of the epidemic process, whilst integrating over unobserved infection times and the presence of undetected infections. The sequential inclusion of case incidence data as the outbreak unfolds provides a predictive risk distribution through time. Estimates are obtained for quantities such as the risk to individuals posed by the current state of the epidemic, the locations of undetected infections, and identification of high-risk farms. However, the methodology currently admits only case report data, when other sources of information exist. Chief among such additional data is that from contact tracing of infected individuals. The contact tracing process is commonly used to identify individuals at high risk of exposure to a disease, following a known contact with an infected individual. Surprisingly, little work has been done in a statistical sense to make use of this information in the context of epidemic models. I will present some work that focuses directly on incorporating contact tracing data into a continuous time epidemic model framework, with application to the "Dangerous Contact" dataset relating to the 2001 UK foot and mouth outbreak.

http://www.massey.ac.nz/massey/learning/colleges/college-of-sciences/about/fundamental-sciences/staff-list.cfm?stref=719040

Random Trajectory Models

Speaker: A. Prof. Jeff Picka

Affiliation: U. New Brunswick

When: Thursday, 25 July 2013, 4:00 pm to 5:00 pm

Where: 303.B11

Many modelling problems in physics and engineering represent stochastic phenomena on a large scale by using many interacting elements on a smaller scale. When the disorder in a particular system can be represented probabilistically (e.g. as in the atomic model for an ideal gas), then it is possible to use existing methods from statistical physics to establish useful results. When the disorder is not amenable to probabilistic representation because inter-element interactions are strong and numerous, then an explicit model of element trajectories in phase space needs to be used. The construction of such models, their assessment, and the implications for science in properly treating these models as statistical models will be discussed. Examples of the use of these models include models for powder flow, agent-based models of epidemics or ecological systems, models of pattern formation, and climate and weather models.

http://www.math.unb.ca/~jdp/

(**note change of title) Backward Coupling and Perfect Simulation for Markov Chains and Stochastic Recursions

Speaker: Prof. Sergey Foss

Affiliation: Heriot-Watt U.

When: Thursday, 4 July 2013, 4:00 pm to 5:00 pm

Where: 303.B11

I will review some techniques for perfect (exact) simulation from the unknown stationary distribution of a Markov chain or, more generally, of a stochastic recursive sequence

$$
X_{n+1} = f(X_n, \xi_n),
$$

where the drivers $\xi_n$ are either i.i.d. or stationary ergodic.

The technique applies if the distribution of the Markov chain (or stochastic recursion) converges to the stationary distribution in the total variation norm.

Further, I will show by example how to extend the technique to simulating the distribution of functionals of Markov chains (in cases where there is no convergence in total variation).
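As a minimal sketch of backward coupling (Propp-Wilson coupling from the past) for a toy recursion of the above form, the code below simulates exactly from the stationary distribution of a four-state lazy reflecting random walk driven by i.i.d. Uniform(0,1) variables. The example is mine, not from the talk; the essential points are that all copies of the chain share the same driving noise and that the noise is reused, not redrawn, as the starting time is pushed further into the past.

```python
import random

# Toy recursion X_{n+1} = f(X_n, xi_n) on the finite state space {0, 1, 2, 3}:
# a lazy random walk with reflecting ends, whose stationary law is uniform.
STATES = range(4)

def f(x, u):
    """One step of the recursion driven by a Uniform(0,1) variable u."""
    if u < 1/3:
        return max(x - 1, 0)
    if u < 2/3:
        return min(x + 1, 3)
    return x

def perfect_sample(rng):
    """Return one exact draw from the stationary distribution."""
    noise = {}          # xi_{-1}, xi_{-2}, ...; must be reused, never redrawn
    horizon = 1
    while True:
        for t in range(-horizon, 0):
            noise.setdefault(t, rng.random())
        # run coupled copies from ALL states, from time -horizon up to time 0,
        # every copy driven by the same noise
        current = {s: s for s in STATES}
        for t in range(-horizon, 0):
            current = {s: f(current[s], noise[t]) for s in STATES}
        values = set(current.values())
        if len(values) == 1:         # all copies have coalesced by time 0
            return values.pop()
        horizon *= 2                 # otherwise restart further back in the past

rng = random.Random(1)
draws = [perfect_sample(rng) for _ in range(5000)]
print([draws.count(s) / len(draws) for s in STATES])  # roughly uniform on {0,1,2,3}
```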

A Piece of the Web Graphics Puzzle

Speaker: Simon Potter

Affiliation: U. Auckland

When: Thursday, 27 June 2013, 4:00 pm to 5:00 pm

Where: 303.310

Recently there has been tremendous interest in exposing R's statistical facilities to a web browser. Many of the approaches to creating web-based statistical plots either use static images, or offload the task of creating graphics to a JavaScript-based graphics system. The latter approach often provides highly interactive and animated graphics but is inflexible in comparison to R graphics. Recent developments in the 'gridSVG' package allow for 'grid' graphics and indeed pieces of grid graphics to be created and served to a web browser. This approach intends to be a bridge between the statistical and graphical facilities provided by R and the highly interactive environment that JavaScript graphics supports.

In addition, the 'selectr' and 'animaker' packages will be discussed. 'selectr' is a CSS3 to XPath translator and 'animaker' is a tool for sequencing complex animations. Both of these are packages used to assist the creation of responsive graphics with gridSVG.

NOTE: This seminar presents Simon's MSc Thesis research

Visualizing Large Datasets

Speaker: Prof. Antony Unwin

Affiliation: U. Augsburg

When: Thursday, 20 June 2013, 4:00 pm to 5:00 pm

Where: 303.310



Large Datasets are different, presenting new and difficult challenges. Datasets may be large in numbers of cases, large in numbers of variables or large in both. Largeness tends to mean greater complexity as well. Graphics can make a major contribution. They are valuable for exploring the data, for assessing data quality, for supporting modelling, and for understanding results. Existing graphical displays need to be amended for examining large datasets and interactivity is particularly important. This talk surveys the issues, offers solutions, and also discusses just how large large can be.

http://stats.math.uni-augsburg.de/~unwin/

Research at NIWA's Bream Bay Aquaculture Park - the challenge of new species

Speaker: Dr. Jane Symonds

Affiliation: NIWA

When: Friday, 7 June 2013, 3:00 pm to 4:00 pm

Where: 303-610

The New Zealand aquaculture sector is currently expanding towards a target of NZ$1 billion by 2025. This expansion will benefit from diversification into new high-value finfish. An assessment of potential new aquaculture species recognised the merits of yellowtail kingfish (Seriola lalandi) and hapuku (Polyprion oxygeneios) as good candidates for aquaculture. A concerted, commercially focussed research effort has yellowtail kingfish ready for expansion. Both species are seen as high-value export products and are well suited to land-based and sea pen culture.

Research programmes at NIWA's Bream Bay Aquaculture Park, Ruakaka, are focused on overcoming production bottlenecks, optimising husbandry and producing high quality juveniles and selected broodstock. The research programmes include topics such as photothermal manipulation of spawning, broodstock improvement through selective breeding, larval development studies and growth, feed efficiency and nutrition trials. During a spawning season the broodstock of each species produce millions of eggs and larvae. Using DNA markers an individual egg, larva or tagged juvenile can be traced back to specific broodstock parents. This information, collected on thousands of individuals each season, is used to identify elite broodstock and optimise production techniques to improve survival. Understanding all the potential components leading to reliable production is an essential part of the research at Bream Bay.

http://www.niwa.co.nz/key-contacts/jane-symonds

The design of the PoPART trial: a community randomized trial of the population effect of universal HIV testing and immediate antiretroviral therapy to reduce HIV Transmission.

Speaker: Dr Deborah Donnell

Affiliation: SCHARP

When: Thursday, 6 June 2013, 4:00 pm to 5:00 pm

Where: 303.B11

The HIV Prevention Trials Network is engaged in a partnership to design and carry out a large community randomized trial of universal HIV test and treat in 21 communities in southern Africa. The protocol is expected to begin in July 2013. I will describe the design, discuss some of the current operational and trial monitoring issues, and outline plans for the ancillary use of mathematical modelling and phylogenetic substudies.

http://www.scharp.org/bios/deborah_donnell.html

A class of distribution free tests for discrete distributions

Speaker: Prof. Estate Khmaladze

Affiliation: Victoria U. Wellington

When: Friday, 24 May 2013, 4:00 pm to 5:00 pm

Where: MLT3

This general class follows as a corollary of a certain unitary transformation which maps F-Brownian bridge to G-Brownian bridge, with distributions F and G equivalent (mutually absolutely continuous).

The transformation exists in any R^d, for finite d.

Some form of this transformation maps the Brownian bridge into an "almost" Brownian motion - a fact with some strange consequences.

Unlike my previous work, the transformations here are not connected with the theory of innovations and look simpler.

http://www.victoria.ac.nz/smsor/about/staff/estate-khmaladze

Accounting for uncertainties when placing fossils on a tree

Speaker: Sergii Krasnozhon

Affiliation: U. Auckland

When: Thursday, 23 May 2013, 11:00 am to 12:00 pm

Where: 303.310

[PhD provisional talk.]

It is possible to infer the timing of species divergence by incorporating fossil information into a statistical phylogenetic framework. However, determining the position of these fossils along a phylogeny is prone to errors. Because it is difficult to correct for such errors, we elected to integrate over them using a Bayesian approach. The proposed method assumes that speciation events occur according to a Yule process. We derive the joint density of the times of divergence according to this model and account for the truncations arising from the incorporation of fossil data. We will show how to extend this framework and deal with multiple truncation factors when the position of fossils in the tree is not determined without ambiguity. We have implemented a Markov Chain Monte Carlo algorithm that samples from the corresponding target distribution and will present preliminary results.

http://www.stat.auckland.ac.nz/showperson?firstname=Sergii&surname=Krasnozhon

A Remark on "Exact" Monte Carlo p-values.

Speaker: Dr. Rolf Turner

Affiliation: U. Auckland

When: Thursday, 18 April 2013, 4:00 pm to 5:00 pm

Where: 303-B11

I will start off by giving a brief introduction to the idea of exact Monte Carlo p-values. Such p-values can be used in hypothesis testing settings in which a "reasonable" test statistic is available, but the null distribution of this statistic is unknown and/or completely intractable. It is often possible to simulate values of the test statistic under the null hypothesis, in which case Monte Carlo methods give you a handle on the p-value. When using Monte Carlo methods there is an "obvious" approach by which one can obtain an approximation to the unknown analytic p-value and a sneaky approach (due to G. A. Barnard) which yields a p-value which is exact in a certain sense (which I shall explain). The idea is completely elementary, but seems to confuse many people (including a referee for a journal to which I submitted a paper on this topic).
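For readers new to the idea, here is a minimal sketch of the standard no-ties construction that the talk starts from: with m simulated null values of the statistic, the p-value (1 + #{simulated >= observed}) / (m + 1) is exactly uniform on {1/(m+1), ..., 1} under the null hypothesis, so rejecting when it is at most alpha gives size exactly alpha whenever alpha*(m+1) is an integer. The toy test below is my own illustration; the modified procedure for tied data discussed in the talk is not reproduced here.

```python
import random

def exact_mc_pvalue(t_obs, simulate_null, m=999, rng=random):
    """Barnard-style Monte Carlo p-value: (1 + #{T_i >= T_obs}) / (m + 1).

    Assumes the test statistic is continuous, so ties have probability zero.
    """
    exceed = sum(simulate_null(rng) >= t_obs for _ in range(m))
    return (1 + exceed) / (m + 1)

# Toy illustration: test H0: mean = 0 for n i.i.d. normal observations,
# using the absolute sample mean as the statistic.
def statistic(xs):
    return abs(sum(xs) / len(xs))

rng = random.Random(42)
data = [rng.gauss(0.5, 1.0) for _ in range(30)]      # generated under a shifted mean

def simulate_null(rng, n=30):
    return statistic([rng.gauss(0.0, 1.0) for _ in range(n)])

print("exact Monte Carlo p-value:", exact_mc_pvalue(statistic(data), simulate_null, rng=rng))
```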

The issue that I really want to focus on is that the usual method for obtaining exact p-values works only if there are no ties in the data. I was recently interested in a scenario in which the null distribution of the test statistic has a point mass, resulting in lots of ties. It turns out that one can modify the usual procedure for calculating an exact Monte Carlo p-value to handle this setting. Dealing with the modified procedure leads to an intriguing identity involving the binomial probability function and its derivatives (with respect to the success probability). I will explain the modified procedure and discuss simulation studies which demonstrate that it achieves a reasonable level of power.

The ideas discussed in this talk will all be elementary and the talk should be readily accessible to graduate students.

Mosaic Mind Games - Visualising Categorical Data

Speaker: Prof. Antony Unwin

Affiliation: U. Augsburg

When: Thursday, 11 April 2013, 4:00 pm to 5:00 pm

Where: 303.B11


Visualising multivariate categorical data is not easy. Some interesting graphics have been proposed and many are implemented in the R package vcd. Not all of these suggestions are as successful as we would like, as they do not always manage to show much information. A better approach is provided by the family of mosaicplot variants, especially when equipped with interactive tools. However, there are two drawbacks. Firstly, there are no established rules on how to pick an effective mosaicplot from the many possibilities, so that finding a good one is more of a puzzle than it might be. Secondly, people are not used to looking at mosaicplots, so that a lot still has to be explained, even when you have drawn an informative display.

This talk will discuss mosaicplots and the important role interactivity plays in finding ways of dealing with the drawbacks. Martin Theus's software Mondrian will be used to illustrate the ideas.

http://stats.math.uni-augsburg.de/~unwin/

Generalized Volatility-Stabilized Processes

Speaker: Radka Pickova

Affiliation: Columbia U.

When: Wednesday, 3 April 2013, 11:00 am to 12:00 pm

Where: PLT1

We consider systems of interacting diffusion processes which generalize the volatility-stabilized market models introduced in Fernholz and Karatzas (2005). We show how to construct a weak solution of the underlying system of stochastic differential equations. In particular, we express the solution in terms of time changed squared-Bessel processes, and discuss sufficient conditions under which one can show that this solution is unique in distribution (respectively, does not explode). Sufficient conditions for the existence of a strong solution are also provided. Moreover, we discuss the significance of these processes in the context of arbitrage relative to the market portfolio within the framework of Stochastic Portfolio Theory.

Joint Statistics/Mathematics Colloquium: Large deviations with applications to random graphs

Speaker: Prof. S.R.Srinivasa Varadhan

Affiliation: New York U./Courant Institute

When: Thursday, 28 March 2013, 2:00 pm to 3:00 pm

Where: PLT2

Although the theory of large deviations deals with events of small probability, it is essentially a way of analyzing sums that have an exponentially large number of terms, each summand being exponentially small (or large). You can use the theory to examine the number of graphs on a large number of vertices, with a prescribed number of edges, triangles and/or other finite graphs as subgraphs.


Raghu Varadhan is a mathematician/statistician who has made fundamental contributions to probability, including the theory of large deviations.

Among his many honours, Prof. Varadhan has received the Abel Prize (2007) from the King of Norway, and the National Medal of Science (2010) from President Barack Obama.

http://www.math.nyu.edu/faculty/varadhan/

Analysis of DNA Mixtures with Artefacts

Speaker: Prof. Robert Cowell

Affiliation: City U. London

When: Tuesday, 26 March 2013, 11:00 am to 12:00 pm

Where: 303.310

I present a statistical model for the quantitative peak information obtained from an electropherogram of a forensic DNA sample. The model directly describes peak height information and the dropout of an allele is interpreted as failure for its associated peak to be observed above a detection threshold. Stutter and dropin are readily represented in the model. The parameters of the model, and their standard errors, are estimated by maximum likelihood in the presence of multiple unknown contributors, exploiting a Bayesian network representation for efficient computation. Data from a (murder) case taken from the literature is used to illustrate the efficacy of the model.

(This is joint work with: T. Graversen, S. L. Lauritzen and J. Mortera)

http://www.staff.city.ac.uk/~rgc/index.html

Dynamic Thresholds and a Summary ROC Curve: Assessing the Prognostic Accuracy of Longitudinal Markers

Speaker: Dr. Paramita Chaudhuri

Affiliation: Duke U.

When: Thursday, 21 March 2013, 4:00 pm to 5:00 pm

Where: 303-B11

Cancer patients, chronic kidney disease (CKD) patients, and subjects infected with HIV are commonly monitored over time using biomarkers that represent key health status indicators. Furthermore, biomarkers are frequently used to guide initiation of new treatments or to inform changes in intervention strategies. Since key medical decisions can be made on the basis of a longitudinal biomarker, it is important to evaluate the potential accuracy associated with longitudinal monitoring. We introduce a summary receiver operating characteristic (ROC) curve that displays the overall sensitivity associated with a time-dependent threshold that controls specificity. The proposed statistical methods are similar to concepts considered in disease screening, yet our methods are novel in choosing a potentially time-dependent threshold to denote a positive test, and our methods allow time-specific control of the false-positive rate. Finally, the proposed summary ROC curve is a natural averaging of time-dependent incident/dynamic ROC curves proposed by Heagerty and Zheng (2005) and therefore provides a single summary of net error rates that can be achieved in the longitudinal setting.

https://sites.google.com/site/paramitasaharesearch/

Shape-restricted density estimation with applications to financial data

Speaker: Yu Liu

Affiliation: U. Auckland

When: Tuesday, 19 March 2013, 11:00 am to 12:00 pm

Where: 303-310

Parametric models are often used for modelling financial data, such as the skewed t-distribution or the stable distribution for stock prices and bond yields, as well as the log-gamma or the log-negative-binomial model for options pricing. Using parametric models can be advantageous in, e.g., computing and interpretation, but they may suffer, sometimes badly, in performance owing to model mis-specification. For my PhD project, I plan to use nonparametric (or semiparametric) models for various types of financial data, in particular models under certain shape restrictions that may be safely enforced in practice from a financial perspective.

As the first step of my project, I studied the estimation of a univariate density under the log-concavity restriction. A new algorithm has been developed, which extends the constrained Newton method for fitting a nonparametric mixture model, and it will be described in detail in this talk. Compared with a couple of existing algorithms, the new one is quite fast and stable. Some applications to financial data will be given. A tentative extension that adds smoothness to a log-concave density estimate will also be described. Finally, some potential extensions will be briefly discussed, e.g., to the multivariate case or to a semiparametric setting.
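The constrained-Newton algorithm itself is not reproduced here. As a rough illustration of the flavour of shape-restricted maximum likelihood, the sketch below computes the classical Grenander estimator, the MLE of a non-increasing density, which is a different and much simpler shape restriction than the log-concavity constraint studied in the talk.

```python
import random

def grenander(data):
    """MLE of a non-increasing density on [0, inf): the left derivative of the
    least concave majorant of the empirical CDF, computed by pooling adjacent
    violators.  Assumes distinct positive observations."""
    x = sorted(data)
    n = len(x)
    blocks, prev = [], 0.0            # each block holds [probability mass, width]
    for xi in x:
        blocks.append([1.0 / n, xi - prev])
        prev = xi
        # slopes (mass / width) must be non-increasing; merge violating blocks
        while len(blocks) >= 2 and \
                blocks[-2][0] * blocks[-1][1] < blocks[-1][0] * blocks[-2][1]:
            mass, width = blocks.pop()
            blocks[-1][0] += mass
            blocks[-1][1] += width
    edges, heights = [0.0], []
    for mass, width in blocks:
        heights.append(mass / width)
        edges.append(edges[-1] + width)
    return edges, heights             # piecewise-constant density between the edges

rng = random.Random(3)
sample = [rng.expovariate(1.0) for _ in range(200)]
edges, heights = grenander(sample)
print(heights[:5])                    # decreasing step heights, close to exp(-x) near 0
```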

This talk is given as part of the PhD provisional requirements for the thesis "Nonparametric Modelling for Financial Data".

Assessing price of anarchy in a market for wind power

Speaker: Dr. Elena Yudovina

Affiliation: U. Michigan

When: Thursday, 14 March 2013, 4:00 pm to 5:00 pm

Where: 303-B11

We are motivated by the problem of setting up a market for wind-generated power, an inherently stochastic supply. Rather than attempting to reduce the variability by using conventional reserve generation methods, we are interested in a market where the power is sold together with the uncertainty, much as in the case of financial assets. In theory, this should allow consumers to reduce the price of their electricity, while passing on the costs of reducing variability to the electricity suppliers. Our interest is in the price of anarchy that can be seen in such an auction: that is, how much power will go unsold, or how much demand will go unfulfilled, in a decentralized auction as compared to one with central planning. We look at a very simple model of this auction, which turns out to have rather interesting behavior.

http://www-personal.umich.edu/~yudovina/

Ordinal data and 'ordinal' analyses
Prof. Thomas Lumley

Speaker: Prof. Thomas Lumley

Affiliation: U. Auckland

When: Thursday, 7 March 2013, 4:00 pm to 5:00 pm

Where: 303.B11


Ordinal data, where the measurement process ensures agreement on the ordering but not on the numerical value, are common. There is a plausible argument that analyses of ordinal data should not impose particular numerical scores but should rely only on the known ordering. I will argue against this, showing that any two-sample two-sided test that does not reduce to comparing a one-sample summary statistic is either non-transitive or has even worse problems. I will also argue that the desire for ordinal analyses is misplaced, because tests compare whole distributions, which are not uniquely ordered, rather than single data points. There will be cameo appearances by non-separable Banach spaces and pathological counterexamples, but the majority of the talk should be broadly accessible.
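A small worked example of the non-transitivity referred to above (my illustration, not necessarily one used in the talk): with Efron-style dice, the pairwise comparisons of the kind underlying rank-based two-sample tests can cycle, so no single ordering of the three distributions agrees with all pairwise comparisons.

```python
from itertools import product
from fractions import Fraction

# Three discrete distributions (Efron-style dice): each listed value equally likely.
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def prob_greater(x, y):
    """P(X > Y) for independent draws, X uniform on x and Y uniform on y."""
    wins = sum(a > b for a, b in product(x, y))
    return Fraction(wins, len(x) * len(y))

# Each probability exceeds 1/2, so the pairwise ordering is cyclic:
# A "beats" B, B "beats" C, and yet C "beats" A.
print("P(A > B) =", prob_greater(A, B))   # 5/9
print("P(B > C) =", prob_greater(B, C))   # 5/9
print("P(C > A) =", prob_greater(C, A))   # 5/9
```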



http://www.stat.auckland.ac.nz/showperson?firstname=Thomas&surname=Lumley

Open Data in New Zealand + Interactive Web-based Graphics (NOTE CHANGE OF TIME)

Speaker: Jimmy Oh

Affiliation: U. Auckland

When: Thursday, 14 February 2013, 2:00 pm to 3:00 pm

Where: 303.B09

Firstly, this talk will provide an overview of Open Data released by New Zealand Government Agencies, covering how they are released and difficulties that may be encountered when trying to access and use the data, followed by potential interim solutions (i.e. something we can do while we wait for government policy to evolve, which may take years).

Secondly, this talk will provide an overview of tools used to produce Interactive Web-based Graphics, from simple GUI (Graphical User Interface) tools to more complex but powerful tools that involve non-trivial coding. Areas with potential for great benefit that are not as well developed are identified, and ideas that may address these areas are raised.

This talk is given as part of the PhD Provisional requirements for the thesis "Presentation Methods for Open Data".

http://www.stat.auckland.ac.nz/showperson?firstname=Jimmy&surname=Oh

Scaling problems in MCMC

Speaker: Prof. Gareth Roberts

Affiliation: Warwick U.

When: Tuesday, 29 January 2013, 11:00 am to 12:00 pm

Where: 303.B05

The talk will present some established and then some recent results on the optimal scaling of MCMC algorithms. The common theme is the multi-dimensional setting where, according to some parameterisation, components of the chain converge at different speeds. Examples will include the optimal spacing of temperatures for simulated tempering and the Metropolis algorithm for ill-posed statistical models as the amount of data goes to infinity.
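One classical result in this area, which may or may not be among the examples in the talk, is the Roberts-Gelman-Gilks scaling of random-walk Metropolis: for a high-dimensional target with i.i.d. components, the proposal scale should shrink like 2.38/sqrt(d), giving an acceptance rate of about 0.234. The sketch below checks this numerically on a toy Gaussian target of my own choosing.

```python
import numpy as np

def rwm_acceptance(d, scale, n_iter=20000, seed=0):
    """Random-walk Metropolis on a standard d-dimensional normal target;
    returns the observed acceptance rate for a given proposal scale."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    log_pi = lambda z: -0.5 * float(z @ z)     # log density up to a constant
    accepted = 0
    for _ in range(n_iter):
        prop = x + scale * rng.standard_normal(d)
        if np.log(rng.random()) < log_pi(prop) - log_pi(x):
            x, accepted = prop, accepted + 1
    return accepted / n_iter

# Classical asymptotic guideline: proposal scale ~ 2.38 / sqrt(d) gives an
# acceptance rate of roughly 0.234 for high-dimensional i.i.d. targets.
for d in (5, 20, 100):
    print(d, round(rwm_acceptance(d, 2.38 / np.sqrt(d)), 3))
```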

http://www2.warwick.ac.uk/fac/sci/statistics/staff/academic-research/roberts/

Optimal Threshold Strategies for Queues in Series

Speaker: Dr. Bernardo D'Auria

Affiliation: Madrid University Carlos III

When: Tuesday, 22 January 2013, 4:00 pm to 5:00 pm

Where: 303-B07, Science Centre

We analyze a 2-queue tandem network from an economic viewpoint.

At their arrival time, customers may decide to join (or balk) the network by estimating the expected system time they are going to spend in the network.

The estimation is made according to the information they receive about the current state of the network. The information may be: complete, i.e. including the number of users already present in the first and second nodes; partial; or nonexistent. We show how to determine, according to these different levels of information, the individual optimal pure-threshold strategies that customers are going to follow.

http://www.uc3m.es/portal/page/portal/dpto_estadistica/home/members/bernardo_d_auria
