Department of Statistics Seminars
Stochastic Networks With Resource Sharing
Speaker: Professor Ruth J Williams
Affiliation: University of California, San Diego
When: Tuesday, 27 November 2007, 3:00 pm to 4:00 pm
Where: Seminar Room 222, Science Centre

Stochastic networks are used as models for complex systems involving dynamic interactions subject to uncertainty. Application domains include manufacturing, the service industry, telecommunications, and computer systems. Networks arising in modern applications are often highly complex and heterogeneous, with network features that transcend those of conventional queueing models. The control and analysis of such networks present challenging mathematical problems. In this talk, a concrete application will be used to illustrate a general approach to the study of stochastic networks using more tractable approximate models. Specifically, we consider a connection-level model of Internet congestion control that represents the randomly varying number of flows present in a network where bandwidth is shared fairly amongst elastic documents. This model, introduced by Massoulie and Roberts, can be viewed as a stochastic network with simultaneous resource possession. Elegant fluid and diffusion approximations will be used to study the behavior of this model. The talk will conclude with a summary of the current status, and a description of open problems, associated with the further development of approximate models for general stochastic networks. This talk is based in part on joint work with W. Kang, F. P. Kelly, and N. H. Lee.

http://www.math.ucsd.edu/~williams/

Quasi variances
Speaker: Professor David Firth
Affiliation: Department of Statistics, University of Warwick
When: Friday, 23 November 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

The notion of quasi variances, as a device for both simplifying and enhancing the presentation of additive categorical-predictor effects in statistical models, was developed in Firth and de Menezes (Biometrika, 2004, 65-80).
The approach generalizes the earlier idea of "floating absolute risk" (Easton et al., Statistics in Medicine, 1991), which has become rather controversial in epidemiology. In this talk I will outline and exemplify the method, and discuss its extension to some other contexts, such as parameters that may be arbitrarily scaled and/or rotated.

http://www2.warwick.ac.uk/fac/sci/statistics/staff/academic/firth

On a Bull/Bear Contract Call Signal Based Trading Strategy
Speaker: Dr Alan Wan
Affiliation: Department of Management Sciences, City University of Hong Kong
When: Thursday, 8 November 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

*** Please note that the topic has been changed from the one earlier listed on the website. ***

This paper considers a relatively new derivative security, the Callable Bull/Bear Contract (CBBC). Our primary aim is to utilize a significant feature of the CBBC, namely its call feature, to devise a trading strategy with respect to the underlying asset. Drawing on data from Hong Kong's Hang Seng Index, we show that the Vector Error Correction Model with cointegration restrictions is useful for forecasting the stock index's daily highs and lows. These forecasts are very reliable in predicting subsequent call events for the CBBCs. Our trading strategy is to either buy or short-sell the index's futures or a relevant Exchange-Traded Fund in the event of a call-alert confirmation for the CBBC. A simple profit and loss analysis reveals that the proposed trading strategy is very effective and profitable. At best, the strategy has an 85% chance of generating positive profits of around 2% on average after discounting transaction costs. These profits are not bad given that the positions to realize them are typically closed only a few days after buying or short-selling. The findings of this study should be of interest to both individual and institutional investors.
Modelling the heterogeneity of molecular evolution processes
Speaker: Dr Stéphane Guindon
Affiliation: Department of Statistics, University of Auckland
When: Thursday, 1 November 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

The differences we observe among contemporaneous homologous DNA sequences are explained by the accumulation of mutations during the course of evolution. Getting a fine description of the mutational processes is crucial if one wants to explain evolution at higher levels (i.e., at the organism, population and species levels). A better understanding of molecular evolution is also crucial for measuring biodiversity and demonstrating how this diversity is shaped by Darwin's natural selection. An important feature of molecular evolution is its variability: different mechanisms occur at different places within a given gene and/or at various stages during evolution. Statistical phylogenetics provides an adequate framework for taking this variability into account. The core of phylogenetic models consists of time-continuous Markov models that describe the mutational process. I will provide an overview of these models and explain how their parameters can be estimated in a maximum likelihood or a Bayesian framework. I will then describe a relatively new class of models, the Markov-modulated Markov models, which extend mixture models and potentially capture important features of molecular evolution.

A survey of some self-interacting random walks
Speaker: Dr Mark Holmes
Affiliation: Department of Statistics, University of Auckland
When: Thursday, 25 October 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

We will introduce some examples of self-interacting random walks, and watch some videos of some of them. We will briefly discuss some of the things that are known and, more importantly, some properties that are "obvious" but not known!
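One standard example of a self-interacting walk is the linearly edge-reinforced random walk, in which the walk prefers edges it has already crossed. The following minimal Python simulation is an illustrative sketch only (it is not code from the talk, and the function name and conventions are assumptions):

```python
import random

def reinforced_walk(steps, seed=0):
    """Simulate a linearly edge-reinforced random walk on the integers.

    Each edge {i, i+1} starts with weight 1; every traversal adds 1 to
    the traversed edge's weight, so the walk prefers edges it has
    already used (the self-interaction).
    """
    rng = random.Random(seed)
    weights = {}          # edge (i, i+1) -> current weight
    pos, path = 0, [0]
    for _ in range(steps):
        w_left = weights.get((pos - 1, pos), 1)
        w_right = weights.get((pos, pos + 1), 1)
        # Step right with probability proportional to the right edge's weight.
        if rng.random() < w_right / (w_left + w_right):
            edge, pos = (pos, pos + 1), pos + 1
        else:
            edge, pos = (pos - 1, pos), pos - 1
        weights[edge] = weights.get(edge, 1) + 1
        path.append(pos)
    return path

walk = reinforced_walk(1000)
```

Because the transition law depends on the walk's entire history, the process is not Markov, which is what makes even "obvious" properties hard to prove.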
Assessment of Hierarchical Models for Count Data
Speaker: Dr Russell Millar
Affiliation: Department of Statistics, University of Auckland
When: Thursday, 11 October 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

The deviance information criterion (DIC) was found to be a potentially dangerous tool for comparison between hierarchical models of count data, and the results herein show that it has been used inappropriately in several recent publications that have employed these models. DIC was useful only when the likelihood was expressed at the subject level, and so it can be used for comparison between the Poisson and negative binomial. In addition, despite zero-inflation being a form of mixture, DIC also performed well for comparison of the Poisson and negative binomial with their zero-inflated counterparts. However, DIC was not reliable for likelihoods expressed at the replicate level, and it cannot be used to distinguish between Poisson-gamma (the negative binomial implemented at the replicate level) and Poisson-lognormal models, or to assess whether these models require zero-inflation. For example, when fitting Poisson-gamma and Poisson-lognormal models to simulated Poisson-lognormal data, the Poisson-gamma model always had the lower DIC. Bayesian predictive checks (BPCs) were also investigated and were found to be extremely conservative. For example, under 100 simulations of the Poisson model fitted to Poisson data, the lower 5% quantile of the BPC p-value for goodness of fit was approximately 0.3. Nonetheless, BPCs were a useful aid in model comparison and confirmation of DIC.
http://www.stat.auckland.ac.nz/~millar/

Application of mixed models and multivariate hypothesis testing to long-term tropical tree assemblage data from a BACI experiment
Speaker: Ilyas Siddique
Affiliation: The University of Queensland and Instituto de Pesquisa Ambiental da Amazônia, Brazil
When: Thursday, 27 September 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

Background spatial heterogeneity and "random" temporal variability pose major challenges to the analysis of ecological treatments in field experiments, and thus to our mechanistic understanding of ecosystems. Consequently, space-for-time chronosequences have been shown to be of limited use for understanding complex tropical secondary forest succession. I present univariate and multivariate analyses of repeated measures of forest regrowth attributes in response to factorial nutrient addition in permanent plots of a BACI experiment. Shapes of response fits of tree biomass measures over time are examined in the light of multiple possible trajectories, unknown end points, and inevitably short measurement periods relative to the multi-decade process of forest regrowth. Second-order polynomial fits of woody biomass of individual, common species indicate distinct trajectories over time in response to experimental fertilization, which explains the poor fit of total biomass of all species pooled. Pooling species based on the type of their individual nutrient response, and subsequently refitting the pooled biomass of these 'functional' groups of species, reveals more subtle interactions between the effects of time of regrowth, different nutrients added, and covariates, which are detectable neither in total woody biomass nor in individual-species biomass fits.
Hellinger standardization is applied to the woody biomass of all species in the tree assemblage to avoid distortion of subsequent ordinations due to high zero-inflation (mainly from rare species) and the autocorrelation structure associated with repeated measures. Hellinger principal component analysis reveals that most of the compositional variance is associated with spatial heterogeneity among plots. After conditioning out the variance associated with plot differences, partial Hellinger redundancy analysis indicates clear compositional change over time, but also interactions between time and fertilization, suggesting non-accelerating shifts in tree species biomass composition in response to nitrogen addition.

http://www.uq.edu.au/uqresearchers/researcher/siddiquei1.html

Hierarchical clustering of continuous variables based on the empirical copula process
Speaker: Dr Ivan Kojadinovic
Affiliation: Department of Statistics, University of Auckland
When: Thursday, 20 September 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

In the framework of the likelihood linkage analysis method put forward by Lerman, the agglomerative hierarchical clustering of continuous variables is studied. The similarity between variables is defined from the independence test based on the empirical copula process proposed by Deheuvels and recently made operational by Genest and Rémillard. Unlike more classical similarity coefficients based on rank statistics, the comparison measure considered in this work can also be sensitive to non-monotonic dependencies. As aggregation criteria, besides classical linkages, the use of several p-value combination methods is considered. The performance of the corresponding clustering algorithms is compared through thorough simulations. Next, in order to guide the choice of a partition, several indices of homogeneity/heterogeneity and separation of partitions are considered and compared through extensive simulations.
The resulting variable clustering procedure can equivalently be regarded as a less computationally expensive alternative to more powerful tests of multivariate independence.

Keywords: clustering of continuous variables; hierarchical clustering; independence tests; empirical copula process; p-value combination methods.

http://www.stat.auckland.ac.nz/~ivan/

Direct Maximization of the Likelihood of a Hidden Markov Model
Speaker: Dr Rolf Turner
Affiliation: Faculty of Education, University of Auckland
When: Thursday, 13 September 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

Hidden Markov models form a popular and versatile means of handling serial dependence in data. Ever since these models were introduced by Baum and his coworkers in about 1970, the method of choice for fitting them has been the EM (expectation/maximization) algorithm. This is due to the fact that the likelihood of a hidden Markov model is a bit hard to handle. Recently, however, a couple of authors have noticed that it is actually possible to calculate the Hessian of the log likelihood of a hidden Markov model, which suggests that one might simply maximize the likelihood by applying Newton's method. I have implemented the calculations in R and tested out the idea on a couple of fairly complicated examples. Newton's method turns out to be insufficiently stable. However, the Levenberg-Marquardt algorithm (which essentially interpolates between Newton's method and the method of steepest ascent) seems to work like a charm. A seven-fold increase in speed over the EM algorithm was achieved. This talk will be aimed at non-specialists, so I will explain a bit about hidden Markov models, the EM algorithm, Levenberg-Marquardt, the two complicated models that I have fitted using this technique, and possibly Life, the Universe and Everything.
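The interpolation between Newton's method and steepest ascent can be sketched in a few lines. The toy Python example below is an illustration only, not the R implementation from the talk: it applies a Levenberg-Marquardt-style damped Newton iteration to a simple Poisson log-likelihood (whose maximizer is known to be the sample mean), with the damping parameter lam increased after a rejected step and decreased after an accepted one.

```python
import math

def lm_maximize(loglik, grad, hess, theta0, lam=1.0, tol=1e-8, max_iter=100):
    """Levenberg-Marquardt-style maximization of a 1-d log-likelihood.

    The step solves (-hess + lam) * d = grad: as lam -> 0 this is
    Newton's method; for large lam it is a short steepest-ascent step.
    """
    theta = theta0
    for _ in range(max_iter):
        g, h = grad(theta), hess(theta)
        d = g / (-h + lam)                     # damped Newton direction
        if loglik(theta + d) > loglik(theta):
            theta, lam = theta + d, lam / 10   # accept: trust Newton more
        else:
            lam *= 10                          # reject: damp harder
        if abs(g) < tol:
            break
    return theta

# Toy example: Poisson log-likelihood (constants dropped); MLE = sample mean.
data = [2, 4, 3, 5, 1]
n, s = len(data), sum(data)
ll = lambda t: s * math.log(t) - n * t
gr = lambda t: s / t - n
he = lambda t: -s / t ** 2

mle = lm_maximize(ll, gr, he, theta0=1.0)  # sample mean is 3
```

For an HMM the same idea applies, with grad and hess replaced by the gradient and Hessian of the HMM log-likelihood.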
Shrinkage and Variable Selection in Partially Linear Models
Speaker: Dr S Ejaz Ahmed
Affiliation: Department of Mathematics & Statistics, University of Windsor
When: Wednesday, 15 August 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

In this talk, I consider a partially linear model where the vector of coefficients a in the linear part can be partitioned as (a1, a2), where a1 is the coefficient vector for main effects (e.g. treatment effect, genetic effects) and a2 is a vector for 'nuisance' effects (e.g. age, lab). In this situation, inference about a1 may benefit from moving the least squares estimate for the full model in the direction of the least squares estimate without the nuisance variables (Steinian shrinkage), or from dropping the nuisance variables if there is evidence that they do not provide useful information (pre-testing). We investigate the asymptotic properties of Stein-type and pretest semiparametric estimators under quadratic loss and show that, under general conditions, a Stein-type semiparametric estimator improves on the conventional full-model semiparametric least squares estimator. We also consider a LASSO-type estimator for partially linear models and give a Monte Carlo simulation comparison of these estimators. The comparison shows that the shrinkage method performs better than the LASSO when the number of restrictions on the parameter space is large.

http://www.uwindsor.ca/seahmed

Spatial graph theory: from boundary detection to landscape connectivity
Speaker: Professor Marie-Josée Fortin
When: Friday, 10 August 2007, 12:00 pm to 1:00 pm
Where: 260.055, Owen G Glenn Building

There are several ways to characterize landscape spatial heterogeneity, by quantifying either its degree of fragmentation (i.e., delineating spatially homogeneous patches) or its degree of connectivity. Interestingly, a unifying formal spatial graph theory can be used to analyze these different properties of landscape spatial configuration.
Here I present how graph-theoretic and computational-geometric methods can help determine landscape structure. Specifically, I describe how spatial graph theory encompasses previous graph-based methods, such as those developed to detect boundaries (triangulation-wombling) and characterize them (boundary width, boundary length), as well as spatial graph-based methods for quantifying habitat connectivity (minimum planar graph, least-cost path and patch importance). For illustration purposes, I apply these methods to study woodland caribou habitat connectivity in Labrador, Canada.

http://www.zoo.utoronto.ca/zfa/Newsletter-htm/nov-01/fortin.htm

Bayesian inference on gravitational waves - The maths, the computation, and examples
Speaker: Christian Roever
When: Thursday, 9 August 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

Observatories are being set up around the world in order to detect gravitational radiation. Gravitational waves are predicted by general relativity theory, and their measurement would not only confirm general relativity, but also open a new window for interesting observations. Valuable information will be encoded in gravitational waves, about the processes generating them and about cosmology in general. In this talk I will present my work on a Bayesian analysis framework for a particular kind of signal, emitted by a pair of inspiralling stars or black holes. This includes the model setup, and the computational methods employed for the practical analysis. Over the course of this work, interesting insights were gained into the proper setup of the (parallel tempering) MCMC algorithm, and the original model was generalised to include the noise spectrum as an unknown, allowing it to be estimated 'on the fly'.
http://www.stat.auckland.ac.nz/~christian/

Local sensitivity analysis using differentiation
Speaker: Dr Wayne Stewart
Affiliation: Department of Statistics, The University of Auckland
When: Thursday, 19 July 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

There are essentially three inputs into a Bayesian analysis: the loss function, the prior and the likelihood. Since any one or more of these could be subjective to some degree, the question of posterior sensitivity to inputs is of some concern. The literature contains principally three ways in which Bayesian sensitivity analysis can proceed:

1. Informally, with a try-and-see approach (perhaps some competing priors are run and posterior quantities compared);
2. Globally, where a class of priors is used to determine a range in some posterior quantity such as the posterior mean; and
3. Locally, where differentiation is used to determine the rate of change of some posterior quantity with respect to some input measure, evaluated at a baseline.

My talk will look at a particular parametric local sensitivity analysis and will concentrate on some of the simpler results and their implementation, as well as indicate where the research could develop. New results will be given as they pertain to unit Bayes factors and calibration using empirical Bayes estimates. A brief example or two will be used to show how easily the sensitivity analysis can be incorporated into an analysis.

http://www.stat.auckland.ac.nz/showperson?firstname=Wayne&surname=Stewart

Objective Bayesian Estimation
Speaker: Professor Jim Berger
Affiliation: Institute of Statistics and Decision Sciences, Duke University
When: Monday, 2 July 2007, 4:00 pm to 5:00 pm
Where: Computer Science Seminar Room 279, Science Centre

The history of objective Bayesian analysis will first be discussed, starting with Bayes and Laplace through Jeffreys.
The benefits of utilizing the objective Bayesian approach range from ease of modeling to ease of understanding to computational efficiency. Examples that will be given include medical diagnosis, high-dimensional multiple comparisons, and hierarchical modeling. Some discussion of the alternative approaches to objective Bayesian estimation will be given, although the lecture will be oriented more towards understanding the strengths and dangers of the methodologies. If time permits, the Bayesian/frequentist unification that is possible through the objective Bayesian approach will be discussed.

http://www.stat.duke.edu/~berger/

Goodness of fit problem for errors in non-parametric regression: a new approach
Speaker: Professor Estate Khmaladze
Affiliation: Dept of Mathematics and Statistics and Operations Research, Victoria University
When: Thursday, 7 June 2007, 4:00 pm to 5:00 pm
Where: Computer Science Seminar Room 279, Science Centre

The usual empirical process based on residuals (or estimated errors) has a limiting distribution which depends on both the hypothesis and the non-parametric estimator of the regression function. This dependence may seem natural and unavoidable. However, it will be shown that, at very little cost, one can have a version of the empirical process which is not only distribution free, but will actually converge to a standard Brownian motion. In addition, no matter what the hypothesis is and what estimator is used, this process leads to tests with better power.

http://www.mcs.vuw.ac.nz/~estate/

Normal-Laplace Distributions and their Applications
Speaker: Professor Bill Reed
Affiliation: University of Victoria, Canada
When: Thursday, 24 May 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

In this talk I will introduce the normal-Laplace (NL) and the generalized normal-Laplace (GNL) distributions and discuss some of their applications.
These include fitting size distributions, option pricing for financial assets, directional statistics and survival analysis. The four-parameter NL distribution provides a good model for size distributions. It can also be used to provide a flexible family of hazard rate functions (including a 'bath-tub' shaped hazard) for use in survival analysis. The five-parameter GNL distribution is used in the creation of a Lévy process (Brownian-Laplace motion) whose increments can exhibit skewness and excess kurtosis (as seen in empirical logarithmic returns on stocks and other financial assets). An option pricing formula for assets following Brownian-Laplace motion is derived. Finally, wrapped versions of both the NL and GNL distributions provide attractive parametric models for directional data; they can exhibit both skewness and kurtosis.

Does Globalization Affect Public Perceptions of 'Who in Power Can Make a Difference'? Evidence From 37 Countries, 1996-2005
Speaker: Professor Jack Vowles
Affiliation: Political Studies, University of Auckland
When: Thursday, 17 May 2007, 4:00 pm to 5:00 pm
Where: MLT2, Science Centre

Economic globalization is often said to promote policy convergence between political parties in government in democratic states, and thus substantially constrain voters' choice options. Using data from the Comparative Study of Electoral Systems (CSES) modules one and two, this paper tests whether and how cross-national differences in exposure to the international economy may influence the voter perceptions that are needed to underpin expectations of differences between alternative governments, one of the main preconditions for the effective practice of responsible party government. It identifies two dimensions of economic globalization, trade dependence and international financial integration, and uncovers evidence that international financial integration does indeed encourage pessimism about 'making a difference'.
http://www.arts.auckland.ac.nz/staff/index.cfm?S=STAFF_jvow002

Multivariate trimmed means based on depth functions
Speaker: Professor Jean-Claude Massé
Affiliation: Université Laval, Canada
When: Thursday, 10 May 2007, 3:00 pm to 4:00 pm
Where: Seminar Room 222, Science Centre

In univariate statistics, the trimmed mean has long been regarded as a robust and efficient alternative to the sample mean. A multivariate analogue calls for a notion of trimmed region around the center of the sample. Depth functions provide a convenient way of measuring the centrality of a point with respect to a multivariate probability distribution. Informally speaking, points with high depth are viewed as being close to the "center" of the distribution, while those with low depth are understood as belonging to the tails. Given a d-dimensional data set, a depth-based α-trimmed mean is thus defined by averaging the data points of depth ≥ α with respect to the empirical distribution. This talk will examine two types of multivariate trimmed means based on the Tukey depth function, focusing on relative efficiency with respect to the sample mean as well as robustness. The results provide convincing evidence that these nonparametric location statistics have highly desirable asymptotic behavior and finite-sample robustness.

http://archimede.mat.ulaval.ca/pages/jcmasse/

Coupling and Mixing Times in Markov chains
Speaker: Professor Jeff Hunter
Affiliation: Institute of Information and Mathematical Sciences, Massey University
When: Thursday, 26 April 2007, 4:00 pm to 5:00 pm
Where: Seminar Room 222, Science Centre

The time to stationarity in a Markov chain is an important concept, especially in the application of Markov chain Monte Carlo methods. The time to stationarity can be defined in a variety of ways. In this talk we explore two possibilities: the "time to mixing" (as given by the presenter in a paper on "Mixing times with applications to perturbed Markov chains" in Linear Algebra Appl.
417, 108-123 (2006)) and the "time to coupling". Both of these related concepts are explored, with derivations given for the expected time to mixing and the expected time to coupling in a general finite state space Markov chain. As well as deriving some general results, some special cases are explored in order to provide some general comparisons between the two expectations.

http://www.massey.ac.nz/~jhunter/

Bayesian Mixed Membership Models for Soft Clustering
Speaker: Professor Stephen Fienberg
Affiliation: Department of Statistics, Machine Learning Department, and Cylab, Carnegie Mellon University
When: Monday, 23 April 2007, 11:00 am to 12:00 pm
Where: Seminar Room 222, Science Centre

In many problem settings involving clustering and classification, units can conceivably belong to multiple groups. Bayesian mixed membership models provide a natural way to address such "soft" clustering and classification problems. These models typically rely on four levels of assumptions: population, subject, latent variable, and sampling scheme. Population-level assumptions describe a general structure of the population that is common to all subjects. Subject-level assumptions specify the distribution of observable responses given the population structure and individual membership scores. Membership scores are usually unknown and hence can also be viewed as latent variables, which can be treated as fixed or random in the model. Finally, the last level of assumptions specifies the number of distinct observed characteristics (attributes) and the number of replications for each characteristic. We describe four applications of mixed membership modeling: (i) disability indicators from the National Long Term Care Survey; (ii) abstracts and bibliographies of research reports in The Proceedings of the National Academy of Sciences; (iii) genetic SAGE libraries; and (iv) protein-protein interactions in yeast (this involves extensions that incorporate stochastic block-modeling).
Our methods include the computation of full posterior distributions as well as various forms of variational approximations. In the examples, we also discuss issues of model assessment and specification.

The Birthday Problem and DNA Profiles
Speaker: Professor Bruce Weir
Affiliation: Department of Biostatistics, University of Washington
When: Friday, 30 March 2007, 12:00 pm to 1:00 pm
Where: Seminar Room 222, Science Centre

Some critics of DNA profiling have claimed that there is a discrepancy between an estimated profile frequency of 1 in several billion for a particular DNA profile and the finding that two profiles match in a database of fewer than 100,000. At one level, the criticism can be dismissed by appeal to the birthday problem (there is over a 50% probability of a group of 23 people containing two people with the same birthday), but there are some more subtle issues that take into account dependencies among profiles. These dependencies may reflect shared family history or shared evolutionary history.

http://www.biostat.washington.edu/people/faculty.php?netid=bsweir

Using Stationary Queueing Models to Set Staffing Levels in Nonstationary Service Systems
Speaker: Dr Linda Green
Affiliation: Columbia Business School
When: Tuesday, 27 March 2007, 2:00 pm to 3:00 pm
Where: Seminar Room 222, Science Centre

A common feature of many service systems is that demand for service often varies greatly by time of day. In many cases, including telephone call centers, police patrol, and hospital emergency rooms, staffing levels are adjusted in an attempt to provide a uniform level of service at all times. Analyzing these systems is not straightforward, because standard queueing theory focuses on the long-run steady-state behavior of stationary models. In this talk, I'll discuss how stationary queueing models can be adapted for use in nonstationary environments so that time-dependent performance is captured and staffing requirements can be identified.
Specific applications to telephone call centers and hospital emergency rooms will be described.

http://www2.gsb.columbia.edu/divisions/dro/green.html

Haplotype inference using an empirical linkage disequilibrium model
Speaker: Dr Sharon Browning
Affiliation: University of Auckland
When: Thursday, 22 March 2007, 4:00 pm to 5:00 pm
Where: Computer Science Seminar Room 279, Science Centre

Each person has two copies of the 22 autosomal chromosomes. Genotypes assay the specific sequence on the two copies at positions that are known to exhibit variability in humans. Genotypes are unordered pairs of alleles (variants), so it is not possible to determine which alleles come from the same chromosome copy. However, alleles located at nearby positions on a chromosome tend to be correlated (this is known as linkage disequilibrium), so it is possible to statistically infer the sequence of alleles on each of the two chromosome copies. These sequences are known as haplotypes. I will review several existing methods for haplotype inference, and present a new method based on an empirical model for linkage disequilibrium. Simulation results will indicate the relative strengths of the different methods.

Inferences on estimating linkage disequilibrium effective population size
Speaker: James Russell
Affiliation: PhD candidate, Department of Statistics, University of Auckland
When: Thursday, 8 March 2007, 3:00 pm to 4:00 pm
Where: Seminar Room 222, Science Centre

Data on linkage disequilibrium generated by genetic drift of unlinked loci from reproduction in a finite population provide an estimate of effective population size, and of its inverse, identity by descent, for a population. Effective population size can be used to make inferences about the recent population dynamics of a population. This talk will outline the method, present simulation results assessing its statistical properties, and apply it to real rat populations from small islands around northern New Zealand.
Use of the method in conservation biology is considered, not just for threatened species, but also for invasive species, which create a paradox of invasion through being able to establish populations from only a small number of founders despite genetic bottlenecks. The work is part of a PhD thesis on the invasion ecology of rats on islands.

Equality constraints and exact approximations for Markov Decision Processes
Speaker: Adam Shwartz
Affiliation: Technion - Israel Institute of Technology
When: Tuesday, 13 February 2007, 11:00 am to 12:00 pm
Where: Computer Science Seminar Room 279, Science Centre

We consider Markov decision processes (MDPs) with finite state space and with the average cost criterion. A constrained MDP is one where the minimization of the cost is subject to a constraint. The constraint is in terms of another "average cost", which is required to stay below a given bound. While much is known in the case of inequality constraints (including sensitivity issues), these results typically fail in the case of equality. We propose a new method, and give some results for this case, when the state space is finite. Countable-state MDPs are often difficult to deal with, in terms of analysis, simulation, and use. In approximating an MDP we look for a finite approximating MDP so that, in some sense, the optimal cost and perhaps also the optimal policy for both models are "close". This is especially delicate if we try to approximate constrained MDPs, since conceptually a constraint is a hard-set value, and violating it even by a little may be unacceptable. We provide an "exact approximation" which has the following properties: the optimal costs of both models are actually equal, and moreover the optimal policies agree on the approximating MDP (that is, we take the same action if we are in the same state). This method extends to constrained MDPs and avoids the issue mentioned above.
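To fix ideas about the average cost criterion for finite MDPs, here is a minimal Python sketch of standard (unconstrained) relative value iteration on a hypothetical two-state machine-repair example. The example and all names in it are illustrative assumptions, not material from the talk:

```python
def relative_value_iteration(P, c, ref=0, iters=2000):
    """Relative value iteration for an average-cost finite MDP.

    P[s][a] is a dict {next_state: probability}, c[s][a] the one-step
    cost.  Returns (g, policy): the long-run average cost and a
    minimizing action per state (for unichain, aperiodic problems).
    """
    n = len(P)
    h = [0.0] * n
    for _ in range(iters):
        q = [[c[s][a] + sum(p * h[t] for t, p in P[s][a].items())
              for a in range(len(P[s]))] for s in range(n)]
        g = min(q[ref])                 # current average-cost estimate
        h = [min(qs) - g for qs in q]   # relative values, h[ref] = 0
    policy = [min(range(len(qs)), key=qs.__getitem__) for qs in q]
    return g, policy

# Toy machine-repair MDP: state 0 = "good" (one action: run, cost 0,
# breaks with prob 0.5); state 1 = "bad" (action 0: keep running,
# cost 2 per step; action 1: repair, cost 1, back to "good").
P = [[{0: 0.5, 1: 0.5}],        # good: run
     [{1: 1.0}, {0: 1.0}]]      # bad: run / repair
c = [[0.0], [2.0, 1.0]]

g, policy = relative_value_iteration(P, c)  # optimal average cost is 1/3
```

Repairing is optimal here: the induced chain spends 1/3 of its time in the "bad" state paying 1, giving average cost 1/3. A constrained MDP would additionally require a second average cost, computed the same way under the chosen policy, to satisfy a bound — the equality case is what the talk addresses.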
Contact Details
Phone: +64 9 373 7599 ext 86893 or 87510