Seminar

Normalization

Thursday, 26 June 2014
Time: 9.30am-10.30am
Seminar Rm 2, Level 5, The Alfred Centre
99 Commercial Rd
Prahran
Australia

Normalization is a term that has come to describe a range of adjustments made to data before conventional statistical analyses are carried out. Usually it is not model-based. It first came to my attention with microarray data, but it is now used for a wide range of "omic" data (genomic, proteomic, metabolomic, ...) and beyond. In this talk I'll describe some of my experiences with this notion. Along the way, I'll explain why and how people normalize data, and how doing so affects their ultimate results. My main story will be drawn from microarray gene expression data, but I'll also mention RNA-seq data. My view now is that normalization can be model-based.
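[As a concrete illustration of the kind of adjustment the abstract refers to, here is a minimal sketch of quantile normalization, one widely used, non-model-based normalization method for microarray expression data. This example is not taken from the talk; the function name and toy data are hypothetical.]

import numpy as np

def quantile_normalize(x):
    """Quantile-normalize a genes-by-arrays expression matrix.

    Each column (one array/sample) is forced to share the same empirical
    distribution: the k-th smallest value in every column is replaced by
    the mean of the k-th smallest values across all columns.
    """
    x = np.asarray(x, dtype=float)
    # Rank of each value within its own column (ties broken arbitrarily).
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    # Mean of the k-th smallest values across columns, for each rank k.
    mean_by_rank = np.sort(x, axis=0).mean(axis=1)
    # Map every value back to the mean value for its rank.
    return mean_by_rank[ranks]

# Hypothetical toy data: 5 genes (rows) x 3 arrays (columns).
expr = np.array([[5.0, 4.0, 3.0],
                 [2.0, 1.0, 4.0],
                 [3.0, 4.0, 6.0],
                 [4.0, 2.0, 8.0],
                 [1.0, 3.0, 5.0]])
print(quantile_normalize(expr))  # every column now has identical sorted values

After this adjustment, every array shares the same empirical distribution, removing array-to-array technical variation before downstream analysis; it is an example of the non-model-based adjustments the abstract describes.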

Prof. Terry Speed

Division of Bioinformatics
Walter & Eliza Hall Institute

Terry Speed completed a BSc (Hons) in mathematics and statistics at the University of Melbourne (1965) and a PhD in mathematics at Monash University (1969). He held appointments at the University of Sheffield, U.K. (1969-73) and the University of Western Australia in Perth (1974-82), and was with Australia's CSIRO between 1983 and 1987. In 1987 he moved to the Department of Statistics at the University of California at Berkeley (UCB), where he has remained ever since. In 1997 he took an appointment with the Walter & Eliza Hall Institute of Medical Research (WEHI) in Melbourne, Australia, and split his time 50:50 between UCB and WEHI until 2009, when he became emeritus professor at UCB and full-time at WEHI, where he heads the Bioinformatics Division. His research interests lie in the application of statistics to genetics and genomics, and to related fields such as proteomics, metabolomics and epigenomics.