Faculty Summaries
Karthik Devarajan, PhD
Associate Member & Assistant Professor
Office Phone: 215-728-2794
Fax: 215-728-2553
Office: R383
  • 1. Unsupervised Dimension Reduction

    We have developed unsupervised dimension reduction methods based on nonnegative matrix factorization (NMF) for model-based clustering of gene expression data and for text mining applications in biomedical informatics. An important, but often ignored, aspect of high-throughput genomic data is its heteroscedasticity, that is, the signal-dependent nature of the noise in the measurements. We have developed information-theoretic methods that extract relevant components from large-scale biological data by accounting for this signal-dependent noise. In addition, we have developed computational tools for dimension reduction and visualization using NMF that are freely available to the academic research community; these include hpcNMF, a C++ package designed for high-performance computing clusters, and the R package GNMF. Furthermore, by extending NMF using the theory of generalized linear models, we are developing methods that provide a unified framework for modeling and analyzing data measured on different scales (Devarajan et al., 2015; Cheung et al., 2015; Devarajan & Cheung, 2014; Devarajan, 2008; Devarajan & Ebrahimi, 2008).
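
    As a concrete illustration of the basic technique (a generic scikit-learn sketch, not the hpcNMF or GNMF implementations described above), the following minimal Python example factors a nonnegative expression-like matrix under a Kullback-Leibler cost, one member of the family of divergences suited to signal-dependent noise, and clusters samples by their dominant metagene. All data and parameters here are illustrative assumptions.

        # Minimal NMF-based unsupervised dimension reduction (illustrative only;
        # this generic scikit-learn sketch is NOT the hpcNMF/GNMF software above).
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        # Synthetic nonnegative "genes x samples" count matrix with
        # signal-dependent (Poisson-like) noise.
        X = rng.poisson(lam=5.0, size=(500, 40)).astype(float)

        # Factor X ~ W @ H with k metagenes; the KL-divergence cost is one of
        # the signal-dependent-noise divergences generalized in this work.
        model = NMF(n_components=3, init="nndsvda", beta_loss="kullback-leibler",
                    solver="mu", max_iter=500, random_state=0)
        W = model.fit_transform(X)   # genes x k: metagene expression patterns
        H = model.components_        # k x samples: sample loadings

        # Model-based clustering step: assign each sample to its dominant metagene.
        clusters = H.argmax(axis=0)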

  • 2. Supervised and Semi-Supervised Dimension Reduction

    In studies where prior knowledge of phenotype is available, the focus is on correlating an outcome variable of interest with covariates. For example, when information on an outcome such as time to an event (or survival time) is available, one goal of an investigator is to understand how the expression levels of genes, together with clinical and demographic variables (covariates), relate to an individual's survival over the course of a disease. The analysis of time-to-event data, generally called survival analysis, arises in many fields of study including biology, medicine, public health, engineering, and economics, and its role and significance in cancer research cannot be overstated.

    The Cox proportional hazards (PH) model is the most celebrated and widely used statistical model linking survival time to covariates. It is a multiplicative hazards model that implies constant relative risk: it postulates that the risk (or hazard) of death of an individual, given their covariate measurements, is simply proportional to their baseline risk in the absence of any covariates. The model assumes that the hazard and survival curves corresponding to two different covariate values do not cross. While this model has proved very useful in practice due to its simplicity and interpretability, the assumption of constant relative risk has been shown to be invalid in a variety of medical studies. For example, non-proportional hazards are typical when a treatment effect increases or decreases over time, leading to converging or diverging hazards. This situation cannot be handled by the Cox PH model, and more general models that accommodate non-proportionality of hazards are required. To this end, we have developed a class of non-proportional hazards models that embeds the Cox PH model as a special case, together with a theoretical and computational framework for estimation under this generalized model that allows us to rigorously test the assumption of proportional hazards while accounting for varying trends in relative risk over time. Furthermore, we have developed information-theoretic methods to test the effect of an individual covariate, or a group of covariates, in the PH model.

    Our preliminary work involved the development of a model for predicting patient survival by extracting components of gene expression that were strongly correlated with it. In this high-dimensional setting, it is unreasonable to expect the expression levels of many thousands of genes to exhibit proportionality of hazards. Our current research interests in this area include the systematic comparison of several well-known models for correlating gene expression with patient survival, and the identification of genes that exhibit a time-varying effect, using publicly available data from repositories such as the Gene Expression Omnibus and The Cancer Genome Atlas. Indeed, our recent investigations involving the re-analysis or meta-analysis of existing gene expression data sets have revealed such a time-varying trend for several genes implicated in kidney cancer (Devarajan & Ebrahimi, 2009, 2011, 2013; Peri et al., 2013; Devarajan et al., 2010).
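
    In standard notation (a textbook formulation, not the specific parameterization of the cited papers), the contrast between the Cox PH model and a non-proportional extension can be written as:

        % Cox PH model: the hazard at time t for covariate vector x is the
        % baseline hazard scaled by a time-constant relative risk.
        h(t \mid x) = h_0(t)\, \exp(\beta^{\top} x)

        % A non-proportional generalization lets the log-relative risk vary
        % with time; the Cox PH model is recovered when \beta(t) is constant.
        h(t \mid x) = h_0(t)\, \exp(\beta(t)^{\top} x)

    Testing whether \beta(t) is constant over time is one way to formalize a rigorous test of the PH assumption, in the spirit of the framework described above.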

  • 3. Assessment of Technical Reproducibility and Outlier Detection in Large-Scale Biological Data

    A problem that arises frequently in high-throughput studies is the assessment of the technical reproducibility of data obtained under homogeneous experimental conditions. This is an important problem given the exponential growth in the number of high-throughput technologies that have become available to researchers in the past decade. Although methods for determining the quality of microarray data have existed for many years, these methods do not necessarily translate to data obtained from other technologies. Moreover, they are typically graphical in nature and do not employ rigorous statistical procedures. There is an inherent need for quantitative evaluation of the reproducibility of technical replicates obtained using novel approaches such as next-generation sequencing, high-throughput compound and siRNA screening, and SNP arrays. To this end, we have developed model-based methods that account for the technical variability and potential asymmetry that arise naturally in replicate data. This data-driven approach borrows strength from the large volume of data available in these studies and can be used to assess technical reproducibility independent of the technology used to generate the data (Anastassiadis et al., 2011, 2013).
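
    As a simple quantitative point of comparison (a generic textbook measure, not the model-based asymmetric-replicate method described above), the sketch below computes Lin's concordance correlation coefficient between two simulated technical replicates whose noise grows with signal intensity; the simulation parameters are illustrative assumptions.

        # Agreement between two technical replicates via Lin's concordance
        # correlation coefficient (CCC). Illustrative only: this is a generic
        # measure, not the model-based method of Anastassiadis et al.
        import numpy as np

        def lins_ccc(x, y):
            """Lin's CCC: 1 = perfect reproducibility, 0 = no agreement."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            mx, my = x.mean(), y.mean()
            cov = ((x - mx) * (y - my)).mean()
            return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

        rng = np.random.default_rng(1)
        signal = rng.gamma(shape=2.0, scale=100.0, size=10_000)
        # Signal-dependent noise: replicate spread grows with intensity.
        rep1 = signal + rng.normal(0.0, 0.05 * signal)
        rep2 = signal + rng.normal(0.0, 0.05 * signal)
        print(f"CCC = {lins_ccc(rep1, rep2):.3f}")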

  • 4. Biomarker Discovery

    Another area of active interest is the development of statistical models that detect the presence of cancer in a cohort of patients based on biomarker measurements and clinical variables. To this end, we have systematically compared the performance of various methods and developed algorithms that are better able to detect hepatocellular carcinoma against a background of cirrhosis using levels of established serum biomarkers and other relevant clinical characteristics of the patient. One of our proposed algorithms has been independently validated by the National Cancer Institute and by several institutions across the nation that are members of the Early Detection Research Network, and it provides a significant improvement in prediction accuracy of up to 10% (Wang et al., 2012, 2013; Comunale et al., 2013).
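
    Schematically, the underlying prediction task can be framed as a binary classification problem scored by its ability to discriminate cancer from cirrhosis; the minimal sketch below fits a logistic regression on synthetic placeholder features and reports a held-out ROC AUC. This is a generic illustration of the task, not the validated algorithm of Wang et al.

        # Schematic HCC-vs-cirrhosis classification from biomarker/clinical
        # features (synthetic placeholders; NOT the validated algorithm above).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        n = 400
        y = rng.integers(0, 2, size=n)                  # 1 = HCC, 0 = cirrhosis only
        X = rng.normal(size=(n, 4)) + 0.8 * y[:, None]  # 4 placeholder features

        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=2)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"held-out ROC AUC = {auc:.3f}")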