Genomics, electronic health records, participant-provided data, sensors, and mobile health technologies can all contribute to personalized medicine. However, we currently cannot achieve statistical correlations among the almost unlimited number of parameters that will be collected by the PMI, nor the depth of mechanistic understanding that will be required for treatment stratification and the development of novel, targeted therapies. The promise of personalized medicine requires deep knowledge of the relationships between genotype, phenotype, and environmental variables - but we simply don’t have enough data. For example, in the ExAC database there are 3,230 genes with near-complete depletion of predicted protein-truncating variants, and 72% of these genes have no currently established human disease phenotype. If we look across organisms, we see that of these 2,311 genes with unknown causal phenotypes/diseases, 88% have an associated phenotype in an ortholog, and 56% have ortholog phenotypes in at least two species.
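The figures above fit together arithmetically; a minimal sketch (using only the counts and percentages quoted in the text, not a reanalysis of ExAC) makes the relationships explicit:

```python
# Sanity check of the ExAC-derived figures quoted above; counts come from the
# text, and the derived gene counts are illustrative of the stated percentages.
depleted_genes = 3230       # genes near-completely depleted of protein-truncating variants
no_human_phenotype = 2311   # of those, genes with no established human disease phenotype

frac_unknown = no_human_phenotype / depleted_genes
print(f"{frac_unknown:.0%}")  # -> 72%

# Of those 2,311 genes, the quoted cross-species ortholog coverage:
with_ortholog_phenotype = round(0.88 * no_human_phenotype)  # phenotype in >= 1 ortholog
in_two_species = round(0.56 * no_human_phenotype)           # phenotypes in >= 2 species
print(with_ortholog_phenotype, in_two_species)              # -> 2034 1294
```

In other words, roughly two thousand constrained genes with no known human disease association already have phenotype data in at least one model organism.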
To understand disease mechanisms much more deeply, we must identify the relevant effector genes, variants, protein interactions, metabolites, environmental factors, etc., such that diagnostics and treatment selection strategies can be designed. The only way to bridge this gap is to rely on experimental studies in genetic disease models. The 62K publications in PubMed (search “model organism”) and overwhelming support for model organism research are a testament to this perspective. However, there are enormous extant data sets that continue to accrue but that we have not yet made computationally accessible enough for the scale of the PMI. We must take a two-pronged approach: put existing knowledge to more effective use now, and create processes such that new knowledge can be applied more readily in the future.
Among the biggest roadblocks to making model organism data computable are the lack of interoperability between the ways in which we describe clinical phenotypes and phenotypes in model organisms, and the ways in which these data are shared. Informatics approaches, or “phenomics,” are pivotal to overcoming these issues and thereby enabling clinical use of animal model genotype-phenotype data. A computable representation of model organism phenotype data has already been shown to allow personalized diagnostics, and a number of publications (1), NIH-driven workshops (2), and RFIs (3) have aimed to inform such approaches. Mice and other model organisms have been invaluable in developing treatments and are increasingly being used to model disease subclasses associated with specific mutations in humans and to test novel, stratified therapies - as per the goal of the PMI.
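The cross-species matching that makes such data computable can be illustrated with a minimal sketch: patient and model phenotype profiles are represented as sets of ontology terms mapped into a common term space, and candidate models are ranked by profile similarity. All term IDs and gene names below are hypothetical placeholders, and plain Jaccard overlap stands in for the ontology-aware semantic similarity that production tools actually use:

```python
# Minimal sketch of cross-species phenotype matching (hypothetical term IDs;
# real systems use ontology-based semantic similarity, not plain set overlap).
def jaccard(a: set, b: set) -> float:
    """Set overlap between two phenotype profiles in a shared term space."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Patient profile: clinical phenotype terms already mapped to a common space.
patient = {"TERM:001", "TERM:002", "TERM:003"}

# Mouse model profiles, likewise mapped from mouse phenotype annotations.
mouse_models = {
    "GeneA_knockout": {"TERM:001", "TERM:002", "TERM:009"},
    "GeneB_knockout": {"TERM:007"},
}

# Rank candidate models by similarity to the patient's profile.
ranked = sorted(mouse_models.items(),
                key=lambda kv: jaccard(patient, kv[1]),
                reverse=True)
for name, profile in ranked:
    print(name, round(jaccard(patient, profile), 2))
```

The hard part, and the interoperability gap described above, is the mapping step: clinical and model organism phenotypes are recorded in different vocabularies, so they must first be reconciled into a shared representation before any such comparison is possible.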
Although it is possible to understand such studies without sophisticated cross-species analysis, as ever more data accumulate it will be essential to have sophisticated data models for humans that by design are compatible with the integration of phenotypic and treatment-related data across the spectrum of biology. What is needed now are advancements in the computable representation of phenotype data across species, annotation of existing and future data in a computable fashion, algorithms to relate model data to humans, and tools to make publication of computable phenotypic data easier, more accessible, and of higher quality.
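"Annotation in a computable fashion" amounts to publishing genotype-phenotype statements as structured records rather than free text. A minimal sketch of such a record follows; the field names and all identifier values are illustrative placeholders, not drawn from any specific standard:

```python
# Hypothetical minimal schema for one computable genotype-phenotype annotation.
# Every field name and identifier here is a placeholder for illustration only.
import json

annotation = {
    "subject": {
        "gene": "GeneA",                 # placeholder gene symbol
        "genotype": "GeneA<null/null>",  # placeholder allele description
        "taxon": "TAXON:0000000",        # placeholder species identifier
    },
    "phenotype": "PHENO:0000000",        # placeholder phenotype ontology term
    "evidence": "EVID:0000000",          # placeholder evidence code
    "source": "PMID:00000000",           # placeholder literature citation
}
print(json.dumps(annotation, indent=2))
```

Because each assertion carries its subject, phenotype term, evidence, and provenance as discrete identifiers, records like this can be aggregated and queried across species without re-reading the underlying publications.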
We are only beginning to realize the vision of the PMI, and it can be fulfilled only if we effectively utilize all the biological knowledge we have at hand. For that, we require improvements to phenomics - similar to what has previously been done for genomics - to leverage model organism data to fill gaps that we will never fill via human data alone.