NLM Overview

The NLM Training Program addresses a central challenge facing biomedical informatics: the digital information streaming from innumerable sensors, instruments, and simulations is outrunning our capacity to accumulate, organize, and analyze it for making healthcare decisions. The program prepares trainees to:

  1. make sense of datasets that may be massive, heterogeneous, deficient, or error-ridden;
  2. cull from them important insights into fundamental problems of biomedicine; and
  3. convey information in ways readily understood by researchers and clinicians.

To do this, our training program focuses on what is loosely known as “big data” – that is, data-driven discovery and decision-making tools that use computer programs to seek associations in databases whose complexity hides such relations from even expert humans, and to make the discovered associations and patterns intelligible to people. We have designed a curriculum that develops the core competencies for biomedical informaticians defined in the report by the American Medical Informatics Association (AMIA). The curriculum of foundation courses and electives is customized to meet each trainee’s research interests, previous coursework, and knowledge gaps.

Our training program supports training and research primarily in three domains of informatics as defined by the NLM:

  1. healthcare/clinical informatics: applications of informatics principles and methods to direct patient care; examples include advanced clinical decision support systems, or multimedia electronic health records.
  2. translational bioinformatics: applications of informatics principles and methods to support “bench to bedside to practice” translational research; examples include genome-phenome relationships, pharmacogenomics, personalized medicine, or genome-wide association studies (GWAS).
  3. clinical research informatics: applications of informatics principles and methods to support clinical trials and comparative effectiveness research that use human rather than animal models; examples include biostatistics, in-silico trials, or merging and mining large disparate data sets that mix images, text, and data.