Protein biomarker discovery

May 23, 2017

Diagnostic testing plays a vital role in modern medicine, helping clinicians make informed decisions regarding disease identification and treatment. The majority of routine chemistry tests are currently based on spectrophotometric or immunologic analysis.1 There has been a growing realization, however, that effective diagnostic assays will require screening for multiple (rather than individual) markers through in vitro diagnostic multivariate index assays (IVDMIAs), and that the inherent ability of mass spectrometry (MS) to multiplex analytes efficiently and precisely makes it well suited to the task. This has put MS-based protein biomarker discovery at the forefront of molecular diagnostics research.

And much progress has been made over the past two decades. The latest advances in proteomics technologies, from improved MS instrumentation and tandem mass tag reagents to powerful bioinformatics software, spectral libraries, and peptide databases, are creating new opportunities for the development of protein biomarkers for disease diagnosis, prognosis, and prediction of response to therapeutic treatment.

However, despite the large number of candidate protein biomarkers reported, there is a well-documented shortfall between the number of candidate biomarkers identified and those cleared or approved by the FDA for clinical use.1-5 Here, we consider whether the use of robust quality control (QC) measures and robust experimental design can help bridge this gap and accelerate the translation of biomarkers from bench to bedside.

Protein biomarkers

Proteins are particularly useful as biomarkers because they are often the effectors of disease and the targets of therapeutic treatments. Using panels of protein biomarkers, healthcare experts can perform accurate disease diagnosis through convenient, non-invasive testing. Such screening enables early disease diagnosis in donor samples from individuals who otherwise present no unusual symptoms.

But protein biomarkers offer more than just early disease diagnosis; they also present significant opportunities in terms of personalized medicine. In the treatment of cancer, for instance, protein biomarkers are now being used to guide treatment choices. The detection of proteins associated with tumor drug resistance or sensitivity toward chemotherapy, hormone therapy, or immunotherapy is already being used to predict the type of treatment that may be most effective. Used in combination with genome and transcriptome sequencing, targeted proteomics can help healthcare experts deliver more effective treatment tailored to an individual’s condition.

Bridging the gap

The past two decades have witnessed significant advances in the proteomics technologies used to identify new protein biomarkers.1 Research has resulted in the identification of many thousands of candidate protein biomarkers.6,7 However, relatively few of these candidates have successfully translated into FDA-approved clinical diagnostic tests. Too often, biomarkers identified in initial discovery studies have not shown reproducible performance during subsequent validation.

A key consideration when developing clinical diagnostics is defining the clinical intended use.2 For example, when developing cancer protein biomarkers, it is important to establish whether they will be used for screening, diagnosis, or prognosis. The intended use will determine the target population used to progress the biomarkers from the discovery stage to approval, and will have a significant impact on the overall clinical performance of the diagnostic test.

Careful study design is also essential to reduce the potential for systematic bias and ensure that conclusions are meaningful. Some of the most common sources of bias involve systematic differences in subject selection or specimen collection between disease and control groups. This bias can be minimized by adopting uniform collection protocols with accurate knowledge of donor history and, for example, ensuring that disease samples are not all drawn from one institution while healthy samples come from another.8

It is also important to ensure that proteomics experiments at the discovery stage are designed appropriately to reduce analytical variability, which can increase the likelihood of false positives when identifying biomarkers. Factors associated with poor sample handling, such as storage duration and temperature, can impact reproducibility in proteomics investigations.9,10
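One practical design measure is to randomize or block-randomize the instrument acquisition order so that disease and control samples are interleaved throughout the run rather than acquired in separate batches, keeping instrument drift from being confounded with disease status. The Python sketch below is a minimal illustration of this idea; the sample identifiers, group labels, and block size are hypothetical, and a real study would typically also balance collection site and preparation batch.

```python
import random

def block_randomized_order(samples, block_size=10, seed=42):
    """Interleave disease and control samples in blocks so that neither
    group is concentrated at the start or end of a long acquisition run.

    `samples` is a list of (sample_id, group) tuples; samples are shuffled
    within each block so run order (and instrument drift) is not
    confounded with disease status.
    """
    rng = random.Random(seed)
    disease = [s for s in samples if s[1] == "disease"]
    control = [s for s in samples if s[1] == "control"]
    rng.shuffle(disease)
    rng.shuffle(control)

    order = []
    while disease or control:
        # Draw roughly equal numbers from each group for the next block,
        # then shuffle the block so the within-block order is random too.
        block = disease[:block_size // 2] + control[:block_size // 2]
        disease = disease[block_size // 2:]
        control = control[block_size // 2:]
        rng.shuffle(block)
        order.extend(block)
    return order

# Hypothetical example: 20 disease and 20 control donor samples.
samples = [(f"D{i:02d}", "disease") for i in range(20)] + \
          [(f"C{i:02d}", "control") for i in range(20)]
for position, (sample_id, group) in enumerate(block_randomized_order(samples), start=1):
    print(position, sample_id, group)
```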

Challenges associated with proteomics

One way to improve confidence in the clinical viability of candidate biomarkers and increase statistical power is to use larger study populations.6 And, thanks to advances in the analytical performance of MS instrumentation and ultra-high performance liquid chromatography (UHPLC) technologies, as well as rapid improvements in the capability of bioinformatics software, proteomics investigations can now be performed on an unprecedented scale, with exceptional analytical precision and workflow robustness.

Though the ability to study thousands of donor samples in a single proteomics study offers significant advantages in terms of investigational capability, it also presents a number of challenges around the design of analytical workflows. While the acquisition time required for initial discovery-stage proteomics investigations may be a matter of days, for large-scale studies involving several hundred or even thousands of donor samples it can stretch to several weeks. With a longer analytical run comes a greater likelihood of workflow disruption, a factor that is often underestimated or overlooked entirely.

For large-scale studies, it is therefore increasingly important to understand how factors such as the gradual decline in chromatographic performance caused by impurities in biological samples, and the need to recalibrate MS instruments mid-investigation, affect the reliability of experimental data over the duration of the study. Even with the most rugged and robust MS and UHPLC technologies, analytical workflows will need to be interrupted for recalibration and closely monitored for performance issues.
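As a simple illustration of the kind of monitoring this implies, the sketch below tracks the retention time of a single reference peptide across periodic system-suitability injections and flags when it drifts beyond a tolerance, signaling that recalibration or column maintenance may be needed. The retention times and the half-minute tolerance are hypothetical values chosen for illustration.

```python
from statistics import median

def flag_rt_drift(retention_times, tolerance_min=0.5):
    """Flag injections whose retention time for a reference peptide
    deviates from the early-run baseline (median of the first five
    observations) by more than `tolerance_min` minutes.

    `retention_times` maps injection number -> observed retention time.
    Returns a list of (injection, retention_time, deviation) tuples.
    """
    baseline = median(list(retention_times.values())[:5])
    flagged = []
    for injection, rt in sorted(retention_times.items()):
        deviation = rt - baseline
        if abs(deviation) > tolerance_min:
            flagged.append((injection, rt, round(deviation, 2)))
    return flagged

# Hypothetical retention times (minutes) for one reference peptide
# measured in periodic system-suitability injections.
observed = {1: 42.10, 25: 42.14, 50: 42.21, 75: 42.38, 100: 42.71, 125: 43.05}
for injection, rt, dev in flag_rt_drift(observed):
    print(f"Injection {injection}: RT {rt} min drifted by {dev} min - consider recalibration")
```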

To draw meaningful conclusions from large-scale proteomics studies, it is therefore essential to deploy robust QC and system suitability strategies to separate biological differences from analytical variance.

Analytical vs. biological variance

Establishing statistical significance in proteomics studies requires hard metrics based on robust analytical controls. There are two approaches that are often adopted for the assessment of analytical variance in these types of studies.

The first approach involves spiking donor samples with one or more non-human proteins that act as internal controls. Proteins such as alcohol dehydrogenase from yeast or beta-galactosidase from E. coli are often used for this purpose, although a wide variety of others are also available. To ensure that these controls are representative of the entire sample preparation and analysis workflow, they should undergo the same protocols as the donor samples, including sample handling, digestion, and recovery. Control proteins should also be chosen to avoid overlap with endogenous peptides where possible.
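As an illustration of how such spiked controls can be turned into a per-sample QC check, the minimal sketch below compares the summed intensity of the control peptides in each donor run against the study-wide median and flags runs that fall outside an arbitrary two-fold window, which would suggest a problem with digestion, recovery, or injection for that sample. The intensity values and the acceptance window are hypothetical.

```python
from statistics import median

def flag_spike_in_outliers(run_intensities, fold_tolerance=2.0):
    """Compare the summed intensity of spiked control peptides (e.g. a
    yeast ADH digest) in each run against the study-wide median. Runs
    whose control signal differs by more than `fold_tolerance`-fold are
    returned for review.

    `run_intensities` maps run/sample id -> summed control peptide intensity.
    """
    overall = median(run_intensities.values())
    outliers = {}
    for run_id, intensity in run_intensities.items():
        ratio = intensity / overall
        if ratio > fold_tolerance or ratio < 1.0 / fold_tolerance:
            outliers[run_id] = round(ratio, 2)
    return outliers

# Hypothetical summed intensities for spiked control peptides across donor runs.
runs = {"D01": 1.02e8, "D02": 0.95e8, "D03": 1.10e8, "D04": 0.31e8, "D05": 1.05e8}
print(flag_spike_in_outliers(runs))   # e.g. {'D04': 0.3} -> flag for review
```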

A parallel approach involves creating a pooled sample containing aliquots of all the donor samples and analyzing it at regular intervals throughout the analytical run. This global QC sample would be injected every five to ten injections, providing a real-time system suitability check for the full set of donor samples. Because each replicate is the same sample, with exactly the same molecular complexity, analyzed via the same method, it can be used to determine the analytical variance of the method.
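Scheduling these pooled-QC injections is straightforward to encode in an instrument run list. The short sketch below, using hypothetical sample names and an arbitrary interval of eight donor injections, simply interleaves the global QC sample at regular points and brackets the sequence with QC injections at the start and end.

```python
def build_run_list(donor_samples, qc_every=8, qc_name="POOLED_QC"):
    """Interleave a pooled QC injection after every `qc_every` donor
    injections, and at the start and end of the sequence, so that system
    suitability is checked at regular intervals across the whole run."""
    run_list = [qc_name]
    for index, sample in enumerate(donor_samples, start=1):
        run_list.append(sample)
        if index % qc_every == 0:
            run_list.append(qc_name)
    if run_list[-1] != qc_name:
        run_list.append(qc_name)
    return run_list

# Hypothetical 20-sample sequence with a pooled QC every 8 donor injections.
print(build_run_list([f"S{i:02d}" for i in range(1, 21)]))
```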

While the spiked QC protein provides a system suitability check for each individual donor sample, the pooled sample provides a global QC metric at well-defined time points across the overall acquisition workflow. Used in combination, these QC measures make it possible to estimate how much of the experimental variance is analytical, and therefore to assess the statistical significance of the results with confidence.
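One simple way to express this combined assessment, assuming the same peptide has been quantified in both the pooled-QC replicates and the donor samples, is to compare the coefficient of variation (CV) across pooled-QC injections (analytical variance) with the CV across donor samples (total variance); a candidate whose total variation barely exceeds the analytical variation is unlikely to survive validation. The sketch below illustrates this with hypothetical intensities and an arbitrary three-fold threshold.

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation (relative standard deviation), in percent."""
    return 100.0 * stdev(values) / mean(values)

def assess_peptide(qc_intensities, donor_intensities, min_ratio=3.0):
    """Compare analytical variance (CV across pooled-QC replicates) with
    total variance (CV across donor samples) for one quantified peptide.
    If the donor CV is not substantially larger than the QC CV, the
    apparent biological signal may be mostly analytical noise."""
    analytical_cv = cv_percent(qc_intensities)
    total_cv = cv_percent(donor_intensities)
    reliable = total_cv >= min_ratio * analytical_cv
    return {"analytical_cv_%": round(analytical_cv, 1),
            "total_cv_%": round(total_cv, 1),
            "exceeds_analytical_noise": reliable}

# Hypothetical intensities for one candidate biomarker peptide.
qc_runs = [1.00e7, 1.04e7, 0.97e7, 1.02e7, 0.99e7]          # pooled QC replicates
donors = [0.6e7, 1.9e7, 1.1e7, 2.4e7, 0.8e7, 1.6e7, 2.1e7]  # individual donors
print(assess_peptide(qc_runs, donors))
```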

Many software packages exist that can help automate this process and simplify the required statistical analysis. And as the use of certain controls becomes established, it is likely that software packages will incorporate these more commonly used protocols as default options, further simplifying analysis.

The broader use of biomarker panels in clinical diagnostics promises more personalized and effective treatment of diseases. However, the translational gap that exists between proteomic biomarker discovery and validation is a significant one that must be closed if we are to accelerate the progression of sensitive and selective biomarkers from the research laboratory into the clinic.

The latest comprehensive proteome profiling approaches are helping to meet this challenge by facilitating biomarker discovery on an unprecedented scale.

REFERENCES

  1. Crutchfield CA, Thomas SN, Sokoll LJ, Chan DW. Advances in mass spectrometry-based clinical biomarker discovery. Clin Proteomics. 2016;13:1. DOI: 10.1186/s12014-015-9102-9.
  2. Li D, Chan DW. Proteomic cancer biomarkers from discovery to approval: it’s worth the effort. Expert Rev Proteomics. 2014;11(2):135-136. DOI: 10.1586/14789450.2014.897614.
  3. Goossens N, Nakagawa S, Sun X, Hoshida Y. Cancer biomarker discovery and validation. Transl Cancer Res. 2015;4(3):256-269. DOI: 10.3978/j.issn.2218-676X.2015.06.04.
  4. Füzéry AK, Levin J, Chan MM, Chan DW. Translation of proteomic biomarkers into FDA approved cancer diagnostics: issues and challenges. Clin Proteomics. 2013;10(1):13. DOI: 10.1186/1559-0275-10-13.
  5. Diamandis EP. The failure of protein cancer biomarkers to reach the clinic: why, and what can be done to address the problem? BMC Med. 2012;10:87. DOI: 10.1186/1741-7015-10-87.
  6. Hernández B, Parnell A, Pennington SR. Why have so few proteomic biomarkers “survived” validation? (sample size and independent validation considerations). Proteomics. 2014;14(13-14):1587-1592. DOI: 10.1002/pmic.201300377.
  7. Paulovich AG, Whiteaker JR, Hoofnagle AN, Wang P. The interface between biomarker discovery and clinical validation: The tar pit of the protein biomarker pipeline. Proteomics Clin Appl. 2008;2(10-11):1386-1402. DOI: 10.1002/prca.200780174.
  8. Petri AL, Høgdall C, Marchiori E, et al. Sample handling for mass spectrometric proteomic investigations of human urine. Proteomics Clin Appl. 2005;77(16):5114–5123. DOI: 10.1002/prca.200780010.
  9. Hu J, Coombes KR, Morris JS, Baggerly KA. The importance of experimental design in proteomic mass spectrometry experiments: Some cautionary tales. Brief Funct Genomics Proteomics. 2005;3(4):322-331.
  10. Pepe MS, Feng Z, et al. Pivotal evaluation of the accuracy of a biomarker used for classification or prediction: standards for study design. J Natl Cancer Inst. 2008;100(20):1432-1438. DOI: 10.1093/jnci/djn326.

Scott Peterman, PhD, serves as marketing manager and senior scientist at the BRIMS (Biomarkers Research Initiatives in Mass Spectrometry) Center, Thermo Fisher Scientific. He has been involved in targeted protein quantitation research since 2007.

Lisa Thomas, BS, MBA, serves as senior director of marketing for the clinical and forensic markets in the chromatography and mass spectrometry division of Thermo Fisher Scientific. She has developed and marketed scientific solutions, medical devices, software, and professional services.