Defining success early when transitioning biomarkers from research to the clinic

Nov. 19, 2020

Within the field of biomarker research, interest has grown in identifying early indicators of prodromal disease that could support preventive intervention, along with the development of multiplex biomarker panels that capture disease complexity better than single biomarkers can. These efforts have the potential to transform many therapeutic areas and have been made possible by the emergence of powerful technologies, such as advanced mass spectrometry and whole-genome sequencing, for the discovery of protein, metabolite, and genomic biomarkers.

Biomarkers can improve health outcomes by enabling better treatment management, earlier disease detection, and improved monitoring of drug efficacy and toxicity in clinical trials.1 However, the transition from biomarker discovery in a research laboratory to real-world application in the clinic is currently a challenging one. Substantial method redevelopment is often required to bridge the gap, because the demands and priorities of research and clinical laboratories differ markedly. In this article, we explore best practices for developing a streamlined proteomics workflow for large-scale biomarker detection and explain how adopting standardized protocols and automation can help smooth the transition from the research laboratory to the clinic.

Define success with quantitative data

Having confidence in the accuracy of biomarker measurements is paramount, as the results are used to guide important decisions. For example, biomarker data may influence the management of an individual’s healthcare, or they may be used to assess the safety of a new therapy in a clinical trial. High-quality data are needed to support these endeavors – and, in some cases, future ones, as the same data may later be interrogated in the context of additional information.

The key to building confidence, in the case of clinical tests, lies in defining and limiting variability within the workflow. Although it is common practice, reporting a single, cumulative measurement of error is not sufficient. To truly assess analytical errors, we need to take a closer look at the individual components within a protocol – from sample preparation to liquid chromatography to mass spectrometry, for example. Without a detailed breakdown of the sources of error, confidently comparing data generated across different laboratories is difficult. In this context, a widely circulated quote comes to mind: “if you can’t measure it, you can’t improve it.” That is to say, if sources of error aren’t identified within a workflow, they are unlikely to be addressed.
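To make this concrete, the sketch below shows one simple way to break a cumulative error figure into per-step contributions. It assumes the error sources are independent, so that component coefficients of variation (CVs) combine in quadrature; the step names and CV values are purely illustrative, not measured data.

```python
import math

# Hypothetical per-step coefficients of variation (CV, as fractions) obtained
# from replicate experiments at each stage of the workflow. The step names and
# values are illustrative placeholders.
step_cvs = {
    "sample_preparation": 0.08,
    "liquid_chromatography": 0.05,
    "mass_spectrometry": 0.04,
}

# Assuming the error sources are independent, their variances add, so the
# combined CV is the square root of the sum of the squared component CVs.
total_cv = math.sqrt(sum(cv ** 2 for cv in step_cvs.values()))

for step, cv in step_cvs.items():
    # The fraction of total variance contributed by each step shows where
    # method improvement would have the largest effect.
    share = cv ** 2 / total_cv ** 2
    print(f"{step}: CV = {cv:.1%}, share of total variance = {share:.0%}")

print(f"Combined workflow CV ~ {total_cv:.1%}")
```

Reporting the per-step shares alongside the combined figure makes it immediately clear which part of the workflow to target first.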

While one individual might be able to reliably produce consistent results day to day, it is easy to underestimate the extent of manual errors that can occur when a workflow is adopted and multiplied on a larger scale. Introducing quantitative assessments at every stage of the workflow allows laboratories in any country to reproduce the method and trust the results – a critical test of method quality that is not matched by publication in a peer-reviewed journal. Establishing specific, objective checkpoints enables a laboratory technician to have greater confidence throughout the process, allowing them to ensure the assay is robust before sharing biomarker results with the patient or clinician.
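As an illustration of what a specific, objective checkpoint might look like in practice, the sketch below encodes a pass/fail criterion that a technician (or a script) can apply before moving to the next stage. The metric names, values, and acceptance limits are hypothetical placeholders, not recommended criteria.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    """One objective pass/fail criterion applied at a defined workflow stage."""
    name: str
    lower: float  # lowest acceptable value
    upper: float  # highest acceptable value

    def evaluate(self, measured: float) -> bool:
        ok = self.lower <= measured <= self.upper
        status = "PASS" if ok else "FAIL - stop and investigate before continuing"
        print(f"{self.name}: {measured:g} (limits {self.lower}-{self.upper}) -> {status}")
        return ok

# Hypothetical checkpoints paired with measured values.
checks = [
    (Checkpoint("Protein yield after extraction (ug)", 50, 200), 130),
    (Checkpoint("Missed cleavages after digestion (%)", 0, 15), 22),
]

results = [checkpoint.evaluate(value) for checkpoint, value in checks]
assay_ready = all(results)  # only report biomarker results if every check passed
print("Assay ready:", assay_ready)
```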

Consider statistical power as early as possible

While the adoption of quantitative assessments helps generate accurate biomarker measurements, the data will be of no real value if the study is too small. To ensure results are meaningful, it is wise to consider statistical power early in the development of biomarker tests for clinical use – rather than as an afterthought. The statistical power of a study is sometimes referred to as sensitivity and is a measure of how likely the study is to distinguish a real effect from chance. If the sample size is too small, the statistical power will be low and the validity of test results will be compromised, as the probability of making a Type II error increases. In other words, a study with low statistical power has a higher chance of failing to detect a true difference between groups.
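The relationship between effect size, sample size, and power can be explored before any samples are collected. The sketch below uses statsmodels’ power calculations for an independent-samples t-test; the effect size, alpha, and power targets are illustrative defaults rather than recommendations.

```python
# Requires statsmodels (pip install statsmodels). All numbers are illustrative.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Standardized effect size (Cohen's d): difference in group means divided by
# the pooled standard deviation. 0.5 is a conventional "medium" effect.
effect_size = 0.5
alpha = 0.05          # accepted Type I error rate
target_power = 0.80   # i.e. a 20% risk of a Type II error (missing a real effect)

n_per_group = analysis.solve_power(effect_size=effect_size,
                                    alpha=alpha,
                                    power=target_power)
print(f"Samples needed per group: {math.ceil(n_per_group)}")

# Conversely, the power actually achieved if only 20 samples per group are used:
achieved = analysis.power(effect_size=effect_size, nobs1=20, alpha=alpha)
print(f"Power with n = 20 per group: {achieved:.2f}")
```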

Tests that seek to determine the mere presence or absence of a biomarker are few and far between; it is more common to compare the magnitude of a biomarker across groups in order to detect a difference in concentration above a certain threshold (e.g., a two- or three-fold change). Ensuring sample sizes reflect realistic expected differences improves the chances of detecting true biological differences; a larger sample size is needed to detect smaller differences in magnitude, particularly when assay repeatability and specificity are limited. Even at the early stages of research, it is important to consider the size of the cohort and the specific population in which the biomarker will be measured.
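Building on the previous sketch, an expected fold change and a combined (biological plus analytical) CV can be translated into a standardized effect size and, from there, into a required sample size. This assumes roughly log-normally distributed concentrations analyzed on the log scale; the fold changes and the 40% CV used below are illustrative.

```python
import math
from statsmodels.stats.power import TTestIndPower

def n_per_group_for_fold_change(fold_change: float, total_cv: float,
                                alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-group sample size to detect a given fold change.

    Assumes roughly log-normal concentrations analyzed on the log scale, where
    the standard deviation is about sqrt(ln(1 + CV^2)) and the group difference
    is ln(fold_change). total_cv should combine biological and analytical
    variability, expressed as a fraction.
    """
    sd_log = math.sqrt(math.log(1.0 + total_cv ** 2))
    effect_size = math.log(fold_change) / sd_log  # Cohen's d on the log scale
    n = TTestIndPower().solve_power(effect_size=effect_size, alpha=alpha, power=power)
    return math.ceil(n)

# Illustrative comparison: at a combined CV of 40%, a two-fold change needs far
# fewer samples than a 1.3-fold change.
for fc in (2.0, 1.3):
    print(f"{fc}-fold change, CV 40%: ~{n_per_group_for_fold_change(fc, 0.40)} per group")
```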

Streamline steps and remove room for interpretation

Although research and clinical laboratories share many common goals and values, their priorities differ. Maximizing the sensitivity of a biomarker test is a fundamental priority in the research laboratory; in clinical applications, reproducibility becomes more important, because results must be reproducible across laboratories for the test to be of benefit. Without an efficient and standardized protocol, reproducibility is impossible to achieve. On a larger scale, small inefficiencies are amplified, creating a high demand for time and labor. To remove this unnecessary bottleneck, protocols should be developed with clarity and efficiency in mind.

Removing ambiguous language allows the workflow to be more easily adopted in multiple laboratories; the method should be written in a way that allows any technician to follow the procedure and have confidence in their results. Instructions such as “shake gently” are common, yet they lack the detail needed to ensure that every technician executes the step in the same way. Without this clarity, operators are left to interpret the instructions for themselves, resorting to trial and error to find a procedure that works.
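One way to eliminate such ambiguity is to express each step as an explicit, machine-readable specification rather than free text. The sketch below is a hypothetical example of this idea; the field names, instrument, and values are placeholders rather than part of any particular platform or standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MixingStep:
    """An unambiguous replacement for an instruction like 'shake gently'."""
    instrument: str        # a named device, not "a shaker"
    speed_rpm: int         # an exact speed instead of "gently"
    duration_s: int        # an exact duration instead of "briefly"
    temperature_c: float   # controlled temperature, if relevant

    def as_instruction(self) -> str:
        return (f"Mix on {self.instrument} at {self.speed_rpm} rpm "
                f"for {self.duration_s} s at {self.temperature_c} C.")

step = MixingStep(instrument="orbital shaker", speed_rpm=300,
                  duration_s=60, temperature_c=22.0)
print(step.as_instruction())
```

Written this way, every technician (and every liquid-handling platform) executes the same step, and deviations can be detected and logged.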

Reduce variability in reagent source and dispensing

Even the clearest, most efficient method, performed to perfection, will be difficult to reproduce at scale if standardized reagents are not used. Reagents produced in-house are often not subject to rigorous quality control (QC) assessment, so there is no guarantee that they will be produced consistently enough to generate reproducible results; as such, they are not suited for use on a wider scale. Likewise, commercially produced reagents are not immune to variability, as not all companies have the same QC standards.2 Selecting reagents with minimal batch-to-batch variability is therefore paramount.
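A simple way to make that selection objective is to track lot-release QC measurements and compute batch-to-batch variability explicitly, as sketched below. The lot names, measured values, and the ±10% acceptance window are hypothetical.

```python
import statistics

# Hypothetical lot-release QC measurements (e.g., measured concentration as a
# percentage of nominal) for three reagent lots.
lot_qc = {
    "lot_A": [98.5, 101.2, 99.8],
    "lot_B": [97.9, 100.4, 98.8],
    "lot_C": [88.1, 90.3, 89.2],  # a drifting lot
}
nominal = 100.0
tolerance = 0.10  # accept lots whose mean is within 10% of nominal

lot_means = {lot: statistics.mean(values) for lot, values in lot_qc.items()}
between_lot_cv = statistics.stdev(lot_means.values()) / statistics.mean(lot_means.values())

for lot, mean in lot_means.items():
    accepted = abs(mean - nominal) / nominal <= tolerance
    print(f"{lot}: mean {mean:.1f} -> {'accept' if accepted else 'reject'}")

print(f"Batch-to-batch CV across lots: {between_lot_cv:.1%}")
```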

In addition to reagent reliability, another challenge lies in dispensing reagents accurately and consistently. Over time, biomarker tests implemented at scale may be executed by thousands of scientists across multiple laboratories. As a result, some variability in pipetting is inevitable – even if most laboratory technicians are highly accurate. Implementing liquid-handling robotic systems minimizes this variability and is often a necessity for laboratories faced with a high volume of samples.
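Whether dispensing is done by hand or by robot, its accuracy and precision can be verified with a simple gravimetric check: weigh replicate dispenses of water and compare them against the nominal volume. The sketch below illustrates the arithmetic; the weights and acceptance limits are invented for the example.

```python
import statistics

# Replicate weights (mg) from dispensing a nominal 50 uL of water, which weighs
# roughly 50 mg. The values and the acceptance limits below are illustrative.
weights_mg = [49.6, 50.3, 49.9, 50.1, 49.7, 50.2, 49.8, 50.0]
nominal_mg = 50.0

mean = statistics.mean(weights_mg)
bias_pct = (mean - nominal_mg) / nominal_mg * 100   # systematic error (accuracy)
cv_pct = statistics.stdev(weights_mg) / mean * 100  # random error (precision)

print(f"Mean dispensed: {mean:.2f} mg")
print(f"Accuracy (bias): {bias_pct:+.2f}%  (illustrative limit: within 1%)")
print(f"Precision (CV):  {cv_pct:.2f}%   (illustrative limit: 1%)")
print("PASS" if abs(bias_pct) <= 1.0 and cv_pct <= 1.0 else "FAIL")
```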

For laboratory scientists unaccustomed to automation, the transition to using programmed robotics may be unappealing. Fortunately, many developers recognize this barrier and are actively working to remove it. User-friendly platforms are becoming more widely available – and more widely expected.3 Indeed, there is a strong and growing view that experience with liquid-handling robots should not be a prerequisite for using them. Instead, the goal for many in the space is to offer a “plug and play” platform that enables scientists to easily control the system and obtain consistent results.

Place quality control at the core

Although quality assurance (QA) of biomarker protocols is a widely acknowledged issue, few workflows include standard QC/QA procedures. Consistent QA/QC procedures not only drive quality but also facilitate comparison of data among different laboratories. Part of the solution lies in having reference samples available. For example, using a standard peptide assay to assess peptide recovery after sample preparation and prior to mass spectrometry analysis enables reproducibility to be assessed within and across laboratories. Similarly, frequent testing of pooled, known blood samples across runs over time can help identify analytical issues as they arise.4 Data from pooled blood samples help operators assess whether a seemingly outlying data point is simply a result of biological variability or an indication that an analytical error has occurred, such as a mass shift in the mass spectrometer. Such tools are beneficial for technicians in the laboratory, who can stop and address any sources of error before continuing.
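One common way to operationalize this kind of monitoring is a control-chart-style check: establish the mean and standard deviation of the pooled QC sample from baseline runs, then flag any new run that falls more than two or three standard deviations away. The sketch below illustrates the idea with made-up values; the specific flagging rules a laboratory applies may differ.

```python
import statistics

# Measurements of the same pooled QC sample across successive runs. The values
# are invented; the mean and SD would normally come from an established baseline.
baseline = [102.1, 99.4, 100.8, 98.7, 101.5, 100.2, 99.9, 100.6]
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

new_runs = {"run_21": 100.9, "run_22": 103.4, "run_23": 93.8}

for run, value in new_runs.items():
    z = (value - mean) / sd
    if abs(z) > 3:
        flag = "FAIL: stop and investigate the instrument (e.g. a possible mass shift)"
    elif abs(z) > 2:
        flag = "WARN: review before accepting the run"
    else:
        flag = "in control"
    print(f"{run}: {value} (z = {z:+.1f}) -> {flag}")
```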

Standardization of the method itself is a crucial aspect of maintaining quality results at scale and is beneficial when considering protocol changes in the future. By collecting thorough, concrete data, protocol developers can gauge a baseline level of quality, variability, and throughput, and can objectively consider which criteria would need to be superseded in order to move forward with more innovative approaches. Achieving a truly optimized end-to-end workflow requires close scrutiny of important parameters at each step – asking, for instance, “what is the highest throughput that can be achieved while continuing to meet the quality criteria?” Scientific progress is built on precision and defined parameters; building workflows with this mindset is the best way forward.

Conclusion

Currently, transitioning biomarker identification from a research laboratory to application on a large scale is highly challenging, largely because the priorities of research laboratories differ from those of a clinical setting. To address this gap, it is important to incorporate quantitative checkpoints into defined workflows to ensure biomarker measurements can be reproduced – a prerequisite for delivering real-world value. Harmonizing methods while removing ambiguity within the protocol enables laboratories to implement optimized workflows that deliver reliable results. For many laboratories, the transition to automation is necessary to control the variability that occurs on a larger scale while maintaining quality and throughput. If those in the field of biomarker discovery maintain awareness of the demands, best practices, and priorities of a clinical laboratory, the transition to scale will be a lot smoother when the time comes.

References:

  1. Bhawal R, Oberg AL, Zhang S, Kohli M. Challenges and Opportunities in Clinical Applications of Blood-Based Proteomics in Cancer. Cancers. 2020;12(9):2428. doi:10.3390/cancers12092428
  2. Baker M. Reproducibility crisis: Blame it on the antibodies. Nature. 2015;521(7552):274-276. doi:10.1038/521274a
  3. Chu X, Roddelkopf T, Fleischer H, Stoll N, Klos M, Thurow K. Flexible robot platform for sample preparation automation with a user-friendly interface. 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO). December 2016. doi:10.1109/robio.2016.7866628
  4. Zhang G, Fenyö D, Neubert TA. Evaluation of the Variation in Sample Preparation for Comparative Proteomics Using Stable Isotope Labeling by Amino Acids in Cell Culture. J Proteome Res. 2009;8(3):1285-1292. doi:10.1021/pr8006107