Back to basics: nucleic acid amplification in the clinical lab

Dec. 22, 2016
Before we begin—a note from the author and editor: As “The Primer” enters its fifth year of publication, we thought that it might be time to go back and revisit some of the core topics for the benefit of new readers. A number of this year’s topics will thus be labeled “Back to basics” and act as something of a refresher (or introduction) to key underlying techniques in the molecular diagnostics (MDx) field. Rather than recreate the focus on technical aspects of these methods as presented in earlier coverage, however, these installments of “The Primer” will summarize the concept and application of, as well as any relevant advances in, these techniques from an operational and functional perspective. For readers who wish to go back into the more technical aspects of these topics, references to specific earlier articles in this series, available from the MLO online archives, will be provided as appropriate.

If there’s one single technique which underpins much of molecular diagnostics, it’s nucleic acid amplification. The most common method for doing this, the polymerase chain reaction, or PCR, is so ubiquitous that it’s nearly synonymous with DNA or RNA amplification. A first question, then, is: Why do we care so much about amplifying nucleic acids?

Why amplify?

The primary reason hinges on the fact that PCR, its RNA-focused version RT-PCR, and most other nucleic acid amplification methods are highly sequence-specific. This specificity arises from the intrinsic structure of DNA and the concepts of complementarity and nucleic acid strand hybridization. Essentially, if you have a known DNA (or RNA) target sequence and make the appropriate synthetic complementary strand, you can selectively bind your synthetic DNA to the target to the exclusion of other sequences. (For more on the mechanistic details, see the January 2013 “Primer” article, “DNA and RNA structure: nucleic acids as genetic material.”)

Now that you’ve selectively bound something (a primer) to the target sequence, the amplification part occurs. Using the innate activity of polymerase enzymes (see the February 2013 article, “DNA replication: polymerases”), and by cycling reaction parameters to induce repeated rounds of amplification of product, it’s possible for a single target sequence to give rise to in excess of 1×10^12 amplicons in less than an hour. (See the April 2013 article, “PCR: the basics of the polymerase chain reaction.”) All that’s left to do now is detect the presence (or absence) of our amplicon, in either a qualitative or quantitative fashion.
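The arithmetic behind that amplicon count is simple doubling: each thermal cycle can at most duplicate every copy present. A minimal sketch (function name and the idealized 100 percent efficiency assumption are ours, for illustration only):

```python
# Idealized PCR amplification: each cycle can at most double the
# amplicon count. Real reactions run below 100% efficiency, so this
# models the theoretical best case, not actual assay yield.
def amplicons(start_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Copies after `cycles` rounds, with per-cycle efficiency in [0, 1]."""
    return start_copies * (1 + efficiency) ** cycles

# A single target molecule exceeds 1e12 copies after 40 perfect cycles:
print(int(amplicons(1, 40)))  # 1099511627776, i.e. ~1.1e12
```

Forty cycles at roughly a minute or so each is also why sub-hour turnaround is plausible on modern fast-cycling instruments.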

One common application of this begins with asking the simple question: Is this target sequence present in my patient’s sample? These targets can be exogenous sequences found in a particular pathogen such as TB or Zika virus; they may be characteristic pathogenic rearrangements of endogenous sequences (such as the c-MYC/IGH fusion characteristic for Burkitt’s lymphoma); they may be specific common mutations leading to conditions such as cystic fibrosis. They might also be single nucleotide markers (single nucleotide polymorphisms, or SNPs) which aren’t in themselves causal of anything, but have a tight statistical association with particular phenotypic traits such as drug metabolism kinetics. Yet another application is in forensics and specimen tracking, where a pattern of presence/absence/form of a limited number of markers can act to define a unique “genetic fingerprint” characteristic of a sample.

Interpretation of PCR results in this presence/absence context is not always as straightforward as it might seem. To consider two of the examples mentioned above, detection of a marker for mutation causing cystic fibrosis is straightforward in interpretation, while detection of pathogen nucleic acids is not. The reason for this lies in the fact that detectable nucleic acid fragments may be present in a host long past the presence of a live, infectious pathogen. The entire biological and medical context of the assay result must be considered in converting indirect molecular data to answers of clinical significance.

Real-time PCR (qPCR)

There are a number of methodological variations on PCR. Two terms commonly encountered in the clinical context, and meriting a brief explanation in this review, are real-time PCR (or “qPCR”) and multiplex PCR. In real-time PCR, the progress of the amplification reaction and its production (or not) of the amplicon(s) being tested is analyzed directly on the thermocycling instrument as the reaction proceeds. In almost all cases, this detection is done optically through the wall of the sealed reaction vessel, usually through some application of fluorescence or fluorescence resonance energy transfer (FRET). (See the July 2013 article, “Real-time PCR II: probe methods.”) Compared to endpoint, post-PCR product detection approaches, real-time methods speed up the assay result determination process—a definite plus in a clinical setting where turnaround time is important. Real-time systems also lend themselves to highly automated multi-sample instrument designs, also a significant plus in the diagnostic laboratory. Both of these significant advantages of real-time PCR pale in comparison, however, to its biggest advantage: because the reaction vessel is not opened, there is a much-reduced chance of laboratory contamination with amplicon and subsequent risk of false positive results.

Because of these benefits of real-time PCR over endpoint analyzed methods, it is by far the standard approach for clinical labs where diagnostics by presence of amplicon is performed. Note also that real-time PCR methods generally lend themselves to being quantitative as opposed to qualitative. Certainly, either kind of data can be informative, but quantitative data can be particularly useful in applications such as monitoring viral load or circulating tumor cell counts in response to relevant therapies. This quantitative aspect is the origin of the “q” in the abbreviation qPCR, used to describe the approach whether or not actual quantitation is performed. (A perhaps natural inclination to use the abbreviation “rt-PCR” for this meaning creates the potential for confusion with “reverse transcription PCR,” or PCR performed on RNA targets; most authors thus use qPCR in this meaning, and the term rt-PCR, when encountered, should be read carefully in context to know which of the two possible meanings is intended.)
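The quantitative side of qPCR rests on the cycle-threshold (Ct) value: with efficient doubling, a sample whose fluorescence crosses threshold one cycle earlier started with roughly twice as much target. A minimal sketch of that relationship (function name and example Ct values are ours, purely illustrative):

```python
# Relative quantification from qPCR cycle-threshold (Ct) values.
# With perfect per-cycle doubling, each cycle earlier that the signal
# crosses threshold implies twice the starting target quantity.
def fold_difference(ct_a: float, ct_b: float, efficiency: float = 1.0) -> float:
    """How many-fold more starting target sample A has than sample B."""
    return (1 + efficiency) ** (ct_b - ct_a)

# One cycle earlier = twofold more target:
print(fold_difference(20.0, 21.0))  # 2.0

# A specimen crossing threshold ~3.32 cycles earlier carries ~10x the load:
print(round(fold_difference(20.0, 23.32), 1))  # 10.0
```

This log-linear Ct-to-quantity relationship is what lets serial viral load measurements be compared meaningfully over a course of therapy.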

Multiplex PCR

As mentioned above, multiplex PCR is another common variation. (See the September 2013 article, “Applications of multiplexing and array methods.”) This is the parallel performance of several independent PCR assays in a single reaction vessel on a single sample. This can be done for small numbers (three to five) of simultaneous targets by use of qPCR methods with distinguishable fluorophore reporters for each target, or, where large numbers of targets need to be analyzed, by some version of an array-based endpoint analysis technology. Many apparently single-target real-time PCR assays in clinical use today are in fact multiplex “under the hood,” through the inclusion of simultaneous internal control reactions along with the target of interest. More obviously multiplexed reactions are common in applications such as acute respiratory infection testing, where a given presentation could arise from a number of pathogens; the ability to test for them all simultaneously on a single sample is more efficient in terms of lab resources than sequentially testing for the same set of targets in singleplex.

While the majority of clinical applications of nucleic acid amplification methods fall under the category described above of detecting amplicon presence/absence, there are less common reasons a clinical MDx lab might care about amplifying DNA. One of the most obvious and mundane of these arises when we want to perform a more in-depth analysis of a sample, such as sequencing, but have limited and insufficient starting material to do so directly. In cases such as this, amplification of starting material (either “unbiased” whole genome amplification [WGA] or targeted amplification of specific loci of interest) can allow downstream analyses to be performed from minuscule starting samples. Of course, you can’t get something from nothing, so there are finite lower bounds on starting material content that even these methods can’t overcome; but those bounds are orders of magnitude lower than without amplification.

Advancements in PCR

Finally, let’s consider what some of the most apparent changes have been in laboratory application of PCR in the four years since “The Primer” series began in MLO.

  • Quantitative PCR methodologies have continued to supplant earlier qualitative methods in many applications, as the instrumentation needed has become cheaper and more commonplace. Related to this, incremental changes in the underlying chemistry of common qPCR methods (such as better fluorophores and better quenchers) have led to gradual improvements in the sensitivity and accuracy of these approaches.
  • An entirely new approach to quantitative PCR—digital droplet PCR, or ddPCR—has been established and is now available on multiple platforms. Based on the automated performance and statistical analysis of large numbers of microscopic qualitative assays, ddPCR can provide data of very high accuracy. (For a review of the basis of this method and its pros and cons, see the article in the December 2013 issue, “Digital PCR: theory and applications.”)
  • An increasing number of sample-to-answer instruments (add your specimen, and all steps of a PCR-based MDx process, from extraction through data interpretation, are performed automatically) have come on the market, as have expanded test menus for pre-existing instruments of this type. Such devices and systems should be expected to become even more common, bringing particular MDx tests to more venues that otherwise lack the infrastructure and expertise needed for more comprehensive (and more hands-on) testing methods.
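The “statistical analysis of large numbers of microscopic qualitative assays” underlying ddPCR typically amounts to a Poisson correction: the sample is partitioned into thousands of droplets, each scored positive or negative, and the absolute copy count is back-calculated, allowing for some positive droplets having received more than one target molecule. A minimal sketch of that calculation (function name and droplet counts are ours, illustrative only):

```python
import math

# ddPCR-style absolute quantification sketch: given the fraction of
# positive droplets, infer the mean targets-per-droplet (Poisson lambda),
# which corrects for droplets that received more than one target molecule.
def ddpcr_copies(positive: int, total: int) -> float:
    """Estimated target copies loaded across `total` droplets."""
    mean_per_droplet = -math.log(1 - positive / total)  # Poisson lambda
    return mean_per_droplet * total

# 4,000 of 20,000 droplets positive implies somewhat more than
# 4,000 input copies, since some droplets held multiple targets:
print(round(ddpcr_copies(4000, 20000)))  # 4463
```

Note that the estimate needs no standard curve, which is one reason ddPCR is attractive for absolute quantification.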

If that does not seem to be a large number of changes compared to the pace at which research lab molecular biology techniques have continued to evolve over this timeframe, bear in mind that the clinical lab often sees progress at a very slow pace. This arises naturally out of the great deal of effort and cost needed to appropriately validate a test for its application to patient diagnosis. And even when a newer and incrementally better assay becomes available, it may not be worthwhile for many labs to undertake the effort needed to support its supplanting a currently accepted assay—so adoption can be slow.

This series focuses on the technical as opposed to the regulatory matters surrounding MDx. Since we have just broached the topic of assay validation, however, one final change to the diagnostic PCR landscape should at least be mentioned in passing. For molecular laboratorians in the United States, there have been significant proposed and/or nascent changes relating to regulatory oversight of laboratory-developed tests (LDTs). The impact of this as yet remains uncertain and highly contentious, with a potential to significantly influence the availability of many molecular diagnostics assays. Hopefully, a retrospective look in another four or five years’ time will see further improvements to MDx assay performance, cost effectiveness, and accuracy without loss of access to those assays which are less common yet still critical for specific cases.

John Brunstein, PhD, is a member of the MLO Editorial Advisory Board. He serves as President and Chief Science Officer for British Columbia-based PathoID, Inc., which provides consulting for development and validation of molecular assays.