A few months back, we looked at the classes of antiretroviral (ART) drugs, particularly NRTIs and NNRTIs, which are key components of therapy for HIV. This month’s episode carries on from that by examining the testing of patient samples for ART resistance, using NRTIs as the model. We’ll consider why it’s important to do this testing, what methods are used and when it’s relevant to do so. We’ll also look at some technical aspects of the testing methods and their various pros and cons. Although our focus will be on NRTIs in the HIV setting, where this testing is widely employed, the generic points covered (those not specific to a particular drug/target interaction) apply both to other classes of HIV ART drugs and, more broadly, to other viruses with associated antiviral regimens (for instance, HCV or CMV).
Why do ART resistance testing?
Let’s deal with the easiest question first – why do NRTI resistance testing? The shortest answer is that long-term suppression of HIV replication is the current key to AIDS management; if therapy can be instituted early enough and is effective enough, an HIV infection can be prevented both from progressing to symptomatic AIDS and from being readily transmissible – a win both for the patient and for public health. NRTIs are an important component of best practice (Highly Active Antiretroviral Therapy, “HAART,” which applies a cocktail of NRTIs, NNRTIs and inhibitors targeting the viral protease and integrase).
Multiple NRTI class drugs are available, and while all target the same viral reverse transcriptase (RT) enzyme, each has unique variations in how it binds and inhibits activity. This means that viral sequence changes in the region coding for this gene, leading to amino acid substitutions, can modify drug binding and effect. If the RT develops a mutation, or combination of mutations, that retains enzyme function while reducing binding of the inhibitory drug being applied, viral replication becomes unchecked and disease progression occurs.
The risk of this happening is not insignificant; like most RNA viruses, HIV replicates in an error-prone fashion, leading to frequent mutations and a constantly changing pool of viral variants subject to selective pressure. One biological fact works in our favor here: if, somewhere in the quasi-species swarm of viral sequence variants within a cell, a variant arises that escapes inhibition by the current drug regimen, its resistant enzyme has no ability to selectively replicate just its own progenitor sequence. It will spend its time replicating the entire sequence pool, including all the non-adapted sequence variants.
This means that a drug resistance mutation doesn’t immediately take over as the majority sequence, although it may begin to lead to an increase in net viral load. If a resistant variant virus inoculates a new host cell, however, a genetic founder effect plays out and the drug-resistant form may rapidly expand. This delay in selection means that if we can detect an increase in viral load early enough – evidence that somewhere, there are functional RT enzyme forms escaping the drug – we can switch to a different NRTI. Because each NRTI has a slightly different physical interaction with the viral RT enzyme, mutations conferring resistance to one NRTI won’t necessarily confer resistance to a different one. When this is the case, a therapy change can be employed and continue to drive virological suppression. Our goal is to be able to make these therapy changes when needed, and in an effective manner.
Why genotype and not phenotype – and how do we get this data?
If you’re familiar with microbial antibiotic resistance testing, you’ll know that while molecular testing is a great rapid tool, it’s not the final word; phenotypic testing is key to definitive assessment of resistance. The situation is different here, most simply because of pathogen genome size. In a bacterium, many gene products, alone or in combination, may produce phenotypic drug resistance. Tackling that with molecular tools would mean a lot of targets to sequence; more importantly, we may not even know the relevant effects of many genetic variations and their combinations, so genetic data alone is not definitive in determining bacterial antibiotic resistance.
By contrast, the HIV genome is small and so extensively studied that we have at our disposal data on how almost any relevant mutation or combination of mutations in the only relevant target – the RT enzyme – affects binding and activity of the available NRTIs. This data is available in databases such as the Stanford HIV Genotypic Resistance Interpretation Algorithm.1 This means that by sequencing the virus present in the patient, and checking against this data, it’s possible to make informed decisions as to which NRTIs will be effective. (A question we won’t delve into here is the source of sample. Generally, peripheral plasma is used for simplicity; however, it may not fully represent the viral sequence population(s) in particular cellular subpopulations such as memory T-cells. The clinical relevance of this is unclear at present and out of our scope.)
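To make that genotype-to-interpretation lookup concrete, here’s a minimal Python sketch of scoring a mutation list against a drug. The mutation names are well-known NRTI resistance mutations, but the numeric scores, the threshold and the function are invented placeholders for illustration only – real interpretation should rely on a curated resource such as the Stanford algorithm.

```python
# Hypothetical genotype-to-resistance scoring. The mutations listed are
# real, well-known NRTI resistance mutations, but the numeric scores and
# the threshold below are invented placeholders, NOT database values.
RESISTANCE_SCORES = {
    ("M184V", "lamivudine"): 60,
    ("M184V", "zidovudine"): -10,  # some mutations resensitize to other drugs
    ("K65R", "tenofovir"): 45,
}

def assess(mutations, drug, threshold=30):
    """Sum the per-mutation scores for one drug and flag likely resistance."""
    total = sum(RESISTANCE_SCORES.get((m, drug), 0) for m in mutations)
    return total, total >= threshold

print(assess(["M184V", "K65R"], "tenofovir"))  # only K65R scores for this drug
```

Real databases use richer rules (combination penalties, comments, resistance levels), but the core idea is the same: observed substitutions in the sequenced RT gene are looked up against curated evidence per drug.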
For completeness, it’s worth mentioning there’s a second line of argument against phenotypic testing in this context – cost and complexity. Phenotypic antibiotic resistance testing is generally straightforward and inexpensive for many bacteria; it would be much more expensive and complex, not to mention all the biosafety headaches, to perform on HIV cultures. We’re fortunate that molecular testing will suffice.
OK, so we want molecular data – how?
If we know we want molecular data to address this, the question becomes what method to use. If we were only looking for a very few specific mutations, simple allele-specific PCRs could be used. The reality here, however, is that we must consider the possible impact of many mutations scattered across the viral pol gene, so sequencing is the rational approach. Since we’re starting with an RNA genome, and usually at low copy numbers, we’ll want to do an RT-PCR process to both amplify viral material and convert it to more readily handled DNA. This can of course create headaches of its own, both by potentially biasing the pool (amplifying some viral sequences more than others) and by introducing PCR errors, which can then masquerade as true viral sequence variants. In general, we’ll try to avoid these by using high-fidelity (low error rate) PCR enzymes where possible, and by being suspicious of any very low-abundance sequence variants (these are more likely to be PCR errors than high-abundance variants are). All of this has only given us starting material for DNA sequencing; we still have to decide what method to apply.
Older technology in the form of Sanger sequencing has been (and at present probably still is) the most common approach. This is partly predicated on the relatively low cost of the platform, the low per-reaction cost and the relatively simple sample preparation (if you can do a regular PCR, you’re equipped to do Sanger cycle sequencing, and all that’s left to do is load products on a benchtop capillary electrophoresis machine). The resulting data is human readable. A deficiency of Sanger sequencing, however, is that it works on an entire population of sequences as template for each reaction, with the results representing a “population average” at each base position. Generally, an individual base position variant has to reach something around 20 percent of the population to be detectable by this approach. The consequence is that a small subpopulation of drug-resistant forms could exist in a specimen but not be seen by this technology. Multiple examples exist in the literature demonstrating Sanger sequencing failing to detect HIV variants in the 10 percent of population range. Since there’s also published data indicating that levels as low as 1 percent of drug-resistant viral forms are enough to have a demonstrable negative impact on therapy, that’s a bit concerning.
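As a toy illustration of that population-average limitation, the Python sketch below models Sanger base calling as “report the majority base, and flag a mixture only when a minor peak reaches roughly the 20 percent figure quoted above.” The function and threshold are purely illustrative, not a real base-calling algorithm.

```python
def sanger_call(base_fractions, minor_threshold=0.2):
    """Toy model of Sanger base calling: report the majority base, and note
    a mixed position only when a minor peak reaches ~20% of the signal."""
    major = max(base_fractions, key=base_fractions.get)
    minors = sorted(b for b, f in base_fractions.items()
                    if b != major and f >= minor_threshold)
    return major if not minors else major + "/" + "/".join(minors)

# A 10 percent resistant subpopulation vanishes into the majority call:
print(sanger_call({"A": 0.9, "G": 0.1}))  # A
print(sanger_call({"A": 0.7, "G": 0.3}))  # A/G
```

The 10 percent minority variant – potentially a clinically meaningful resistant subpopulation – is simply invisible in the consensus call.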
Next Generation Sequencing (NGS) methods offer a means to avoid that particular problem. While there are multiple competing NGS platforms, the key in this application is that they all work around the concept of capturing massive numbers of sequence reads, each derived from a single template. We can thus consider this class of assay generically without reference to particular platforms. By putting a sample through an NGS library preparation and then analyzing it, a much more detailed picture of the viral sequences present can be obtained. While we may not reliably capture extremely rare variants, reliable detection can be achieved down into the 5 percent of population range. (Exact lower bounds depend on a number of factors, including sample size, viral load, depth of sequencing, platform employed and particulars of the bioinformatics workflow. While these methods can almost all, in theory, detect down to ~0.2 percent or lower, our previously stated concern that low-abundance results might be PCR artifacts is also at play. Five percent is a reasonable generic cutoff for our purposes.)
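Because each NGS read derives from a single template, per-position variant frequencies fall straight out of counting. A minimal sketch, applying the 5 percent cutoff discussed above to discard probable artifacts (the function names, read counts and cutoff handling are illustrative, not any platform’s actual pipeline):

```python
from collections import Counter

def variant_frequencies(bases):
    """Relative frequency of each base observed at one aligned position."""
    counts = Counter(bases)
    total = sum(counts.values())
    return {base: n / total for base, n in counts.items()}

def drop_likely_artifacts(freqs, cutoff=0.05):
    """Discard variants below the cutoff as probable PCR/sequencing errors."""
    return {b: f for b, f in freqs.items() if f >= cutoff}

# 100 reads covering one position: the 2 percent 'T' is treated as noise,
# while the 8 percent 'G' subpopulation (invisible to Sanger) is retained.
bases = ["A"] * 90 + ["G"] * 8 + ["T"] * 2
print(drop_likely_artifacts(variant_frequencies(bases)))
```

Note how the cutoff embodies the trade-off described above: set it too low and PCR artifacts masquerade as variants; set it too high and genuine minority resistant forms are missed.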
Further, the NGS approach allows the identification of multiple variants at single sites and may, in certain cases, be amenable to “phasing”; that is, determining which of multiple variants at different sites in the target gene are associating together. This can, for instance, be helpful in assessing whether individual viral sequences carry multiple resistances. Such linkage information is not available from Sanger sequencing, even if all the variants are visible in the sequence trace files. Finally, NGS data is inherently quantitative in a relative sense, meaning it’s possible to see the relative proportions of the various sequence isoforms in the viral population.
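The phasing idea can also be sketched simply: since each read reflects one template molecule, counting how often two variant bases appear on the same read gives crude linkage evidence. In this hypothetical illustration the reads, positions and bases are invented, and real phasing must additionally handle alignment, read length and error rates:

```python
def linked_fraction(reads, pos1, base1, pos2, base2):
    """Fraction of reads spanning both positions that carry both variant
    bases together; crude evidence the two mutations share one genome."""
    spanning = [r for r in reads if len(r) > max(pos1, pos2)]
    both = sum(1 for r in spanning if r[pos1] == base1 and r[pos2] == base2)
    return both / len(spanning) if spanning else 0.0

# Invented 6-base reads: the variants at positions 2 and 5 travel together
# in two of the four reads, suggesting a doubly mutated subpopulation.
reads = ["AAGAAT", "AAGAAT", "AACAAA", "AACAAA"]
print(linked_fraction(reads, 2, "G", 5, "T"))  # 0.5
```

If the two variants instead appeared only on different reads, the same overall frequencies would reflect two separate singly-mutated subpopulations rather than one doubly-resistant form – exactly the distinction Sanger traces cannot make.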
The downsides of NGS are that the platforms are generally fairly expensive, and the library preparation methods (while improving) remain relatively complex and labor-intensive. The cost per instrument run is also fairly high on most platforms, but note that when multiplexed across enough samples (with attendant implications for batch sizes and possibly assay turnaround times), the cost per individual sample can actually be less than with Sanger sequencing. Bioinformatics workflows remain an issue as well, as the data is not as readily interpreted by non-expert users. Pre-packaged workflows, such as the MiSeq-HyDRA platform, are a solution to this, provided regulatory requirements in the appropriate context are met.2
Which is better, Sanger or NGS?
The answer for most labs today is “which one does your clinical lab already have access to in a suitably robust form?” Sanger is, however, a mature method – maybe even long in the tooth – whereas NGS methods continue to improve in accuracy, cost and ease of use, and NGS’s inherent capability to resolve lower-abundance viral forms will probably prove useful. If you’re planning for the future, the answer is almost certainly NGS, but the longer you can wait to take the plunge, the cheaper and better it’s going to be.
Guidelines – when?
We’ve left the easiest part for last – when should NRTI resistance testing be done? In the U.S., the NIH’s AIDSinfo program’s most recent guidelines (October 2018) for ART resistance testing (including, but not limited to, NRTIs) in HIV include the following key points:3
- For patients just starting on ART, testing at outset is recommended (or if drug therapy is deferred for some reason, testing immediately prior to therapy commencement). This allows for an evidence-based selection of best efficacy agents; note, however, that initiation of empiric therapy shouldn’t be delayed while waiting for test results.
- For patients already on ART, testing should be repeated when viral load testing shows evidence of “virologic failure” (increases in viral load above 1,000 copies/ml plasma) or lack of response (failure to achieve a significant drop in viral load) to the current drug regimen.
2. Taylor, T., Lee, E.R., Nykoluk, M., et al. A MiSeq-HyDRA platform for enhanced HIV drug resistance genotyping and surveillance. Sci Rep 9, 8970 (2019). doi:10.1038/s41598-019-45328-3