HIV testing update

Nov. 1, 2011

Introduction

More than 25 years after the advent of the first HIV diagnostic test, and even though testing has become cheaper, more efficient, and more widely available, an estimated one-fourth of those in the United States who are infected with HIV remain unaware of their serostatus. Just as diagnostic testing has improved and evolved over the last quarter of a century, so has therapeutic monitoring to determine CD4 T-cell counts and plasma HIV viral loads, as well as HIV drug susceptibility. It is vital that laboratory medical personnel be familiar with the nuances of HIV diagnostic, therapeutic, and monitoring tests.

The goal of this article is to discuss HIV testing, CD4 T-cell count determination, plasma viral load monitoring, drug susceptibility testing including tropism assays, and therapeutic drug monitoring. Emphasis will be placed on new and emerging technologies.

In general, patients presenting for initial HIV care should have the following tests performed: HIV antibody testing for confirmation; CD4 T-cell count for immunologic staging; plasma HIV RNA (viral load) for both its predictive value in the rate of disease progression and as a baseline to determine response to therapy; complete blood count; chemistry profile; transaminase levels; blood urea nitrogen; creatinine; serologies for hepatitis A, B, C, and syphilis; fasting blood glucose; serum lipids; and genotypic HIV resistance testing regardless of the anticipated timing of antiretroviral drug initiation.1

Ongoing HIV care requires routine monitoring of the CD4 T-cell count and plasma HIV RNA. Specific tests with special indications include the viral tropism assay, which should be used prior to beginning a CCR5 co-receptor antagonist drug, and human leukocyte antigen (HLA)-B*5701 testing, which should be done prior to starting the nucleoside reverse transcriptase inhibitor abacavir.1

DIAGNOSTIC HIV TESTING

HIV diagnostic testing is categorized as either screening or confirmatory testing. A screening test is the first-line approach, while confirmatory testing is used for patients whose screening tests are repeatedly reactive, or for patients with indeterminate screening tests but a high index of clinical suspicion. Screening tests include traditional enzyme immunoassays (EIAs) and newer rapid tests, while confirmatory tests include the Western blot (WB), the indirect immunofluorescent antibody assay (IFA), and HIV RNA detection by nucleic acid amplification testing (NAAT).1,2

Enzyme immunoassays (EIAs)

EIAs represent the most commonly used screening test due to their simplicity, high throughput, and high sensitivity and specificity.2 There are numerous commercially available FDA-cleared tests that utilize a variety of specimen types, including serum, plasma, finger-stick whole blood, oral fluid, and urine. In general, EIA tests utilize antigen-coated microwell plates to capture HIV antibodies in the specimen. EIA testing has greatly evolved, and there are now four generations of tests. Both the first- and second-generation tests detect IgG antibodies to HIV; however, the first-generation tests use viral lysate, while the second-generation tests use recombinant HIV capsid and envelope proteins as the source of bound antigen.

To detect both IgG and IgM, one must look to third-generation tests, which detect HIV antibodies via antigens conjugated to alkaline phosphatase or horseradish peroxidase (the “sandwich” technique). In addition to broadening antibody class detection, third-generation testing also recognizes HIV-1 groups M and O as well as HIV-2. In addition to detecting IgG and IgM, fourth-generation EIA tests detect the viral capsid antigen p24 through the use of monoclonal p24 antibodies. Because p24 antigen may be present before full seroconversion, fourth-generation testing can detect HIV infection within two weeks of exposure. Currently there are nine commercially available fourth-generation tests; only two are FDA-cleared. Fourth-generation assays allow for differentiation between acute infection (p24 antigen only, no HIV-1 antibody) and established infection (both p24 antigen and HIV-1 antibody). Generally, the fourth-generation tests have a high level of sensitivity and specificity (94.5%-99.8%).3

Second-generation testing is the most common mode of screening in the U.S.; however, the FDA recently cleared a second fourth-generation test, which may increase that generation's availability.2,3

Nucleic acid amplification testing

While fourth-generation EIAs can detect HIV antigen prior to seroconversion, the gold standard for acute infection screening is nucleic acid amplification testing (NAAT). HIV-1 RNA detection may identify HIV infection as early as five days after exposure.4 Qualitative NAAT is used to determine whether a patient's serum contains HIV RNA. While this method is highly sensitive, it is also expensive and labor intensive; thus, several public health facilities in the U.S. routinely test HIV antibody-negative specimens through pooling protocols.2 NAAT can also be used for quantitative testing, yielding viral load information useful for therapeutic monitoring. More information on pooling methodologies for the detection of acute HIV infection may be obtained from the senior author.

Rapid point-of-care HIV antibody tests

There are currently six rapid point-of-care tests with FDA clearance.5 Since their initial development in the late 1980s, rapid tests have gone through numerous iterations. The newer tests are self-contained immunochromatographic one-step assays that use whole blood, oral fluid, or serum. Tests can be completed in fewer than ten minutes; one test can be completed in about one minute. Tests use either protein A colloidal gold that permits visual detection of HIV antibodies or second-generation EIA “sandwich” technology with HIV-1 gp41 and HIV-2 gp36 synthetic antigen. The sensitivity of the currently available rapid tests is similar to that of second-generation EIA assays.6

Rapid testing not only allows for quick results, but can also be performed by non-laboratory personnel and may use non-invasive specimens (e.g., oral fluid). In 2004 the FDA first cleared oral-fluid rapid testing. Studies using 135,000 whole-blood samples and over 26,000 oral-fluid samples demonstrated a specificity of 99.80% and a positive predictive value of 99.24% for blood, as compared to a specificity of 99.89% and a positive predictive value of 90.00% for oral fluid.7 Rapid tests that use direct, unprocessed specimens and can be easily performed by personnel without laboratory training can qualify for Clinical Laboratory Improvement Amendments of 1988 (CLIA) waiver status, which enables the tests to be run outside traditional laboratory settings.

Drawbacks to currently available rapid tests include a lower sensitivity than third- and fourth-generation EIA tests and, depending on the prevalence of infection in the population being tested, a low (< 90%) positive predictive value.8
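The dependence of positive predictive value on prevalence follows directly from Bayes' theorem, as the short calculation below illustrates. The sensitivity and specificity figures here are illustrative assumptions, not the performance of any particular rapid test.

```python
# Worked Bayes calculation: why PPV falls as prevalence falls, even for a
# test with excellent sensitivity and specificity (values are illustrative).

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.10, 0.01, 0.001):
    print(f"prevalence {prev:.1%}: PPV = {ppv(0.998, 0.998, prev):.1%}")
# prevalence 10.0%: PPV = 98.2%
# prevalence 1.0%: PPV = 83.4%
# prevalence 0.1%: PPV = 33.3%
```

At a prevalence of 0.1%, two of every three reactive results are false positives, which is why a reactive rapid test in a low-prevalence population must always be confirmed.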

Urine tests

Intact IgG antibodies can be found in urine, enabling EIAs and Western blots to be performed on collected urine. The chief disadvantage of urine testing is that neither a rapid nor a confirmatory urine test is currently available. Because urine pH is highly variable, creating a rapid test, while possible, is difficult, as reaction time varies with pH.9

Confirmatory HIV tests

Positive HIV infection screening tests in the U.S. must be confirmed with either the Western blot or the indirect immunofluorescent antibody assay (IFA). Recently, FDA clearance was given to a qualitative RNA assay for the confirmation of HIV infection. The Western blot has long been the gold standard; a positive result is indicated by the presence of any two of the following bands: p24, gp41, and gp120/160.10 A negative test demonstrates the absence of bands. Indeterminate results can be found in 10%-20% of EIA-positive tests. Generally, the presence of a band at p24, p31, or p55, while still classified as indeterminate, is more indicative of true infection than other banding patterns.
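The Western blot interpretive rule just described is simple enough to express directly; the sketch below encodes it. The band names follow the criteria above, but the function itself is illustrative and not part of any vendor's software.

```python
# Sketch of the Western blot interpretive rule: positive requires any two
# of p24, gp41, and gp120/160; no bands is negative; anything else is
# indeterminate. Illustrative only.

DIAGNOSTIC_BANDS = {"p24", "gp41", "gp120/160"}

def interpret_wb(bands):
    """bands: set of band names observed on the blot."""
    bands = set(bands)
    if len(bands & DIAGNOSTIC_BANDS) >= 2:
        return "positive"
    if not bands:
        return "negative"
    return "indeterminate"

print(interpret_wb({"p24", "gp120/160"}))  # positive
print(interpret_wb({"p31"}))               # indeterminate
print(interpret_wb(set()))                 # negative
```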

VIRAL LOAD TESTING

Viral load testing should be performed at a patient's initiation of HIV care. Subsequent viral load testing is used as a marker for HIV viremia and should be repeated every three to six months among those on treatment. New guidelines also recommend viral load testing two to eight weeks after the initiation of HAART to determine early response to therapy.3 Viral load can also be used to identify non-adherence or treatment failure, defined as a viral load of >200 copies/mL.11

There are four commercially available HIV-1 viral load assays.12 Their characteristics and performance indicators are shown in Table 1. Urdea et al. showed that bDNA technology could be used to detect viral load in a directly proportional manner more accurately than real-time PCR, because the bDNA signal amplification process is not altered by sample matrix in the same way as real-time PCR.13 Additionally, individual assays were found to have different reliabilities for HIV viral load values across HIV subtypes.14 The fully automated assay is the easiest of the four to run; it was also associated with the lowest total labor cost per run and had the fastest turnaround time (six hours for 96 tests per run).12 Overall, the fully automated assay and real-time PCR had lower rates of false negatives than nucleic acid sequence-based amplification (NASBA).14

Table 1. Select characteristics of commercially available HIV nucleic acid detection assays, 2011.12

Abbott RealTime HIV-1 (m2000rt)
- HIV target region: conserved region of the pol gene
- Internal control: yes
- Amplification: real-time PCR
- Quantification: copies/ml, log10 copies/ml, IU/ml, or log10 IU/ml
- Linear dynamic range: 40 copies/ml from an 850-µl input
- Detection: fluorescence
- Labor: fully automated
- Specificity, % (95% CI): 100 (98.05-100.00)
- Reliability: very reliable
- Extraction time: 1.25 h/24 tests; detection time: 3.00 h; time to results: 6 h; cost per test, including labor: US $0.40

COBAS TaqMan 48 HIV-1 (Roche)
- HIV target region: conserved region of the gag gene
- Internal control: yes
- Amplification: real-time PCR
- Quantification: copies/ml
- Linear dynamic range: 40 copies/ml from an 850-µl input
- Detection: dual-labeled fluorescent probes with unique quencher dyes
- Labor: easy to use, although cartridges are difficult to load
- Specificity, % (95% CI): 100 (99.30-100.00)
- Reliability: 2 crashes affecting 9 specimens and 3 "no specimen" errors
- Extraction time: 2.00 h/24 tests; detection time: 3.75 h; time to results: 6 h; cost per test, including labor: US $1.07

NucliSens EasyQ HIV-1 v1.2 (bioMerieux)
- HIV target region: conserved region of the gag gene
- Internal control: no
- Amplification: real-time NASBA
- Quantification: copies/ml
- Linear dynamic range: 25 copies/ml to 10 million copies/ml
- Detection: fluorescence
- Labor: some pipetting required, but otherwise automated
- Specificity, %: 100
- Reliability: software caused the computer to crash multiple times
- Extraction time: 0.67 h/24 tests; detection time: 1.00 h; time to results: 4 h; cost per test, including labor: US $0.86

VERSANT 440 HIV-1 RNA v3.0 (Siemens)
- HIV target region: several regions of pol
- Internal control: no
- Amplification: bDNA signal amplification
- Quantification: copies/ml
- Linear dynamic range: 50 copies/ml to 500,000 copies/ml
- Detection: alkaline phosphatase-labeled probe incubated with chemiluminescent substrate
- Labor: very labor intensive
- Specificity, % (95% CI): 100 (98-100)
- Reliability: LP tube broke, with the loss of 95 samples
- Extraction time, detection time, time to results, and cost per test: data not available*

* Workflow and cost data not available for VERSANT 440 HIV-1 RNA v3.0 (Siemens).

CD4 T-cell testing

CD4 T-cell testing yields a quantitative measure of the immune system. It is used to help determine when to initiate antiretroviral therapy (ART) as well as to monitor the response to antiretroviral treatment. Traditional CD4 T-cell counting presents logistical barriers, particularly for resource-limited areas where access to centralized laboratories is difficult. Recently, point-of-care CD4 T-cell devices have become available, which might allow for greater access to CD4 T-cell testing.

Generally, CD4 T-cell counts are expressed as absolute counts for a set volume or as a percentage of the total lymphocyte population. For individuals whose lymphocyte counts are expected to be increased, such as children, CD4 T cells should be reported as a percentage of the total lymphocyte population or as a CD4/CD8 lymphocyte ratio.15

CD4 testing using flow cytometry

Flow cytometry in conjunction with fluorescence-activated cell sorting (FACS) is regarded as the gold standard in CD4 T-cell counting. Output is usually expressed as a percentage of CD4 or CD8 T cells, although volumetric approaches can give both a precise CD4 T-cell percentage and an absolute CD4 T-cell count. In general, flow cytometry and FACS utilize fluorescently conjugated antibodies to human CD4 (anti-hCD4) as a means of specifically labeling and directly measuring CD4 T cells. The technology can be broken down into two main types: single- and dual-platform approaches. Single-platform technology utilizes a hematology analyzer and allows for measurement of either the absolute CD4 T-cell count in a fixed volume or the absolute number of CD4 T cells from a ratio of a known number of beads to CD4 T cells. Both of those single-platform approaches are less expensive and easier to carry out than the dual-platform technique. The single-platform approach does not require a large reference laboratory; however, both approaches do require FACS. Of note, one of the single-platform systems has the added advantage of utilizing microcapillary technology, which results in decreased liquid waste output and a decreased need for highly pure water. In general, single-platform systems are preferred over dual-platform systems because of those advantages.

An important innovative, cost-effective, and instrument-independent method of CD4 T-cell counting is the panleukocyte gating (PLG) method, which uses a gating technique along with CD45 and CD4 monoclonal antibody reagents. The method utilizes a dual-platform approach with the leukocyte count, rather than the lymphocyte count, as the common denominator, and requires only a hematology analyzer plus a flow cytometer.16 Jani et al. showed that this approach was as accurate as the single-platform volumetric technique.17 Later studies have shown that test results from the PLG strategy had greater between-laboratory precision than conventional flow cytometry methods due to PLG's standardized protocol, which employs standardized monoclonal antibody reagents and relies on the total white blood cell count instead of the total lymphocyte count.18 Currently this technology is used by the national public laboratory in South Africa, which maintains one of the largest CD4 T-cell testing programs in the world.
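The denominator substitution that distinguishes PLG from the conventional dual-platform calculation can be made concrete with a little arithmetic: the hematology analyzer supplies the denominator cell count and the flow cytometer supplies the CD4 fraction. The values below are illustrative, not taken from the cited studies.

```python
# Illustrative arithmetic for the two dual-platform denominators discussed
# above. All input values are hypothetical example numbers.

def absolute_cd4_dual_platform(lymphocytes_per_ul, cd4_pct_of_lymphs):
    # Conventional dual platform: CD4% of lymphocytes x lymphocyte count.
    return lymphocytes_per_ul * cd4_pct_of_lymphs / 100.0

def absolute_cd4_plg(wbc_per_ul, cd4_pct_of_leukocytes):
    # PLG: CD4% of all CD45+ leukocytes x total white blood cell count.
    return wbc_per_ul * cd4_pct_of_leukocytes / 100.0

print(absolute_cd4_dual_platform(1800, 25))  # 450.0 cells/ul
print(absolute_cd4_plg(6000, 7.5))           # 450.0 cells/ul
```

Both routes reach the same absolute count; PLG's appeal is that the total white cell count is a more robust, more standardized denominator across instruments than a differential lymphocyte count.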

Manual CD4 T-cell counting methods

Manual CD4 T-cell counting methods utilize a variety of laboratory supplies, including pipettes, test tubes, a manual counter, and refrigerated reagents, as well as a microscope with a 40x objective and a hemocytometer. Manual counting requires beads that become visible by microscopy after contact with CD4 T cells, thus enabling the cells to be quantified. While individual products vary, most test kits rely on capturing CD4 T cells either with antibody capture or hydrophilicity, and illuminating the cells with fluorescently conjugated antibodies or staining the cell nuclei. One of the greatest drawbacks of this technology is its reliance on highly trained technicians to ensure accurate cell counting. Manual CD4 T-cell counting can be aided by automated cell counting devices; one study demonstrated 93% concordance with FACS.19 In addition to necessitating highly qualified technicians, manual techniques require a more robust quality assurance program than more automated methods.

Point-of-care CD4 T-cell counting technologies

Currently there are three point-of-care CD4 T-cell counting technologies commercially available. Two of those kits use finger-stick whole blood specimens, have flexible power options, and use stable dry pre-measured reagents. Those technologies require minimal operator training. Two systems use modified flow cytometers; however, only one has current FDA clearance. The third commercially available point-of-care testing device relies on dual-fluorescence image analysis to count both CD3 and CD4 T-cells that have been labeled with anti-hCD3 and anti-hCD4. Values are given as CD3+/CD4+ ratio. This assay requires a rigorously collected finger-stick specimen or venous whole blood specimen. It uses a disposable CD4 cartridge containing sealed stable dried reagents and a highly portable analyzer capable of yielding results within 20 minutes (See Table 2).

Recently, two simple, highly mobile rapid CD4 T-cell assays have attracted attention. Both are self-contained point-of-care assays that can be used by non-laboratory-trained individuals. One assay consists of a fixed, sealed cartridge containing dried, stable reagents; after the specimen is introduced, the cartridge is inserted into a portable analyzer, which reports results in minutes. The other uses a single disposable test tube containing anti-CD14 beads and anti-CD4 beads. That assay does not require electricity; a health worker reads the CD4 T-cell count by visually inspecting the test tube20 (See Table 2).

 

Table 2. Select characteristics of point-of-care CD4 T-cell assays, 2011.15

PointCare NOW CD4NOW (PointCare Technologies, Inc.)
- Stabilized reagents: yes; time to result: 8 min; automated: yes; measured parameter: CD4 and CD4%; throughput: 5-6 tests/h

CyFlow miniPOC (Partec)
- Stabilized reagents: yes; time to result: 1 min; automated: yes; measured parameter: CD4 and CD4%; throughput: 30-40 tests/h

Pima (Alere)
- Stabilized reagents: yes; time to result: 20 min; automated: yes; measured parameter: CD4; throughput: 3 tests/h

Daktari (Daktari Diagnostics, Inc.)
- Stabilized reagents: yes; time to result: 8 min; automated: yes; measured parameter: CD4; throughput: 7 tests/h

HIV ANTIRETROVIRAL DRUG RESISTANCE TESTING

In general there are two types of resistance assays: genotypic and phenotypic. Genotypic assays use sequenced regions of the pol gene of the HIV genome, the target site of most antiretroviral drugs. The enzymes coded for within those regions (reverse transcriptase, protease, and integrase) are key to HIV replication. Mutations in those regions can produce active enzymes that are not susceptible to antiretroviral drug inhibition. Phenotypic assays depend on cell culture-based viral replication assays with and without drug exposure.

The performance of genotypic assays can vary among laboratories due to reproducibility, specimen viral load, and specimen viral subtype. In addition, individuals who are more treatment-experienced have greater viral heterogeneity, which can be difficult to detect using genotype-based assays. Genotypic testing is also highly user-dependent and requires highly trained laboratory personnel.21

Genotypic test results are interpreted against predefined drug-resistance mutations or classified using automated rule-based algorithms that label the majority species of virus as “susceptible,” “possibly resistant,” or “resistant.” Those algorithms exhibit some degree of variation in the classification of expected drug activity, with the highest variation seen for certain classes of drugs such as nucleoside reverse transcriptase inhibitors and protease inhibitors.

There are currently two FDA-cleared genotype testing kits in addition to the many in-house tests that reference laboratories use. While studies have shown that the two FDA-cleared kits provide concordant results (99% concordance; 249 of 252 mutation sites), because they use different computer algorithms they may differ in the way that mutations are defined.22 Of note, concordance between the in-house reference laboratory tests and the commercial assays was also good (80% concordance; 201 of 252).23

A limitation in the interpretation of genotypic test results is that the predictive power of a result depends upon the number of matched datasets, meaning that for a small dataset there will be more variation. Thus variation is often higher for new or less widely used drugs.24 Genotypic testing also interrogates only preselected codons, not the entire sequence, and can therefore miss novel mutations. Genotypic tests also require specimens with at least 500 to 700 HIV RNA copies/ml. Other than the differences already discussed, the two commercial assays are fairly similar; however, one assay has more limited throughput and requires additional laboratory instruments. Its advantage over the other is that the output data are already synthesized, yielding relevant resistance patterns and obviating the need for the testing laboratory to cross-reference established databases.

Allele-specific PCR, single-genome sequencing, and ultra-deep sequencing are three new modes of genotypic resistance testing that are used in research and are under evaluation for use in clinical practice. Those assays offer potential advantages, including cost savings, the use of a single assay to detect a specific point mutation, and the identification of low-frequency mutations (<20%) that might have implications for the clinical response to treatment.

Phenotypic testing

Traditional phenotypic testing measures the degree to which a specific drug inhibits viral replication in vitro. Like methods used to determine resistance patterns in bacteria, HIV phenotypic testing assesses the inhibitory concentration (IC) needed to inhibit viral replication by 50% (IC50), as compared to virus-infected cells replicating in the absence of drug.24 Phenotypic testing hinges on creation of a chimeric virus using PCR-amplified genes from the HIV isolate in question and a genetic skeleton from laboratory constructs. Native HIV-1 DNA sequences coding for protease, reverse transcriptase, and the 3′-terminus of gag are combined with other HIV-1 genes derived from unique laboratory constructs. This chimeric virus is then used to transfect cultured cells, and viral replication is monitored in the presence and absence of specific antiviral drugs.

When different phenotypic assays are compared, studies have shown an overall 86.9% concordance rate. Individual concordance rates are highest for protease inhibitors (93.4%) and lowest for nucleoside reverse transcriptase inhibitors (79.8%).25 Output data from phenotypic assays come in the form of the fold-change in susceptibility of the test sample compared with a control or known drug-susceptible isolate. A biologic cutoff has been adopted for use as a baseline; this cutoff represents the normal distribution of susceptibility to a given drug among treatment-naïve individuals. To aid clinical decision making, two levels must be defined: the intermediate resistance range and the full resistance range. As the names suggest, the intermediate resistance range is the point at which there is a perceptibly diminished clinical response, while the full resistance range is the point at which there is no drug response. Those cutoff values are available for most approved drugs and are based on clinical trials and cohort data.21
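Reading fold-change output against the two cutoffs can be sketched as follows. The IC50 values and cutoffs below are invented for illustration and do not correspond to any specific drug or assay.

```python
# Sketch of interpreting phenotypic fold-change against intermediate and
# full resistance cutoffs. All numbers are hypothetical.

def classify_fold_change(ic50_sample, ic50_reference,
                         intermediate_cutoff, full_cutoff):
    """Return (fold_change, interpretation) for one drug."""
    fold_change = ic50_sample / ic50_reference
    if fold_change >= full_cutoff:
        return fold_change, "full resistance"
    if fold_change >= intermediate_cutoff:
        return fold_change, "intermediate resistance"
    return fold_change, "susceptible"

fc, call = classify_fold_change(ic50_sample=4.5, ic50_reference=0.5,
                                intermediate_cutoff=3.0, full_cutoff=12.0)
print(f"{fc:.1f}-fold: {call}")  # 9.0-fold: intermediate resistance
```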

The major limitations of phenotypic testing are cost, turnaround time, and availability. In the U.S., only two commercial laboratories currently offer HIV phenotypic resistance testing.26,27 The main advantage of phenotypic resistance testing is that it may identify resistant strains secondary to novel mutations.

Virtual phenotype

By using computer algorithms to compare genotypic data from native HIV-1 RNA to a large database of corresponding phenotypes and genotypes, one can generate a “virtual phenotype.” While this technique shows high correlation for the majority of drugs, virtual phenotype as compared to genotyping alone has not proven to provide superior results.21 The main advantage of this commercially available test is that the output incorporates clinical cutoffs based on viral response for the 14 most common forms of combination antiretroviral therapy.28

TROPISM TESTING

Since certain antiretroviral drugs act by blocking HIV entry through co-receptor binding, additional tests are necessary to determine the efficacy of those drugs. Tropism (co-receptor) testing, like resistance testing, can be done by either phenotype or genotype.21 Phenotypic testing requires amplification of the env sequence (the HIV gene that encodes the envelope glycoprotein, which binds the co-receptor) and creation of viral pseudotypes. Alternatively, infectious recombinant viruses expressing patient-derived env sequences along with reporter genes can be used. Regardless of how the viral material is derived, it is then inoculated onto CD4-, CCR5-, and CXCR4-expressing T cells. Reporter gene activity is monitored, indicating the presence or absence of infection. If the HIV-1 isolate exclusively uses the CCR5 co-receptor for entry, it is termed an “R5 virus” or “R5 tropic”; isolates using only the CXCR4 co-receptor are termed “X4 virus” or “X4 tropic,” and mixed isolates are “dual-mixed viruses” or “dual tropic.”

The oldest genotypic tropism assay interrogates the coding region of the gp160 HIV-1 envelope protein and is 100% sensitive at detecting a CXCR4-using minor variant present at 0.3%.29 Another genotypic assay, in contrast, utilizes deep sequencing of the whole envelope protein gene.30

Tropism testing requires plasma samples with a minimum of 1,000 viral copies/ml. Studies have shown that the assays can detect X4 viruses down to a 5%-10% population level; newer assays can detect X4 populations as low as 0.3%.29 Genotypic approaches to tropism assessment depend upon sequencing the V3 loop of env and referencing predictive algorithms. The two most commonly used algorithms are the “11/25 rule” and the total charge rule. The “11/25 rule” is based on the charge of the amino acids at positions 11 and 25: if either position carries a positively charged amino acid, the virus is generally classified as CXCR4-tropic. Alternatively, one can look at the overall charge of the V3 loop; if the total charge is greater than five, the virus is again thought to have CXCR4 tropism.
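Both V3-loop rules are simple enough to sketch in a few lines. One common formulation of the “11/25 rule” flags a basic residue (Arg or Lys) at position 11 and/or 25, and the net-charge rule counts Arg/Lys minus Asp/Glu; the exact formulations vary among published algorithms, so treat this as an illustration rather than a validated predictor. The example sequence is a subtype B consensus-like V3 loop used here only for demonstration.

```python
# Illustrative sketch of the two genotypic V3-loop tropism rules described
# above. Not a validated clinical predictor.

BASIC, ACIDIC = set("RK"), set("DE")

def net_charge(v3):
    """Net charge of a V3 loop: (Arg + Lys) minus (Asp + Glu)."""
    return sum(aa in BASIC for aa in v3) - sum(aa in ACIDIC for aa in v3)

def predict_tropism(v3):
    """v3: one-letter amino acid string (~35 residues, positions 1-indexed)."""
    rule_11_25 = v3[10] in BASIC or v3[24] in BASIC  # positions 11 and 25
    if rule_11_25 or net_charge(v3) > 5:
        return "X4 (CXCR4-using)"
    return "R5 (CCR5-using)"

v3_example = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"  # consensus-like, illustrative
print(predict_tropism(v3_example))  # R5 (CCR5-using)
```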

Heteroduplex tracking assays can also be used to detect viruses with CXCR4 tropism. The assay hybridizes PCR-amplified env genes to V3-coding sequences from viruses with known tropisms.

Overall, genotypic approaches have proven to have better specificity but poorer sensitivity than phenotypic assays. That observation can be explained in part by the extreme heterogeneity of HIV-1 env genes as well as the fact that not all determinants of viral tropism reside in the V3 loop.29 Current guidelines in the U.S. favor phenotypic testing.1

THERAPEUTIC DRUG MONITORING

Antiretroviral blood levels are influenced by medication adherence as well as individual patterns of metabolism and the resulting bioavailability.31 Because low antiretroviral drug levels can contribute to the development of viral drug resistance, and high levels may result in toxicity, therapeutic drug monitoring may be used by clinicians to aid their clinical monitoring. Looking at longer intervals of drug exposure has been proposed as a more accurate predictive marker of treatment outcomes.32 Several research groups have shown that hair, which is easy to collect and store and does not require biohazard precautions, contains trace amounts of antiretroviral drug acquired through exposure to blood, sweat, and sebum.33 It is important to note, though, that while the serum drug level plays a large role in hair drug levels, hair levels are also highly influenced by the drug's biochemistry, hair color, hair growth rate, and cosmetic hair procedures.34 Huang et al. reported a novel method of using hair for therapeutic drug monitoring utilizing highly sensitive liquid chromatography/tandem mass spectrometry.32 Their study evaluated levels of efavirenz (a non-nucleoside reverse transcriptase inhibitor) and of lopinavir and ritonavir (both protease inhibitors). Unlike previous methods using high-performance liquid chromatography, which required large amounts of hair, Huang's group used only 10 to 30 strands. With highly sensitive liquid chromatography/tandem mass spectrometry and 2 mg of cut hair, as little as 0.01 ng of ritonavir and 0.05 ng of lopinavir or efavirenz per mg of hair could be detected.32

Conclusion

In the last 25 years, HIV testing has progressed and expanded from laboratory-based HIV antibody testing to rapid and easy point-of-care testing, sophisticated drug susceptibility assays and laboratory-based therapeutic drug monitoring. Testing continues to become more accurate, faster, and cheaper. In the future, we expect to see more efficient automated testing platforms, over-the-counter assays for self-testing, a further expansion in point-of-care testing for CD4 T-cell counts and viral load determination, as well as newer, simpler assays for resource-limited settings.

Acknowledgments: We would like to thank Dr. Tulio de Oliveira at the Africa Centre for Health & Population Studies, the Nelson R Mandela School of Medicine, University of KwaZulu-Natal, South Africa for his review and constructive comments.


Authors: Jenny K. Cohen, BA, UCSF School of Medicine, and UC Berkeley School of Public Health. Jeffrey D. Klausner, MD, MPH, UCSF


References

  1. Marquez C, et al. HIV testing: An update. MLO Med Lab Obs. 2008;40:12-18.
  2. Fiebig EW, et al. Dynamics of HIV viremia and antibody seroconversion in plasma donors: implications for diagnosis and staging of primary HIV infection. AIDS. 2003;17:1871-1879.
  3. Panel on Antiretroviral Guidelines for Adults and Adolescents. Guidelines for the use of antiretroviral agents in HIV-1-infected adults and adolescents.
  4. Louie B, et al. Assessment of rapid tests for detection of human immunodeficiency virus-specific antibodies in recently infected individuals. J Clin Microbiol. 2008;46(4):1494-1497.
  5. Stanger K, et al. FDA-approved rapid HIV antibody screening tests page. http://www.cdc.gov/hiv/topics/testing/rapid/rt-comparison.htm. Updated February 15, 2008. Accessed
