MLO1001 Cover Story

PREDICTING DISASTER
By Carren Bersch, Editor

Are we really prepared for emergencies and/or disasters — natural or man-made? Released on Dec. 16, 2009, the 7th annual Ready or Not? Protecting the Public’s Health from Diseases, Disasters, and Bioterrorism report says, basically, that we are not. The report cites serious underlying gaps in the nation’s ability to respond to public-health emergencies — gaps brought to light during the H1N1 flu outbreak. The report also points to the ongoing economic crisis as straining “an already fragile” public-health system.1
The Ready or Not? report says a “Band-Aid” approach to public health is inadequate. Key infrastructure concerns highlighted by the study were lack of real-time coordinated disease surveillance and laboratory testing, outdated vaccine-production capabilities, limited hospital surge capacity, and a shrinking public-health workforce:
More than half of the states had experienced cuts to public-health funding, and federal preparedness funds had been cut 27% since FY2005, which “puts improvements that have been made since the Sept. 11, 2001, tragedies at risk”;
Not one state received points on all 10 indicators;
Eight states tied for the highest score of nine out of 10: Arkansas, Delaware, New York, North Carolina, Oklahoma, Texas, Vermont, and North Dakota;
With three out of 10, Montana scored lowest;
14 states do not have the capacity in place to assure the timely pick-up and delivery of laboratory samples on a 24/7 basis to the Laboratory Response Network, or LRN;
11 states and Washington, DC, report not having enough laboratory staffing capacity to work five 12-hour days for six to eight weeks in response to an infectious-disease outbreak, such as H1N1; and
among a series of recommendations for improving preparedness is a suggestion for public-health systems to reach out quickly and effectively to high-risk populations, including strengthening culturally competent communications around the safety of vaccines. Health disparities among low-income and racial/ethnic minorities, who are often at higher risk during emergencies, must also be addressed.1
Another disaster-preparedness expert speaks out
At the National Center for Disaster Preparedness at Columbia University’s Mailman School of Public Health — which offers resources to the nation’s system of hospitals, public-health agencies, clinics, law enforcement, and emergency services — Irwin Redlener, MD, its director, speaks and writes extensively on national disaster-preparedness policies, pandemic influenza, the threat of terrorism in the United States, and other related issues, some of which echo the recent Ready or Not? report. Chief among his relevant points is that we have paid scant attention to vulnerable populations such as the disabled and children.2 In fact, Redlener goes so far as to point out that we have not yet defined what a “prepared America” is.2 And he strongly suggests that we have a failure of imagination in disaster planning: “No one is asking how far the consequences of disasters go.”2
Dr. Redlener’s lectures include material from his book, “Americans at Risk: Why We Are Not Prepared for Megadisasters and What We Can Do Now,” published in 2006, which lists several major points regarding disaster/emergency preparedness. He cites “communications” as a major necessity for disaster/emergency preparedness. His example comes from the terrorist attacks of Sept. 11, 2001, during which police officers on the ground could not communicate with the firefighters on the upper floors of the World Trade Center to warn them to evacuate the buildings. Redlener laments the loss of 343 firefighters on Sept. 11 due to the simple fact that police and firefighters did not have interoperable communications devices. Yet, five years later, Redlener says, the situation had not been resolved. In the interim, he discovered that Hurricane Katrina’s first responders had the identical communications problem.2 Now, a little more than eight years after Sept. 11, have we achieved any better measure of communication among first responders?
Redlener points to the slow, steady degradation of our governmental agencies such as the Federal Emergency Management Agency (FEMA), which was, in the 1990s, the “government’s crown jewel.”2 During and after Hurricane Katrina, Redlener claims, FEMA’s leaders had little or no experience. The handling of Katrina, he says, was the “ultimate reality show, ‘Bureaucracy Gone Wild’” and “the level of incompetency and dysfunctionality demonstrated by the United States to the entire world in New Orleans was without precedent.” (FEMA is now housed under Homeland Security.)2 Redlener claims the “Achilles’ heel” of American disaster planning is its transportation and communications infrastructures — and weak infrastructures complicate disaster preparations, “making us vulnerable.”

Pandemic influenza
What about our preparations for a healthcare surge during a flu pandemic? Redlener’s commentary on preparedness during the H1N1 flu season and potential pandemic includes his example of hospitals full to overflowing. In New York City alone, a flu pandemic could see 2 million of its 8.2 million citizens suffer from the H1N1 flu, with 200,000 of those needing hospitalization. He asks whether anyone has thought through the consequences of this type of need in surge capacity.3
Disaster-preparedness expert John M. Barry, a prize-winning and New York Times best-selling author of five books including “The Great Influenza,” answered that query in a September 2006 MLO article: “Our penchant for just-in-time operations would have a crippling effect on all institutions, but particularly upon those charged with taking care of large patient populations suffering from influenza. Supply-chain interruptions would wreak havoc, and few, if any, industries have any ‘surge capacity.’”4
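The New York City numbers above make the scale concrete. For planners who want to rerun the arithmetic, here is a minimal Python sketch; the attack and hospitalization ratios are simply back-calculated from the cited figures for illustration, not official planning rates.

# Back-of-envelope pandemic surge estimate using the New York City figures
# cited above: 8.2 million residents, ~2 million infections, ~200,000
# hospitalizations. The ratios are derived from those figures only.

def surge_estimate(population, attack_rate, hospitalization_rate):
    """Return (cases, hospitalizations) for a simple planning scenario."""
    cases = population * attack_rate
    hospitalized = cases * hospitalization_rate
    return cases, hospitalized

# ~24% attack rate; ~10% of cases hospitalized.
cases, beds_needed = surge_estimate(8_200_000, 2_000_000 / 8_200_000, 0.10)
print(f"Estimated cases: {cases:,.0f}")                   # ~2,000,000
print(f"Estimated hospitalizations: {beds_needed:,.0f}")  # ~200,000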
Debating the flu
Throughout the latter part of 2009, citizens nationwide debated the wisdom of getting H1N1 vaccinations; at the same time, vaccine producers — for a variety of reasons — failed to meet established deadlines. In fact, an almost laissez-faire attitude toward H1N1 permeated many communities. The one item that seemed to rouse popular interest was a government announcement aimed at employers: The Centers for Disease Control and Prevention suggested that employers develop “flexible leave policies” so that employees with flu-like symptoms, or with family members showing symptoms, could use such leave without fear of losing their jobs. Statistics from the 1918 flu epidemic did not seem to make much of an impact on today’s potential victims:
The 1918 flu pandemic killed about 675,000 Americans.5
The 1918 flu’s high death rate hit 15- to 34-year-olds.5
The influenza epidemic came in three waves. The first wave, in the spring of 1918, took far fewer victims than the second.5
People’s indifference to the 1918 flu epidemic, then much as today, led directly to the rapid and deadly spread of the disease. Many believed lethal epidemics were a common fact of life; at first, the flu seemed no different.5
Most people had already lived through at least one smaller-scale epidemic (e.g., cholera, yellow fever, malaria), so another incurable disease had little effect on those who had already survived one. Flu news paled in comparison to the news from the European Front during World War I.5
Terrorism threats
In the past year alone, we have seen a sampling of situations that either threatened to or actually did put first responders, public-health professionals, and healthcare workers on the front lines and at risk, creating potential problems and disasters for others down the disaster-/emergency-containment line. When it comes to terrorists, Redlener admits that America has some “tasty targets for terrorists.”2 He pointed out in 2006 that, in a national disaster or emergency involving disease spread or terrorist activity, we had no plans for closing our borders. We still do not. Redlener cites Israel as a good example of a country that is prepared because its citizens actively practice their preparations for such eventualities.
Fire and flooding threats
Outside Los Angeles in September 2009, several wildfires tore through the Angeles National Forest. The largest, the Station Fire, burned more than 140,000 acres, destroyed nearly 100 structures, and claimed the lives of two firefighters whose vehicle fell from a road into a steep canyon. Evacuation orders affected thousands in and around the city.6 In January 2009, river waters spread over highways, farms, towns, and parks in Washington, shutting down traffic on a 20-mile stretch of heavily traveled Interstate 5 between Seattle and Oregon and threatening the federal roadway north of Seattle. The risk of landslides also was high, leading to closure of all passes across the Cascades.7
Shooting threats
The Fort Hood shooting incident in late 2009 came as a shock to most people. Maj. Nidal Malik Hasan faces 13 counts of premeditated murder and 32 counts of attempted premeditated murder stemming from the Nov. 5 shootings at a processing center at the Fort Hood Army Post.8 For lab personnel, however, an incident on Nov. 10, 2009, may have been even more shocking.
As reported by The Dark Daily, in a suburb near Portland, OR, a gunman entered a laboratory facility owned by Legacy MetroLab, shot his estranged wife, then killed himself. Two other individuals in the laboratory were injured, including a man admitted to the hospital with multiple gunshot wounds.9
One other example of a shooting attack within a clinical laboratory or pathology group practice took place on June 28, 2000, when Rodger Haggitt, MD, professor of pathology and director of hospital pathology at the University of Washington, was shot and killed in his office by a pathology resident, Jian Chen, MD, who then turned the gun on himself and died. Chen’s contract had not been renewed. Chen had spoken to colleagues at least 60 days earlier about purchasing a gun, and had been offered counseling but refused it.9
Because attacks like these are such a rarity, it is difficult for clinical lab managers and pathologists to develop specific contingency plans for such an event. On the other hand, these events are a reminder that the unexpected can always happen. Thus, establishing a clear emergency-response chain of command and protocols for all types of events can prove invaluable when the need arises.9
Earthquake threats
A minor earthquake rattled southeast Nebraska on Dec. 17 but caused little damage and no injuries. The U.S. Geological Survey (USGS) reported the 3.5-magnitude quake struck two miles north-northwest of Auburn. Though not known for its earthquakes, Nebraska has experienced several notable ones since its founding in 1867, the most serious a magnitude 5.1 quake in 1877 in the east-central part of the state.10 The National Earthquake Information Center reports from 12,000 to 14,000 earthquakes yearly, or an average of about 35 earthquakes every day.11
Conclusion
Not only have we heard the results of a notable annual report and read the arguments of disaster/emergency experts; throughout this section, Drs. Poutanen and Williams and Mr. Sharar also share the outcomes of their varied real-life disaster/emergency situations, each of which yielded lessons learned and incorporated into their institutions’ preparedness plans. This is a new year, and new years typically begin with some mighty resolutions. Perhaps 2010 is the year for a thorough evaluation and upgrade of your laboratory’s disaster/emergency plan(s), taking into consideration the types of disasters discussed here. We predict that by year’s end we will all be better prepared, beginning now.

References (more detailed listing online)
1. http://healthyamericans.org/reports/bioterror09/. Accessed 12/23/09.
2. http://fora.tv/2006/09/15/Irwin_Redlener#comments_section. Accessed 12/23/09.
3. www.youtube.com/watch?v=saM7iG_aKDg. Accessed 12/23/09.
4. www.mlo-online.com/articles/0906/0906clinical_issues.pdf. Accessed 12/23/09.
5. www.haverford.edu/biology/edwards/disease/viral_essays/redicanvirus.htm. Accessed 12/23/09.
6. www.cnn.com/2009/US/weather/01/08/washington.floods/index.html. Accessed 12/23/09.
7. www.cnn.com/2009/US/weather/01/08/washington.floods/index.html. Accessed 12/23/09.
8. http://www.foxnews.com/story/0,2933,578945,00.html. Accessed 12/23/09.
9. www.darkdaily.com/gunman-enters-clinical-laboratory-in-oregon-yesterday-two-dead-two-injured-111. Accessed 12/23/09.
10. www.1011now.com/home/headlines/79467002.html. Accessed 12/23/09.
11. www.crew.org/home/eqfacts.html. Accessed 12/23/09.

Why lab professionals should care about mass-fatalities planning
If your disaster-preparedness plan fits on one page, it is not a complete plan. Most preparedness-oriented medical or emergency professionals would opine that, in general, disaster/emergency planning is an underdeveloped asset — for hospitals, laboratories, and communities — for many reasons: time (e.g., the competing priorities of contemporary business); funding; politics (regional, community, internal); lack of a functional preparedness organizational structure; and/or lack of a communal belief that “it” could happen here and, therefore, a presumption that preparedness planning is optional.
First, mass-fatalities plans are (by anecdotal experience) often overlooked; if a plan exists, it often merely names a location where decedents will be placed, or states, “Get refrigerated trucks.” The single most common, and egregious, error in mass-fatalities planning is to think the plan only requires a place to “keep the bodies.”
Second, labs (in hospitals) as well as pathologists are the likely recipients of fatalities in plans-gone-bad, so they have a personal incentive to appraise their facilities’ plans and promote improving them. And the only way to appraise a plan is to read it. A comprehensive forensic plan, which can provide guidance for a much simpler hospital plan, can be found at www.dmort.org/FilesforDownload/NAMEMFIplan.pdf (National Association of Medical Examiners).
Third, in certain situations (e.g., pandemic influenza), there will be no help from outside (such as a federal Disaster Mortuary Operational Response Team, or DMORT) because everyone will be affected — communities, states, and facilities will need to respond locoregionally.
And fourth, good plans are highly interdisciplinary, providing compassionate, secure, and respectful care for both the living (relatives) and the deceased.
Those of us from the lab (the anatomic pathology manager, the cytologist who serves on the hospital DECON team, and I) have been working with a hospital interdisciplinary team to update and enhance our mass-fatalities plan — the proper term for how we will manage an unusual number of deceased persons. The Omaha Metropolitan Medical Response System (OMMRS) created a Mass-Fatalities Subcommittee several years ago — chaired by a forensic dentist who is a DMORT member — to enhance planning for a mass-fatalities incident in the Metro Omaha area. I became the informal facilitator and principal author of the plan when our hospital Emergency Planning Committee was tasked with upgrading ours. We did not have a decent plan (which is not uncommon) but are now nearing finalization of a much more comprehensive effort. We have received lively and extensive content input from ED, DECON team, nursing, infection-control, safety, security, maintenance/engineering, and lab personnel — so, to be sure, it is not “my” plan. It was sent recently to the chair of the OMMRS Mass-Fatalities Subcommittee for his input. The goal is to integrate the OMMRS plan and ours as seamlessly as possible.
While our Methodist Hospital plan is not perfect, here are its contents thus far:
1. Overview
2. Policy statement
3. Authority for plan activation
4. Oversight of operations
5. When to activate the plan
6. Facility sites used for the plan
7. Pre-intake (to morgue site) procedures during plan activation:
7.1 Inpatient deaths
7.2 ED deaths
7.3 Pre-admission deaths
7.3.1 Fragmented remains, decontamination of remains, evidence.
8. Accessory morgue site operations:
8.1 Laboratory/Pathology morgue (Site 1): Intake and release, security, supplies and staffing, movement of remains, site operations outline, site temperature management, ordinary operations, accessory site comments, occurrence management.
8.2 Site 2: Site preparation and security, intake and release procedures, supplies and equipment, staffing, site operations plan, site temperature management, occurrence management.
8.3 Site 3: reference to Site 2 plans. Site 3 operations plan.
9. Family Assistance Center
10. Conclusion of mass fatalities operations:
10.1 Authority for deactivation
10.2 Procedures for deactivation
11. Post-incident procedures and involved personnel behavioral health assessment.
As for accreditation, there are many valid preoccupations for labs, inspectors, and accrediting agencies today, and I do not believe one can rely solely on the lab-inspection process to ensure emergency preparedness. Laboratory accrediting agencies should be encouraged to adopt instruments exploring the more specific attributes of preparedness that a lab may have or lack. Some such preparedness attributes have been explored in the CLSI (formerly NCCLS) Document X4-R, Planning for Challenges to Clinical Laboratory Operations During a Disaster; A Report, which, parenthetically, is being revised at this time into a consensus document, GP-36 (exact title pending).
Participate in facility and community emergency-planning committees and training. For example: your hospital emergency-planning committee; Local Emergency Planning Committee (http://yosemite.epa.gov/oswer/lepcdb.nsf/HomePage); Metropolitan Medical Response System (www.hhs.gov/mmrs); Citizen Emergency Response Team (www.citizencorps.gov/cert); emergency communications (www.arrl.org or www.reactintl.org/public); and others. Disaster-response plans are local concoctions, and most preparedness learning and progress is experiential — by preparing and planning with others. Participation is number one! For example, a cytotechnologist in our lab is a DECON team member and helped develop our institution’s mass-fatalities plan; she now knows an enormous amount about mass-fatalities planning in a hospital, learned through the interdisciplinary process of developing an actual plan. That knowledge is portable.
Peruse existing Web and print resources, which can help guide and supplement the above. The CLSI X4-R document was written specifically by and for laboratorians, with expert reviewers, as an overview of non-analytic challenges of disaster which can affect operations.
Promote preparedness, in a collegial/team fashion, in appropriate professional and voluntary activities where you have gained preparedness knowledge.
Practice personal preparedness (self and family); it is important and easy to neglect (www.ready.gov).
Bottom line: Everyone and every lab exists somewhere on the preparedness spectrum. Whatever you have done — whether nothing, a little, or a lot — that is your plan. When the unexpected happens, no one is permitted to “opt out.”
—Thomas Williams, MD
Medical Director
Methodist Pathology Center
Methodist Hospital
Omaha, NE

Infectious-disease pandemic planning
Severe acute respiratory syndrome (SARS) first appeared in November 2002; the ensuing global outbreak killed about 800 people around the world, including 44 in Toronto. Dr. Poutanen was a frontline healthcare worker involved in the clinical, laboratory, infection-control, and public-health response directed against the Toronto SARS outbreak.

We were involved in the response to SARS in 2002-2003. We learned many lessons from the experience, and we used these as starting points for our influenza pandemic-preparedness plans developed ahead of the H1N1 pandemic. This year, we were able to put these plans into action when H1N1 arrived. The key lessons that were useful in influenza pandemic planning were:
Have a preparedness plan! We did not have one for SARS and were forced to respond in real time. But we had a plan before H1N1 was recognized, so we just had to “fine-tune” it for H1N1 specifics.
Have a communications plan. Throughout the H1N1 waves, efficient communication between public health and our lab occurred primarily via e-mail lists created as part of our preparedness plans. Keeping in touch with public health allowed us to see how the outbreak was unfolding in our community so we could respond accordingly.
Prepare for biosafety. Since SARS, our hospital has had mandatory annual biosafety training, covering personal protective gear and infection-control practices, with a focus on pandemic preparedness. In addition, public health has had an educational campaign focusing on practicing healthy hygiene. All of this helped prepare our laboratory personnel so that it was easier for them to follow tailored advice for H1N1.
Prepare for increased demands for testing. We anticipated increased workload with respiratory testing and the need for molecular testing as part of pandemic preparedness, and we trained many staff. While we could have done more ahead of time, the measures in place enabled us to respond efficiently to the increased demand that occurred with H1N1, including for novel investigational molecular tests.
Follow metrics in real time. We learned quickly from SARS that following a paper trail of results did not result in efficient data tracking. We had plans in place ahead of time to develop real-time laboratory and hospital information system codes for new tests, such that [this time] all results were posted online as soon as they were available.
Have psychosocial support. After SARS, as part of pandemic preparedness, our hospital had a resiliency team speak to every department, including our laboratory personnel, advising them of resources available in the event that a pandemic or other emergency evolved. Having these resources available ahead of time made coping with a new emergency less daunting.
Have a preparedness plan! Going through the motions for a possible emergency is key to being able to respond to one. The plan will have to be fine-tuned to the specific emergency.
Be prepared to introduce new tests with little notice. Ensure you have the right resources (money, staff, and space) to efficiently validate new tests as required, depending on the emergency. Have dedicated persons who can efficiently validate tests, write procedures, and train personnel accordingly.
Use bar codes, interfaces, and electronic reporting to free up skilled technologists from data entry and to avoid data-entry errors.
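To illustrate that last point with a concrete (and purely generic) example: one common way bar-coded specimen IDs guard against manual data-entry errors is a check digit. The Luhn (mod-10) sketch below illustrates the technique; it is not a description of any particular LIS or barcode standard.

# Illustrative sketch: a Luhn (mod-10) check digit catches single-digit
# typos and most adjacent transpositions in a numeric specimen ID.

def luhn_check_digit(digits: str) -> str:
    """Compute the Luhn check digit for a numeric ID string."""
    total = 0
    # Double every second digit from the right, subtracting 9 when > 9.
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def is_valid(specimen_id: str) -> bool:
    """Validate an ID whose last character is its check digit."""
    return luhn_check_digit(specimen_id[:-1]) == specimen_id[-1]

sid = "8734512"
labeled = sid + luhn_check_digit(sid)  # -> "87345120"
print(labeled, is_valid(labeled))      # 87345120 True
print(is_valid("87345210"))            # False: transposed digits caught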
The most important item that I would want to have in case I was stuck in the lab during a disaster or an emergency? Assuming I had access to food, water, and functioning toilets (which may not always be the case), I would want to make sure there was adequate and comfortable space to sleep. Responding to an emergency is exhausting; being able to get sleep is critical.
—Susan M. Poutanen, MD, MPH, FRCPC
Microbiologist/Infectious Disease Consultant
University Health Network
and Mount Sinai Hospital;
Asst. Prof. Dept. of Laboratory Medicine and Pathobiology (Medicine, Microbiology) and Dept. of Medicine (Infectious Diseases)
University of Toronto
Toronto, Ontario, Canada


Uninterruptible power supplies keep the lights on in the lab
Plug in your lab equipment, turn it on, and never lose power. This exclusive MLO interview with Mike Stout, VP of Engineering at Falcon Electric, provides valuable insight into how to keep the lab operating even with “dirty power,” brown-outs, and power outages.
MLO: Can you tell us when you began to provide backup power to medical laboratories and why?
Stout: Falcon products have been sold to medical laboratories for more than 20 years. Some of our early customers were hospitals and clinical laboratories. The bulk of our medical-oriented sales were through OEM customers, such as Baxter Health Care, Beckman Coulter, and Wyeth, who bundled Falcon uninterruptible power supply (UPS) equipment with their own electronic medical products. With the advent of DNA sequencing, Falcon has also provided products to many research labs. In fact, Falcon UPS products were used to protect and back up the DNA sequencing equipment used for the Human Genome Project.
Further, we have supplied UPS units to the FBI Crime Lab and law enforcement agencies around the world. In addition, our voltage and frequency converters and UPS units are used by MIT Lincoln Labs, Sandia and Los Alamos National Labs, Lawrence Livermore National Lab, and CERN (the European Organization for Nuclear Research). Our voltage and frequency converters are utilized when the project calls for the lab to deliver the 230V/50Hz required for lab systems used overseas or when an instrument designed to operate at 120V/60Hz is being sent out of the United States and a converter is required.
The Falcon true double-conversion on-line UPS provides a high level of power protection against the widest range of power problems. The low-cost line-interactive, or “Smart UPS,” primarily provides battery backup and has limited power-protection capabilities for suppressing high-voltage transients. In addition, voltage regulation can be poor. Basically, the utility power coming into these types of UPS units goes through some surge-protection circuitry and then out to the device. It is only when utility power is lost that a line-interactive’s inverter turns on and switches in. Due to the low cost of the typical line-interactive UPS, its battery-inverter output is a distorted sinewave with a high level of harmonic distortion.
By contrast, the output inverter in our double-conversion on-line UPS is operating continuously, in both AC utility and battery modes of operation. The Falcon UPS takes the incoming utility or generator power, filters it, and then rectifies it to DC. This removes all of the unwanted AC frequency and voltage problems, including generator frequency shift, voltage transients, and voltage and current harmonics. Once the UPS has converted the incoming AC to DC, it regulates the DC voltage and uses it to power our continuous-duty insulated-gate bipolar transistor (IGBT) pulse width modulated (PWM) inverter.
This provides an output with superior voltage regulation (120Vac +/-2% domestic or 230Vac +/-2% European), even if the utility power supplied to the UPS drifts by +/-15%. As a result, any voltage sags and surges in the utility power are eliminated, along with most other power problems. Should the utility power be lost, the UPS will simply start to draw its power from the internal batteries without any switchover or transfer required.
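The arithmetic behind those tolerances is worth seeing side by side. The short sketch below simply expands the ±15% input drift and ±2% output regulation cited above into voltage windows.

# Voltage windows implied by the figures above: utility input may drift
# ±15% around 120 Vac while the double-conversion output holds ±2%.

def voltage_window(nominal, tolerance):
    """Return (low, high) bounds for nominal voltage +/- tolerance."""
    return nominal * (1 - tolerance), nominal * (1 + tolerance)

utility_low, utility_high = voltage_window(120.0, 0.15)  # what the UPS may see
output_low, output_high = voltage_window(120.0, 0.02)    # what the load sees
print(f"Utility input may drift: {utility_low:.1f}-{utility_high:.1f} Vac")
print(f"UPS output held to:      {output_low:.1f}-{output_high:.1f} Vac")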
The on-line UPS is like installing a “power firewall” between incoming power and sensitive laboratory equipment — essential for medical electronics, which often must be connected to outlets and circuits shared by heavy-duty equipment that can corrupt the quality of utility power. And, of course, medical gear is often most needed at times and in places where utility power degrades, is interrupted, or goes away for significant periods, precisely when the use of these instruments is paramount.
As the connected lab equipment is always receiving optimum power conditions and voltage, the equipment accuracy, performance, and reliability are assured, irrespective of the utility or lab outlet-power quality. Seamless backup-power capability is a secondary benefit. Additional battery banks may be added, providing up to several hours of backup.
MLO: Please describe your double-conversion on-line UPS units in more detail. How do they differ from standby power supply (SBS) units?
Stout: There are three basic UPS types, each offering more power protection than the preceding: off-line (SBS), the lowest grade; line-interactive (SBS), the middle grade; and on-line (UPS), the highest grade.
The off-line SBS offers bare-bones power protection: basic surge protection and battery backup. Through this type of SBS, equipment is connected directly to incoming utility power with the same voltage-transient clamping devices used in a common surge-protected plug strip connected across the power line. When the incoming utility voltage falls below a predetermined level, the SBS turns on its internal DC-AC inverter circuitry, which is powered from an internal storage battery. The SBS then mechanically switches the connected equipment onto its DC-AC inverter output. The switch-over time is stated by most manufacturers as being less than 4 milliseconds, but in practice it can be as long as 25 milliseconds, depending on the amount of time it takes the SBS to detect the lost utility voltage.
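Whether a connected load survives that switch-over depends on the load’s hold-up time, the interval its internal supply can bridge on stored energy. The comparison below is an illustrative sketch; the 16-millisecond hold-up figure is a typical computer power-supply specification, not a number from this interview.

# Ride-through check: the load survives an SBS transfer only if its
# hold-up time covers the switch-over gap described above.

def survives_transfer(holdup_ms, switchover_ms):
    """True if the load's hold-up time covers the SBS transfer gap."""
    return holdup_ms >= switchover_ms

# Typical ~16 ms hold-up vs best-case and worst-case transfer times:
print(survives_transfer(16, 4))   # True: the load rides through
print(survives_transfer(16, 25))  # False: the load may reset or shut down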
The line-interactive SBS offers the same bare-bones surge protection and battery backup as the off-line, except it has the added feature of minimal voltage regulation, while the SBS is operating from the utility source. This SBS design came about due to the off-line SBS’s inability to provide an acceptable output voltage to the connected equipment during brown-out conditions. A brown-out happens when the utility voltage remains excessively low for a sustained period. Under these conditions, the off-line SBS would go to battery operation; and, if the brown-out was sustained long enough, the SBS battery would become fully discharged, turn off the power to the connected lab equipment, and not be able to be turned back on until the utility voltage returned to normal. To prevent this from happening, a voltage-regulating transformer was added; hence, the term line-interactive was born. This feature really does help, as low-voltage utility conditions are common. The downside for this design is that most of the units available have to switch to battery momentarily when making transformer voltage adjustments, and the associated switch-over voltage transients may not be tolerated by many pieces of lab equipment, especially those that are microprocessor-based or have a connected computer system. The true advantage to the on-line UPS is its ability to provide an electrical firewall between the incoming utility power and sensitive laboratory equipment.
While the off-line and line-interactive designs leave the equipment connected directly to the utility power with minimal surge protection, the on-line UPS provides multiple electronic layers of insulation from power-quality problems. This is accomplished inside the UPS in several tiers of circuits. First, the incoming AC utility voltage is passed through a surge-protected rectifier stage where it is converted to direct current and is heavily filtered by large capacitors. This tier removes line noise, high-voltage transients, harmonic distortion, and all 50/60 Hertz (Hz) frequency-related problems. The capacitors also act as an energy-storage reservoir, giving the UPS the ability to “ride through” momentary power interruptions. The battery is also connected to this tier and takes over as the energy source in the event of a utility loss. This makes the transition between utility and battery power seamless, without any interruption.
The filtered DC is sent into the next tier, a voltage-regulator stage. In the regulator stage, the DC voltage is tightly regulated and fed to a second set of storage capacitors. The regulator stage gives the UPS its ability to sustain a constant output even during sustained brown-out or low-line conditions. The additional stored energy in the second set of capacitors yields even more ride-through time without any battery drain. The regulated DC voltage is next fed to the inverter stage where a totally new 50/60 Hz, true AC sinewave output power is generated. This tier gives the UPS a new, clean output with superior voltage and frequency regulation, providing the ideal power source for laboratory equipment.
The on-line UPS can also provide other benefits like frequency conversion for operating equipment designed for a 60-Hz utility source on European 50-Hz utility power, or the reverse. The continuous duty inverter also allows for the connection of large extended battery packs that can provide up to several hours of backup time. In the case of a critical process like DNA sequencing in a crime lab where only one DNA sample may be available, this assures process completion in the event of long-term utility-power loss. Only the on-line UPS can provide the level of voltage regulation and power protection required by power-sensitive lab equipment.
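The ride-through described above follows from the energy stored in the DC-bus capacitors. The sketch below applies the standard E = 1/2 CV² relation; the component values are hypothetical illustrations, not Falcon specifications.

# How long a DC bus can carry the load between two voltage levels:
# usable energy is 1/2 * C * (V_nominal^2 - V_minimum^2).

def ride_through_ms(capacitance_f, v_nominal, v_minimum, load_w, efficiency=0.9):
    """Milliseconds of ride-through from stored capacitor energy."""
    usable_joules = 0.5 * capacitance_f * (v_nominal**2 - v_minimum**2)
    return usable_joules * efficiency / load_w * 1000

# e.g., a 4,700 uF bus at 340 V DC, sagging to 280 V, feeding a 500 W load:
print(f"{ride_through_ms(4.7e-3, 340, 280, 500):.1f} ms of ride-through")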
MLO: Are the power quality and availability needs of medical laboratories in other countries greater than those in the United States, depending on where they are located?
Stout: Utility power in the United States and Europe is typically much better than in developing nations. Using this as a rule, however, is a poor measure of the power quality inside any specific lab located anywhere in the world — including the United States and Europe. One reason is localized power pollution, often created by other equipment operating on the same lab power circuits, which can happen anywhere. Large motors or other power-hungry devices in a lab can create voltage sags, surges, and even high-voltage transients that can disrupt the operation of sensitive lab equipment.
Also, many areas within the United States are subject to power utility problems — sags, rolling brown-outs, even power outages — due to seasonal conditions like a high rate of air-conditioning use during heat waves. During the rest of the year, power-line problems often are caused by snow and ice storms, hurricanes, tornadoes, and flooding. Other causes of power interruption include accidents due to construction, vehicles downing utility poles, or the ripple effect on the power grid from an event that may be a thousand miles away.
That being said, developing countries can present the harshest power problems. Local power grids may be without power for several hours a day; this is often the case in locations like Iraq or Mexico City in the summer months. Voltage sags and surges may be excessive, even destructive, beyond the operational limits of most lab equipment. Different countries have unique power problems.
To address these potential problems, Falcon’s 230Vac European models are designed with the widest input-voltage range, typically 170Vac to 275Vac, while providing a regulated, user-settable 208Vac, 220Vac, 230Vac, or 240Vac output. We can also provide battery-backup options of up to several hours. In addition, we offer models with galvanic isolation (completely separating the input and output) for use in locations where grounding and common-mode noise are problems. In addition to our wide-input-range European models supplied to developing-world customers, we also supply specialized rugged military systems in developing countries.
MLO: What about oversight in foreign labs regarding protecting refrigeration and other laboratory equipment?
Stout: Most medical labs throughout the world attempt to meet either U.S. or European standards. Of course, some laws change from country to country. In the case of vaccines, maintaining them in a proper refrigerated environment is critical to their viability and, as such, is “universal” in nature.
Most UPS units are shipped with valve-regulated sealed lead-acid, or VRLA, batteries designed for a typical five-year life. The two factors that can drastically reduce the expected battery life are heat and allowing the batteries to become overly discharged. If the UPS is installed in an environment where the average temperature is 72°F, and the charge is maintained, the battery should last the five years. Should the same unit be installed in a 122°F environment, the battery may last only about nine months. Should the UPS not be plugged in for a period of six to 12 months, the batteries may self-discharge down to the level where they cannot be recharged and must be replaced.
With our SSG Series laboratory- and industrial-grade on-line UPS, Falcon is the first to offer eight- to 10-year-life batteries as standard equipment. Again, at 72°F, these will yield an eight- to 10-year life; at 122°F, these batteries may last only four years. Both are a vast improvement over the three- to five-year batteries. Allowing the batteries to become overly discharged, however, will result in the same battery damage. Our SSG Series UPS is UL Listed for operation in environments up to 55°C (131°F).
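Those temperature figures track a common VRLA rule of thumb: service life roughly halves for every 18°F (10°C) above the rated temperature. The sketch below applies that heuristic; the halving rule is an industry generalization, not a Falcon specification.

# VRLA life estimate under the "halving per 18°F above rating" heuristic.

def battery_life_years(rated_life, rated_temp_f, ambient_temp_f):
    """Estimate VRLA life, assuming life halves per 18°F above rating."""
    return rated_life / 2 ** max(0.0, (ambient_temp_f - rated_temp_f) / 18.0)

print(f"{battery_life_years(5.0, 72, 72):.1f} years at 72°F")    # 5.0
print(f"{battery_life_years(5.0, 72, 122):.1f} years at 122°F")  # ~0.7 (about nine months)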
MLO: How can lab managers properly maintain their UPS units to make sure their equipment will operate in a disaster-recovery mode?
Stout: A UPS-testing plan depends on how critical the connected lab equipment and associated process are. In some cases, UPS testing may not be required. If the process is critical, we recommend the following plan. When the UPS is first received and connected to the lab equipment, allow the UPS to charge for 24 hours. Next, with the equipment operating normally, disconnect the utility power to the UPS, and use a stopwatch to record the amount of battery runtime until the low-battery alarm sounds. Immediately reconnect the utility power to the UPS. Record the runtime in minutes and seconds on a label, along with the date, and attach the label to the top of the UPS. Every four months, conduct the 24-hour recharge and runtime test, and record the results on the label. When the battery runtime reaches 80% of the time recorded for the first (installation) runtime test, the batteries should be replaced. Our website has a UPS tutorial and other information for those who seek more in-depth details.
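Stout’s plan reduces to a simple record-and-compare rule. Here is a minimal sketch of the quarterly runtime log with the 80% replacement threshold; the sample dates and times are invented for illustration.

# Quarterly runtime log: flag replacement once measured battery runtime
# falls to 80% of the runtime recorded at installation.

def needs_replacement(runtime_log_s):
    """runtime_log_s: list of (date, seconds) tuples, oldest first."""
    baseline = runtime_log_s[0][1]
    latest = runtime_log_s[-1][1]
    return latest <= 0.80 * baseline

log = [
    ("2010-01-15", 22 * 60 + 40),  # installed: 22 min 40 s (1,360 s)
    ("2010-05-15", 21 * 60 + 5),
    ("2010-09-15", 17 * 60 + 50),  # 1,070 s <= 0.8 * 1,360 s = 1,088 s
]
print("Replace batteries now:", needs_replacement(log))  # True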
MLO: What are some common problems you see in clinical labs’ UPS units?
Stout: First, improper selection of UPS type. Typically, a lab purchases laboratory equipment based on its performance, with cost being secondary; yet it will purchase a $120 UPS to protect a $50,000 piece of lab equipment with little research and without regard for the UPS’s performance. Second, no battery-testing plan or program. Third, improper installation of the UPS that does not allow for proper cooling, which shortens both the UPS life and the battery life.
MLO: Did your company have backup systems in operation in New Orleans during Hurricane Katrina? If so, what was the outcome for labs with UPS units there?
Stout: Falcon, along with most other UPS companies, did have many UPS units operating in New Orleans during Hurricane Katrina. The big problem experienced by hospitals and labs occurred after the storm moved inland, and the city was flooded. Backup generators for the hospitals and large labs were located in lower levels of the buildings, below ground level. They were completely flooded and rendered unusable. This left the facilities with no long-term source of emergency power, since the UPS units installed were only intended to supply backup power for the limited amount of time during the emergency-generator startup. Therefore, the batteries were discharged during the first few minutes of the power outage.
This incident led many government agencies to issue new regulations for backup-generator installation. Hospital generators must now be installed on rooftops, and provisions must be made for mobile-generator power connections outside the facility so that a mobile generator can be driven or flown in as a third source of power for the hospital.
MLO: What challenges did/does Falcon Electric as an organization face in developing its global business?
Stout: The Internet is the vehicle that opens up the world market. In other words, “the Internet is the great equalizer.” Our website is not only our primary sales vehicle, it is also a power-information site: truly a toolbox that solves many problems not addressed elsewhere.
Regarding logistics, our UPS units are small enough to be shipped all over the world, and they require minimal service, which can be performed by the average lab technician. The only service required, aside from cleaning the unit to ensure airflow to internal components, is battery replacement; simple instructions for this procedure are posted on our website. Should a unit need to be returned to the factory to repair damage from a power event, the batteries can be removed before shipping, which reduces the weight significantly and saves on freight costs. For our large-volume customers, we offer service classes for their technical staff and provide spare parts.

Reach Falcon Electric at www.falconups.com, or call 800-842-6940 (toll-free in the United States), or 626-962-7770. Send comments to MLO Editor Carren Bersch at cbersch@nelsonpub.com.

Quake shakes up California lab
The Northridge earthquake struck Southern California in 1994, with Northridge Hospital Medical Center located within a half mile of the epicenter. The worst aspects of this disaster were the disruption of water service and two countertop instruments that fell to the floor and remained completely out of service for 10 days. Objects fell from countertops and shelves throughout the lab, and the pathology grossing area was a mess of spilled specimens and formalin. Various pieces of equipment also moved from their original locations, including blood-bank refrigerators and other analyzers. Refrigerators had the potential to topple over but did not, while filing cabinets posed a significant risk of falling and injuring personnel.
Lessons that we learned were to secure as much equipment to the floor or counters as possible. Labs need to ensure they have adequate utilities or backups (water supplies, emergency electrical power, medical gases). A disaster plan should include both short- and long-term goals in case some or all testing cannot be performed. Provisions should be made to send specimens to other labs or hospitals in an emergency. Blood supplies may need to be relocated if operations are compromised.
Our chemistry analyzers require water for operation, so we stockpile an emergency supply of water — enough for a week to 10 days — in case of another disaster. The hospital’s Engineering staff has portable air-conditioners in case of a failure of the main air-conditioning or chilled-water system. We also arrange for outside electrical generators to be brought in in case of an emergency-generator failure.
—Donald P. Sharar, CLA, MT(ASCP)
Director of Laboratory Services
Northridge Hospital Medical Center
Northridge, CA