Monthly Archives: November 2013

Clinical Engineering Program Indicators

Department Philosophy

Monitoring Internal Operations • Process for Quality Improvement • External Comparisons

Standard Database

Measurement Indicators

Dennis D. Autio

Dybonics, Inc.

Robert L. Morris

Dybonics, Inc.

подпись: dennis d. autio
dybonics, inc.
robert l. morris
dybonics, inc.
Indicator Management Process

Indicator Example 1: Productivity Monitors

Indicator Example 2: Patient Monitors IPM

Completion Time

Summary

The role, organization, and structure of clinical engineering departments in the modern health care environment continue to evolve. During the past 10 years, the rate of change has increased considerably faster than mere evolution due to fundamental changes in the management and organization of health care. Rapid, significant changes in the health care sector are occurring in the United States and in nearly every country. The underlying drive is primarily economic, the recognition that resources are finite.

Indicators are essential for survival of organizations and are absolutely necessary for effective man­agement of change. Clinical engineering departments are not exceptions to this rule. In the past, most clinical engineering departments were task-driven and their existence justified by the tasks performed. Perhaps the most significant change occurring in clinical engineering practice today is the philosophical shift to a more business-oriented, cost-justified, bottom-line-focused approach than has been generally the case in the past.

Changes in the health care delivery system will dictate that clinical engineering departments justify their performance and existence on the same basis as any business, the performance of specific functions at a high quality level and at a competitive cost. Clinical engineering management philosophy must change from a purely task-driven methodology to one that includes the economics of department performance. Indicators need to be developed to measure this performance. Indicator data will need to be collected and analyzed. The data and indicators must be objective and defensible. If it cannot be measured, it cannot be managed effectively.

Indicators are used to measure performance and function in three major areas. Indicators should be used as internal measurements and monitors of the performance provided by individuals, teams, and the department. These essentially measure what was done and how it was done. Indicators are essential during quality improvement and are used to monitor and improve a process. A third important type of program indicator is the benchmark. It is common knowledge that successful businesses will continue to use benchmarks, even though differing terminology will be used. A business cannot improve its competitive position unless it knows where it stands compared with similar organizations and businesses.

Different indicators may be necessary depending on the end purpose. Some indicators may be able to measure internal operations, quality improvement, and external benchmarks. Others will have a more restricted application.

It is important to realize that a single indicator is insufficient to provide the information on which to base significant decisions. Multiple indicators are necessary to provide cross-checks and verification. An example might be to look at the profit margin of a business. Even if the profit margin per sale is 100%, the business will not be successful if there are few sales. Looking at single indicators of gross or net profit will correct this deficiency but will not provide sufficient information to point the way to improvements in operations.

Department Philosophy

A successful clinical engineering department must define its mission, vision, and goals as related to the facility’s mission. A mission statement should identify what the clinical engineering department does for the organization. A vision statement identifies the direction and future of the department and must incorporate the vision statement of the parent organization. Department goals are then identified and developed to meet the mission and vision statements for the department and organization. The goals must be specific and attainable. The identification of goals will be incomplete without at least implied indicators. Integrating the mission statement, vision statement, and goals together provides the clinical engineering department management with the direction and constraints necessary for effective planning.

Clinical engineering managers must carefully integrate mission, vision, and goal information to develop a strategic plan for the department. Since available means are always limited, the manager must carefully assess the needs of the organization and available resources, set appropriate priorities, and determine available options. The scope of specific clinical engineering services to be provided can include maintenance, equipment management, and technology management activities. Once the scope of services is defined, strategies can be developed for implementation. Appropriate program indicators must then be developed to document, monitor, and manage the services to be provided. Once effective indicators are implemented, they can be used to monitor internal operations and quality-improvement processes and complete comparisons with external organizations.

Monitoring Internal Operations

Indicators may be used to provide an objective, accurate measurement of the different services provided in the department. These can measure specific individual, team, and departmental performance param­eters. Typical indicators might include simple tallies of the quantity or level of effort for each activity, productivity (quantify/effort), percentage of time spent performing each activity, percentage of scheduled IPMs (inspection and preventive maintenance procedures) completed within the scheduled period, mean time per job by activity, repair jobs not completed within 30 days, parts order for greater than 60 days, etc.

Process for Quality Improvement

When program indicators are used in a quality-improvement process, an additional step is required. Expectations must be quantified in terms of the indicators used. Quantified expectations result in the establishment of a threshold value for the indicator that will precipitate further analysis of the process. Indicators combined with expectations (threshold values of the indicators) identify the opportunities for program improvement. Periodic monitoring to determine if a program indicator is below (or above, depending on whether you are measuring successes or failures) the established threshold will provide a flag to whether the process or performance is within acceptable limits. If it is outside acceptable limits for the indicator, a problem has been identified. Further analysis may be required to better define the problem. Possible program indicators for quality improvement might include the number of repairs completed within 24 or 48 hours, the number of callbacks for repairs, the number of repair problems caused by user error, the percentage of hazard notifications reviewed and acted on within a given time frame, meeting time targets for generating specification, evaluation or acceptance of new equipment, etc.

An example might be a weekly status update of the percentage of scheduled IPMs completed. Assume that the department has implemented a process in which a group of scheduled IPMs must be completed within 8 weeks. The expectation is that 12% of the scheduled IPMs will be completed each week. The indicator is the percentage of IPMs completed. The threshold value of the indicator is 12% per week increase in the percentage of IPMs completed. To monitor this, the number of IPMs that were completed must be tallied, divided by the total number scheduled, and multiplied by 100 to determine the percentage completed. If the number of completed IPMs is less than projected, then further analysis would be required to identify the source of the problem and determine solutions to correct it. If the percentage of completed IPMs were equal to or greater than the threshold or target, then no action would be required.

External Comparisons

Much important and useful information can be obtained be carefully comparing one clinical engineering program with others. This type of comparison is highly valued by most hospital administrators. It can be helpful in determining performance relative to competitors. External indicators or benchmarks can identify specific areas of activity in need of improvement. They offer insights when consideration is being given to expanding into new areas of support. Great care must be taken when comparing services provided by clinical engineering departments located in different facilities. There are number of factors that must be included in making such comparisons; otherwise, the results can be misleading or misinterpreted. It is important that the definition of the specific indicators used be well understood, and great care must be taken to ensure that the comparison utilizes comparable information before interpreting the compar­isons. Failure to understand the details and nature of the comparison and just using the numbers directly will likely result in inappropriate actions by managers and administrators. The process of analysis and explanation of differences in benchmark values between a clinical engineering department and a com­petitor (often referred to as gap analysis) can lead to increased insight into department operations and target areas for improvements.

Possible external indicators could be the labor cost per hour, the labor cost per repair, the total cost per repair, the cost per bed supported, the number of devices per bed supported, percentage of time devoted to repairs versus IPMs versus consultation, cost of support as a percentage of the acquisition value of capital inventory, etc.

Standard Database

In God we trust…all others bring data!

Florida Power and Light

Evaluation of indicators requires the collection, storage, and analysis of data from which the indicators can be derived. A standard set of data elements must be defined. Fortunately, one only has to look at commercially available equipment management systems to determine the most common data elements used. Indeed, most of the high-end software systems have more data elements than many clinical engineering departments are willing to collect. These standard data elements must be carefully defined and understood. This is especially important if the data will later be used for comparisons with other organizations. Different departments often have different definitions for the same data element. It is crucial that the data collected be accurate and complete. The members of the clinical engineering department must be trained to properly gather, document, and enter the data into the database. It makes no conceptual difference if the database is maintained on paper or using computers. Computers and their databases are ubiquitous and so much easier to use that usually more data elements are collected when computerized systems are used. The effort required for analysis is less and the level of sophistication of the analytical tools that can be used is higher with computerized systems.

The clinical engineering department must consistently gather and enter data into the database. The database becomes the practical definition of the services and work performed by the department. This standardized database allows rapid, retrospective analysis of the data to determine specific indicators identifying problems and assist in developing solutions for implementation. A minimum database should allow the gathering and storage of the following data:

In-House Labor. This consists of three elements: the number of hours spent providing a particular service, the associated labor rate, and the identity of the individual providing the service. The labor cost is not the hourly rate the technician is paid multiplied by the number of hours spent performing the service. It should include the associated indirect costs, such as benefits, space, utilities, test equipment, and tools, along with training, administrative overhead, and many other hidden costs. A simple, straightforward approach to determine an hourly labor rate for a department is to take the total budget of the department and subtract parts’ costs, service contract costs, and amounts paid to outside vendors. Divide the resulting amount by the total hours spent providing services as determined from the database. This will provide an average hourly rate for the department.

Vendor Labor. This should include hours spent and rate, travel, and zone charges and any perdiem costs associated with the vendor supplied service.

Parts. Complete information on parts is important for any retrospective study of services provided. This information is similar for both in-house and vendor-provided service. It should include the part number, a description of the part, and its cost, including any shipping.

Timeless. It is important to include a number of time stamps in the data. These should include the date the request was received, data assigned, and date completed.

Problem Identification. Both a code for rapid computer searching and classification and a free text comment identifying the nature of the problem and description of service provided are important. The number of codes should be kept to as few as possible. Detailed classification schemes usually end up with significant inaccuracies due to differing interpretations of the fine gradations in classifications.

Equipment Identification. Developing an accurate equipment history depends on reliable means of identifying the equipment. This usually includes a department – and/or facility-assigned unique identifi­cation number as well as the manufacturer, vendor, model, and serial number. Identification numbers of provided by asset management are often inadequate to allow tracking of interchangeable modules or important items with a value less than a given amount. Acquisition cost is a useful data element.

Service Requester. The database should include elements allowing identification of the department, person, telephone number, cost center, and location of the service requester.

Measurement Indicators

Clinical engineering departments must gather objective, quantifiable data in order to assess ongoing performance, identify new quality-improvement opportunities, and monitor the effect of improvement action plans. Since resources are limited and everything cannot be measured, certain selection criteria must be implemented to identify the most significant opportunities for indicators. High-volume, high­risk, or problem-prone processes require frequent monitoring of indicators. A new indicator may be developed after analysis of ongoing measurements or feedback from other processes. Customer feedback and surveys often can provide information leading to the development of new indicators. Department management, in consultation with the quality-management department, typically determines what indica­tors will be monitored on an ongoing basis. The indicators and resulting analysis are fed back to individuals and work teams for review and improvement of their daily work activities. Teams may develop new indicators during their analysis and implementation of solutions to quality-improvement opportunities.

An indicator is an objective, quantitative measurement of an outcome or process that relates to performance quality. The event being assessed can be either desirable or undesirable. It is objective in that the same measurement can be obtained by different observers. This indicator represents quantitative, measured data that are gathered for further analysis. Indicators can assess many different aspects of quality, including accessibility, appropriateness, continuity, customer satisfaction, effectiveness, efficacy, efficiency, safety, and timeliness.

A program indicator has attributes that determine its utility as a performance measure. The reliability and variability of the indicator are distinct but related characteristics. An indicator is reliable if the same measurement can be obtained by different observers. A valid indicator is one that can identify opportunities for quality improvement. As indicators evolve, their reliability and validity should improve to the highest level possible.

An indicator can specify a part of a process to be measured or the outcome of that process. An outcome indicator assesses the results of a process. Examples include the percentage of uncompleted, scheduled IPMs, or the number of uncompleted equipment repairs not completed within 30 days. A process indicator assesses an important and discrete activity that is carried out during the process. An example would be the number of anesthesia machines in which the scheduled IPM failed or the number of equipment repairs awaiting parts that are uncompleted within 30 days.

Indicators also can be classified as sentinel event indicators and aggregate data indicators. A perfor­mance measurement of an individual event that triggers further analysis is called a sentinel-event indicator. These are often undesirable events that do not occur often. These are often related to safety issues and do not lend themselves easily to quality-improvement opportunities. An example may include equipment failures that result in a patient injury.

An aggregate data indicator is a performance measurement based on collecting data involving many events. These events occur frequently and can be presented as a continuous variable indicator or as rate – based indicators. A continuous variable indicator is a measurement where the value can fall anywhere along a continuous scale. Examples could be the number of IPMs scheduled during a particular month or the number of repair requests received during a week. A rate-based variable indicator is the value of a measurement that is expressed as a proportion or a ratio. Examples could be the percentage of IPMs completed each month or the percentage of repairs completed within one workday.

General indicators should be developed to provide a baseline monitoring of the department’s perfor­mance. They also should provide a cross-check for other indicators. These indicators can be developed to respond to a perceived need within a department or to solve a specific problem.

Indicator Management Process

The process to develop, monitor, analyze and manage indicators is shown in Fig. 170.1. THe different steps in this process include defining the indicator, establishing the threshold, monitoring the indicator, evaluating the indicator, identifying quality-improvement opportunities, and implementing action plans.

Define Indicator. The definition of the indicator to be monitored must be carefully developed. This process includes at least five steps. The event or outcome to be measured must be described. Define any specific terms that are used. Categorize the indicator (sentinel event or rate-based, process or outcome, desirable or undesirable). The purpose for this indicator must be defined, as well as how it is used in specifying and assessing the particular process or outcome.

Establish Threshold. A threshold is a specific data point that identifies the need for the department to respond to the indicator to determine why the threshold was reached. Sentinel-event indicator thresholds are set at zero. Rate indicator thresholds are more complex to define because they may require expert consensus or definition of the department’s objectives. Thresholds must be identified, including the process used to set the specific level.

Monitor Indicator. Once the indicator is defined, the data-acquisition process identifies the data sources and data elements. As these data are gathered, they must be validated for accuracy and completeness. Multiple indicators can be used for data validation and cross-checking. The use of a computerized database allows rapid access to the data. A database management tool allows quick sorting and organization
Of the data. Once gathered, the data must be presented in a format suitable for evaluation. Graphic presentation of data allows rapid visual analysis for thresholds, trends, and patterns.

FIGURE 170.1 Indicator management process.

подпись: 
figure 170.1 indicator management process.
Evaluate Indicator. The evaluation process analyze and reports the information. This process includes comparing the information with established thresholds and analyzing for any trends or patterns. A trend is the general direction the indicator measurement takes over a period of time and may be desirable or undesirable. A pattern is a group­ing or distribution of indicator measurements. A pattern analysis is often triggered when thresholds are crossed or trends identified. Additional indicator information is often required. If an indictor threshold has not been reached, no further action may be necessary, other than continuing to monitor this indicator. The department also may decide to improve its performance level by changing the threshold.

Factors may be present leading to variation of the indi­cator data. These factors may include failure of the tech­nology to perform properly, failure of the operators to use the technology properly, and failure of the organization to provide the necessary resources to implement this tech­nology properly. Further analysis of these factors may lead to quality-improvement activities later.

Identify Quality-Improvement Opportunity. A quality – improvement opportunity may present itself if an indica­tor threshold is reached, a trend is identified, or a pattern is recognized. Additional information is then needed to further define the process and improvement opportuni­ties. The first step in the process is to identify a team. This

Team must be given the necessary resources to complete this project, a timetable to be followed, and an opportunity to periodically update management on the status of the project. The initial phase of the project will analyze the process and establish the scope and definition of the problem. Once the problem is defined, possible solutions can be identified and analyzed for potential implementation. A specific solution to the problem is then selected. The solution may include modifying existing indictors or thresholds to more appropriate values, modifying steps to improve existing processes, or establishing new goals for the department.

Implement Action Plan. An action plan is necessary to identify how the quality-improvement solution will be implemented. This includes defining the different tasks to be performed, the order in which they will be addressed, who will perform each task, and how this improvement will be monitored. Appropriate resources must again be identified and a timetable developed prior to implementation. Once the action plan is implemented, the indicators are monitored and evaluated to verify appropriate changes in the process. New indicators and thresholds may need to be developed to monitor the solution.

Indicator Example 1: Productivity Monitors

Defines Indicators. Monitor the productivity of technical personnel, teams, and the department. Pro­ductivity is defined as the total number of documented service support hours compared with the total
number of hours available. This is a desirable rate-based outcome indicator. Provide feedback to technical staff and hospital administration regarding utilization of available time for department support activities.

Establish Thresholds. At least 50% of available technician time will be spent providing equipment maintenance support services (revolving equipment problems and scheduled IPMs). At least 25% of available technician time will be spent providing equipment management support services (installations, acceptance testing, incoming inspections, equipment inventory database management, hazard notifica­tion review).

Monitor Indicator. Data will be gathered every 4 weeks from the equipment work order history database. A trend analysis will be performed with data available from previously monitored 4-week intervals. These data will consist of hours worked on completed and uncompleted jobs during the past 4-week interval.

Technical staff available hours is calculated for the 4-week interval. The base time available is 160 hours (40 hours/week x 4 week) per individual. Add to this any overtime worked during the interval. Then subtract any holidays, sick days, and vacation days within the interval.

CJHOURS: Hours worked on completed jobs during the interval UJHOURS: Hours worked on uncompleted jobs during the interval AHOURS: Total hours available during the 4-week interval Productivity = (CJHOURS + UJHOURS)/AHOURS

Evaluate Indicator. The indicator will be compared with the threshold, and the information will be provided to the individual. The individual team member data can be summed for team review. The data from multiple teams can be summed and reviewed by the department. Historical indicator information will be utilized to determine trends and patterns.

Quality-Improvement Process. If the threshold is not met, a trend is identified, or a pattern is observed, a quality-improvement opportunity exists. A team could be formed to review the indicator, examine the process that the indicator measured, define the problem encountered, identify ways to solve the problem, and select a solution. An action plan will then be developed to implement this solution.

Implement Action Plan. During implementation of the action plan, appropriate indicators will be used to monitored the effectiveness of the action plan.

Indicator Example 2: Patient Monitors IPM Completion Time

Define Indicator. Compare the mean to complete an IPM for different models of patient monitors. Different manufacturers of patient monitors have different IPM requirements. Identify the most timely process to support this equipment.

Establish Threshold. The difference between the mean time to complete an IMP for different models of patient monitors will not be greater than 30% of the lessor time.

Monitor Indicator. Determine the mean time to complete an IPM for each model of patient monitor. Calculate the percentage difference between the mean time for each model and the model with the least mean time.

Evaluate Indicator. The mean time to complete IPMs was compared between the patient monitors, and the maximum difference noted was 46%. A pattern also was identified in which all IPMs for that one particular monitor averaged 15 minutes longer than those of other vendors.

Quality-Improvement Process. A team was formed to address this problem. Analysis of individual IPM procedures revealed that manufacturer X requires the case to be removed to access internal filters. Performing an IPM for each monitor required moving and replacing 15 screws for each of the 46 monitors.

The team evaluated this process and identified that 5 minutes could be saved from each IPM if an electric screwdriver was utilized.

Implement Action Plan. Electric screwdrivers were purchased and provided for use by the technician. The completion of one IPM cycle for the 46 monitors would pay for two electric screwdrivers and provide

Hours of productive time for additional work. Actual savings were greater because this equipment could be used in the course of daily work.

170.7 Summary

In the ever-changing world of health care, clinical engineering departments are frequently being evaluated based on their contribution to the corporate bottom line. For many departments, this will require difficult and painful changes in management philosophy. Administrators are demanding quantitative measures of performance and value. To provide the appropriate quantitative documentation required by corporate managers, a clinical engineering a manager must collect available data that are reliable and accurate. Without such data, analysis is valueless. Indicators are the first step in reducing the data to meaningful information that can be easily monitored and analyzed. The indicators can then be used to determine department performance and identify opportunities for quality improvement.

Program indicators have been used for many years. What must change for clinical engineering depart­ments is a conscious evaluation and systematic use of indicators. One traditional indicator of clinical engineering department success is whether the department’s budget is approved or not. Unfortunately, approval of the budge as an indicator, while valuable, does not address the issue of predicting long-term survival, measuring program and quality improvements, or allowing frequent evaluation and changes.

There should be monitored indicators for every significant operational aspect of the department. Common areas where program indicators can be applied include monitoring interval department activ­ities, quality-improvement processes, and benchmarking. Initially, simple indicators should be developed. The complexity and number of indicators should change as experience and needs demand.

The use of program indicators is absolutely essential if a clinical engineering departments is to survive. Program and survival are now determined by the contribution of the department to the bottom line of the parent organization. Indicators must be developed and utilized to determine the current contribution of the clinical engineering department to the organization. Effective utilization and management of program indicators will ensure future department contributions.

References

AAMI. 1993. Management Information Report MIR 1: Design of Clinical Engineering Quality Assurance Risk Management Programs. Arlington, Va, Association for the Advancement of Medical Instru­mentation.

AAMI. 1993. Management Information Report MIR 2: Guideline for Establishing and Administering Medical Instrumentation Maintenance Programs. Arlington, Va, Association for the Advancement of Medical Instrumentation.

AAMI. 1994. Management Information Report MIR 3: Computerized Maintenance Management Systems for Clinical Engineering. Arlington, Va, Association for the Advancement of Medical Instrumen­tation.

Bauld TJ. 1987. Productivity: Standard terminology and definitions. J Clin Eng 12(2):139.

Betts WF. 1989. Using productivity measures in clinical engineering departments. Biomed Instrum Technol 23(2):120.

Bronzino JD. 1992. Management of Medical Technology: A Primer for Clinical Engineers. Stoneham, Mass, Butterworth-Heinemann.

Coopers and Lybrand International, AFSM. 1994. Benchmarking Impacting the Boston Line. Fort Myers, Fla, Association for Services Management International.

David Y, Judd TM. 1993. Risk management and quality improvement. In Medical Technology Manage­ment, pp 72-75. Redmond, Wash, SpaceLab Medical.

David Y, Rohe D. 1986. Clinical engineering program productivity and measurement. J Clin Eng 11(6):435.

Downs KJ, McKinney WD. 1991. Clinical engineering workload analysis: A proposal for standardization. Biomed Instrum Technol 25(2):101.

Fennigkoh L. 1986. ASHE Technical Document No 055880: Medical Equipment Maintenance Perfor­mance Measures. Chicago, American Society for Hospital Engineers.

Furst E. 1986. Productivity and cost-effectiveness of clinical engineering. J Clin Eng 11(2):105.

Gordon GJ. 1995. Break Through Management—A New Model For Hospital Technical Services. Arling­ton, Va, Association for the Advancement of Medical Instrumentation.

Hertz E. 1990. Developing quality indicators for a clinical engineering department. In Plant, Technology and Safety Management Series: Measuring Quality in PTSM. Chicago, Joint Commission on Accreditation of Healthcare Organizations.

JCAHO. 1990. Primer on Indicator Development and Application, Measuring Quality in Health Care. Oakbrook, Ill, Joint Commission on Accreditation of Healthcare Organizations.

JCAHO. 1994. Framework for Improving Performance. Oakbrook, Ill, Joint Commission on Accreditation of Healthcare Organizations.

Keil OR. 1989. The challenge of building quality into clinical engineering programs. Biomed Instrum Technol 23(5):354.

Lodge DA. 1991. Productivity, efficiency, and effectiveness in the management of healthcare technology: An incentive pay proposal. J Clin Eng 16(1):29.

Mahachek AR. 1987. Management and control of clinical engineering productivity: A case study. J Clin Eng 12(2):127.

Mahachek AR. 1989. Productivity measurement. Taking the first steps. Biomed Instrum Technol 23:16.

Selsky DB, Bell DS, Benson D, et al. 1991. Biomedical equipment information management for the next generation. Biomed Instrum Technol 25(1):24.

Sherwood MK. 1991. Quality assurance in Biomedical or clinical engineering. J Clin Eng 16(6):479.

Stiefel RH. 1991. Creating a quality measurement system for clinical engineering. Biomed Instrum Technol. 25(1):17.

McClain, J. P. “Quality of Improvement and Team Building.” The Biomedical Engineering Handbook: Second Edition.

Ed. Joseph D. Bronzino

Boca Raton: CRC Press LLC, 2000

Risk Factors, Safety, and Management of Medical Equipment

169.1

Risk Management: A Definition

169.2

Risk Management: Historical Perspective

169.3

Risk Management: Strategies

169.4

Risk Management: Application

Michael L. Gullikson

169.5

Case Studies

Texas Children’s Hospital

169.6

Conclusions

Risk Management: A Definition

Inherent in the definition of risk management is the implication that the hospital environment cannot be made risk-free. In fact, the nature of medical equipment—to invasively or noninvasively perform diagnostic, therapeutic, corrective, or monitoring intervention on behalf of the patient—implies that risk is present. Therefore, a standard of acceptable risk must be established that defines manageable risk in a real-time economic environment.

Unfortunately, a preexistent, quantitative standard does not exist in terms of, for instance, mean time before failure (MTBF), number of repairs or repair redos per equipment item, or cost of maintenance that provides a universal yardstick for risk management of medical equipment. Sufficient clinical man­agement of risk must be in place the can utilize safeguards, preventive maintenance, and failure analysis information to minimize the occurrence of injury or death to patient or employee or property damage. Therefore, a process must be put in place that will permit analysis of information and modification of the preceding factors to continuously move the medical equipment program to a more stable level of manageable risk.

Risk factors that require management can be illustrated by the example of the “double-edge” sword concept of technology (see Fig. 169.1). The front edge of the sword represents the cutting edge of technology and its beneficial characteristics: increased quality, greater availability of technology, timeliness of test results and treatment, and so on. The back edge of the sword represents those liabilities which must be addressed to effectively manage risk: the hidden costs discussed in the next paragraph, our dependence on technology, incompatibility of equipment, and so on [1].

For example, the purchase and installation of a major medical equipment item may only represent 20% of the lifetime cost of the equipment [2]. If the operational budget of a nursing floor does not include the other 80% of the equipment costs, the budget constraints may require cutbacks where they appear to minimally affect direct patient care. Preventive maintenance, software upgrades that address “glitches,” or overhaul requirements may be seen as unaffordable luxuries. Gradual equipment deterio­ration without maintenance may bring the safety level below an acceptable level of manageable risk.

HIDDEN COSTS MULTIPLE OPTIONS NEW SKILLS/RETRAINING BUILT-IN OBSOLESCENCE TECHNOLOGY

DEPENDENCE

NON-STANDARDIZATION INCOMPATIBILITY TECHNICAL LANGUAGE

QUALITY DIAGNOSTICS TECHNOLOGY AVAILABILITY TIMELINESS

PRODUCTIVITY CONSISTENCY COST SAVINGS

Risk Factors, Safety, and Management of Medical Equipment
Risk Factors, Safety, and Management of Medical Equipment

FIGURE 169.1 Double-edged sword concept of risk management.

 

Risk Factors, Safety, and Management of Medical Equipment

Since economic factors as well as those of safety must be considered, a balanced approach to risk management that incorporates all aspects of the medical equipment lifecycle must be considered.

The operational flowchart in Fig. 169.2 describe the concept of medical equipment life-cycle manage­ment from the clinical engineering department viewpoint. The flowchart includes planning, evaluation, and initial purchase documentation requirements. The condition of sale, for example, ensures that technical manuals, training, replacement parts, etc. are received so that all medical equipment might be fully supported in-house after the warranty period. Introduction to the preventive maintenance program, unscheduled maintenance procedures, and retirement justification must be part of the process. Institu­tional-wide cooperation with the life-cycle concept requires education and patience to convince health care providers of the team approach to managing medical equipment technology.

This balanced approach requires communication and comprehensive planning by a health care team responsible for evaluation of new and shared technology within the organization. A medical technology evaluation committee (see Fig. 169.3), composed of representatives from administration, medical staff, nursing, safety department, biomedical engineering, and various services, can be an effective platform for the integration of technology and health care. Risk containment is practiced as the committee reviews not only the benefits of new technology but also the technical and clinical liabilities and provides a 6-month followup study to measure the effectiveness of the selection process. The history of risk man­agement in medical equipment management provides helpful insight into its current status and future direction.

Risk Management: Historical Perspective

Historically, risk management of medical equipment was the responsibility of the clinical engineer (Fig. 169.4). The engineer selected medical equipment based on individual clinical department consul­tations and established preventive maintenance (PM) programs based on manufacturer’s recommenda­tion and clinical experience. The clinical engineer reviewed the documentation and “spot-checked” equipment used in the hospital. The clinical engineer met with biomedical supervisors and technicians to discuss PM completion and to resolve repair problems. The clinical engineer then attempted to analyze failure information to avoid repeat failure.

Pre-Purchase Request Planning Pre-Installation Work Installation

, , ,

Clinical

Department

Request

 

Product

Standards

Committee

 

Purchasing

Department

 

Purchase Requisition to Vendor

 

Equipment

Delivery

 

Equipment

Enters

Program

 

Equipment

Installed

 

Technical &

Clinical Eng.

Clinical Eng

>’

Clinical Eng.

…. M’

Clinical Eng.

Clinical

Writes Condi­

Tags & Inspects

Evaluates

Monitors

Evaluation

Tions of Sale

Equipment

Static Risk

Performance

Planned Maintenance Design

подпись: planned maintenance design

Equipment

Operational

Clinical Eng. Assigns Tech & Sets PM

подпись: equipment
operational
clinical eng. assigns tech & sets pm
Equipment Planned Maintenance Cycle

Equipment

Equipment

Equipment

Unscheduled

Enters PM

Due for PM

—>

PM Inspection

У Pass Inspect?^-‘ ‘

—i

Maintenance

Program

R—

Program

Yes

Clinical Eng.

V

Tcch Starts

Return to

1. Scheduled Rounds

Issues/Reviews

ESR/EIR,

Service

2. Service Calls

Due/Over PM

Inspects/PMs

3. "Walk-Ins"

Equipment Evaluation and Repair

подпись: equipment evaluation and repair

Technician:

Notifies Department of Removal Puts Defective Tag On Informs Supervisor

подпись: technician:
notifies department of removal puts defective tag on informs supervisor

Clinical Engineer Technician Owner Department Reviews ESR History Reviews Cost & Plans

подпись: clinical engineer technician owner department reviews esr history reviews cost & plans

Technician Updates ESR Informs Owner of Status Orders Parts

подпись: technician updates esr informs owner of status orders parts

Technician Completes ESR Informs Owner of Return Enters EIR Form and Turns in to Engineer

подпись: technician completes esr informs owner of return enters eir form and turns in to engineer Risk Factors, Safety, and Management of Medical Equipment

No~

L^Clinical Engineer ‘Processes Retirement Paperwork

подпись: no~
l^clinical engineer 'processes retirement paperwork
2-8-93 mhb bcems. pub

FIGURE 169.2 Biomedical engineering equipment management system (BEEMS).

Risk Factors, Safety, and Management of Medical Equipment

FIGURE 169.3 Medical technology evaluation committee.

Risk Factors, Safety, and Management of Medical Equipment

FIGURE 169.4 Operational flowchart.

However, greater public awareness of safety issues, increasing equipment density at the bed-side, more sophisticated software-driven medical equipment, and financial considerations have made it more diffi­cult for the clinical engineer to singularly handle risk issues. In addition, the synergistic interactions of various medical systems operating in proximity to one another have added another dimension to the risk formula. It is not only necessary for health care institutions to manage risk using a team approach, but it is also becoming apparent that the clinical engineer requires more technology-intensive tools to effectively contribute to the team effort [3].

Medical Equipment Operator Error

Medical Equipment Failure

Medical Equipment Physical Damage

105 Medical Equipment Failed PM

108 Medical Equipment MBA

FIGURE 169.5 Failure codes.

Risk Management: Strategies

Reactive risk management is an outgrowth of the historical attitude in medical equipment management that risk is an anomaly that surfaces in the form of a failure. If the failure is analyzed and proper operational procedures, user in-services, and increased maintenance are supplied, the problem will disappear and personnel can return to their normal work. When the next failure occurs, the algorithm is repeated. If the same equipment fails, the algorithm is applied more intensely. This is a useful but not comprehensive component of risk management in the hospital. In fact, the traditional methods of predicting the reliability of electronic equipment from field failure data have not been very effective [4]. The health care environment, as previously mentioned, inherently contains risk that must be maintained at a manageable level. A reactive tool cannot provide direction to a risk-management program, but it can provide feedback as to its efficiency.

The engine of the reactive risk-management tool is a set of failure codes (see Fig. 169.5) That flag certain anomalous conditions in the medical equipment management program. If operator training needs are able to be identified, then codes 100, 102, 104, and 108 (MBA equipment returned within 9 days for a subsequent repair) may be useful. If technician difficulties in handling equipment problems are of concern, then 108 may be of interest. The key is to develop failure codes not in an attempt to define all possible anomaly modalities but for those which can clearly be defined and provide unambiguous direction for the correction process. Also, the failure codes should be linked to equipment type, manufacturer/model, technician service group, hospital, and clinical department. Again, when the data are analyzed, will the result be provided to an administrator, engineer, clinical departmental director, or safety department? This should determine the format in which the failure codes are presented.

A report intended for the clinical engineer might be formatted as in Fig. 169.6. It would consist of two part, sorted by equipment type and clinical department (not shown). The engineer’s report shows the failure code activity for various types of equipment and the distribution of those failure codes in clinical departments.

Additionally, fast data-analysis techniques introduced by NASA permit the survey of large quantities of information in a three-dimensional display [5] (Fig. 169.7). This approach permits viewing time­variable changes from month to month and failure concentration in specific departments and equipment types.

The importance of the format for failure modality presentation is critical to its usefulness and accep­tance by health care professionals. For instance, a safety director requests the clinical engineer to provide a list of equipment that, having failed, could have potentially harmed a patient or employee. The safety director is asking the clinical engineer for a clinical judgment based on clinical as well as technical factors. This is beyond the scope of responsibility and expertise of the clinical engineer. However, the request can be addressed indirectly. The safety director’s request can be addressed in two steps: first, providing a list of high-risk equipment (assessed when the medical equipment is entered into the equipment inventory) and, second, a clinical judgment based on equipment failure mode, patient condition, and so

TotAl Fail PM PM Iteras

10514 PULMONARY IN5TR TEXAS CHILDREN’S HOSP

I

NON-TAGGED EQUIPMENT

0

1280

THERMOMETER, ELECTRONIC

2

1292

RAOIRNT UARHERr INFANT

1

1306

INCUBATOR, NEONATAL

0

1307

INCUBATOR, TRANSPORT, NEONATAL

9

1320

PHOTOTHERAPY UNIT, NEONATAL

4

1321

INFUSION PUMP

35

1357

SUCTION, VAC POlOEDtBODY FLUID

0

1384

EXAMINATION LIGHT T AC-PGUERED

11

1447

CARDIAC MONITOR W RATE ALAR«

0

1W

SURGICAL NERVE STIMULATOR /LGC

0

1424

OTOSCOPE

73

1475

OXYGEN GAS ANALYZER

0

1481

SPIROMETER DIAGNOSTIC

0

1703

AIRWAY PRESSURE MONITOR

1

1735

BREATHING GAS MIXER

13

174?

HYPO/HYPERTHERMIA DEVICE

1

1762

NEBULIZER

25

1787

VENTILATOR CONTINUOUS

96

1788

VENTILATOR NONCQNTINUOUS

1

Ј014

HEMODIALYSIS SYSTEM ACCESSOR IE

0

Aosi

PERITONEAL DIALYSIS SYS 8. ACC

4

24 B4

SPECTROPHOTOMETER* MASS

0

2695

POWERED SUCTION PUMP

16

Soea

PH METER

0

5035

COMPUTER & PERIPHERALS

0

5081

OXYGEN MONITOR

0

Soaa

RESPIRATION ANALYZER

1

S097

EXAM TABLE

75

5U3

PRINTER

2

5126

ADORESSOGRAPH

2

9102

STADIOMETER

0

17211

ANESTHESIA MONITOR

23

90063

POWER SUPPLYi PORTABLE

20

Total for TEXAS CHILDREN’S HOSP 415 12

Sou ree

подпись: sou reeFIGURE 169.6

X Reported

Physical

Patient Employee

Back

Equip Equip %

Fail Fail-ЬK

Damage

Injury Injury

Again

Tail Count Tail

0.00

1

S

14

0*00

0.00

1

?9

1.01

0.00

3

«S3

4.76

0.00

7

V

S6

16.07

33.33

1

1 *

4

9

44.44

0.00

2

28

7.14

0.00

15

9

3 *

34

514

6.61

0,00

4

358

1.12

0.00

1

47

2*13

0.00

1

1

1 *

1

0.00

0.00

A

IS

13-33

0.00

1

101

0.99

0*00

3

44

6.82

0.00

1

8

12,50

0.00

1 *

7

9

77.78

0-00

1 *

4

39

10.S3

0.00

1

3

33.33

0*00

1

56

1.79

7.29

1

1 *

11

99

11.11

0.00

1

3

27

11.11

0.00

3

1

300.00

0.00

1

5

20.00

0.00

2

3

66.67

0.00

1

1 *

1

32

3.13

0.00

1

1

4

25.00

0-00

3

18

16.67

0_00

3

1

1&

102

14.71

0.00

1

5

20.00

0.00

1

86

1.16

O. oo

1

12

8.33

0.00

4

1

400*00

0.00

1

8

12.50

0.00

2

23

8.70

0.00

1

As

4.00

20

Ј8

9

144

1899

Engineer’s failure analysis report.

Risk Factors, Safety, and Management of Medical Equipment

Risk Factors, Safety, and Management of Medical Equipment

FIGURE 169.7 Failure code analysis using a 3-D display.

On. The flowchart in Fig. 169.8 p Rovides the safety director with useful information but does not require the clinical engineer to make an unqualified clinical judgment. If the “failed PM” failure code were selected from the list of high-risk medical equipment requiring repair, the failure would be identified by the technician during routine preventive maintenance and most likely the clinician still would find the equipment clinically efficacious. This condition is a “high risk, soft failure” or a high-risk equipment item whose failure is least likely to cause injury. If the “failed PM” code were not used, the clinician would question the clinical efficacy of the medical equipment item, and the greater potential for injury would be identified by “high risk, hard failure.” Monitoring the distribution of high-risk equipment in these two categories assists the safety director in managing risk.

Obviously, a more forward-looking tool is needed to take advantage of the failure codes and the plethora of equipment information available in a clinical engineering department. This proactive tool should use failure codes, historical information, the “expert” knowledge of the clinical engineer, and the baseline of an established “manageable risk” environment (perhaps not optimal but stable).

The overall components and process flow for a proactive risk-management tool [6] are presented in Fig. 169.9. It consists of a two-component static risk factor, a two-component dynamic risk factor, and two “shaping” or feedback loops.

The static risk factor classifies new equipment by a generic equipment type: defibrilator, electrocar­diograph, pulse oximeter, etc. When equipment is introduced into the equipment database, it is assigned to two different static risk (Fig. 169.10) Categories [7]. The first is the equipment function that defines the application and environment in which the equipment item will operate. The degree of interaction

Equipment Service Report Service Type = 3 (Repair)

Failure Code = 101 "Equipment Failure"

NO

EQUIPMENT FUNCTION > ] 5 ? PHYSICAL FAILURE >15?

YES

105 FAILED PM?

YES

T

^ NO

HIGH RISK – SOFT FAILURE

HIGH RISK – HARD FAILURE

FIGURE 169.8 High-risk medical equipment failures.

With the patient is also taken into account. For example, a therapeutic device would have a higher risk assignment than a monitoring or diagnostic device. The second component of the static risk factor is the physical risk category. It defines the worst-cases scenario in the event of equipment malfunction. The correlation between equipment function and physical risk on many items might make the two categories appear redundant. However, there are sufficient equipment types where there is not the case. A scale of

To 25 is assigned to each risk category. The larger number is assigned to devices demonstrating greater risk because of their function or the consequences of device failure. The 1 to 25 scale is an arbitrary assignment, since a validated scale of risk factors for medical equipment, as previously described, is nonexistent. The risk points assigned to the equipment from these two categories are algebraically summed and designated the static risk factor. This value remains with the equipment type and the individual items within that equipment type permanently. Only if the equipment is used in a clinically variant way or relocated to a functionally different environment would this assignment be reviewed and changed.

The dynamic component (Fig. 169.11) of the risk-management tool consists or two parts. The first is a maintenance requirement category that is divided into 25 equally spaced divisions, ranked by least (1) to greatest (25) average manhours per device per year. These divisions are scaled by the maintenance hours for the equipment type requiring the greatest amount of maintenance attention. The amount of nonplanned (repair) manhours from the previous 12 months of service reports is totaled for each equipment type. Since this is maintenance work on failed equipment items, it correlates with the risk associated with that equipment type.

If the maintenance hours of an equipment type are observed to change to the point of placing it in a different maintenance category, a flag notifies the clinical engineer to review the equipment-type category. The engineer may increase the PM schedule to compensate for the higher unplanned maintenance hours. If the engineer believes the system “overacted,” a “no” decision adjusts a scaling factor by a -5%. Progres­sively, the algorithm is “shaped” for the equipment maintenance program in that particular institution. However, to ensure that critical changes in the average manhours per device for each equipment type is not missed during the shaping period, the system is initialized. This is accomplished by increasing the average manhours per device for each equipment type to within 5% of the next higher maintenance requirement division. Thus the system is sensitized to variations in maintenance requirements.

STATIC RISK FACTOR

подпись: static risk factorDYNAMIC RISK FACTOR

Equipment

Physical

Maintenance

Risk

Risk

Function

Risk

Requirement

Points

Priority

Risk Factors, Safety, and Management of Medical Equipment

0–Non Patient 5—Patient Related & Other 8-Computer & Related 10-Laboratory Accessories 13-Analytical Laboratory 15—Additional Monitoring 18—Surgical & IC Monitoring 20-Physical Therapy & Treatment 23-Surgical & IC 25—Life Support 5–No Significant Risks 10–Patient Discomfort 15-lnappropriate Therapy or Misdiagnosis 20-Patient or Operator Injury 25—Patient Death

25 Divisions Based on Least to Greatest Equipment Type Unplanned Maintenance Requirements

Initialize Scaling Factor

Pts Risk

+1 Exceeds AHA Useful Life +2/2 E mployee/Patient Injury +1 Equipment Failure +1 Exceeds MTBF +1 Repair Redo (<7 days)

+1 User Operational Error +1 PM Inspection Failure +1 Physical Damage +1 PM Overdue

Total Risk Points for Units MINUS Equipment Type Risk Points/D evice/Y ea r

O

81-100

5

O

61-80

4

O

41-60

3

0

21-40

•»

2

O

1-20

1

1. PM Scheduler 2.. Repair w/PM Skip

Repair Prioritization

Education

Equipment Control Master

Equipment Type Risk Factor

NO-Apply-5%

Correction

Risk Factors, Safety, and Management of Medical Equipment

Equipment Control

Master

Unit

Risk Factor

 

Engineer

Review

 

Increase

Maintenance?

 

Maintenance?

Yes-Adjust PM Scheduler

/

/

>

O

NO—Apply -5% Correction

подпись: no—apply -5% correction

Fast Dynamic Factor

подпись: fast dynamic factorL

Slow Dynamic Factor

Equipment-Type Based Unit-Based

FIGURE 169.9 Biomedical engineering risk-management tool.

EQUIPMENT

FUNCTION

|

25

Life Support

1

G-

23

Surgical & IC

*

20

Physical Therapy &

Treatment

18

Surgical & IC

|

Monitoring

41

15

Additional Monitoring

& Diagnostic

13

Analytical Laboratory

Ј•

10

Laboratory Accessories

E

<

8

Computer & Related

5

Patient Related & Other

I

0

Non-Patient

Risk Factors, Safety, and Management of Medical Equipment

Risk Factors, Safety, and Management of Medical Equipment

25 Patient or Operator Death

20 Patient or Operator Injury

15 Inappropriate Therapy or

Misdiagnosis 10 Patient Discomfort

5 No Significant Risks

подпись: 25 patient or operator death
20 patient or operator injury
15 inappropriate therapy or
misdiagnosis 10 patient discomfort
5 no significant risks
FIGURE 169.10 Static risk components.

MAINTENANCE

REQUIREMENT

 

RISK

POINTS

 

Risk Factors, Safety, and Management of Medical Equipment

Unit Scaling Factor

подпись: unit scaling factor

25 Divisions based on least to greatest Equipment Type annual unplanned maintenance requirements.

Annual Risk Points assigned based on 9 risk categories.

Points

Risk

+ 1

Exceeds AHA Useful Life

+ 2/2

Patient/Employee Injury

+ i

Equipment Failure

+ 1

Exceeds MTBF

+ 1

Repair Redo (<7 day turnaround)

+ 1

User Operational Error

+ I

PM Inspection Failure

+ !

Physical Damage

+ 1

PM Overdue

Total Risk Points For Unit Devices

Minus Equipment Type Annual Risk Points.

FIGURE 169.11 Dynamic risk components.

The baseline is now established for evaluating individual device risk. Variations in the maintenance requirement hours for any particular equipment type will, for the most part, only occur over a substantial period of time. For this reason, the maintenance requirement category is designated a “slow” dynamic risk element.

The second dynamic element assigns weighted risk points to individual equipment items for each unique risk occurrence. An occurrence is defined as when the device

Exceeds the American Hospital Association Useful Life Table for Medical Equipment or exceeds the historical MTBF for that manufacturer and model

Injures a patient or employee

Functionally fails or fails to pass a PM inspection

Is returned for repair or returned for rerepair within 9 days of a previous repair occurrence

Misses a planned maintenance inspection

Is subjected to physical damage

Was reported to have failed but the problem was determined to be a user operational error

There risk occurrences include the failure codes previously described. Although many other risk occur­rences could be defined, these nine occurrences have been historically effective in managing equipment risk. The risk points for each piece or equipment are algebraically summed over the previous year. Since the yearly total is a moving window, the risk points will not continue to accumulate but will reflect a recent historical average risk. The risk points for each equipment type are also calculated. This provides a baseline to measure the relative risk of devices within an equipment type. The average risk points for the equipment type are subtracted from those for each piece of equipment within the equipment type. If the device has a negative risk point value, the device’s risk is less than the average device in the equipment type. If positive, then the device has higher risk than the average device. This positive or negative factor is algebraically summed to the risk values from the equipment function, physical risk, and maintenance requirements. The annual risk points for an individual piece of equipment might change quickly over several months. For this reason, this is the “fast” component of the dynamic risk factor.

The concept of risk has now been quantified in term of equipment function, physical risk, maintenance requirements, and risk occurrences. The total risk count for each device then places it in one of five risk priority groups that are based on the sum of risk points. These groups are then applied in various ways to determine repair triage, PM triage, educational and in-service requirements and test equipment/parts, etc. in the equipment management program.

Correlation between the placement of individual devices in each risk priority group and the levels of planned maintenance previously assigned by the clinical engineer have shown that the proactive risk­management tool calculates a similar maintenance schedule as manually planned by the clinical engineer. In other words, the proactive risk-management tool algorithm places equipment items in a risk priority group commensurate with the greater or lesser maintenance as currently applied in the equipment maintenance program.

As previously mentioned, the four categories and the 1 to 25 risk levels within each category are arbitrary because a “gold standard” for risk management is nonexistent. Therefore, the clinical engineer is given input into the dynamic components making up the risk factor to “shape the system” based on the equipment’s maintenance history and the clinical engineer’s experience. Since the idea of a safe medical equipment program involves “judgment about the acceptability of risk in a specified situation” [8], this experience is a necessary component of the risk-assessment tool for a specific health care setting.

In the same manner, the system tracks the unit device’s assigned risk priority group. If the risk points for a device change sufficiently to place it in a different group, it is flagged for review. Again, the clinical engineer reviews the particular equipment item and decides if corrective action is prudent. Otherwise, the system reduces the scaling factor by 5%. Over a period of time, the system will be “formed” to what is acceptable risk and what deserves closer scrutiny.

Risk Management: Application

The information can be made available to the clinical engineer in the form of a risk assessment report (see Fig. 169.12). The report lists individual devices by property tag number (equipment control number), manufacturer, model, and equipment type. Assigned values for equipment function and physical risk are constant for each equipment type. The maintenance sensitizing factor enables the clinical engineer to control the algorithm’s response to the maintenance level of an entire equipment type. These factors combine to produce the slow risk factor (equipment function + physical risk + maintenance require­ments). The unit risk points are multiplied for the unit scaling factor, which allows the clinical engineer to control the algorithm’s response to static and dynamic risk components on individual pieces of equipment. This number is then added to the slow risk factor to determine the risk factor for each item. The last two columns are the risk priority that the automated system has assigned and the PM level set by the clinical engineer. This report provides the clinical engineer with information about medical equipment that reflects a higher than normal risk factor for the equipment type to which it belongs.

The proactive risk management tool can be used to individually schedule medical equipment devices for PM based on risk assessment. For example, why should newer patient monitors be maintained at the same maintenance level as older units if the risk can be demonstrated to be less? The tool is used as well to prioritize the planned maintenance program. For instance, assume a PM cycle every 17 weeks is started on January 1 for a duration of 1 week. Equipment not currently available for PM can be inspected at a later time as a function of the risk priority group for that device. In other words, an equipment item with a risk priority of 2, which is moderately low, would not be overdue for 2/5 of the time between the current and the next PM start date or until the thirteenth week after the start of a PM cycle of 17 weeks. The technicians can complete more critical overdue equipment first and move on to less critical equipment later.

Additionally, since PM is performed with every equipment repair, is it always necessary to perform the following planned PM? Assume for a moment that unscheduled maintenance was performed 10 weeks into the 17 weeks between the two PM periods discussed above. IF the equipment has a higher risk priority of the three, four, or five, the equipment is PMed as scheduled in April. However, if a lower equipment risk priority of one or two is indicated, the planned maintenance is skipped in April and resumed in July. The intent of this application is to reduce maintenance costs, preserve departmental resources, and minimize the war and tear on equipment during testing.

Historically, equipment awaiting service has been placed in the equipment holding area and inspected on a first in, first out (FIFO) basis when a technician is available. A client’s request to expedite the equipment repair was the singular reason for changing the work priority schedule. The proactive risk­management tool can prioritize the equipment awaiting repair, putting the critical equipment back into service more quickly, subject to the clinical engineer’s review.

Case Studies

Several examples are presented of the proactive risk-assessment tool used to evaluate the performance of medical equipment within a program.

The ventilators in Fig. 169.13 sHow a decreasing unit risk factor for higher equipment tag numbers. Since devices are put into service with ascending tag numbers and these devices are known to have been purchased over a period of time, the X axis represents a chronological progression. The ventilator risk factor is decreasing for newer units and could be attributable to better maintenance technique or man­ufacturer design improvements. This device is said to have a time-dependent risk factor.

A final illustration uses two generations of infusion pumps from the same manufacturer. Figure 169.14 shows the older vintage pump as Infusion Pump 1 and the newer version as Infusion Pump 2. A linear regression line for the first pump establishes the average risk factor as 53 with a standard deviation of

For the 93 pumps in the analysis. The second pump, a newer version of the first, had an average risk factor of 50 with a standard deviation of 1.38 for 261 pumps. Both pumps have relatively time-indepen­dent risk factors. The proactive risk-management tool reveals that this particular brand of infusion pump in the present maintenance program is stable over time and the newer pump has reduced risk and variability of risk between individual units. Again, this could be attributable to tighter manufacturing control or improvements in the maintenance program.

Equip

Control

Number

Manuf

Model

Equipment Equip Type Func

Phys

Risk

Maint

Requir

Avg

Hours

Maint Slow Sensitiz Risk Factor Factor

Unit

Risk

Points

Unit Risk Scaling Factor Factor

Risk Equip Priority Type

Priority

Manager: 17407 322

PHYSIOLOGICAL GROUP 4000A N1BP SYSTEM 18

15

1

1.66

1.78

38

6.62

1.00

41

3

2

17412

322

4000A

NIBP SYSTEM 18

15

1

1.66

1.78

38

6.62

1.00

41

3

2

17424

322

4000A

NIBP SYSTEM 18

15

1

1.66

1.78

38

6.62

1.00

41

3

2

17431

322

4000A

NIBP SYSTEM 18

15

1

1.66

1.78

38

6.62

1.00

41

3

2

15609

65

BW5

BLOOD &PLMA 5

5

1

2.51

1.17

14

10.64

1.00

22

2

1

3538

167

7370000

WARMING DEVICE HR/RESP MONITOR 18

15

1

0.10

29.47

35

8.69

1.00

43

3

2

3543

167

7370000

HR/RESP MONITOR 18

15

1

0.10

29.47

35

7.69

1.00

42

3

2

15315

167

7370000

HR/RESP MONITOR 18

15

1

0.10

29.47

35

7.69

1.00

42

3

2

17761

167

7370000

HR/RESP MONITOR 18

15

1

0.10

29.47

35

6.69

1.00

41

3

2

18382

574

N100C

PULSE OXIMETER 18

15

1

0.70

4.21

35

7.54

1.00

42

3

2

180476

574

N100C

PULSE OXIMETER 18

15

1

0.70

4.21

35

7.54

1.00

42

3

2

16685

167

7275217

2 CHAN CHART REC 18

15

1

0.42

7.02

37

6.83

1.00

41

3

2

I have reviewed this risk analysis report and have investigated these equipment items for whcih the risk priority factor has exceeded the average risk for that equipment type, I have taken one of two actions:

Investigated the equipment item and implemented changes to the maintenance program intended to reduce the risk priority value OR

Indicated on the printout that the dynamic risk factor program has "overreacted" and the risk factor should be reduced by 5%.

ENGINEER:

DATE:

FIGURE 169.12 Engineer’s risk-assessment report.

FIGURE 169.13 Ventilator with time-dependent risk characteristics.

RISK FACTOR

подпись: risk factorRisk Factors, Safety, and Management of Medical Equipment

Risk Factors, Safety, and Management of Medical Equipment

FIGURE 169.14 Time-independent risk characteristics infusion pump #1.

Conclusions

In summary, superior risk assessment within a medical equipment management program requires better communication, teamwork, and information analysis and distribution among all health care providers. Individually, the clinical engineer cannot provide all the necessary components for managing risk in the health care environment. Using historical information to only address equipment-related problems, after an incident, is not sufficient. The use of a proactive risk-management tool is necessary.

The clinical engineer can use this tool to deploy technical resources in a cost-effective manner. In addition to the direct economic benefits, safety is enhanced as problem equipment is identified and monitored more frequently. The integration of a proactive risk-assessment tool into the equipment man­agement program can more accurately bring to focus technical resources in the health care environment.

References

Gullikson ML. 1994. Biotechnology Procurement and Maintenance II: Technology Risk Manage­ment. Third Annual International Pediatric Colloquium, Houston, Texas.

David Y. 1992. Medical Technology 2001. Health Care Conference, Texas Society of Certified Public Accountants, San Antonio.

Gullikson ML. 1993. An Automated Risk Management Tool. Plant, Technology, and Safety Man­agement Series, Joint Commission on the Accreditation of Healthcare Facilities (JCAHO) Mono­graph 2.

Pecht ML, Nash FR. 1994. Predicting the reliability of electronic equipment. Proc IEEE 82(7):990.

Gullikson ML, David Y. 1993. Risk-Based Equipment Management Systems. The 9th National Conference on Medical Technology Management, American Society for Hospital Engineering of the American Hospital Association (AHA), New Orleans.

Gullikson ML. 1992. Biomedical Equipment Maintenance System. 27th Annual Meeting and Expo­sition, Hospital and Medical Industry Computerized Maintenance Systems, Association for the Advancement of Medical Instrumentation (AAMI), Anaheim, Calif.

Fennigkoh L. 1989. Clinical Equipment Management. Plant, Technology, and Safety Management Monograph 2.

David Y, Judd T. 1993. Medical Technology Management, Biophysical Measurement Series. Spacelabs Medical, Inc., Redmond, Wash.

Autio, D. D., Morris, R. L. “Clinical Engineering Program Indicators.” The Biomedical Engineering Handbook: Second Edition.

Ed. Joseph D. Bronzino

Boca Raton: CRC Press LLC, 2000

Management and Assessment of Medical Technology


As medical technology continues to evolve, so does its impact on patient outcome, hospital operations, and financial efficiency. The ability to plan for this evolution and its subsequent implications has become a major challenge in most decisions of health care organizations and their related industries. Therefore, there is a need to adequately plan for and apply those management tools which optimize the deployment of medical technology and the facilities that house it. Successful management of the technology and facilities will ensure a good match between the needs and the capabilities of staff and technology, respectively. While different types and sizes of hospitals will consider various strategies of actions, they all share the need to manage efficient utilization of their limited resources and its monitoring. Technology is one of these resources, and while it is frequently cited as the culprit behind cost increases, the well – managed technology program contribute to a significant containment of the cost of providing quality patient care. Clinical engineer’s skills and expertise are needed to facilitate the adoption of an objective methodology for implantation of a program that will match the hospital’s needs and operational conditions.

Whereas both the knowledge and practice patterns of management in general are well organized in today’s literature, the management of the health care delivery system and that of medical technology in the clinical environment has not yet reached that same high level. However, as we begin to understand the relationship between the methods and information that guide the decision-making processes regard­ing the management of medical technology that are being deployed in this highly complex environment, the role of the qualified clinical engineer becomes more valuable. This is achieved by reformulating the
Technology management process, which starts with the strategic planning process, continues with the technology assessment process, leads to the equipment planning and procurement processes, and finally ends with the assets management process. Definition of terms used in this chapter are provided at the end of the chapter.

The Health Care Delivery System

Societal demands on the health care delivery system revolve around cost, technology, and expectations. To respond effectively, the delivery system must identify its goals, select and define its priorities, and then wisely allocate its limited resources. For most organizations, this means that they must acquire only appropriate technologies and manage what they have already more effectively. To improve performance and reduce costs, the delivery system must recognize and respond to the key dynamics in which it operates, must shape and mold its planing efforts around several existing health care trends and directions, and must respond proactively and positively to the pressures of its environment. These issues and the technology manager’s response are outlined here: (1) technology’s positive impact on care quality and effectiveness, (2) an unacceptable rise in national spending for health care services, (3) a changing mix of how Americans are insured for health care, (4) increases in health insurance premiums for which appropriate technology application is a strong limiting factor, (5) a changing mix of health care services and settings in which care is delivered, and (6) growing pressures related to technology for hospital capital spending and budgets.

Major Health Care Trends and Directions

The major trends and directions in health care include (1) changing location and design of treatment areas, (2) evolving benefits, coverages, and choices, (3) extreme pressures to manage costs, (4) treating of more acutely ill older patients and the prematurely born, (5) changing job structures and demand for skilled labor, (6) the need to maintain a strong cash flow to support construction, equipment, and information system developments, (7) increased competition on all sides, (8) requirement for informa­tion systems that effectively integrate clinical and business issues, (9) changing reimbursement policies that reduce new purchases and lead to the expectation for extended equipment life cycles, (10) internal technology planning and management programs to guide decision making, (11) technology planning teams to coordinate adsorption of new and replacement technologies, as well as to suggest delivery system changes, and (12) equipment maintenance costs that are emerging as a significant expense item under great administrative scrutiny.

System Pressures

System pressures include (1) society’s expectations—highest quality care at the lowest reasonable price, where quality is a function of personnel, facilities, technology, and clinical procedures offered; (2) economic conditions—driven often by reimbursement criteria; (3) legal-pressures—resulting prima­rily from malpractice issues and dealing with rule-intensive “government” clients; (4) regulatory—mul – tistate delivery systems with increased management complexity, or heavily regulated medical device industries facing free-market competition, or hospitals facing the Safe Medical Devices Act reporting requirements and credentialling requirements; (5) ethics—deciding who gets care and when; and (6) technology pressures—organizations having enough capabilities to meet community needs and to compete successfully in their marketplaces.

The Technology Manager’s Responsibility

Technology mangers should (1) become deeply involved and committed to technology planning and management programs in their system, often involving the need for greater personal responsibilities and expanded credentials; (2) understand how the factors above impact their organization and how technology can be used to improve outcomes, reduce costs, and improve quality of life for patients, (3) educate other Health care professionals about how to demonstrate the value of individual technologies through involving financial, engineering, quality of care, and management perspective, and (4) assemble a team of care­givers with sufficient broad clinical expertise and administrators with planning and financial expertise to contribute their knowledge to the assessment process [16].

Strategic Technology Planning

Strategic Planning Process

Leading health care organizations have begun to combine strategic technology planning with other tech­nology management activities in program that effectively integrate new technologies with their existingly technology base. This has resulted in high-quality care at a reasonable cost. Among those who have been its leading catalysts, ECRI (formerly the Emergency Care Research Institute) is known for articulating this program [4] and encouraging its proliferation initially among regional health care systems and now for single or multihospital systems as well [5]. Key components of the program include clinical strategic – planning, technology strategic planning, technology assessment, interaction with capital budgeting, acqui­sition and deployment, resource (or equipment assets) management, and monitoring and evaluation. A proper technology strategic plan is derived from and supports as well-defined clinical strategic plan [15].

Clinical and Technology Strategic Plan

Usually considered long-range and continually evolving, a clinical strategic plan is updated annually. For a given year, the program begins when key hospital participants, through the strategic planning process, assess what clinical services the hospital should be offering in its referral area. They take into account health care trends, demographic and market share data, and space and facilities plans. They analyze their facility’s strengths and weaknesses, goals and objectives, competition, and existing technology base. The outcome of this process is a clinical strategic plan that establishes the organization’s vision for the year and referral area needs and the hospital’s objectives in meeting them.

It is not possible to adequately complete a clinical strategic plan without engaging in the process of strategic technology planning. A key role for technology managers is to assist their organizations through­out the combined clinical and technology strategic planning processes by matching available technical capabilities, both existing and new, with clinical requirements. To accomplish this, technology managers must understand why their institution’s values and mission are set as they are, pursue their institution’s strategic plans through that knowledge, and plan in a way that effectively allocates limited resources. Although a technology manager may not be assigned to develop an institution’s overall strategic plan, he or she must understand and believe it in order to offer good input for hospital management. In providing this input, a technology manager should determine a plan for evaluating the present state of the hospital’s technological deployment, assist in providing a review of emerging technological innova­tions and their possible impact on the hospital, articulate justifications and provisions for adoption of new technologies or enhancement of existing ones, visit research and laboratories and exhibit areas at major medical and scientific meetings to view new technologies, and be familiar with the institution and its equipment users’ abilities to assimilate new technology.

The past decade has shown a trend toward increased legislation in support of more federal regulations in health care. These and other pressures will require that additional or replacement medical technology be well anticipated and justified. As a rationale for technology adoption, the Texas Children’s Hospital focuses on the issues of clinical necessity, management support, and market preference, Addressing the issue of clinical necessity, the hospital considers the technology’s comparison against medical standard of care, its impact on the level of care and quality of life, its improvement on intervention’s accuracy and/or safety, its impact on the rate of recovery, the needs or desires of the community, and the change in service volume or focus. On the issue of management support, the hospital estimates if the technology will create a more effective care plan and decision-making process, improve operational efficiency in the current service programs, decrease liability exposure, increase compliance with regulations, reduce work­load and dependence on user skill level ameliorate departmental support, or enhance clinical proficiency. Weighting the issue of market preference, the hospital contemplate if it will improve access to care, increase customer convenience and/or satisfaction, enhance the organization’s image and market share, decrease the cost of adoption and ownership, or provide a return on its investment.

Technology Strategic Planning Process

When the annual clinical strategic planning process has started and hospital leaders have begun to analyze or reaffirm what clinical services they want to offer to the community, the hospital can then conduct efficient technology strategic planning. Key elements of this planning involve (1) performing an initial audit of existing technologies, (2) conducting a technology assessment for new and emerging technologies for fit with current or desired clinical services, (3) planning for replacement and selection of new technologies, (4) setting priorities for technology acquisition, and (5) developing processes to implement equipment acquisition and monitor ongoing utilization. “Increasingly, hospitals are designating a senior manager (e. g., an administrator, the director of planning, the director of clinical engineering) to take the responsibility for technology assessment and planning. That person should have the primary responsi­bility for developing the strategic technology plan with the help of key physicians, department managers, and senior executives” [4].

Hospitals can form a medical technology advisory committee (MTAC), overseen by the designated senior manager and consisting of the types of members mentioned above, to conduct the strategic technology planning process and to annually recommend technology priorities to the hospital strategic planning committee and capital budget committee. It is especially important to involve physicians and nurses in this process.

In the initial technology audit, each major clinical service or product line must be analyzed to determine how well the existing technology base supports it. The audit can be conducted along service lines (radiology, cardiology, surgery) or technology function (e. g., imaging, therapeutic, diagnostic) by a team of designated physicians, department heads, and technology managers. The team should begin by devel­oping a complete hospital-wide assets inventory, including the quantity and quality of equipment. The team should compare the existing technology base against known and evolving standards-of-care infor­mation, patient outcome data, and known equipment problems. Next, the team should collect and examine information on technology utilization to assess its appropriate use, the opportunities for improvement, and the risk level. After reviewing the technology users’ education needs as they relate to the application and servicing of medical equipment, the team should credential users for competence in the application of new technologies. Also, the auditing team should keep up with published clinical protocols and practice guidelines using available health care standards directories and utilize clinical outcome data for quality-assurance and risk-management program feedback [6].

While it is not expected that every hospital has all the required expertise in-house to conduct the initial technology audit or ongoing technology assessment, the execution of this planning process is sufficiently critical for a hospital’s success that outside expertise should be obtained when necessary. The audit allows for the gathering of information about the status of the existing technology base and enhances the capability of the medical technology advisory committee to assess the impact of new and emerging technologies on their major clinical services.

All the information collected from the technology audit results and technology assessments is used in developing budget strategies. Budgeting is part of strategic technology planning in that a 2- to 5-year long-range capital spending plan should be created. This is in addition to the annual capital budget preparation that takes into account 1 year at a time. The MTAC, as able and appropriate, provides key information regarding capital budget requests and makes recommendations to the capital budget com­mittee each year. The MTAC recommends priorities for replacement as well as new and emerging technologies that over a period of several years guides that acquisition that provides the desired service developments or enhancements. Priorities are recommended on the basis of need, risk, cost (acquisition, operational and maintenance), utilization, and fit with the clinical strategic plan.

Technology Assessment

As medical technology continues to evolve, so does its impact on patient outcome, hospital operations, and financial resources. The ability to manage this evolution and its subsequent implications has become a major challenge for all health care organizations. Successful management of technology will ensure a good match between needs and capabilities and between staff and technology. To be successful, an ongoing technology assessment process must b an integral part of an ongoing technology planning and manage­ment program at the hospital, addressing the needs of the patient, the user, and the support team. This facilitates better equipment planning and utilization of the hospital’s resources. The manager who is knowledgeable about his or her organization’s culture, equipment users’ needs, the environment within which equipment will be applied, equipment engineering, and emerging technological capabilities will be successful in proficiently implementing and managing technological changes [7].

It is in the technology assessment process that the clinical engineering/technology manager professional needs to wear two hats: that of the manager and that of the engineer. This is a unique position, requiring expertise and detailed preparation, that allows one to be a key leader and contributor to the decision­making process of the medical technology advisory committee (MTAC).

The MTAC uses an ad hoc team approach to conduct technology assessment of selected services and technologies throughout the year. The ad hoc teams may incorporate representatives of equipment users, equipment service providers, physicians, purchasing agents, reimbursement mangers, representatives of administration, and other members from the institution as applicable.

Prerequisites for Technology Assessment

Medical technology is a major strategic factor in positioning and creating a positive community percep­tion of the hospital. Exciting new biomedical devices and systems are continually being introduced. And they are introduced at a time when the pressure on hospitals to contain expenditures is mounting. Therefore, forecasting the deployment of medical technology and the capacity to continually evaluate its impact on the hospital require that the hospital be willing to provide the support for such a program. (Note. Many organizations are aware of the principle that an in-house “champion” is needed in order to provide for the leadership that continually and objectively plans ahead. The champion and the program being “championed” may use additional in-house or independent expertise as needed. To get focused attention on the technology assessment function and this program in larger, academically affiliated and government hospitals, the position of a chief technology officer is being created.) Traditionally, executives rely on their staff to produce objective analyses of the hospital’s technological needs. Without such analyses, executives may approve purchasing decisions of sophisticated biomedical equipment only to discover later that some needs or expected features were not included with this installation, that those features are not yet approved for delivery, or that the installation has not been adequately planned.

Many hospitals perform technology assessment activities to project needs for new assets and to better manage existing assets. Because the task is complex, an interdisciplinary approach and a cooperative attitude among the assessment team leadership is required. The ability to integrate information from disciplines such as clinical, technical, financial, administrative, and facility in a timely and objective manner is critical to the success of the assessment. This chapter emphasizes how technology assessment fits within a technology planning and management program and recognizes the importance of corporate skills forecasting medical equipment changes and determining the impact of changes on the hospital’s market position. Within the technology planning and management program, the focus on capital assets management of medical equipment should not lead to the exclusion of accessories, supplies, and the disposables also required.

Medical equipment has a life cycle that can be identified as (1) the innovation phase, which includes the concept, basic and applies research, and development, and (2) the adoption phase, which begins with the clinical studies, through diffusion, and then widespread use. These phases are different from each other in the scope of professional skills involved, their impact on patient care, compliance with regulatory requirements, and the extent of the required operational support. In evaluating the applicability of a device or a system for use in the hospital, it is important to note in which phase of its life cycle the equipment currently resides.

Technology Assessment Process

More and more hospitals are faced with the difficult phenomenon of a capital equipment requests list that is much larger than the capital budget allocation. The must difficult decision, then, is the one that matches clinical needs with the financial capability. In doing so, the following questions are often raised: How do we avoid costly technology mistakes? How do we wisely target capital dollars for technology? How do we avoid medical staff conflicts as they relate to technology? How do we control equipment – related risks? and How do we maximize the useful life of the equipment or systems while minimizing the cost ownership? A hospital’s clinical engineering department can assist in providing the right answers to these questions.

Technology assessment is a component of technology planning that begins with the analysis of the hospital’s existing technology base. It is easy to perceive then that technology assessment, rather than an equipment comparison, is a new major function for a clinical engineering department [8]. It is important that clinical engineers be well prepared for the challenge. They must have a full understanding of the mission of their particular hospitals, a familiarity with the health care delivery system, and the cooperation of hospital administrators and the medical staff. To aid in the technology assessment process, clinical engineers need to utilize the following tools: (1) access to national database services, directories, and libraries, (2) visits to scientific and clinical exhibits, (3) a network with key industry contacts, and (4) a relationship with peers throughout the country [9].

The need for clinical engineering involvement in the technology assessment process becomes evident when recently purchased equipment or its functions are underutilized, users have ongoing problems with equipment, equipment maintenance costs become excessive, the hospital is unable to comply with standards or guidelines (i. e., JCAHO requirements) for equipment management, a high percentage of equipment is awaiting repair, or training for equipment operators is inefficient due to shortage of allied health professionals. A deeper look at the symptoms behind these problems would likely reveal a lack of a central clearinghouse to collect, index, and monitor all technology-related information for future planning purposes, the absence of procedures for identifying emerging technologies for potential acqui­sition, the lack of a systematic plan for conducting technology assessment, resulting in an ability to maximize the benefits from deployment of available technology, the inability to benefit from the orga­nization’s own previous experience with a particular type of technology, the random replacement of medical technologies rather than a systematic plan based on a set of well-developed criteria, and/or the lack of integration of technology acquisition into the strategic and capital planning of the hospital.

To address these issues, efforts to develop a technology microassessment process were initiated at one leading private hospital with the following objectives: (1) accumulate information on medical equipment, (2) facilitate systematic planning, (3) create an administrative structure supporting the assessment process and its methodology, (4) monitor the replacement of outdated technology, and (5) improve the capital budget process by focusing on long-term needs relative to the acquisition of medical equipment [10].

The process, in general, and the collection of up-to-date pertinent information, in particular, require the expenditure of certain resources and the active participation of designated hospital staff in networks providing technology assessment information. For example, corporate membership in organizations and societies that provide such information needs to be considered, as well as subscriptions to certain computerized database and printed sources [11].

At the example hospital, and MTAC was formed to conduct technology assessment. It was chaired by the director of clinical engineering. Other managers from equipment user departments usually serve as the MTAC’s designated technical coordinators for specific task forces. Once the committee accepted a request from an individual user, it identified other users that might have an interest in that equipment or system and authorized the technical coordinator to assemble a task force consisting of users identified by the MTAC. This task force then took responsibility for the establishment of performance criteria that would be used during this particular assessment. The task force also should answer the questions of effectiveness, safety, and cost-effectiveness as they relate to the particular assessment. During any specific period, there may be multiple task forces, each focusing on a specific equipment investigation.

The task force technical coordinator cooperates with the material management department in con­ducting a market survey, in obtaining the specified equipment for evaluation purposes, and in scheduling vendor-provided in-service training. The coordinator also confers with clinical staff to determine if they have experience with the equipment and the maturity level of the equipment under assessment. After establishment of a task force, the MTACs technical coordinator is responsible for analyzing the clinical experiences associated with the use of this equipment, for setting evaluation objectives, and for devising appropriate technical tests in accord with recommendations from the task force. Only equipment that successfully passes the technical tests will proceed to a clinical trial. During the clinical trial, a task force – appointed clinical coordinator collects and reports a summary of experiences gained. The technical coordinator then combines the results from both the technical tests and the clinical trial into a summary report for MTAC review and approval. In this role, the clinical engineer/technical coordinator serves as a multidisciplinary professional, bridging the gap between the clinical and technical needs of the hospital. To complete the process, financial staff representatives review the protocol.

The technology assessment process at this example hospital begins with a department or individual filling out two forms: (1) a request for review (RR) form and (2) a capital asset request (CAR) form. These forms are submitted to the hospital’s product standards committee, which determines if an assessment process is to be initiated, and the priority for its completion. It also determines if a previously established standard for this equipment already exists (if the hospital is already using such a technol­ogy)—if so, an assessment is not needed.

On the RR, the originator delineates the rationale for acquiring the medical device. For example, the originator must tell how the item will improve quality of patient care, who will be its primary user, and how it will improve ease of use. On the CAR, the originator describes the item, estimates its cost, and provides purchase justification. The CAR is then routed to the capital budget office for review. during this process, the optimal financing method for acquisition is determined. If funding is secured, the CAR is routed to the material management department, where, together with the RR, it will be processed. The rationale for having the RR accompany the CAR is to ensure that financial information is included as part of the assessment process. The CAR is the tool by which the purchasing department initiates a market survey and later sends product requests for bid. Any request for evaluation that is received without a CAR or any CAR involving medical equipment that is received without a request for evaluation is returned to the originator without action. Both forms are then sent to the clinical engineering department, where a designated technical coordinator will analyze the requested technology maturity level and results of clinical experience with its use, review trends, and prioritize various manufactures’ presentations for MTAC review.

Both forms must be sent to the MTAC if the item requested is not currently used by the hospital or if it does not conform to previously adopted hospital standards. The MTAC has the authority to recom­mend either acceptance or rejection of any request for review, based on a consensus of its members. A task force consisting of potential equipment users will determine the “must have” equipment functions, review the impact of the various equipment configurations, and plan technical and clinical evaluations.

If the request is approved by the MTAC, the requested technology or equipment will be evaluated using technical and performance standards. Upon completion of the review, a recommendation is returned to the hospital’s products standard committee, which reviews the results of the technology assessment, determines whether the particular product is suitable as a hospital standard, and decides if its should be purchased. If approved, the request to purchase will be reviewed by the capital budget committee (CBC) to determine if the required expenditure meets with available financial resources and if or when it may be feasible to make the purchase. To ensure coordination of the technology assessment Program, the chairman of the MTAC also serves as a permanent member of the hospital’s CBC. In this way, there is a planned integration between technology assessment and budget decisions.

Equipment Assets Management

An accountable, systemic approach will ensure that cost-effective, efficacious, safe, and appropriate equipment is available to meet the demands of quality patient care. Such an approach requires that existing medical equipment resources be managed and that the resulting management strategies have measurable outputs that are monitored and evaluated. Technology managers/clinical engineers are well positioned to organize and lead this function. It is assumed that cost accounting is managed and monitored by the health care organization’s financial group.

Equipment Management Process

Through traditional assets management strategies, medical equipment can be comprehensively managed by clinical engineering personnel. First, the management should consider a full range of strategies for equipment technical support. Plans may include use of a combination of equipment service providers such as manufacturers, third-party service groups, shared services, and hospital-based (in-house) engi­neers and biomedical equipment technicians (BMETs). All these service providers should be under the general responsibility of the technology manager to ensure optimal equipment performance through comprehensive and ongoing best-value equipment service. After obtaining a complete hospital medical equipment inventory (noting both original manufacturer and typical service provider), the management should conduct a thorough analysis of hospital accounts payable records for at least the past 2 years, compiling all service reports and preventative maintenance-related costs from all possible sources. The manager then should document in-house and external provider equipment service costs, extent of maintenance coverage for each inventory time, equipment-user operating schedule, quality of mainte­nance coverage for each item, appropriateness of the service provider, and reasonable maintenance costs. Next, he or she should establish an effective equipment technical support process. With an accurate inventory and best-value service providers identified, service agreements/contracts should be negotiated with external providers using prepared terms and conditions, including a log-in system. There should be an in-house clinical engineering staff ensuring ongoing external provider cost control utilizing several tools. By asking the right technical questions and establishing friendly relationships with staff, the manager will be able to handle service purchase orders (POs) by determining if equipment is worth repairing and obtaining exchange prices for parts. The staff should handle service reports to review them for accuracy and proper use of the log-in system. They also should match invoices with the service reports to verify opportunities and review service histories to look for symptoms such as need for user training, repeated problems, run-on calls billed months apart, or evidence of defective or worn-out equipment. The manager should take responsibility for emergency equipment rentals. Finally, the manager should develop, implement, and monitor all the service performance criteria.

To optimize technology management programs, clinical engineers should be willing to assume respon­sibilities for technology planning and management in all related areas. They should develop policies and procedures for their hospital’s management program. With life-cycle costs determined for key high-risk or high-cost devices, they should evaluate methods to provide additional cost savings in equipment operation and maintenance. They should be involved with computer networking systems within the hospital. As computer technology applications increase, the requirements to review technology-related information in a number of hospital locations will increase. They should determine what environmental conditions and facility changes are required to accommodate new technologies or changes in standards and guidelines. Lastly, they should use documentation of equipment performance and maintenance costs along with their knowledge of current clinical practices to assist other hospital personnel in determining the best time and process for planning equipment replacement [12].

Technology Management Activities

A clinical engineering department, through outstanding performance in traditional equipment manage­ment, will win its hospital’s support and will be asked to be involved in a full range of technology management activities. The department should start an equipment control program that encompasses routine performance testing, inspection, periodic and preventive maintenance, on-demand repair ser­vices, incidents investigation, and actions on recalls and hazards. The department should have multidis­ciplinary involvement in equipment acquisition and replacement decisions, development of new services, and planning of new construction and major renovations, including intensive participation by clinical engineering, materials management, and finance. The department also should initiate programs for training all users of patient care equipment, quality improvement (QI), as it relates to technology use, and technology-related risk management [13].

Case Study: A Focus on Medical Imaging

In the mide-1980s, a large private multihospital system contemplated the startup of a corporate clinical engineering program. The directors recognized that involvement in a diagnostic imaging equipment service would be key to the economic success of the program. They further recognized that maintenance cost reductions would have to be balanced with achieving equal or increased quality of care in the utilization of that equipment.

Programs startup was in the summer of 1987 in 3 hospitals that were geographically close. Within the first year, clinical engineering operations began in 11 hospitals in 3 regions over a two-state area. By the fall of 1990, the program included 7 regions and 21 hospitals in a five-state area. The regions were organized, typically, into teams including a regional manager and 10 service providers, serving 3 to 4 hospitals, whose average size was 225 beds. Although the staffs were stationed at the hospitals, some specialists traveled between sites in the region to provide equipment service. Service providers included individuals specializing in the areas of diagnostic imaging [x-ray and computed tomography (CT)], clinical laboratory, general biomedical instrumentation, and respiratory therapy.

At the end of the first 18 months, the program documented over $1 million in savings for the initial

Hospitals, a 23% reduction from the previous annual service costs. Over 63% of these savings were attributable to “in-house” service x-ray and CT scanner equipment. The mix of equipment maintained by 11 imagining service providers—from a total staff of 30—included approximately 75% of the radiology systems of any kind found in the hospitals and 5 models of CT scanners from the three different manufacturers.

At the end of 3 years in 1990, program-wide savings had exceeded 30% of previous costs for partici­pating hospitals. Within the imaging areas of the hospitals, savings approached and sometimes exceed 50% of initial service costs. The 30 imaging service providers—out of a total staff of 62—had increased their coverage of radiology equipment to over 95%, had increased involvement with CT to include nine models from five different manufacturers, and had begun in-house work in other key imaging modalities.

Tracking the financial performance of the initial 11 hospitals over the first 3 years of the program yields of the following composite example: A hospital of 225 beds was found to have equipment service costs of $540,000 prior to program startup. Sixty-three percent of these initial costs (or $340,000) was for the maintenance of the hospital’s x-ray and CT scanner systems. Three years later, annual service costs for this equipment were cut in half, to approximately $170,000. That represents a 31% reduction in hospital-wide costs due to the imaging service alone.

This corporate clinical engineering operation is, in effect, a large in-house program serving many hospitals that all have common ownership. The multihospital corporation has significant purchasing power in the medical device marketplace and provides central oversight of the larger capital expenditures for its hospitals. The combination of the parent organization’s leverage and the program’s commitment to serve only hospitals in the corporation facilitated the development of positive relationships with medical device manufacturers. Most of the manufacturers did not see the program as competition but rather as a potentially helpful ally in the future marketing and sales of their equipment and systems. What staff provided these results? All service providers were either medical imaging industry or military trained. All were experienced at troubleshooting electronic subsystems to component level, as necessary. Typically, these individuals had prior experience on the manufacture’s models of equipment under their coverage. Most regional managers had prior industry, third party, or in-house imaging service manage­ment experience. Each service provider had the test equipment necessary for day-to-day duties. Each individual could expect at least 2 weeks of annual service training to keep appropriate skills current. Desired service training could be acquired in a timely manner from manufactures and/or third-party organizations. Spare or replacement parts inventory was minimal because of the program’s ability to get parts from manufacturers and other sources either locally or shipped in overnight.

As quality indicators for the program, the management measured user satisfaction, equipment down­time, documentation of technical staff service training, types of user equipment errors and their effect on patient outcomes, and regular attention to hospital technology problems. User satisfaction surveys indicated a high degree of confidence in the program service providers by imaging department mangers. Problems relating to technical, management, communication, and financial issues did occur regularly, but the regional manager ensured that they were resolved in a timely manner. Faster response to daily imaging equipment problems, typically by on-site service providers, coupled with regular preventive maintenance (PM) according to established procedures led to reduced equipment downtime. PM and repair service histories were captured in a computer documentation system that also tracked service times, costs, and user errors and their effects. Assisting the safety committee became easier with ability to draw a wide variety of information quickly from the program’s documenting system.

Early success in imaging equipment led to the opportunity to do some additional value-added projects such as the moving and reinstallation of x-ray rooms that preserved exiting assets and opened up valuable space for installation of newer equipment and upgrades of CT scanner systems. The parent organization came to realize that these technology management activities could potentially have a greater financial and quality impact on the hospital’s health care delivery than equipment management. In the example of one CT upgrade (which was completed over two weekends with no downtime), there was a positive financial impact in excess of $600,000 and improved quality of care by allowing faster off-line diagnosis of patient scans. However, opportunity for this kind of contribution would never have occurred without the strong base of a successful equipment management program staffed with qualified individuals who receive ongoing training.

Equipment Acquisition and Deployment

Process of Acquiring Technology

Typically, medical device systems will emerge from the strategic technology planning and technology assessment processes as required and budgeted needs. At acquisition time, a needs analysis should be conducted, reaffirming clinical needs and device intended applications. The “request for review” docu­mentation from the assessment process or capital budget request and incremental financial analysis from the planning process may provide appropriate justification information, and a capital asset request (CAR) form should be completed [14]. Materials management and clinical engineering personnel should ensure that this item is a candidate for centralized and coordinated acquisition of similar equipment with other hospital departments. Typical hospital prepurchase evaluation guidelines include an analysis of needs and development of a specification list, formation of a vendor list and requesting proposals, analyzing proposals and site planning, evaluating samples, selecting finalists, making the award, delivery and installation, and acceptance testing. Formal request for proposals (RFPs) from potential equipment vendors are required for intended acquisitions whose initial or life-cycle cost exceeds a certain threshold,

E., $100,000. Finally, the purchase takes place, wherein final equipment negotiations are conducted and purchase documents are prepared, including a purchase order.

Acquisition Process Strategies

The cost-of-ownership concept can be used when considering what factors to include in cost comparisons of competing medical devices. Cost of ownership encompasses all the direct and indirect expenses associated with medical equipment over its lifetime [15]. It expresses the cost factors of medical equip­ment for both the initial price of the equipment (which typically includes the equipment, its installation, and initial training cost) and over the long term. Long-term costs include ongoing training, equipment service, supplies, connectivity, upgrades, and other costs. Health care organizations are just beginning to account for a full range of cost-of-ownership factors in their technology assessment and acquisition processes, such as acquisition costs, operating costs, and maintenance costs (installation, supplies, down­time, training, spare parts, test equipment and tools, and depreciation). It is estimated that the purchase price represents only 20% of the life-cycle cost of ownership.

When conducting needs analysis, actual utilization information form the organization’s existing same or similar devices can be very helpful. One leading private multihospital system has implemented the following approach to measuring and developing relevant management feedback concerning equipment utilization. It is conducting equipment utilization review for replacement planning, for ongoing account­ability of equipment use, and to provide input before more equipment is purchased. This private system attempts to match product to its intended function and to measure daily (if necessary) the equipment’s actual utilization. The tools they use include knowing their hospital’s entire installed base of certain kinds of equipment, i. e., imaging systems. Utilization assumptions for each hospital and its clinical procedural mix are made. Equipment functional requirements to meet the demands of the clinical procedures are also taken into account.

Life-cycle cost analysis is a tool used during technology planning, assessment, or acquisition “either to compare high-cost, alternative means for providing a service or to determine whether a single project or technology has a positive or negative economic value. The strength of the life-cycle cost analysis is that it examines the cash flow impact of an alternative over its entire life, instead of focusing solely on initial capital investments” [15].

“Life-cycle cost analysis facilitates comparisons between projects or technologies with large initial cash outlays and those with level outlays and inflows over time. It is most applicable to complex, high-cost choices among alternative technologies, new service, and different means for providing a given service. Life-cycle cost analysis is particularly useful for decisions that are too complex and ambiguous for experience and subjective judgment alone. It also helps decision makers perceive and include costs that often are hidden or ignored, and that may otherwise invalidate results” [12].

“Perhaps the most powerful life-cycle cost technique is net present value (NPV) analysis, which explicitly accounts for inflation and foregone investment opportunities by expressing future cash flows in present dollars” [12].

Examples where LCC and NPV analysis prove very helpful are in deciding whether to replace/rebuild or buy/lease medical imaging equipment. The kinds of costs captured in life-cycle cost analysis, include decision-making costs, planning agency/certificate of need costs (if applicable), financing, initial capital investment costs including facility changes, life-cycle maintenance and repairs costs, personnel costs, and other (reimbursement consequences, resale, etc.).

One of the best strategies to ensure that a desired technology is truly of value to the hospital is to conduct a careful analysis in preparation for its assimilation into hospital operations. The process of equipment prepurchase evaluation provides information that can be used to screen unacceptable per­formance by either the vendor or the equipment before it becomes a hospital problem.

Once the vendor has responded to informal requests or formal RFPs, the clinical engineering depart­ment should be responsible for evaluating the technical response, while the materials management department should devaluate the financial responses.

In translating clinical needs into a specification list, key features or “must have” attributes of the desired device are identified. In practice, clinical engineering and materials management should develop a “must have” list and an extras list. The extras list contains features that may tip the decision in favor of one vendor, all other factors being even. These specification lists are sent to the vendor and are effective in a self-elimination process that results in a time savings for the hospital. Once the “must have” attributes have been satisfied, the remaining candidate devices are evaluated technically, and the extras are consid­ered. This is accomplished by assigning a weighting factor (i. e., 0 to 5) to denote the relative importance of each of the desired attributes. The relative ability of each device to meet the defined requirements is then rated [15].

One strategy that strengthens the acquisition process is the conditions-of-sale document. This multi­faceted document integrates equipment specifications, performance, installation requirements, and fol­low-up services. The conditions-of-sale document ensures that negotiations are completed before a purchase order is delivered and each participant is in agreement about the product to be delivered. As a document of compliance, the conditions-of-sale document specifies the codes and standards having jurisdiction over that equipment. This may include provisions for future modification of the equipment, compliance with standards under development, compliance with national codes, and provision for software upgrades.

Standard purchase orders that include the conditions of sale for medical equipment are usually used to initiate the order. At the time the order is placed, clinical engineering is notified of the order. In addition to current facility conditions, the management must address installation and approval require­ments, responsibilities, and timetable; payment, assignment, and cancellation; software requirements and updates; documentation; clinical and technical training; acceptance testing (hospital facility and vendor); warranty, spare parts, and service; and price protection.

All medical equipment must be inspected and tested before it is placed into service regardless of whether it is purchased, leased, rented, or borrowed by the hospital. In any hospital, clinical engineering should receive immediate notification if a very large device or system is delivered directly into another department (e. g., imaging or cardiology) for installation. Clinical engineering should be required to sign off on all purchase orders for devices after installation and validation of satisfactory operation. Ideally, the warranty period on new equipment should not begin until installation and acceptance testing are completed. It is not uncommon for a hospital to lose several months of free parts and service by the manufacturer when new equipment is, for some reason, not installed immediately after delivery.

Clinical Team Requirements

During the technology assessment and acquisition processes, clinical decision makers analyze the follow­ing criteria concerning proposed technology acquisitions, specifically as they relate to clinical team requirements: ability of staff to assimilate the technology, medical staff satisfaction (short term and long term), impact on staffing (numbers, functions), projected utilization, ongoing related supplies required, effect on delivery of care and outcomes (convenience, safety, or standard of care), result of what is written in the clinical practice guidelines, credentialling of staff required, clinical staff initial and ongoing training required, and the effect on existing technology in the department or on other services/departments.

Defining Terms

Appropriate technology [1]: A term used initially in developing countries, referring to selecting medical

Equipment that can “appropriately” satisfy the following constraints: funding shortages, insufficient numbers of trained personnel, lack of technical support, inadequate supplies of consumables/acces­sories, unreliable water an power utilities/supplies, and lack of operating and maintenance manuals. In the context of this chapter, appropriate technology selection must taken into consideration local health needs and disease prevalence, the need for local capability of equipment maintenance, and availability of resources for ongoing operational and technical support.

Clinical engineers/biomedical engineers: As we began describing the issues with the management of medical technology; it became obvious that some of the terms are being used interchangeably in the literature. For example, the terms engineers, clinical engineers, biomedical equipment technicians, equipment managers, and health care engineers are frequently used. For clarification, in this chapter we will refer to clinical engineers and the clinical engineering department as a representative group for all these terms.

Cost-effectiveness [1]: A mixture of quantitative and qualitative considerations. It includes the health

Priorities of the country or region at the macro assessment level and the community needs at the institution micro assessment level. Product life-cycle cost analysis (which, in turn, includes initial purchase price, shipping, renovations, installation, supplies, associated disposables, cost per use, and similar quantitative measures) is a critical analysis measure. Life-cycle cost also takes into account staff training, ease of use, service, and many other cost factors. But experience and judgement about the relative importance of features and the ability to fulfill the intended purpose also contribute critical information to the cost-effectiveness equation.

Equipment acquisition and deployment: Medical device systems and products typically emerge from

The strategic technology planning process as “required and budgeted” needs. The process that follows, which ends with equipment acceptance testing and placement into general use, is known as the equipment acquisition and deployment process.

Health care technology: Health care technology includes the devices, equipment, systems, software,

Supplies, pharmaceuticals, biotechnologies, and medical and surgical procedures used in the pre­vention, diagnosis, and treatment of disease in humans, for their rehabilitation, and for assistive purposes. In short, technology is broadly defined as encompassing virtually all the human inter­ventions intended to cope with disease and disabilities, short of spiritual alternatives. This chapter focuses on medical equipment products (devices, systems, and software) rather than pharmaceu­ticals, biotechnologies, or procedures [1]. The concept of technology also encompasses the facilities that house both patients and products. Facilities cover a wide spectrum—from the modern hospital on one end to the mobile imaging trailer on the other.

Quality of care (QA) and quality of improvement (QI): Quality assurance (QA) and Quality improve­

Ment (QI) are formal sets of activities to measure the quality of care provided; these usually include a process for selecting, monitoring, and applying corrective measures. The 1994 Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) standards require hospital QA, pro­grams to focus on patient outcomes as a primary reference. JCAHO standards for plant, technology, and safety management (PTSM), in turn, require certain equipment management practices and QA or QI activities. Identified QI deficiencies may influence equipment planning, and QI audits may increase awareness of technology overuse or under utilization.

Risk management: Risk management is a program that helps the hospital avoid the possibility of risks,

Minimize liability exposure, and stay compliant with regulatory reporting requirements. JCAHO PTSM standards require minimum technology-based risk-management activities. These include clinical engineering’s determination of technology-related incidents with follow-up steps to prevent recurrences and evaluation and documentation of the effectiveness of these steps.

Safety: Safety is the condition of being safe from danger, injury, or damage. It is judgment about the

Acceptability of risk in a specified situation (e. g., for a given medical problem) by a provider with specified training at a specified type of facility equipment.

Standards [1]: A wide variety of formal standards and guidelines related to health care technology now

Exists. Some standards apply to design, development, and manufacturing practices for devices, software, and pharmaceuticals; some are related to the construction and operation of a health care facility; some are safety and performance requirements for certain classes of technologies, such as standards related to radiation or electrical safety; and others relate to performance, or even con­struction specifications, for specific types of technologies. Other standards and guidelines deal with administrative, medical, and surgical procedures and the training of clinical personnel. Standards and guidelines are produced and/or adopted by government agencies, international organizations, and professional and specialty organizations and societies. ECRI’s Healthcare Standards Directory lists over 20,000 individual standards and guidelines produced by over 600 organizations and agencies from North America alone.

Strategic technology planning: Strategic technology planning encompasses both technologies new to

The hospital and replacements for existing equipment that are to be acquired over several quarters. Acquisitions can be proposed for reasons related to safety, standard-of-care issues, and age or obsolescence of existing equipment. Acquisitions also can proposed to consolidate several service areas, expand a service areas to reduce cost of service, or add a new service area.

Strategic technology panning optimizes the way the hospital’s capital resources contribute to its mission. It encourages choosing new technologies that are cost-effective, and it also allows the hospital to be competitive in offering state-of-the-art services. Strategic technology planning works for a single department, product line, or clinical service. It can be limited to one or several high – priority areas. It also can be used for an entire mulihospital system or geographic region [4].

Technology assessment: Assessment of medical technology is any process used for examining and

Reporting properties of medical technology used in health care, such as safety, efficacy, feasibility, and indications for use, cost, and cost-effectiveness, as well as social, economic, and ethical con­sequences, whether intended or unintended [2]. A primary technology assessment is one that seeks new, previously nonexistent data through research, typically employing long-term clinical studies of the type described below. A secondary technology assessment is usually based on published data, interviews, questionnaires, and other information-gathering methods rather than original research that creates new, basic data.

In technology assessment, there are six basic objectives that the clinical engineering department should have in mind. First, there should be ongoing monitoring of developments concerning new and emerging technologies. For new technologies, there should be an assessment of the clinical efficacy, safety, and cost/benefit ratio, including their effects on established technologies. There should be an evaluation of the short – and long-term costs and benefits of alternate approaches to managing specific clinical conditions. The appropriateness of existing technologies and their clin­ical uses should be estimated, while outmoded technologies should be identified and eliminated from their duplicative uses. The department should rate specific technology-based interventions in terms of improved overall value (quality and outcomes) to patients, providers, and payers. Finally, the department should facilitate a continuous uniformity between needs, offerings, and capabilities [3].

The locally based (hospital or hospital group) technology assessment described in this chapter is a process of secondary assessment that attempts to judge whether a certain medical equip­ment/product can be assimilated into the local operational environment.

Technology diffusion [1]: The process by which a technology is spread over time in a social system.

The progression of technology diffusion can be described in four stages. The emerging or applied research stage occurs around the time of initial clinical testing. In the new stage, the technology has passed the phase of clinical trials but is not yet in widespread use. During the established stage, the technology is considered by providers to be a standard approach to a particular condition and diffuses into general use. Finally, in the obsolete/outmoded stage, the technology is superseded by another and/or is demonstrated to be ineffective or harmful.

Technology life cycle: Technology has a life cycle—a process by which technology is crated, tested,

Applied, and replaced or abandoned. Since the life cycle varies from basic research and innovation to obsolescence and abatement, it is critical to know the maturity of a technology prior to making decisions regarding its adoption. Technology forecast assessment of pending technological changes are the investigative tools that support systematic and rational decisions about the utilization of a given institution’s technological capabilities.

Technology planning and management [3]: Technology planning and management are an accountable,

Systematic approach to ensuring that cost-effective, efficacious, appropriate, and safe equipment is available to meet the demands of quality patient care and allow an institution to remain com­petitive. Elements include in-house service management, management and analysis of equipment external service providers, involvement in the equipment acquisition process, involvement of appropriate hospital personnel in facility planning and design, involvement in reducing technol­ogy-related patient and staff incidents, training equipment users, reviewing equipment replacement needs, and ongoing assessment of emerging technologies [4].

References

ECRI. Healthcare Technology Assessment Curriculum. Philadelphia, August 1992.

Banata HD, Institute of Medicine. Assessing Medical Technologies. Washington, National Academy Press, 1985.

Lumsdon K. Beyond technology assessment: Balancing strategy needs, strategy. Hospitals 15:25, 1992.

ECRI. Capital, Competition, and Constraints: Managing Healthcare in the 1990s. A Guide for Hospital Executives. Philadelphia, 1992.

Berkowtiz DA, Solomon RP. Providers may be missing opportunities to improve patient outcomes. Costs, Outcomes Measurement and Management May-June: 7, 1991.

ECRI. Regional Healthcare Technology Planning and Management Program. Philadelphia, 1990.

Sprague GR. Managing technology assessment and acquisition. Health Exec 6:26, 1988.

David Y. Technology-related decision-making issues in hospitals. In IEEE Engineering in Medicine and Biology Society Proceedings of the 11th Annual International Conference, 1989.

Wagner M. Promoting hospitals high-tech equipment. Mod Healthcare 46, 1989.

David Y. Medical Technology 2001. CPA Healthcare Conference, 1992.

ECRI. Special Report on Technology Management, Health Technology. Philadelphia, 1989.

ECRI. Special Report on Devices and Dollars, Philadelphia, 1988.

Gullikson ML, David Y, Brady MH. An automated risk management tool. JCAHO, Plant, Tech­

Nology and Safety Management Review, PTSM Series, no 2, 1993.

David Y, Judd T, ECRI. Special Report on Devices and Dollars, Philadelphia, 1988. Medical Tech­nology Management. SpaceLabs Medical, Inc., Redmond, Wash 1993.

Bronzino JD (ed). Management of Medical Technology: A Primer for Clinical Engineers. Stoneham, Mass, Butterworth, 1992.

David Y. Risk Measurement For Managing Medical Technology. Conference Proceedings, PERM­IT 1997, Australia.

Gullikson, M. L. “Risk Factors, Safety, and Management of Medical Equipment.” The Biomedical Engineering Handbook: Second Edition.

Ed. Joseph D. Bronzino

Boca Raton: CRC Press LLC, 2000

ISOTYPE HETEROJUNCTION

HISTORY

An isotype heterojunction is different from an anisotype heterojunction in that the dopants of the two sides are of the same type. It can be an n-n heterojunction or a p-p heterojunction. (Discussions of the anisotype heterojunction can be found in Section 1.5.3.) The first heterojunction was the anisotype, which was suggested by Shockley in 1951, to be incorporated into the emitter-base junction to increase the current gain of a bipolar transistor.1 This application was analyzed in more detail by Kroemer in 1957.2 The isotype heterojunction had been studied in different material systems. These include Ge-GaAs by Anderson in 1962,3 InP-GaAs by Oldham and Milnes in 1963,4 Ge-GaAsP by Chang in 1965,5 and GaAs-AlGaAs by Womac and Rediker in 1972,6 by Chandra and Eastman7,8 and Lechner et al.9 in 1979. Theoretical analysis of the device has been presented by some of these authors, namely Anderson,3 Chang,5 and Chandra and Eastman.10

STRUCTURE

An n-n isotype heterojunction is shown in the schematic cross-section of Fig. 5.1, using the GaAs-AlGaAs system as an example. The layers are grown epitaxially. For good-quality heterostructure epitaxy, the lattice constants of the two materials have to be matched within « 5%. The heterointerface must be extremely abrupt to achieve rectification rather than have ohmic characteristics. This transition region has to be less than « 100 A thick.10-12 Also, for best rectification behavior, the doping level in the wide-energy-gap material should be non-degenerate and

ISOTYPE HETEROJUNCTION

FIGURE 5.1

Schematic cross-section of an isotype heterojunction, using an n-n AlGaAs-GaAs system

ISOTYPE HETEROJUNCTION

FIGURE 5.2

Energy-band diagrams of an isotype heterojunction (a) Isolated layers (b) Joined layers, at equilibrium (c) Under forward bias (d) Under reverse bias

lighter than that in the narrow-energy-gap counterpart. Isolation between diodes can be achieved by mesa etching down to the substrate layer.

CHARACTERISTICS

(5.1)

Подпись: (5.1)For a heterojunction of two materials of different electron affinities, work functions and energy gaps, the band-edge discontinuities in Fig. 5.2(b) are related by

(5.2)

Подпись: (5.2)A Er = q(x.-X7)

A Ey =

For the GaAs-AlGaAs system, GaAs is referred to as material 1. The potential barrier for the majority carriers is usually formed on the wide-energy-gap material, in this case AlGaAs. This system is similar in nature to a Schottky barrier with the narrow-energy-gap layer replacing the metal contact.

(5.3)

Подпись: (5.3)As shown in Fig. 5.2(a), the Fermi level in isolated AlGaAs is higher than that in GaAs. Conceptually, upon contact of these two materials, electrons transfer from AlGaAs to GaAs, causing a depletion layer in AlGaAs and an accumulation layer in GaAs. Such an accumulation layer does not exist in the anisotype heterojunction. In order to calculate the barrier height and band bending, the boundary condition for electric field is used,

K.% . = KJg ,

I ml 2 m2

g[1].

ml

Подпись: mlthe maximum field in the accumulation layer, which occurs at the

heterointerface, given by

2c1nd J kT q

fC^-r,) –

exp—————– 1

. kT

ISOTYPE HETEROJUNCTION

(5.4)

 

ml

 

<?m2’s the maximum field in the depletion layer, given by

(5.5)

Подпись: (5.5)

m2

Подпись: m22qND2(‘f’2-V2)

(5.6)

Подпись: (5.6)f,1 and Wi are band bendings at equilibrium. V; and V2 are the portions of applied forward voltage developed across GaAs and AlGaAs, respectively {Vf— V + V2). With another known relationship

(‘rl-vx) + (‘r2-v1)

ISOTYPE HETEROJUNCTION

FIGURE 5.3

Typical I – V characteristics of an isotype heterojunction, (a) Linear plot, (b) Semilog plot.

ISOTYPE HETEROJUNCTION

ISOTYPE HETEROJUNCTIONFIGURE 5.4

Energy-band diagram showing the effect of a graded layer / on the resultant barrier height.

these net potentials QҐ – Vj and lF2 – V2), as a function of applied bias, can be obtained by iterating Eqs. (5.3)-(5.6). Of particular interest is the barrier height at equilibrium, solved with Vj= = V2 = 0, giving

2 n

Подпись: 2 n(5.7)

I

exp

exp

exp

kT

kT

kT

The thermionic-emission current under bias can be obtained from f-q-fO (-qV,) T (qV’t

kT

2nm

ISOTYPE HETEROJUNCTION

(5.8)

 

Qualitatively, the square-root term represents the average carrier velocity, A^expU-fV’tT’) is the number of electrons above the barrier an^ the next two terms are due to opposite effects on the barrier exerted by V, and Vy. For better comparison to a Schottky barrier, this equation can be rearranged to give

r2

J = A

exp

exp

kT

kT

‘~q$b (~qV

kT J

exp

ISOTYPE HETEROJUNCTION

(5.9)

 

It can be seen that if V = 0, the current is identical to a Schottky diode where A* is the effective Richardson constant for the wide-energy-gap material.

To eliminate the variable V in the above equation, an approximation is made from Eqs. (5.3)-(5.5)5

qVT,-Vx)

exp

kT

ISOTYPE HETEROJUNCTION

— (‘Ґ – Vf) kT f

 

(5.10)

 

where ‘Ґ= Ґ,1 + ~$s – $s2’ Substituting V into Eq. (5.9) gives

7

irJ

i.

exp

exp

~kT

kT

qXi

kT.

– 1

exp

ISOTYPE HETEROJUNCTION
ISOTYPE HETEROJUNCTION

(5.11)

 

In comparison to a standard therm ionic-emission current of a Schottky-barrier diode, a few points are worthy of mentioning. The temperature dependence of the coefficient is now T instead of T2. The term (1 – Vf/’V) affects both the forward current and the reverse current. It causes the forward current to have a more gradual exponential rise with voltage. The reverse current also becomes non-saturating. A typical set of I-Vcharacteristics of an isotype heterojunction is shown in Fig. 5.3.

Another important deviation from a Schottky diode is that the barrier height becomes temperature dependent. This is implied in the derivation of the barrier height from Eqs. (5.3)—(5.7). Since the temperature dependence on current is a useful technique to measure parameters for thermionic-emission current, the barrier height in Eq. (5.11) can be eliminated to give

(zsT)

V kT *

qv,

-1

exp

Ur

q2’FNt

D2

J =

I

2nm 2kT

ISOTYPE HETEROJUNCTION
ISOTYPE HETEROJUNCTION

exp

 

(5.12)

 

As mentioned in Section 5.2, the transition between the two materials at the heterointerface has to be abrupt. This transition region, indicated as / in Fig. 5.4, has been shown to decrease the barrier height. A transition region of only w 150 A can reduce the barrier to the extent that rectification vanishes and ohmic behavior results.10-12

A structure with two isotype heterojunctions has been reported.13 As shown by the energy-band diagram in Fig. 5.5, the barrier is formed by a thin wide-energy-gap material (« 500 A), sandwiched between two narrow – energy-gap materials. The I-V characteristics in Fig. 5.6 show that the current is symmetrical, and, at low temperature, nonlinear. The nonlinearity is due to the decrease of the effective barrier height with bias, as shown in Fig. 5.5(b).

APPLICATIONS

The isotype heterojunction is not a practical device for rectification. The fabrication requirement is quite stringent. The barrier height obtained is usually lower than that from the metal-semiconductor junction. Also, the reverse current

AlGaAs GaAs I | GaAs

FIGURE 5.6

I-V characteristics of the rectangular barrier at different temperatures. (After Ref. 13)

Подпись:

J qv

Подпись: J qvJ L

00

Подпись: 00(b)

FIGURE 5.5

A rectangular barrier formed by two isotype heterojunctions, (a) Under equilibrium (b) Under bias.

AljGa^jAs

GaAs

Ec

GaAs

ISOTYPE HETEROJUNCTION

• En

 

JfWf TF

 

(a)

 

(b)

 

ISOTYPE HETEROJUNCTION

FIGURE 5.7

Energy-band diagrams of a sawtooth graded-composition barrier, (a) Equilibrium, (b) Under forward bias, (c) Under reverse bias.

does not saturate with voltage. This device, currently, has no commercial value. It is only used as a research tool to study the fundamental properties of heterojunctions.

RELATED DEVICE

Graded-Composition Barrier

The first graded-composition barrier was reported by Allyn et al. in 1980, with a saw-tooth barrier as shown in Fig. 5.7.14 In this example, the energy gap is varied by the Al and Ga concentrations in the AlxGa|_xAs layer. This barrier layer is typically « 500 A. The outer layers are GaAs. The I-V characteristics are shown
in Fig. 5.8 where the forward current is a thermionic-emission current and the reverse current is a tunneling current through the thin barrier.

A barrier of triangular shape, shown in Fig. 5.9, is also possible.13 The electrical characteristics in Fig. 5.10 are asymmetrical, reflecting the different control of barrier height by the two polarities. This asymmetry is similar to that in a planar-doped-barrier diode. Both directions of currents are due to thermionic emission of majority carriers.

ISOTYPE HETEROJUNCTION

FIGURE 5.8

I-V characteristics of a saw-tooth graded – composition barrier (After Ref 14)

Подпись: ISOTYPE HETEROJUNCTION ISOTYPE HETEROJUNCTIONEc

(a) (b) (c)

FIGURE 5.9

Energy-band diagrams of a triangular graded-composition barrier (a) Equilibrium (b) Under forward bias (c) Under reverse bias Currents in both directions are due to thermionic emission

ISOTYPE HETEROJUNCTION/

FIGURE 5.10

I-V characteristics of a triangular graded-composition bamer

Clinical Engineering: Evolution of a Discipline


As discussed in the introduction to this Handbook, biomedical engineers apply the concepts, knowledge, and techniques of virtually all engineering disciplines to solve specific problems in the biosphere, i. e., the realm of biology and medicine. When biomedical engineers work within a hospital or clinic, they are more properly called clinical engineers. But what exactly is the definition of the term clinical engineer? In recent years, a number of organizations, e. g., the American Heart Association [1986], the American Association of Medical Instrumentation [Goodman, 1989], the American College of Clinical Engineers [Bauld, 1991], and the Journal of Clinical Engineering [Pacela, 1991], have attempted to provide an appropriate definition for the term, clinical engineer. For the purposes of this handbook, a clinical engineer is an engineer who has graduated from an accredited academic program in engineering or who is licensed as a professional engineer or engineer-in-training and is engaged in the application of scientific and technological knowledge developed through engineering education and subsequent professional experi­ence within the health care environment in support of clinical activities. Furthermore, the clinical environment is defined as that portion of the health care system in which patient care is delivered, and clinical activities include direct patient care, research, teaching, and public service activities intended to enhance patient care.

Evolution of Clinical Engineering

Engineers were first encouraged to enter the clinical scene during the late 1960s in response to concerns about patient safety as well as the rapid proliferation of clinical equipment, especially in academic medical centers. In the process, a new engineering discipline—clinical engineering—evolved to provide the technological support necessary to meet these new needs. During the 1970s, a major expansion of clinical engineering occurred, primarily due to the following events:

The Veterans’ Administration (VA), convinced that clinical engineers were vital to the overall operation of the VA hospital system, divided the country into biomedical engineering districts, with a chief biomedical engineer overseeing all engineering activities in the hospitals in that district.

Throughout the United States, clinical engineering departments were established in most large medical centers and hospitals and in some smaller clinical facilities with at least 300 beds.

Clinical engineers were hired in increasing numbers to help these facilities use existing technology and incorporate new technology.

Having entered the hospital environment, routine electrical safety inspections exposed the clinical engineer to all types of patient equipment that was not being maintained properly. It soon became obvious that electrical safety failures represented only a small part of the overall problem posed by the presence of medical equipment in the clinical environment. The equipment was neither totally understood nor properly maintained. Simple visual inspections often revealed broken knobs, frayed wires, and even evidence of liquid spills. Investigating further, it was found that many devices did not perform in accordance with manufacturers’ specifications and were not maintained in accordance with manufactur­ers’ recommendations. In short, electrical safety problems were only the tip of the iceberg. The entrance of clinical engineers into the hospital environment changed these conditions for the better. By the mid- 1970s, complete performance inspections before and after use became the norm, and sensible inspection procedures were developed [Newhouse et al., 1989]. In the process, clinical engineering departments became the logical support center for all medical technologies and became responsible for all the bio­medical instruments and systems used in hospitals, the training of medical personnel in equipment use and safety, and the design, selection, and use of technology to deliver safe and effective health care.

With increased involvement in many facets of hospital/ clinic activities, clinical engineers now play a multifaceted role (Fig. 167.1). They must interface successfully with many “clients,” including clinical staff, hospital administrators, regulatory agencies, etc., to ensure that the medical equipment within the hospital is used safely and effectively.

Today, hospitals that have established centralized clinical engineering departments to meet these responsibilities use clinical engineers to provide the hospital administration with an objective option of equipment function, purchase, application, overall system analysis, and preventive maintenance policies.

Clinical Engineering: Evolution of a Discipline

FIGURE 167.1 Diagram illustrating the range of interactions of a clinical engineer.

Some hospital administrators have learned that with the in-house availability of such talent and expertise, the hospital is in a far better position to make more effective use of its technological resources [Bronzino, 1986, 1992]. By providing health professionals with needed assurance of safety, reliability, and efficiency in using new and innovative equipment, clinical engineers can readily identify poor-quality and ineffective equipment, thereby resulting in faster, more appropriate utilization of new medical equipment.

Typical pursuits of clinical engineers, therefore, include

Supervision of a hospital clinical engineering department that includes clinical engineers and biomedical equipment technicians (BMETs)

Prepurchase evaluation and planning for new medical technology

Design, modification, or repair of sophisticated medical instruments or systems

Cost-effective management of a medical equipment calibration and repair service

Supervision of the safety and performance testing of medical equipment performed by BMETs

Inspection of all incoming equipment (i. e., both new and returning repairs)

Establishment of performance benchmarks for all equipment

Medical equipment inventory control

Coordination of outside engineering and technical services performed by vendors

Training of medical personnel in the safe and effective use of medical devices and systems

Clinical applications engineering, such as custom modification of medical devices for clinical research, evaluation of new noninvasive monitoring systems, etc.

Biomedical computer support

Input to the design of clinical facilities where medical technology is used, e. g., operating rooms (ORs), intensive care units, etc.

Development and implementation of documentation protocols required by external accreditation and licensing agencies.

Clinical engineers thus provide extensive engineering services for the clinical staff and, in recent years, have been increasingly accepted as valuable team members by physicians, nurses, and other clinical professionals. Furthermore, the acceptance of clinical engineers in the hospital setting has led to different types of engineering-medicine interactions, which in turn have improved health care delivery.

Hospital Organization and the Role of Clinical Engineering

In the hospital, management organization has evolved into a diffuse authority structure that is commonly referred to as the triad model. The three primary components are the governing board (trustees), hospital administration (CEO and administrative staff), and the medical staff organization [Bronzino and Hayes, 1988]. The role of the governing board and the chief executive officer are briefly discussed below to provide some insight regarding their individual responsibilities and their interrelationship.

Governing Board (Trustees)

The Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) summarizes the major duties of the governing board as “adopting by-laws in accordance with its legal accountability and its responsibility to the patient.” The governing body, therefore, requires both medical and paramedical depart­ments to monitor and evaluate the quality of patient care, which is a critical success factor in hospitals today.

To meet this goal, the governing board essentially is responsible for establishing the mission statement and defining the specific goals and objectives that the institution must satisfy. Therefore, the trustees are involved in the following functions:

Establishing the policies of the institution

Providing equipment and facilities to conduct patient care

Ensuring that proper professional standards are defined and maintained (i. e., providing quality assurance)

Coordinating professional interests with administrative, financial, and community needs

Providing adequate financing by securing sufficient income and managing the control of expenditures

Providing a safe environment

Selecting qualified administrators, medical staff, and other professionals to manage the hospital

In practice, the trustees select a hospital chief administrator who develops a plan of action that is in concert with the overall goals of the institution.

Hospital Administration

The hospital administrator, the chief executive officer of the medical enterprise, has a function similar to that of the chief executive officer of any corporation. The administrator represents the governing board in carrying out the day-to-day operations to reflect the broad policy formulated by the trustees. The duties of the administrator are summarized as follows:

Preparing a plan for accomplishing the institutional objectives, as approved by the board

Selecting medical chiefs and department directors to set standards in their respective fields

Submitting for board approval an annual budget reflecting both expenditures and income projections

Maintaining all physical properties (plant and equipment) in safe operating condition

Representing the hospital in its relationships with the community and health agencies

Submitting to the board annual reports that describe the nature and volume of the services delivered during the past year, including appropriate financial data and any special reports that may be requested by the board

In addition to these administrative responsibilities, the chief administrator is charged with controlling cost, complying with a multitude of governmental regulations, and ensuring that the hospital conforms to professional norms, which include guidelines for the care and safety of patients.

167.4 Clinical Engineering Programs

In many hospitals, administrators have established clinical engineering departments to manage effectively all the technological resources, especially those relating to medical equipment, that are necessary for providing patient care. The primary objective of these departments is to provide a broad-based engi­neering program that addresses all aspects of medical instrumentation and systems support.

Figure 167.2 illUstrates the organizational chart of the medical support services division of a typical major medical facility. Note that within this organizational structure, the director of clinical engineering reports directly to the vice-president of medical support services. This administrative relationship is extremely important because it recognizes the important role clinical engineering departments play in delivering quality care. It should be noted, however, that in other common organizational structures, clinical engineering services may fall under the category of “facilities,” “materials management,” or even just “support services.” Clinical engineers also can work directly with clinical departments, thereby bypassing much of the hospital hierarchy. In this situation, clinical departments can offer the clinical engineer both the chance for intense specialization and, at the same time, the opportunity to develop personal relationships with specific clinicians based on mutual concerns and interests [Wald, 1989].

Once the hospital administration appoints a qualified individual as director of clinical engineering, the person usually functions at the department-head level in the organizational structure of the institution

MEDICAL SUPPORT SERVICES DIVISION

Clinical Engineering: Evolution of a Discipline

FIGURE 167.2 Organizational chart of medical support services division for a typical major medical facility. This organizational structure points out the critical interrelationship between the clinical engineering department and the other primary services provided by the medical facility.

And is provided with sufficient authority and resources to perform the duties efficiently and in accordance with professional norms. To understand the extent of these duties, consider the job title for “clinical engineering director” as defined by the World Health Organization [Issakov et al, 1990].

General Statement. The clinical engineering director, by his or her education and experience, acts as a manger and technical director of the clinical engineering department. The individual designs and directs the design of equipment modifications that may correct design deficiencies or enhance the clinical performance of medical equipment. The individual also may supervise the implementation of those design modifications. The education and experience that the director possesses enables him or her to analyze complex medical or laboratory equipment for purposes of defining corrective maintenance and developing appropriate preventing maintenance or performance assurance protocols. The clinical engi­neering director works with nursing and medical staff to analyze new medical equipment needs and participates in both the prepurchase planning process and the incoming testing process. The individual also participates in the equipment management process through involvement in the system development, implementation, maintenance, and modification processes.

Duties and Responsibilities. The director of clinical engineering has a wide range of duties and respon­sibilities. For example, this individual

Works with medical and nursing staff in the development of technical and performance specifi­cations for equipment requirements in the medical mission.

Once equipment is specified and the purchase order developed, generates appropriate testing of the new equipment.

Does complete performance analysis on complex medical or laboratory equipment and summa­rizes results in brief, concise, easy-to-understand terms for the purposes of recommending cor­rective action or for developing appropriate preventive maintenance and performance assurance protocols.

Designs and implements modifications that permit enhanced operational capability. May supervise the maintenance or modification as it is performed by others.

Must know the relevant codes and standards related to the hospital environment and the perfor­mance assurance activities. (Examples in the United States are NFPA 99, UL 544, and JCAHO, and internationally, IEC-TC 62.)

Is responsible for obtaining the engineering specifications (systems definitions) for systems that are considered unusual or one-of-a-kind and are not commercially available.

Supervises in-service maintenance technicians as they work on codes and standards and on preventive maintenance, performance assurance, corrective maintenance, and modification of new and existing patient care and laboratory equipment.

Supervises parts and supply purchase activities and develops program policies and procedures for same.

Sets departmental goals, develops budgets and policy, prepares and analyzes management reports to monitor department activity, and manages and organizes the department to implement them.

Teaches measurement, calibration, and standardization techniques that promote optimal perfor­mance.

In equipment-related duties, works closely with maintenance and medical personnel. Communi­cates orally and in writing with medical, maintenance, and administrative professionals. Develops written procedures and recommendations for administrative and technical personnel.

Minimum Qualifications. A bachelor’s degree (4 years) in an electrical or electronics program or the equivalent is required (preferably with a clinical or biomedical adjunct). A master’s degree is desirable. A minimum of 3 years’ experience as a clinical engineer and 2 years in a progressively responsible supervisory capacity is needed. Additional qualifications are as follows:

Must have some business knowledge and management skills that enable him or her to participate in budgeting, cost accounting, personnel management, behavioral counseling, job description development, and interviewing for hiring or firing purposes. Knowledge and experience in the use of microcomputers are desirable.

Must be able to use conventional electronic trouble-shooting instruments such as multimeters, function generators, oscillators, and oscilloscopes. Should be able to use conventional machine shop equipment such as drill presses, grinders, belt sanders, brakes, and standard hand tools.

Must possess or be able to acquire knowledge of the techniques, theories, and characteristics of materials, drafting, and fabrication techniques in conjunction with chemistry, anatomy, physiol­ogy, optics, mechanics, and hospital procedures.

Clinical engineering certification or professional engineering registration is required.

Major Functions of a Clinical Engineering Department

It should be clear by the preceding job description that clinical engineers are first and foremost engi­neering professionals. However, as a result of the wide-ranging scope of interrelationships within the medical setting, the duties and responsibilities of clinical engineering directors are extremely diversified. Yet a common thread is provided by the very nature of the technology they manage. Directors of clinical engineering departments are usually involved in the following core functions:

Technology Management. Developing, implementing, and directing equipment management pro­grams. Specific tasks include accepting and installing new equipment, establishing preventive mainte­nance and repair programs, and managing the inventory of medical instrumentation. Issues such as cost – effective use and quality assurance are integral parts of any technology management program. The director advises the administrator of the budgetary, personnel, space, and test equipment requirements necessary to support this equipment management program.

Risk Management. Evaluating and taking appropriate action on incidents attributed to equipment malfunctions or misuse. For example, the clinical engineering director is responsible for summarizing the technological significance of each incident and documenting the findings of the investigation. He or she then submits a report to the appropriate hospital authority and, according to the Safe Medical Devices Act of 1990, to the device manufacturer, the Food and Drug Administration (FDA), or both.

Technology Assessment. Evaluating and selecting new equipment. The director must be proactive in the evaluation of new requests for capital equipment expenditures, providing hospital administrators and clinical staff with an in depth appraisal of the benefits/ advantages of candidate equipment. Furthermore, the process of technology assessment for all equipment used in the hospital should be an ongoing activity.

Facilities Design and Project Management. Assisting in the design of new or renovated clinical facil­ities that house specific medical technologies. This includes operating rooms, imaging facilities, and radiology treatment centers.

Training. Establish and deliver instructional modules for clinical engineering staff as well as clinical staff on the operation of medical equipment.

In the future, it is anticipated that clinical engineering departments will provide assistance in the application and management of many other technologies that support patient care, including computer support, telecommunications, and facilities operations.

Defining Terms

JCAHO, Joint Commission on the Accreditation of Healthcare Organizations: Accrediting body

Responsible for checking hospital compliance with approved rules and regulations regarding the delivery of health care.

Technology assessment: Involves an evaluation of the safety, efficiency, and cost-effectiveness, as well

As consideration of the social, legal, and ethical effects, of medical technology.

References

AHA, 1986. American Hospital Association Resource Center, Hospital Administration Terminology, 2nd ed. Washington, American Hospital Publishing.

Bauld TJ. 1991. The definition of a clinical engineer. J Clin Eng 16:403.

Bronzino JD. 1986. Biomedical Engineering and Instrumentation: Basic Concepts and Applications. Boston, PWS Publishing.

Bronzino JD. 1992. Management of Medical Technology: A Primer for Clinical Engineers. Boston, Butter­worth.

Goodman G. 1989. The profession of clinical engineering. J Clin Eng 14:27.

ICC. 1991. International Certification Commission’s Definition of a Clinical Engineer, International Certification Commission Fact Sheet. Arlington, Va, ICC.

Issakov A, Mallouppas A, McKie J. 1990. Manpower development for a healthcare technical service.

Report of the World Health Organization, WHO/SHS/NHP/90.4.

Newhouse VL, Bel! DS, Tackel IS, et al. 1989. The future of clinical engineering in the 1990s. J Clin Eng 14:417. Pacela A. 1990. Bioengineering Education Directory. Brea, Calf, Quest Publishing.

Wald A. 1989. Clinical engineering in clinical departments: A different point of view. Biomed Instr Technol 23:58.

Further Information

Bronzino JD. 1992. Management of Medical Technology: A Primer for Clinical Engineers. Boston, Butter­worth.

Journals: Journal of Clinical Engineering, Journal of Medical Engineering and Physics, Biomedical Instru­mentation and Technology.

David, Y., Judd, T. M. “Management and Assessment of Medical Technology.” The Biomedical Engineering Handbook: Second Edition.

Ed. Joseph D. Bronzino

Boca Raton: CRC Press LLC, 2000