Background

Clinical and organizational aspects of the preoperative visit can have a significant impact on patient satisfaction. The authors' previous work demonstrated that communication of information from clinician to patient was the most positively rated component, whereas organizational issues, particularly waiting time, were the most negatively rated. This study compares two yearly cycles of patient satisfaction surveys to assess the process and impact of implementing changes.

Methods

The authors distributed a one-page questionnaire, consisting of elements evaluating satisfaction with clinical providers and with organizational aspects of the visit, to patients in their preoperative clinic during two different time periods. Fourteen different questions had five Likert scale options ranging from excellent to poor. Changes implemented included clerical, scheduling, and clinical changes.

Results

The overall collection rate of completed questionnaires was 79%. Scores in Cycle 2 were higher for all 14 questions, with 3 of 14 reaching statistical significance (P < 0.01). These questions related to the explanation of the Preoperative Assessment Clinic by the surgeon's office, courtesy and efficiency of the clinic staff, and satisfaction with waiting time. Average waiting time was reduced from 92 to 41 min (P ≤ 0.0001).

Conclusion

Analysis of patient flow and clinic operations led to alterations in clinic processes. Alterations included education of clinic and surgical office staff to improve customer service, and implementation of changes in provider roles. These modifications resulted in an improvement in patient satisfaction and a reduction in waiting time with minimal economic impact.

  • Patient satisfaction in the preoperative assessment clinic is enhanced by seeing one nurse practitioner rather than, sequentially, a nurse and a surgeon or physician assistant

  • Patient satisfaction is reduced by longer wait times and by unhappiness with the receptionist

  • In a tertiary care teaching hospital, an educational program and a shift to nurse practitioner assessment with anesthesiologist supervision improved patient satisfaction, particularly with waiting time and with the receptionist

The preoperative assessment clinic is one of the major entryways to hospitals for a significant number of patients and often provides the patient's first impression of the hospital experience. Likewise, institutions themselves are increasingly sensitive to the impact of patient satisfaction in a competitive medical marketplace. This clinic allows for coordination of preoperative surgical, anesthesia, nursing, and laboratory care, medical optimization of patients preoperatively, and transmission of information to the operating room team. Additional goals include (1) avoidance of operating room delays and cancellations due to inadequate assessment or patient preparation, (2) standardization of information for coding and billing, and (3) performance of reporting requirements such as those of the Joint Commission on Accreditation of Healthcare Organizations and the National Surgical Quality Improvement Program.1–4 Performance of these evaluations is ideally practiced in the setting of financial efficiency and high patient and family satisfaction.

With the emerging role of patients as important medical care partners, it is critical to understand their expectations for care. Obtaining patient feedback can provide valuable insight into the quality of clinical practice and hospital programs.5 From the standpoint of the healthcare system, creating a positive patient experience is critical. The consequences of low patient satisfaction in today's medical marketplace have been well documented.6

As a part of an ongoing service improvement project, we had previously identified operational strengths and weaknesses of our preoperative clinic, the Weiner Center for Preoperative Evaluation (CPE).7 Information and communication from clinical providers were previously identified as the most important positive components. In particular, we demonstrated that patients evaluated by a nurse practitioner (NP), versus those seen by a nurse and surgical clinician, had higher satisfaction scores. The total amount of time spent in the CPE, and explanation of the process by the nonclinical staff, represented the most negatively rated components.

Since the time of the first survey published in 2004,7 patient satisfaction with the preoperative process has been widely studied, including many recent publications.8–10 Similar to our results, Edward et al.8,9 demonstrated the most positive experience with the nurse and the least positive with waiting time. These authors also identified the interaction with the receptionist as a very important component of patient satisfaction and concluded that patient feedback can be used to improve the quality of the preoperative process.8,9 Varughese et al.10 evaluated parental and preoperative staff satisfaction with the introduction of an NP-aided preoperative assessment in pediatric patients. Even though parental satisfaction remained unchanged, preoperative clinic nurse and anesthesiologist satisfaction increased after the implementation of this program.

We hypothesized that having NPs perform the entire preoperative assessment, including the anesthesia component, would decrease wait time and increase both efficiency and patient satisfaction, because for the majority of our patients it would eliminate multiple providers, multiple rooms, and multiple waiting periods. Once these weaknesses were identified, we implemented operational changes to improve function and subsequently reanalyzed the impact of these changes on clinic flow, patient satisfaction, and cost. We focused specifically on the decrease in wait time in our CPE, hypothesizing that improving efficiency, in particular by decreasing the amount of time spent in the CPE, would increase patient satisfaction.

In conjunction with the Center for Clinical Effectiveness at the Brigham and Women's Hospital, a one-page questionnaire was developed consisting of questions on satisfaction with clinical and nonclinical providers (table 1). After approval by the hospital's Committee for the Protection of Human Subjects, this one-page questionnaire was given to patients presenting to the CPE during two different time periods (March 2005, Cycle 1 and March 2006, Cycle 2). One of the three unit secretaries presented the questionnaire to all patients at the time of clinic registration. Patients were asked to complete the questionnaire and return it at the time they left the clinic; patients were told that the questionnaire responses were anonymous and that completion was not required.

The questionnaire consisted of general questions (GENERAL) including an explanation of the CPE process by the surgeon's office and by the CPE receptionist, the courtesy and efficiency of the receptionist, and the time waiting to be seen. In addition, patients were asked about their visit with the technician (LAB) and clinical care provider, focusing on the courtesy and respect given, an explanation of the process and anesthesia options, and the amount of time spent with the care provider (VISIT). Finally, patients were queried about their overall satisfaction (OVERALL) with the care and service received, the degree to which their questions were answered, and how prepared they felt for surgery. Each question had five Likert scale options that ranged from excellent to poor (excellent, 5; very good, 4; good, 3; fair, 2; poor, 1).11 All times were automatically generated by computer log-on. The wait time (time spent before or in between providers) was derived by subtracting total time with providers from total time from check-in to check-out of the CPE. The survey was initially developed in 1995, and the process for its content and wording has been described elsewhere.7 The survey has been adapted during survey cycles to reflect patient and clinical provider feedback, being last modified in 2004 after an analysis of our previous work. A decision was made by our institution's Center for Clinical Excellence, with feedback from the core group of CPE clinical providers, to limit the number of questions to avoid redundancy. Specifically, similar questions relating to the visit with the anesthesia care provider and nurse or NP are now included under the visit with the care provider. The most recent survey was reviewed by a small number of patients for clarity and comprehensibility and was used for 1 yr before this work.
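The wait-time derivation described above, total visit duration minus total provider time, can be expressed in a few lines. This is a hypothetical illustration only; the function name and "HH:MM" time format are our inventions, not the clinic's actual tracking software:

```python
# Hypothetical sketch of the wait-time derivation: wait time is the time from
# check-in to check-out minus the total time spent with providers.
from datetime import datetime

def wait_minutes(check_in: str, check_out: str, provider_minutes: float) -> float:
    """Minutes spent waiting before or between providers (times as "HH:MM")."""
    fmt = "%H:%M"
    total = (datetime.strptime(check_out, fmt)
             - datetime.strptime(check_in, fmt)).total_seconds() / 60
    return total - provider_minutes

# A 9:00-11:30 visit with 75 min of provider time leaves 75 min of waiting.
print(wait_minutes("09:00", "11:30", 75))  # -> 75.0
```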

During Cycle 1, NPs performed the medical history and physical examination and nursing assessment for approximately 75% of patients. A primary care doctor or surgeon performed the medical history and physical examination for the rest of the patients. Whether an NP or a different clinician performed the medical history and physical examination was largely dependent on medical insurance issues. A nurse provided nursing education for patients who were not seen by an NP. The anesthesia provider performed a separate medical history and physical examination and anesthesia assessment on every patient, and determined whether additional testing or consultations were necessary. The laboratory technician acquired all necessary blood and urine testing and performed an electrocardiogram.

After Cycle 1, a change in provider function was implemented over a period of more than 5 months. NPs underwent 2 weeks of anesthesia assessment training, including weekly lectures on preoperative evaluation and 2 days shadowing an anesthesiologist in the operating room. The curriculum also included staff meetings, which provided bimonthly ongoing education, and attendance at anesthesia grand rounds as appropriate. All evaluations were presented to an attending anesthesiologist, so that there was constant reassessment of individual NP performance. After this training, the same NP performed the surgical, nursing, and anesthesia assessments, while the laboratory technician performed blood testing and electrocardiograms in the same room during the same visit interval. Each patient evaluation was reviewed by an attending anesthesiologist, who decided on the need for further workup and was available to see the patient on an as-needed basis. Analysis of provider shifts and patient distribution led to a change in NP shifts from 8 to 10 h (4 work days/week). In addition, blank slots were left for “surgical add-ons,” so that scheduled patients were not disrupted, and postcard appointment reminders were sent out in advance of the clinic visit. We analyzed our process by generating critical paths and workflow models based on our actual clinical flow. Switching to 10-h shifts improved room utilization during the day because it increased the number of patients seen by a single NP per day; previously, rooms were underused during the 3:30 to 5:30 pm period. We have a computerized tracking system for our daily appointment schedule that allows us to generate reports of average waiting times and visit times. Because of our scheduling system and the fact that we take very few walk-in patients, the waiting and visit times are fairly consistent throughout the day.
We monitored these times to determine what the average waiting time and average length of a visit should be. The visit length was relatively consistent at approximately 75 min; during this period, the surgical history and physical, anesthesia assessment, nursing assessment (including all regulatory documentation), laboratory testing, and electrocardiogram if necessary were all performed in the same room by one clinical provider, with a laboratory technician performing the testing. We have very few American Society of Anesthesiologists I and II patients, because these patients are generally triaged to our lower-acuity partner hospital, accounting for the consistency in required examination time. Therefore, we schedule appointments in 75-min blocks beginning at 7:00 am, with the last appointment blocks scheduled at 3:30 pm so that the nursing shifts can end by 5:30 pm. We calculated the number of blocks necessary to handle the average daily volume.
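The block arithmetic above can be illustrated with a short sketch under assumed parameters (this is not the clinic's actual scheduling software). With back-to-back 75-min blocks starting at 7:00 am and no block starting after a 3:30 pm cutoff, a single room accommodates seven blocks per day; staggering starts across multiple rooms is what allows the last block of the day to begin at 3:30 pm:

```python
# Minimal sketch of per-room appointment blocks: back-to-back blocks of a
# fixed length, none starting after the stated cutoff time.
from datetime import datetime, timedelta

def block_starts(first: str = "07:00", cutoff: str = "15:30", minutes: int = 75):
    fmt = "%H:%M"
    t = datetime.strptime(first, fmt)
    last = datetime.strptime(cutoff, fmt)
    starts = []
    while t <= last:
        starts.append(t.strftime("%H:%M"))
        t += timedelta(minutes=minutes)
    return starts

print(block_starts())
# -> ['07:00', '08:15', '09:30', '10:45', '12:00', '13:15', '14:30']
```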

Concurrently, we introduced 2-h blocks for weekly staff meetings for clinical and nonclinical providers focusing on receptionist staff interactions with patients. Blocks included sessions on customer service, patient relations, and teamwork. Acceptable and unacceptable clinic behavior was evaluated, and feedback was provided. The implementation of these changes took more than 5 months; after an additional 6 months, we distributed our questionnaire during the second time interval and compared the data from Cycle 2 with those from Cycle 1.

Statistical Methods

Means, standard deviations, and percentages were used to describe the data. Subscales were constructed using the means of the component item responses for each section of the survey (GENERAL, VISIT, and OVERALL), with LAB as a separate subscale. Factor analysis with varimax rotation supported the prespecified subscales.12 An additional subscale was constructed related to communication and information (INFO) that included items Q1–Q3 and Q5–Q7. Total satisfaction was calculated as the mean of the 14 individual items (TOTAL). The cycles were compared using the Jonckheere-Terpstra test for individual ordinal five-level items and three-level age category, the Wilcoxon rank sum test for ordinal subscales, and the Fisher exact test for dichotomous questionnaire items, gender, and race. The Wilcoxon signed rank test was used to compare the scale scores within the overall patient cohorts. Spearman correlations were used for the associations among the subscales. Cronbach's alpha with and without listwise deletion was calculated to assess internal consistency overall and according to cycle.13 In this context of multiple statistical tests, a reduced criterion for statistical significance of 1% (P < 0.01) was used, with simultaneous focus on effect sizes. SAS version 9.1 (SAS Institute Inc., Cary, NC) was used to conduct the analyses.
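As an illustration of the internal-consistency measure used here, a raw Cronbach's alpha can be computed from the item variances and the variance of the total score. Note that the study reports the standardized coefficient, which instead uses mean inter-item correlations; this sketch, with invented toy data, shows only the general form of the statistic:

```python
# Raw Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total)).
def cronbach_alpha(items):
    """items: one list of responses per item (all lists the same length)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Four perfectly correlated items yield alpha = 1.0 (toy data).
print(round(cronbach_alpha([[5, 4, 3, 2, 5]] * 4), 3))  # -> 1.0
```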

A total of 550 consecutive questionnaires were distributed during each time period (1,100 total), and 872 surveys were collected, for a 79% collection rate. Of the 14 questionnaire items, the mean number answered by the 872 respondents was 12.8: 12.7 among the 443 Cycle 1 respondents and 13.0 among the 429 Cycle 2 respondents (P = 0.58). The overall standardized Cronbach's alpha coefficient was 0.95, demonstrating that the questionnaire was reliable and consistent and that the set of items measured the patient satisfaction construct well. The coefficient was 0.95 for Cycle 1 and 0.94 for Cycle 2.

Table 2 lists patient demographics. There was no significant difference in age, sex, or race between the two groups. Surveys were collected across all surgical specialties (table 3). Analysis of results for Cycles 1 and 2 (tables 4 and 5) revealed that patients reported a high level of overall satisfaction with visits with clinical providers. Satisfaction was lowest for nonclinical aspects of the visits, such as the explanation of the process by the surgeon's office and the CPE staff and their interaction with the patient; in particular, satisfaction with waiting time was very low and was the most negative survey result.

The Likert scores for 3 of the 14 questions in Cycle 2 reached a statistically significant improvement (P ≤ 0.01). These three questions related to the explanation of the center by the surgeon's office, courtesy and efficiency of the clinic staff, and satisfaction with the amount of waiting time. The change in provider roles did not result in any change in answer to questions such as explanation of planned procedure, explanation of anesthesia plan, amount of time spent with provider, explanation of how to prepare for procedure, and overall care received.

Waiting times were significantly reduced during the study period. The average waiting time was 92 ± 10 min during the first period and was reduced to 42 ± 5 min during the second period (P < 0.001).

Significant increases in satisfaction were seen for the GENERAL subscale (P < 0.001), and there were also increases for the INFO subscale (P < 0.05) and for TOTAL (P < 0.02) (table 5). Although the GENERAL subscale demonstrated the highest improvement, the VISIT subscale remained essentially unchanged. When using all patients, the order of satisfaction from highest to lowest was VISIT, LAB, OVERALL, INFO, TOTAL, and GENERAL. The order of magnitude of association with OVERALL from highest to lowest was VISIT, INFO, LAB, and GENERAL. The OVERALL subscale was significantly correlated to all other subscales (P < 0.0001): VISIT (r = 0.74), INFO (r = 0.71), LAB (r = 0.69), and GENERAL (r = 0.60).

In today's surgical environment, quality, efficiency, and patient satisfaction are increasingly used as indicators for consumers and for insurers for selecting healthcare providers.14 In the current cost-driven environment, there are concerns about appropriate resource use in nonrevenue-generating areas, and only hospitals that deliver high-quality care and high patient satisfaction at an affordable price can maintain their financial viability. The reputation of a specific hospital or healthcare provider can be influenced by state, national, or payer rankings of relative quality performance for certain conditions and by patients' satisfaction ratings of their experience. Recognizing that patient decisions have a significant and growing impact on the healthcare industry, new healthcare directions must include an analysis of patient satisfaction. It is expected that the federal government's hospital compare Web site∥ will expand to include patient opinions based on satisfaction surveys conducted at nearly all hospitals. A recent report suggested that although many hospitals survey patients about their experiences, the questions and quality of the surveys vary widely.# The Hospital Consumer Assessment of Healthcare Providers and Systems will allow patients and hospitals to see how facilities compare with one another. The survey includes questions about the communication skills of physicians and nurses, pain control, and the quality of discharge instructions. Hospitals that do not report the results of the survey will lose 2% of their Medicare reimbursement.#

First, as described in a previous study in our unit,7 we recognized and reacted to the expectations of patients and to areas of dissatisfaction with their care by implementing a number of strategies to improve patient satisfaction. These included providing patient feedback to the clinical and nonclinical services, including the surgical office staff and CPE receptionists, on the purpose of and time required for a CPE visit. Second, because the duration of the CPE visit was an important source of patient complaints, we attempted to streamline patient flow and eliminate redundancy in medical questioning. The most dramatic clinic change was the institution of an anesthesia education program, so that NPs during the second cycle were capable of performing all the assessments required for a single patient, eliminating the need for multiple providers and redundant medical questioning. The use of NPs decreased waiting time by allowing all assessments (surgical, anesthesia, and nursing) to be performed by a single provider in a single room; previously, there were multiple waiting periods for a variety of preoperative providers. In addition, the centralized NP model allows for uniform education regarding preoperative risk assessment, standards, and protocols, so that the preoperative process can be standardized and streamlined even for complicated patients. The high satisfaction scores seen with our clinical providers indicate that patients related well to them. Although we did not specifically measure how much information patients retained, the response to the question “Do you feel prepared for your surgery?” was very high and did not change between periods. Finally, we designed workshops and sessions on achieving customer service excellence and appointed staff members invested in the CPE process. Our results showed that these initiatives and training fostered improvements in these areas.
Although our satisfaction results differ from those of Varughese et al.,10 who also used NPs, that study included only pediatric outpatients evaluated on the same day as surgery. Furthermore, parents were interviewed 10–20 days postoperatively, a time when it would be difficult to attribute satisfaction to each perioperative period separately. Interestingly, anesthesiologist satisfaction increased in their study, particularly regarding the completeness of information and patient preparation.

In Cycle 1 of our study, one of the areas with the lowest satisfaction scores was the interaction between the CPE receptionist and the patient. Receptionists have often been viewed as the “bridge” to clinical providers,15–17 and their attitudes and actions can play a key role in patient satisfaction.15 After the implementation of weekly teaching sessions and feedback on customer service, patient relations, and teamwork for both clinical and nonclinical staff, there was a significant improvement in this area. Similar to our previous work,7 we have demonstrated the importance of nonclinical elements in patient satisfaction and underscore the need for first-contact staff members, such as receptionists, to be well informed and aware of their important role. Our findings support previous work demonstrating that patient feedback can produce beneficial changes in behavior.18

Satisfaction was lowest for the wait time in the CPE, which in Cycle 1 averaged 1 h and 32 min. After implementation of the change in provider function, the wait time in Cycle 2 was reduced to an average of 41 min; this was associated with a significant improvement in answers to the question regarding wait time. Length of time waiting to be seen in a doctor's office or awaiting surgery has previously been shown to correlate inversely with patient satisfaction.18–20 However, although waiting time decreased substantially, its score remained low compared with the other scores, suggesting that this area is particularly important to patients and still offers opportunities for improvement.

Implementation of these changes required a major culture shift, requiring staff to think beyond traditional roles with the overall incentive of improving patient care and increasing staff satisfaction. Staff education to perform additional roles had an extremely positive impact. Of note, before these changes care providers were extremely unhappy with the work environment because of long patient back-ups, unhappy patients, and the overtime necessary to handle patient overflow; after the changes were instituted, there was an associated positive impact on the work environment. NPs appreciated the learning opportunities provided by attending perioperative medicine–related lectures and observing the key role of anesthesiologists in the operating room, and they appreciated their new role with more responsibility. Anesthesiologists embraced their new supervisory role and took the opportunity to improve teaching geared toward NPs and residents. An additional benefit of these changes is that increased clinic throughput allows for future surgical volume increases. The change in provider roles did not decrease the overall clinical effectiveness of the preoperative process: preventable operating room cancellations and delays due to preoperative issues remain very low, approximately four per month of the 1,800 cases seen in the clinic, with no difference between the study periods.

The average cost per patient increased 5%, from $198 to $209, after implementation of the changes, which can largely be accounted for by mandated salary increases. Reasons for the cost increase include the following. First, the change to 10-h shifts required more NPs; however, this cost was significantly offset by a major decrease in overtime costs. Second, yearly salaries and fringe benefits increased. Third, the Joint Commission on Accreditation of Healthcare Organizations medication reconciliation mandate increased the time spent with each patient by an average of 8 min. However, the slight increase in cost was significantly offset by a decrease in the number of anesthesia personnel in the clinic. Before instituting the new model, anesthesia residents and attending physicians were required to perform every anesthesia assessment in the clinic, requiring at least six resident full-time equivalents per day. After the institution of the new model, the number of residents was decreased to three full-time equivalents per day, the number required to allow all residents to achieve their educational requirement in preoperative assessment. Sending these residents back to the operating room reduced overall anesthesia costs, because it decreased the number of attending physicians working alone, allowing attending physicians to supervise more rooms.

A possible limitation of our study was the 79% response rate; patients were not required to complete the distributed questionnaire. Because the surveys were distributed to all patients from 8:00 am to 3:00 pm 5 days a week, regardless of age, sex, comorbidities, or surgical procedure, and because the response rates were similar for each survey cycle, we believe that a higher response rate would not change the overall results. Furthermore, the surveys were not distributed to patients scheduled to be seen in the first hour of operation of our clinic, because these patients rarely have to wait to see a clinical provider. Such patients are more likely to be satisfied, so their exclusion may have led us to underestimate overall satisfaction. Second, our results could be criticized for the timing of questionnaire distribution. The surveys were completed at the end of the CPE visit, before the patient left. Although many questionnaire-based studies mail the survey to patients' homes, giving them a chance to reflect on their experiences, surveys completed after patients return home from surgery reflect contributions not only from the preoperative period but from the entire perioperative experience. Our surveys were completely anonymous, and patients were made aware that no attempts would be made to contact them. Our results could also be criticized on the grounds that the improvement achieved was rather modest, that the small numerical differences are irrelevant, and that without an intrinsic control the study does not necessarily demonstrate a direct improvement. However, by adjusting the criterion for rejecting the null hypothesis to an alpha level below 0.05 to address the multiplicity issue, we partially address this concern, because the effect size must be larger to obtain a smaller P value. Furthermore, no patients in Cycle 2 had previous clinic experience in Cycle 1.
In addition, although only three questions demonstrated improvement, our results correlate with the verbal feedback we often receive. Indeed, further study will be necessary to articulate the true magnitude and meaning of these findings. An argument could also be made that it is not appropriate to generalize the results to all hospitals. Although the results may be limited to large tertiary care institutions, our main goal was to demonstrate that operational changes and education of staff in the same institution as our first survey would lead to higher patient satisfaction. Finally, the reports used in this study could not identify each patient's waiting time because they contained aggregate, deidentified data. However, because of the manner in which our scheduling system is structured and the fact that we see very few American Society of Anesthesiologists I and II patients, the waiting and visit times are fairly consistent throughout the day. In fact, most American Society of Anesthesiologists I and II patients are screened by phone and do not visit the clinic at all, so they have little impact on variation in visit length. In addition, our schedule is evenly distributed in appointments between 7:00 am and 3:30 pm with very few walk-ins (fewer than 10 per day), in contrast to a clinic with a higher percentage of walk-ins. An analysis of our practice using queuing theory21 suggested that clustering could be limited by greatly reducing the number of walk-ins. We were able to institute this model with the help of the surgical schedulers. In this manner, we decreased the natural variability of our scheduling model to minimize the peaks and valleys in census.

Recognizing that patient decisions have a significant and growing impact on the healthcare industry, new healthcare directions will include an analysis of patient satisfaction. The practitioner and functional aspects of the preoperative visit, specifically in the setting of the preoperative clinic, have a significant impact on patient satisfaction. In summary, analysis of patient flow and clinic operations led to alterations in the operational patterns, which resulted in continued high clinical effectiveness, reduced waiting time, and improved patient satisfaction.

1.
van Klei WA, Moons KG, Rutten CL, Schuurhuis A, Knape JT, Kalkman CJ, Grobbee DE: The effect of outpatient preoperative evaluation of hospital inpatients on cancellation of surgery and length of hospital stay. Anesth Analg 2002; 94:644–9
2.
Tsen LC, Segal S, Pothier M, Hartley LH, Bader AM: The effect of alterations in a preoperative assessment clinic on reducing the number and improving the yield of cardiology consultations. Anesth Analg 2002; 95:1563–8
3.
Ferschl MB, Tung A, Sweitzer B, Huo D, Glick DB: Preoperative clinic visits reduce operating room cancellations and delays. Anesthesiology 2005; 103:855–9
4.
Correll DJ, Bader AM, Hull MW, Tsen LC, Hepner DL: The value of preoperative clinic visits in identifying issues with potential impact on operating room efficiency. Anesthesiology 2006; 105:1254–9
5.
Allshouse KD: Treating patients as individuals, Through the Patient's Eyes, 2nd edition. Edited by Gerteis M, Edgman-Levitan S, Daley J, Debanco T. San Francisco, CA, Jossey-Bass Publishers, 1993, pp 19–43
6.
Weiss BD, Senf JH: Patient satisfaction survey instrument for use in health maintenance organizations. Med Care 1990; 28:434–45
7.
Hepner DL, Bader AM, Hurwitz S, Gustafson M, Tsen LC: Patient satisfaction with preoperative assessment in a preoperative assessment testing clinic. Anesth Analg 2004; 98:1099–105
8.
Edward GM, Lemaire LC, Preckel B, Oort FJ, Bucx MJ, Hollmann MW, de Haes JC: Patient Experiences with the Preoperative Assessment Clinic (PEPAC): Validation of an instrument to measure patient experiences. Br J Anaesth 2007; 99:666–72
9.
Edward GM, de Haes JC, Oort FJ, Lemaire LC, Hollmann MW, Preckel B: Setting priorities for improving the preoperative assessment clinic: The patients' and the professionals' perspective. Br J Anaesth 2008; 100:322–6
10.
Varughese AM, Byczkowski TL, Wittkugel EP, Kotagal U, Dean Kurth C: Impact of a nurse practitioner-assisted preoperative assessment program on quality. Pediatr Anaesth 2006; 16:723–33
11.
Ware JE, Hays RD: Methods for measuring patient satisfaction with specific medical encounters. Med Care 1988; 26:393–402
12.
Anderson TW: An Introduction to Multivariate Statistical Analysis, 3rd edition. Hoboken, NJ, Wiley, 2003
13.
Cronbach LJ: Coefficient alpha and the internal structure of tests. Psychometrika 1951; 16:297–334
14.
Qiu C, MacVay MA, Sanchez AF: Anesthesia preoperative medicine clinic: Beyond surgery cancellations. Anesthesiology 2006; 105:224–5
15.
Gallagher M, Pearson P, Drinkwater C: Managing patient demand: A qualitative study of appointment making in general practice. Br J Gen Pract 2001; 51:280–5
16.
Jacobson L, Richardson G, Parry-Langdon N, Donovan C: How do teenagers and primary healthcare providers view each other? An overview of key themes. Br J Gen Pract 2001; 51:811–6
17.
Hallam L: Access to general practice and general practitioners by telephone: The patient's view. Br J Gen Pract 1992; 43:331–5
18.
Scott G: The voice of the customer: Is anyone listening? J Healthc Manag 2001; 46:221–3
19.
Lledó R, Herver P, García A, Güell J, Setoain J, Asenjo MA: Information as a fundamental attribute among outpatients attending the nuclear medicine service of a university hospital. Nucl Med Commun 1995; 16:76–83
20.
Spaite DW, Bartholomeaux F, Guisto J, Lindberg E, Hull B, Eyherabide A, Lanyon S, Criss EA, Valenzuela TD, Conroy C: Rapid process redesign in a university-based emergency department: Decreasing waiting time intervals and improving patient satisfaction. Ann Emerg Med 2002; 39:168–77
21.
Litvak E, Buerhaus PI, Davidoff F, Long MC, McManus ML, Berwick DM: Managing unnecessary variability in patient demand to reduce nursing stress and improve patient safety. Jt Comm J Qual Patient Saf 2005; 31:330–8