
REBUILDING CAPABILITY IN HEALTHCARE (2015)

1. Existing Change Models: Why the Traditional Healthcare Models Are Struggling

Traditional Change Models

As described in the Preface, the general consensus is that performance (and performance improvement, for that matter) in healthcare is not where it needs to be. Numerous articles and publications each year identify the problems or debate the root causes. The intent here is not to delve too deeply into that argument, but to highlight one key problem (in this case with the performance improvement methods utilized) and then, in the chapters to come, to demonstrate a solid solution to that problem. Consider the following symptoms:

• There is a sense of resource overloading—it is difficult to get team time to even start a project.

• Most improvement is incremental; there is little in the way of breakthrough change.

• Hard savings are just that: hard to come by and even harder to measure.

• It is difficult to attribute any measured success to specific changes made.

• Improvements fail to stick.

Not all organizations exhibit all these symptoms, but they are certainly commonplace, whether in a small clinic, a hospital, or a system. So if the symptoms are clear and abundant, why, with all the effort under way, are the symptoms still the norm?

The usual approach is to critique the solutions implemented and work from there. Here the suggestion is to look at things in a different way. The place to look is not at the solutions implemented, but rather at the improvement methodologies used—the route to solution and implementation.

To better understand this statement, first let’s examine facets of the traditional improvement methods. Improvement is often undertaken as follows:

• Multiple groups are sanctioned (often independently) to make improvements, to bring agile responsive change.

• Operations and clinical managers are measured on the improvements they make.

• All quality and operations staff in the process are encouraged to make changes and test improvements, to develop change quickly, and to rapidly take advantage of potential improvements.

• Small group efforts are focused on a localized part of the process to alleviate the problem, and then the group moves on to the next focus area.

• Changes made require a consensus of as many stakeholders as possible to ensure buy-in from the people who will do the new process or be affected by it.

• A key source of improvements is from benchmarking other organizations, typically in relatively close proximity or from recent literature, to gain quick, proven solutions.

• The change model is based on continuous improvement using a cyclical PDCA1 (Plan, Do, Check, Act) change model—looping through the cycle again and again to gain higher and higher levels of performance.

1. Sometimes referred to as the PDSA (Plan, Do, Study, Act) model, or “the Scientific Method.”

• Stand-alone quality groups own process improvement in the organization, so that operational staff aren’t drawn away from their operations duties.

• The focus is on getting better leaders, managers, and communication by developing the existing people or recruiting better people.

At first glance this seems to be a robust set of operating approaches for change, which accounts for their longevity in the healthcare industry. Why, then, do the majority of other industries and organizations not use these approaches? They do seem reasonable—until, that is, we start to line up the symptoms with the change model, and then the flaws become very apparent. This is best shown through an example. Consider operations or clinical managers trying to meet assigned targets, perhaps to increase patient satisfaction by 10%. They do what they’ve been trained to do: they talk to as many people as possible to find a solution that seems to work well in another hospital or institution; on returning to home base they bring together as many people in the process as possible to gain some kind of consensus; they educate key personnel on the solution and commence operations with the new method; they then track the metric (in this case patient satisfaction) to see what change, if any, has occurred.

Oftentimes the metric will improve, but sometimes not, and commonly over time it drops back to where it was before. Sometimes it even gets worse than it was when the change initiative started.

Meanwhile, others in the organization are going through the same motions for the same process (usually under the direction of a different change group). They, too, are finding the “best solution” and bringing it back home, training a few people, and measuring impacts.

Over time, change is continuously occurring, but performance improvement doesn’t necessarily follow.

The problems here are actually quite simple to categorize, and we’ll examine each in more detail in the remainder of this chapter:

• There are disparate change groups.

• There is uncontained change.

• There is no standard change approach.

• The belief is that simple tools can fix the problems.

• There is a reliance on benchmarking to provide the solutions.

• Changes are not made based on data, or on the right data.

• Changes are made based on symptoms, not causes.

• Focus is on systems rather than processes.

• Focus is on people, not on processes.

• There is a lack of context for solutions, and in particular an unclear understanding of the Voice of the Customer (VOC).

• Solutions involve adding extra activities to the process (patching) instead of subtracting activities from the process (streamlining).

• Implementation is poor and limited in magnitude.

• There is little or no emphasis on sustaining the improved process, or control.

• There is confusion regarding the roles of management versus leadership.

To an objective observer, some of these issues are readily apparent. To those in the heat of battle delivering patient care, they are considered part of everyday life, are accepted norms, and are overlooked or aren’t perceived as the key issues to be addressed.

Let’s take a look at each in turn and how they can combine to have such a negative impact on future performance.

Disparate Change Groups

In most hospitals there are multiple change groups and modes in operation: local management-sanctioned change, clinical quality groups, nursing leadership, compliance groups, operations groups, senior executives, medical quality groups, and so on.

Each team formed is typically woefully underresourced and must fight to get meeting time (and often space), and multiple teams are often focused on resolving problems in the same target process. Project 1 can’t get team time because the team for Project 2, involving the same people, is meeting at the same time (sometimes even on the same process issues).

Quality improvement (typically limited to clinical quality) is managed separately from operational improvement. The quality organization is usually a disconnected silo, too often focused primarily on regulatory compliance, and often plays second fiddle to any operations group. Many a quality group has asked how it might get support from operations when it wants to run a project. Surely this is the tail trying to wag the dog. Would it not be more appropriate for the operations group to be frequently approaching the quality group in search of the skills and resources for operational improvement projects?

Quality groups instead spend valuable time canvassing to get the right people in the room and aren’t empowered to recruit the organizational manpower they need. Oftentimes they just shy away from the difficulty of getting an individual in the room at all and resort to “cubicle projects.”2

2. Where the project is progressed essentially in a vacuum in a quality group member’s cubicle.

Even in operations the functions are siloed, and broad-scoped, cross-functional change is difficult to come by. Take, for example, a typical emergency department, where ED staff function almost entirely independently of the registration staff working side by side in the same process. Due to this siloing of functions, operations, and change groups, change is made in a nonunified way and breakthrough changes (usually found in aligning the handoffs from function to function) are rare.

Uncontained Change

When change is made by so many disparate groups, it occurs in a nonuniform, uncontained, and often poorly understood way. Change is unmanaged, with one change overlapping the next, and the process is never allowed to settle. Deming3 gave a wonderful demonstration of rolling a marble around a funnel to hit a spot on the floor. By consciously trying to manipulate the dropping process to improve it, an operator only makes things worse (the operator is in fact merely adding variability to the process). It is not until the operator lets the process settle, and doesn’t add change after change after change, that the process begins to perform consistently and in fact better.

3. Dr. William Edwards Deming (1900–1993) was an American engineer, statistician, professor, author, lecturer, and management consultant and is considered to be one of the founders of business performance improvement.
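To make the funnel lesson concrete, here is a minimal simulation sketch (in Python, with hypothetical numbers) comparing a stable process left alone with one “corrected” after every drop, in the style of Deming’s Rule 2. Under these assumptions, the well-intentioned tampering inflates the spread by a factor of roughly √2:

```python
# Minimal sketch of Deming's funnel experiment (hypothetical numbers):
# compare leaving a stable process alone (Rule 1) with "correcting"
# the aim after every drop (Rule 2).
import numpy as np

rng = np.random.default_rng(0)
target, sigma, n = 0.0, 1.0, 10_000

# Rule 1: never adjust -- drops land with the process's natural spread.
rule1 = target + rng.normal(0, sigma, n)

# Rule 2: after each drop, move the aim by the negative of the last error.
aim, rule2 = target, []
for _ in range(n):
    drop = aim + rng.normal(0, sigma)
    rule2.append(drop)
    aim -= drop - target  # the well-intentioned "correction"

print(f"Rule 1 (leave alone) std: {np.std(rule1):.2f}")  # ~1.00
print(f"Rule 2 (tamper)      std: {np.std(rule2):.2f}")  # ~1.41, sqrt(2) worse
```

The tampered process is measurably worse even though every individual adjustment seemed sensible, which is exactly the dynamic of change upon change on a process that is never allowed to settle.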

If a process is constantly in flux, it is virtually impossible to get a pulse on how well it truly can perform, since any snapshot in time is essentially of a different process. Also, as performance does improve, it is very difficult, if not impossible, to understand which change the performance improvement can be attributed to and hence which changes to keep.

The greatest problem with uncontained change then occurs: a potentially well-performing process is overwritten with a poorer one. Uncontained change leads to no standardization or consistency across units or shifts or even individuals on the same shift, and the process literally just keeps changing and changing and often never truly improves. This phenomenon, aptly named “1-sigma churn,”4 is absolutely the norm across hospitals.

4. A term coined by Tim Tarnowski (at the time of writing with Indiana University Health) early in the Lean Sigma deployment at Columbus Regional Hospital.

No Standard Change Approach

Disparate improvement groups typically use different approaches to making change. Approaches differ between groups, they differ within groups, and often even an individual will use different approaches at different times. The driver for this is that there is almost always a clear understanding of the need for standards around operations, but that same understanding doesn’t extend to the need for accountability to follow a standard approach, or roadmap, in making change.

The problems this causes are manifold. It leads to inconsistent and therefore unpredictable timelines. Projects often start slowly with reasonable discipline but then have to accelerate when organizational patience runs out. Acceleration is synonymous with cutting corners and making decisions based on gut feel. Change agents have to “give it their best shot,” and decisions are assumptions at best. Also, due to the unpredictability in timelines, it is very difficult to predict future resource requirements, which leads inevitably to all kinds of project resource clashes later. It also makes it very difficult to understand the current status of projects, and without a good yardstick for progress, activity tends to just drag on. The statement “A conclusion is where people [the organization] got tired of thinking” is highly appropriate here.

The differences in approach can also cause frustration in individuals and between groups. Change Agent A is held to performing with a higher level of rigor and chided for not making progress quickly enough, whereas Agent B isn’t fettered with the same approach and brings change (not necessarily the right change, but change nonetheless) more quickly and is duly rewarded. This quickly drives an “us versus them” mentality between groups. The problem is exacerbated when trying to resource projects. The more disciplined group may find it difficult to get project teams together since their approach perhaps isn’t as exciting or just takes too long. People in general seem to prefer the adrenaline rush of “shooting from the hip” to the grind of working through the details. Consequently there is erosion over time of the disciplined approach and its credibility, which likely will have a larger and more insidious negative impact on the organization than just failed individual changes.

Tools Focus

The majority of people making changes in hospitals have never been formally trained in any improvement methods beyond rudimentary techniques. Complex change problems require something more than a flowchart or a quick team decision to resolve.

The ad hoc use of simple tools in projects is clearly better than using no tools at all. However, the approach misses two critical aspects of performance improvement. First, using tools independently without a systematic roadmap fails to illuminate the linkage between tools so that they build upon one another and advance critical thinking. Second, it fails to recognize the importance of an organizational infrastructure necessary at a program level to prioritize, align, and appropriately resource change.

By hiding behind the tools, change agents revert to the cubicle change model mentioned earlier. There is an unwillingness and an inability to challenge the more difficult issues, which are often the more important ones in the organization. When the infrastructure elements are not considered, change groups are disconnected from the strategic direction of the organization, and there is no real understanding of what is motivating leadership.

With this low-level thinking, change management is not considered a professional role. Projects are handed to untrained, inexperienced project leaders who, with no data-driven, systematic approaches available to them, in effect do little more than “wing it.”

Even when more advanced infrastructure-based change methodologies, such as Lean Sigma, come along, the thinking is more about just adding a few more tools to the toolkit versus truly embracing a more advanced, higher-performing model; the effect is essentially to neuter the methodology in the process.

Reliance on Benchmarking

Very few processes anywhere in healthcare are good from end to end. Admittedly there are pockets of good performance out there, but under scrutiny it’s generally found that the performance is due to the people involved, not the robustness, reliability, and clarity of the process. High performance is related to high-performing teams working extremely hard to maintain it. These teams often exhibit high stress, burnout, and high turnover. Once the team lead goes home or, worse still, leaves the organization, performance quickly returns to typical levels. Let’s face it—we’re working hard, not smart.

And yet, unbelievable as it may seem, healthcare organizations still choose to use the copycat approach as their most important concept ideation tool.

Benchmarking is seen as a solution to problems, and yet the benchmarking undertaken is often without the context of understanding the existing process, its customers, and its suppliers. It is also not often done with the depth of understanding required of the “better” process. This process may in effect be serving a different market, with a different volume and mix of patients, organizational setup, staff, and physicians, and yet it is lifted and copied as is (in a complete unit) to replace an existing process, which sometimes is better.

For some, a full-time role is to benchmark others and find “best practice.” The overlooked flaw here is that what might be best practice for others may not be for us. What is deemed an evidence-based answer is just that, an answer. The problem is that it might not be an answer to our question.

For some, the primary focus is to be a benchmarked organization. In the modern healthcare market it is in fact beneficial to be seen to be successful, which yet further propagates this activity. Whole conferences (very large ones at that) are set up to encourage sharing and testing others’ processes. As one patient I spoke with so succinctly put it, “Fine, but don’t test it out on me!”

For some reason this seems to be a particularly difficult truth to accept. In one prestigious health system I visited, a quality leader threw her hands in the air in exasperation and surprise that benchmarking isn’t the primary solution generator in more advanced change methodologies such as Lean Sigma.

This steadfast belief in the grass being greener on the other side of the street further drives the 1-sigma churn. For every new benchmarking conference, staff members bring back someone else’s process, overwriting again and again their own process without context or control.

Changes Are Not Based on Data, Good Data, or the Right Data

When first starting in process improvement in healthcare, one is generally and genuinely surprised at the sheer volume of data available, much more so than in any other industry. On closer scrutiny, though, it becomes apparent that the data and related measurement systems are often invalid or unreliable. For example, it is commonplace for emergency departments to measure length of stay (LOS) for patients. In practice, the LOS data collected represents only a fraction of the true duration from when patients arrive at the hospital site to when they leave (typically the captured measure runs only from registration to disposition). Similarly, when asked to provide data for leadership presentations, analysts often ask, “What do you want it to show?” In a recent surgery project, patient data was stored in 16 separate databases, none of which were in sync.

There is a lot of data, but not much valuable information.
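As a minimal illustration of the length-of-stay gap just described (the timestamps below are hypothetical), compare what a registration-to-disposition measure captures with the patient’s true door-to-door duration:

```python
# Hypothetical ED visit timestamps: the captured LOS measure runs only
# from registration to disposition, understating the true duration.
from datetime import datetime

visit = {
    "arrived_on_site": datetime(2015, 3, 1, 14, 5),
    "registered":      datetime(2015, 3, 1, 14, 50),
    "disposition":     datetime(2015, 3, 1, 18, 10),
    "left_department": datetime(2015, 3, 1, 19, 25),
}

captured = visit["disposition"] - visit["registered"]
true_los = visit["left_department"] - visit["arrived_on_site"]

print(f"Captured LOS: {captured}")  # 3:20:00
print(f"True LOS:     {true_los}")  # 5:20:00, two hours the data never sees
```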

With poor measurement systems and the resultant data they produce, it becomes very difficult to understand with any real confidence what drives process performance and subsequently what could make breakthrough change. With little in the way of supporting evidence, managers often believe they have to be the ones to come up with all the solutions, and usually no one will challenge them. Even if they were to make decisions based on the data available, the statistical validity would be questionable.

Simple Measurement Systems Analysis (MSA) studies on data systems thought to be robust quickly show a different picture. For example, in one hospital’s analysis of the charge capture and subsequent coding of cath lab procedures, it was discovered that coders were all in complete agreement with each other less than 10% of the time and even with themselves only 60% of the time.
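For readers unfamiliar with attribute MSA, the sketch below (with hypothetical coders, cases, and codes) shows the two questions such a study asks: does each coder agree with him- or herself on a repeat pass (repeatability), and do all coders agree with one another (reproducibility)?

```python
# Hypothetical attribute MSA: each coder codes the same cases twice.
# trials[coder] = one (first_pass, second_pass) tuple per case.
trials = {
    "coder_a": [("93458", "93458"), ("93459", "93458"), ("93454", "93454")],
    "coder_b": [("93458", "93458"), ("93458", "93458"), ("93455", "93454")],
    "coder_c": [("93458", "93459"), ("93459", "93459"), ("93454", "93455")],
}

def repeatability(passes):
    """Percent of cases where a coder matches his or her own repeat pass."""
    return 100 * sum(a == b for a, b in passes) / len(passes)

def all_coder_agreement(trials):
    """Percent of cases where every pass from every coder yields one code."""
    cases = list(zip(*trials.values()))
    hits = sum(len({code for pair in case for code in pair}) == 1
               for case in cases)
    return 100 * hits / len(cases)

for coder, passes in trials.items():
    print(f"{coder} repeatability: {repeatability(passes):.0f}%")
print(f"All-coder agreement: {all_coder_agreement(trials):.0f}%")
```

Numbers like the cath lab study’s (under 10% between coders, 60% within a coder) mean the measurement system, not the process, dominates what the data appears to say.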

Even when improvements are made, without good measurement systems (and therefore data) any change in performance is difficult to detect (due to being shrouded in measurement system noise), reliably verify, or attribute to the changes made.

Quite often it’s just the wrong data or the wrong focus. We’re simply asking the wrong question. A useful example here is one of a project leader trying to improve the access for pregnant women to prenatal education. By asking a number of times in succession, “Why do we care?” the true underlying problem is revealed.5 Mothers need better access to education prior to the delivery visit. Why do we care? Because if they are educated during delivery, they tend to forget things in the stressful environment and retention is not good. Why do we care? Because mothers need to be educated in how to care for themselves and their newborn. Why do we care? Because, after they leave the hospital, informed mothers can successfully prevent complications and avoid an unnecessary return. By digging in this way, the project leader recognized that the real goal (and hence the data required) related to the reduction in the number of unnecessary postnatal readmissions. By focusing on this as the needed data, the team managed to improve how the education was delivered and what was delivered as well as improve access to the education to ensure the best retention and subsequent care.

5. Described in Ian Wedgwood, Lean Sigma: A Practitioner’s Guide (Prentice Hall, 2006), Chapter 7.01, pp. 120–21.

Changes Made Based on Symptoms, Not Causes

The majority of metrics in healthcare are lagging indicators of performance, merely symptoms of the process rather than true process metrics closely tied to its real-time performance; examples include mortality, morbidity, ventilator-associated pneumonia (VAP) rates, falls, and employee engagement. Many metrics are composite metrics, made up of many drivers, for example, patient satisfaction and physician satisfaction.

When improvement (or decline) occurs in lagging or composite metrics, it’s very difficult, sometimes impossible, to relate it back to any changes made.

Finally, lagging data captured in the process is often used as a control for ensuring that the process consistently meets performance requirements. Such metrics are almost useless as control metrics, being captured monthly or even quarterly or annually when context is not available and not much could be done to react even if the cause were known. When trying to drive improvement in processes, if the measures used are just symptoms and not real process metrics, it’s just a matter of “track and hope” at best.
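By contrast, a true process metric can be put on a simple control chart and reacted to in near real time. Here is a minimal sketch, using hypothetical daily door-to-doctor times and a standard individuals (XmR) chart calculation:

```python
# Individuals (XmR) chart on a hypothetical real-time process metric:
# daily door-to-doctor times in minutes.
daily_minutes = [32, 28, 35, 30, 41, 29, 33, 38, 27, 31, 36, 30]

center = sum(daily_minutes) / len(daily_minutes)
moving_ranges = [abs(b - a) for a, b in zip(daily_minutes, daily_minutes[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR limits: center +/- 2.66 x the average moving range.
ucl = center + 2.66 * mr_bar
lcl = max(0.0, center - 2.66 * mr_bar)
print(f"Center {center:.1f} min, control limits [{lcl:.1f}, {ucl:.1f}]")

# A day outside the limits is a signal the team can investigate that day,
# rather than a symptom discovered in next quarter's report.
for day, x in enumerate(daily_minutes, 1):
    if not lcl <= x <= ucl:
        print(f"Day {day}: {x} min -- investigate now")
```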

Systems versus Processes

As in many other industries, staff in healthcare struggle with the differentiation between systems and processes. In simple terms, processes are “what” is (supposed to be) happening and “how” it occurs; systems are the things that support processes. Take, for example, a materials tracking system in surgery. The process is made up of the steps, triggers, roles, responsibilities, and skills to ensure that material physically moves from the dock through the hospital to the operating room and beyond. The system merely tracks what’s occurring in support of the process to ensure that the current state is reflected and understood at all times (as shown in Figure 1.1).


Figure 1.1 The relationship between process and system

When material is unavailable, it is therefore inappropriate to blame the system when what has truly failed is the process. Also, it is naïve to believe that “all these problems will be resolved when we install xyz system or upgrade to version x.x of the software.” The impact of this has been a painful lesson for a great many organizations that implemented an electronic medical records (EMR) system in the past few years. Likewise, it is misguided to believe that a systems-based approach to performance improvement will change the physical aspects of the organization’s processes. Such an approach tries to tackle certain symptoms house-wide all at the same time, when the needs, context, and organizational setup differ from process to process. An example might be the desire to improve communication. True, communication is important in many instances, but without a detailed understanding of the requirements of a particular trigger, the communication mechanism imposed may not be the best (or required at all).

Focus on People, Not on Process

Similar to systems thinking, there is also a tendency to think that solutions will be found in the people, not the process. A number of approaches, from improvement methods to communications methods, teaming, leadership, people management, time management, and so on, are undertaken with the naïve hope that these will change the fundamental physics of the process. Not surprisingly, without changing the processes, everything from a performance perspective effectively remains the same, barring minor short-lived incremental improvement. With no significant shift in performance, leadership over the process is often replaced (remember, the focus is on people versus the process), and the next installed leader adds his or her patches and tweaks (propagating the 1-sigma churn). This revolving-door practice is very common in the higher-stress areas of hospitals, particularly in surgical services and emergency departments. The churn is broken only when a leader has the insight or foresight to take apart the process.

This is not to say that people aren’t important in process improvement, but as will be described in Chapter 2, the initial focus should be on the physics and engineering of the process: the mechanics, activities, layout, triggers, flow, roles, accountabilities, and metrics. The softer elements of communication, teaming, and leadership will come into play once the fundamentals are in place. In effect, by focusing on the people first, we’re “coming in from the wrong end.”

Let’s imagine taking the people to one side. The process is what remains. If it is missing, disconnected, broken, misaligned, or flawed in any way, when we layer our people back onto it, we frustrate them and they have to become inventive to work around the process. Our most valued asset, our people, is successful despite our processes, not because of them.6

6. In Dr. Deming’s words: “85% of problems in performance can be attributed to poor processes rather than people. The role of the manager, then, is to change the process, not badger the people.”

Lack of Context for Solutions

As described in the previous “Disparate Change Groups” discussion, it is often difficult to get full stakeholder representation of the process together for a project, and hence a more localized approach to change management is undertaken. Also, with little solid data available on which to base decisions and with just simple tools at hand, versus a more advanced data-driven change methodology, decisions are often based primarily on gut feel. This is known euphemistically as “basing decisions on experience.” Managers (who, incidentally, are measured on making change) pull teams together ostensibly to implement a known solution (usually theirs), and any examination of data is done purely to provide grounds to do so. With this localized and biased viewpoint, little is done to gauge the potential impact of the solution, and even less is done to proactively examine beyond this one solution, let alone to explore the broader solution space.

Changes made on conjecture without context of any kind in high-stress environments are likely doomed to failure when glitches inevitably come along—the changes are not based on any form of evidence and thus are prone to a reversal of subjective opinion and support when things don’t go quite as planned. Any initial support quickly wanes, and the focus is on trying to find another solution.

Many symptoms described in this chapter play off each other. For example, as mentioned earlier, benchmarking without context is a common practice. New processes are brought in without an understanding of the old process’s needs or the new one’s capabilities. Similarly, as described earlier, focus tends to be on people and not the process, so any process context is lost when people are the primary focus. The same thing applies to the systems-versus-process discussion. If the focus is on the supporting system and not the process, the context of process understanding is not addressed, and changes are made without the foundation of understanding required.

Adding versus Subtracting (Patching)

In most industries there are process engineers, a professional role whose primary focus is to design operations processes from scratch, considering the needs of customers, linkages to suppliers, process activities, controls, and so on. This role is rare, if not entirely missing, in healthcare. Healthcare processes tend to evolve over time, and if very little is ever done to take them apart and streamline them, they grow ever more complex and unnavigable, forever being tweaked and added to.

To ensure the right level of performance, quality groups often take on a kind of “process police” role, belatedly tracking the symptoms and reacting when the process goes awry. As more and more is added to the process, the related burden of work content increases accordingly, and the encumbered staff find it harder and harder to focus on (or even see) the critical elements amid the process noise. When the focus is on people, patching occurs differently from unit to unit and shift to shift. This inconsistency, coupled with the higher complexity of an overburdened process, leads to decreased process reliability; that is, the same processes are executed differently between units, shifts, and personnel. Lower reliability in turn incurs extra policing, patching, and complexity, and the cycle repeats.

Simpler processes are more reliable. There is really only one process I can do reliably 100% of the time, and that is nothing.

Poor Implementation

Once a solution is identified through test-and-tweak thinking, implementation is left to unit managers, and rollouts are often no more than a single-shot communication of the concept. Subsequently, each unit manager is left to construct the detailed design. Such uncontained implementation leads to no standardization or consistency across units or shifts or even individuals on the same shift. The target process gets a watered-down implementation at best.

In such an environment, where reliance is on the personalities involved, it is very likely that physical changes, systems changes, education, and changes in orientation packages are not fully implemented, and little emphasis is placed on inclusion of customers, suppliers, and key process stakeholders.

With an informal approach to the rollout of any change, every unit’s processes are essentially different. This is readily apparent when implementing new information technology (IT) systems. The IT group is required to automate an existing, often flawed process, which varies wildly from unit to unit, is unclear, or often doesn’t exist at all.

Also, with the disparate change groups prevalent in the industry, each group typically doesn’t command enough resources to perform a robust rollout of a change. This is exemplified by the usual approach to rolling out new roles, which often involves an informal one-on-one verbal communication, not a carefully planned rollout of skill augmentation with the appropriate education, tracking of competency, and building of that learning into orientation and transfer procedures. Such an ad hoc approach to skills and role change almost always leads to differences in understanding of the changes across all those involved and hence considerable variation in the performance of the process.

No Emphasis on Control

Probably the most common questions and discussions in healthcare performance improvement currently are those centered on sustaining the gains from changes implemented. It appears that the vast majority of organizations are failing in this goal and much focus is placed here, from the numerous presentations at conferences to whole conferences targeting the subject.

The primary reasons for this failure are embedded in the failings of the traditional change methods described to this point, plus the lack of emphasis traditional change models place on control of the new process.

With multiple disparate change groups making change in the business, it is inevitable that a good process gets further tweaked by a subsequent change team. Individuals in the process are positively encouraged to tweak their processes (sometimes known as “simple tests of change”) without context or data to validate their actions, contributing to the 1-sigma churn described previously.

Even the commonly implemented PDCA change model itself is inherently designed to fail in controlling the process. The approach is one of cycles of PDCA with the clear intent to make change and then later return to the process to make further change. The process never settles, and the tendency (call it human nature if you will) is to assume that rigor in control isn’t really necessary because “we’ll be back around again here shortly.”

The people-versus-process factor is also a key element in failing to sustain. By relying completely on the individuals involved in the process to maintain new performance levels, rather than changing the fundamental physics of the process, staff either burn out trying to maintain the new process given their unreduced workload, or their focus simply gets diverted elsewhere. With no link of process metrics to personal accountability, it is inevitable that a process slides back to its original state.

With the divide between the quality group and operations groups, the sheer stress of pulling the team through the process to the point of implementation takes its toll, and the team, not given the task of placing robust controls, gets dragged on to the “next thing” before control is even considered.

With all of these inherent flaws in the change model, it is absolutely no surprise that change is not sustained; in fact, it would be remarkable if it were.

Management versus Leadership

An interesting difference between healthcare and other industries is the common confusion between the roles of management and leadership.

In healthcare there is often a department manager and a department director (the would-be leader). Oftentimes both are engaged in performance improvement, sometimes in alignment, sometimes not. Some days the director gets closely involved in the day-to-day running of the department; on others, not.

In other industries these roles tend to be more clearly defined: management is about tactics and consistency; leadership is about strategy and change.

The key sustaining role in healthcare is missing: both the manager and the director are measured on process improvement, that is, change. If the manager is not focused on consistency, the push for standardization is lost, thus propagating the differences between care units, shifts, and even within a shift. The role of managers should be to ensure that the process is performed in a consistent manner; their focus is thus on standardizing across the staff and ensuring that the staff’s skill base is grown to the necessary level.

Leaders, on the other hand, should not get too involved in the consistency of practice other than to hold the managers accountable for it. Leadership’s role is to create the vision, to manage and resource the change in a contained way toward that vision, and finally to ensure that the appropriate framework of metrics is in place.

In the traditional healthcare model where roles are mixed, control of change is lost and the standardization of practice doesn’t occur. Any performance improvement implemented thus fails to stick.

Summary

What should be quite obvious throughout this chapter is that the failings of performance improvement in healthcare are no comment on the people, but rather on the methods of change currently in use. People in healthcare, like people in other industries, are smart, hardworking, and creative—but unlike in most other industries they are laboring under change management systems that are antiquated.

The failings of the existing change system are obvious when written in black and white as here, but for some inexplicable reason this is overlooked and the methods are just accepted as the best approach.

The fundamental problem is not the difficulty of making change in healthcare; it’s the change model itself.