Friday, March 23, 2018

Achieving Excellence, Risk-taking and Agility

By Doug Brockway
March 2018

In a recent Q&A interview about the makeup of workplace excellence, Bill Wray, an accomplished banking (Washington Trust) and health insurance executive now with Trilix, a leading technology consulting firm here in Rhode Island—the biggest little state—laid out what excellence is and how to achieve it.

Wray describes workplace excellence as combining financial gain with personal and company satisfaction, plus risk management that protects the business from harm.  Initiating a recovery of, or a return to, excellence starts with the front line.  That’s where the key ideas are.  It’s also where on-the-ground achievement begets trust in the program and more progress. Achieving excellence requires the correct leadership culture, eventually established across all of management.  That culture includes the belief that one must improve to survive, knowing how to execute programmatically, and demonstrated personal participation in and commitment to the changes. A final homily:  be humble about what is and what you’ve achieved to date, and have the courage to achieve what could be.

In thinking about the business and project/program recoveries I’ve played a part in, I found that all of this rang true.  The same goes for those companies I’ve worked for where excellence was achieved or, in counter-examples, could not be achieved no matter how much exhortation there was to do so.

Bill Wray’s view that you need to believe you must improve to survive reminded me of behavioral studies described in Michael Lewis’ “The Undoing Project."  When hundreds of people were presented with a choice between a certain gain and a 50/50 chance at either a much larger gain or a total loss, most took the certain gain.  In contrast, if the choice was between a known loss and a 50/50 chance to either prevent any loss or suffer a larger one, people tended to take the chance [see below for more].  People are risk-averse when gain is assured.  They take risks when loss is likely.  Think “burning platform” or, “you need to improve to survive.”

“The Undoing Project’s” observation on risk-taking behavior shows up elsewhere.  For instance, Clayton Christensen wrote in "The Innovator’s Dilemma" and elsewhere about how many industries (disk drives and steel come immediately to mind) discarded low-margin businesses to easily achieve a “known gain” in net profits.  This tactic was encouraged and followed religiously, ignoring the “need to improve to survive.”  In both industries competitors filled the low-margin product gap, learned how to be excellent throughout the value chain, and ended up overtaking the risk-averse market leaders.

I am often reminded of a story that Gary Hamel used to tell in speeches about "Competing for the Future" (co-written with C.K. Prahalad).  Hamel told of running a strategy workshop with the top management of a technology firm, a chip maker if I recall correctly.  He stood in front of the room asking for ideas on how the firm could add products, enter new markets, and grow revenue.  He got a few.  Then he asked how they could be more efficient, cut costs, and eliminate waste, and the executives went on without end.

Among his points: management has to insist that it can productively talk about the future if it is to lead anywhere.  Firms that increase productivity, Revenue ÷ Cost, only by managing the denominator lose.  Succeeding firms don’t give up “potential competence.”  They need to be “future oriented,” to discover new products and markets.  “Firms have to challenge [existing] assumptions and embrace the curiosity of imagining a different future so as to discover unexploited opportunities that lie underneath those undiscovered or unsatisfied human needs.” They need a number of behaviors that are often summed up as “innovative.”

Also of interest from “The Undoing Project”: individuals’ appetites for risk are inversely related to the stakes.  This implies that a transformation-to-excellence program should be made up of many smaller efforts.  People will be more likely to perceive them as worth the risk.  Integrated, massive efforts will seem (and likely are) too big to succeed. This finding argues for making many, many incremental changes (as Agile, Lean, and Kanban all argue) instead of betting everything on one throw.
In using agile to build applications, approaches like the Scaled Agile Framework (SAFe) allow for managing portfolios of separate but related sprints and “release trains” toward a strategic end.  Recently, companies like John Deere have applied the agile concept of many incremental but holistically linked changes across the business to create “agile innovation.”

This article from Bain outlines how Deere and others use agile approaches to do two things: design breakthrough solutions to important customer problems and develop those solutions economically. It’s about design and development, and it must be tightly integrated and rapidly adapted to the direction and pace of market changes.

According to Bain, at John Deere the goal was to “think unreasonably big, work as iteratively and as small as practical, deliver faster than what’s been possible, adjust and adapt constantly.”  John Deere’s innovators target long-term disruptions that may require 5 to 10 years to fully develop and bring to market. They typically take about nine months to identify a new market opportunity, develop the basics of a solution that meets customer needs, and test the solution. Using agile techniques and a team of individuals who were already familiar with the principles, the company was able to compress this time frame by more than 75%.

Bill Wray suggests starting with the front line, with material and observable success, and building from there.  To make a complete effort, apply a systematic approach and ensure that the leadership culture celebrates the push to excellence.

Indications from “The Undoing Project” suggest that how the efforts, the projects are framed, how they are described, says much about peoples’ willingness to be innovative.  Keeping individual efforts smaller increases participation and confidence but having a method to integrate them is key to broad-based impact.

These factors lead me to think of approaches like the Scaled Agile Framework:  regular sprints, integrated into release trains with real product results, scheduled and resourced strategically by an executive team using Lean and Kanban methods. As Bain contends, many companies are applying agile methods at scale for general business innovation and success.

MORE on “The Undoing Project” – about making choices

Michael Lewis’ book “The Undoing Project” includes a summary of behavioral studies that Daniel Kahneman and Amos Tversky performed on how people make judgments and decisions.  They found that the way a question is framed says much about how much risk someone is willing to take.  For example:

Challenge 1 - Given that you start with $1,000, which would you prefer?

Gift A:  A lottery ticket that offers a 50 percent chance to win $1,000
Gift B:  A certain $500
Almost all people choose B. Turn the challenge around …
Challenge 2 – Given that you start with $2,000 which would you prefer?

Gift C:  A lottery ticket that offers a 50 percent chance to lose $1,000
Gift D:  A certain loss of $500
Almost all people choose C.  They take the bet. 

In the first example, to get a 50-50 split in the population you would only need to offer a certain $370 against the 50-50 bet to win $1,000.  When you turn it around, to a choice of losses, the certain loss would also have to be about $370. The two challenges present statistically identical choices, yet the bias remains: the sure thing in Challenge 1, the bet in Challenge 2.
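The equivalence of the two frames is easy to verify. A minimal sketch in Python, using the amounts from the challenges above:

```python
# The two challenges from "The Undoing Project," reduced to final wealth.
# If the distributions of final wealth match, the choices are identical
# and only the framing differs.

def outcomes(start, sure_delta, gamble_deltas):
    """Final-wealth outcomes of the sure option and the 50/50 gamble."""
    sure = start + sure_delta
    gamble = sorted(start + d for d in gamble_deltas)
    return sure, gamble

# Challenge 1: start with $1,000; a certain +$500 vs. 50/50 to win $1,000.
sure1, gamble1 = outcomes(1000, 500, [0, 1000])
# Challenge 2: start with $2,000; a certain -$500 vs. 50/50 to lose $1,000.
sure2, gamble2 = outcomes(2000, -500, [-1000, 0])

assert sure1 == sure2 == 1500              # same certain outcome
assert gamble1 == gamble2 == [1000, 2000]  # same gamble outcomes
```

Both frames leave you choosing between a guaranteed $1,500 and a coin flip between $1,000 and $2,000; only the description differs.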

This tendency, this preference, remains when the choices do not involve money.  For instance:
We are preparing for an outbreak of a disease that is expected to kill 600 people.  Two alternative programs have been proposed.  We can only do one:

Program A - 200 people will be saved
Program B – There is a one-third probability that all 600 will be saved; a two-thirds probability all 600 will perish
Most people choose the sure thing, Program A.  Turn it around:

We are preparing for an outbreak of a disease that is expected to kill 600 people.  Two alternative programs have been proposed.  We can only do one:
Program C - 400 people will die
Program D – There is a one-third probability that all 600 will be saved; a two-thirds probability all 600 will perish
Most people choose Program D, the chance to save all.

People did not choose between things.  They chose between DESCRIPTIONS of things. People are risk-averse when facing incremental gains.  They are risk takers when avoiding incremental pain or loss.

“The Undoing Project” also describes how people avoid loss more than they pursue gain, especially as the stakes increase.  If you ask someone to flip a coin for $1, no big deal.  Make the coin flip for $100 and you have to offer 2:1 odds.  Make it for $10,000, and the odds needed to entice most people are much higher.  People become more risk-averse as the stakes grow.
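A concave utility curve reproduces the direction of this effect. The sketch below is a hedged illustration, not anything from the book: it assumes log utility and an arbitrary $20,000 wealth. Log utility alone understates the aversion at small stakes (that gap is what Kahneman and Tversky’s loss aversion explains), but the required odds still grow with the stake:

```python
# For a 50/50 flip risking `stake`, find the win `g` that makes the flip
# acceptable under log utility:
#   log(w) = 0.5*log(w - stake) + 0.5*log(w + g)
# Solving gives g = w^2 / (w - stake) - w.

def required_win(wealth, stake):
    """Smallest win making the 50/50 flip acceptable under log utility."""
    return wealth**2 / (wealth - stake) - wealth

w = 20_000.0  # arbitrary illustrative wealth
for stake in (1, 100, 10_000):
    g = required_win(w, stake)
    print(f"stake ${stake:>6,}: need to win ${g:,.2f} (odds {g / stake:.2f}:1)")
```

At these numbers the required odds rise from roughly even money on the $1 flip to 2:1 on the $10,000 flip.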

Monday, March 5, 2018

Re-examining Project Risk

Beyond Size, Structure and Technology, today’s systems require management of integration and market risks

By Doug Brockway
February 2018
In September 1981 the Harvard Business Review published an article by F. Warren McFarlan on the “Portfolio Approach to Information Systems.”  McFarlan described three determinants of project risk: 
  • Size (how big an effort is this?)
  • Structure (how well are the objectives defined?), and
  • Technology (does the technology, and our understanding of it, match the task?)
In advance of implementation he advocated assessing project risk singly (should we build this system/app?) and as a portfolio (what does this do for us overall?).  He advocated adjusting project management approaches based on the nature of the project.

The model was developed in the days when almost all important systems were in-house, mainframe systems and when mini-computers, local networks, and inter-company systems were just beginning their breakout.  It has no awareness of the challenges brought by PCs, hand-helds, the Internet, Social, Mobile, Analytics or the Cloud (ISMAC). As a result, two more determinants of risk are added in this paper:
  • Integration (how do we orchestrate all that is part of a modern application ecosystem?)
  • Market (how do we penetrate and stay in the target market?)
Even so, the framework and its main points remain relevant. Here’s a synopsis with some updates.

Risk Determinants

McFarlan cites three determinants of a given project’s or portfolio’s risk:  size, structure, and technology.  There are at least two more forms to add: market participation risk and integration risk.  For each determinant, both its absolute measure and its measure relative to the company’s experience must be considered in evaluating risk.
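One common way to operationalize this is a weighted questionnaire score per determinant. The sketch below is purely illustrative; the five-way weighting and the 1-10 scale are my assumptions, not McFarlan’s published instrument:

```python
# Hypothetical weights over the five determinants discussed in this
# article; a real questionnaire would calibrate these from experience.
WEIGHTS = {"size": 0.20, "structure": 0.25, "technology": 0.20,
           "integration": 0.20, "market": 0.15}

def project_risk(scores):
    """Composite risk from per-determinant scores on a 1-10 scale."""
    assert set(scores) == set(WEIGHTS), "score every determinant"
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

demo = {"size": 7, "structure": 4, "technology": 6,
        "integration": 8, "market": 5}
print(f"composite risk: {project_risk(demo):.2f} / 10")  # 5.95
```

The point is less the arithmetic than the discipline: scoring every project the same way makes risk comparable across a portfolio.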


Size

The size of a project can be described through its dollar expense, the staff needed to develop it, the time taken, and the number of people and departments affected.  Projects can also be sized by the amount of “stuff” that is delivered. In McFarlan’s day KLOCs (thousands of lines of code) were also used, but rarely today.  Function points (there are at least five types) are most prominent.  Many companies count use cases.  In agile methodologies, teams evaluate the size of deliverables by analogy, refining estimates sprint-to-sprint based on the team’s known experience, cadence, and demonstrated “velocity.”
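The velocity-based approach reduces to simple arithmetic. A minimal sketch, with invented numbers for illustration:

```python
import math

def sprints_remaining(backlog_points, recent_velocities):
    """Estimate sprints left from demonstrated velocity (points/sprint)."""
    velocity = sum(recent_velocities) / len(recent_velocities)
    # Round up: a partial sprint still occupies a full sprint slot.
    return math.ceil(backlog_points / velocity)

# A 120-point backlog and three sprints averaging 20 points each.
print(sprints_remaining(120, [18, 22, 20]))  # → 6
```

The estimate self-corrects as each sprint’s demonstrated velocity replaces the plan’s assumptions.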

As a rule, a project with 25 people assigned is riskier than one with 5.  A project to be used by a department of 50 people is less risky than one used by 3,000 customers.  For a company experienced in projects affecting four to six thousand customers, a 3,000-customer project may be lower risk than a system built for a department of 250 by a firm whose average user group is 50.  Both absolute size and size relative to a company’s experience materially affect risk.

Many systems are still built in-house, but for even the smallest company externally facing systems are the rule.  In bringing them to life, companies must select and manage vendors as partners in building almost all significant applications and systems.  Here a company’s past experience with vendor selection at a given project size is a significant risk factor.  If you’ve only ever hired someone for 250-internal-user systems, hiring for a 3,000-external-user application raises the risk.


Structure

In McFarlan’s terminology, “structure” refers to the structure of project outputs.  In highly structured projects the outputs are defined completely from the moment of conceptualization.  Today, as then, such clearly stated designs and deliverables reduce project risk. Usually these are small projects or plain-vanilla projects to replace or build basic operational systems.

In contrast, on projects where the users cannot reach a consensus on what the outputs should be, where the outputs shift “almost weekly,” the risk goes up. In practical terms this means that any system of meaningful business impact carries moderate to high risk, and any larger-scaled project carries extra risk.  No matter how much analysis and use-case development are applied at the front end of a multi-month project, the definition of the optimal outcome will shift before the project finishes.  The longer the project, and the more directly it is used to conduct business, the further the actual need shifts from the initial design.

How systems are developed has changed somewhat.  Approaches like agile development and, when the project size is large, overarching concepts like “release trains,” Kanban, and the Scaled Agile Framework (SAFe) make it easier to keep delivering on well-structured projects while keeping the organization’s “design aim” tracking the moving target.

Agile breaks projects down into two-week sprints (iterations), each with a defined required output, each output accepted by someone from business management.  Sprints are small projects that are highly structured and, by McFarlan’s analysis, low risk.  If the needed deliverable requires more time than that, a “release train” is defined: a set of sprints, one following the other, producing the needed system in 6-12 sprints.  Each sprint has known deliverables. A working system or enhancement is the result.

As the broader project team and the business learn, the definition of later sprints in the release train tends to change over time but, importantly, once a given sprint starts its structured objective does not.  The risk is managed within the sprints.  Release trains are managed by business owners and IT architects to ensure that the overall objectives and the technical results match company strategy and need.  Using approaches like SAFe for larger or on-going efforts, the business strategy informs which release trains to build and, in a lean, Kanban manner, resource availability limits how many are in process concurrently.


Technology

Traditionally, companies focused on their internal experience with the technology being used.  As the project team’s familiarity with the technology decreased, the risk went up.  If you didn’t know the technology, you hired someone who did. In today’s ISMAC world there are so many technologies and trends that you need to hire someone who can hire someone who knows the technology.

For larger efforts you’ll need to hire someone to hire many someones to cover all the needed bases. And those bases, the underlying technologies, are changing faster than ever before.  A company’s absolute experience with the technologies drives risk, as does its absolute experience managing vendors in any given field.  And, as before, a company’s and industry’s relative experience with a technology, and a vendor’s relative experience, say much about the risk taken in pursuing a project or a portfolio.

Integration Risk (new)

Current systems are usually built by third parties, often in whole or in part in other countries, are as often as not an amalgam of systems, services, modules, and widgets, and frequently must interact directly with customers, suppliers, and a mobile workforce (especially the business-critical projects). These factors create an additional integration risk within and across a set of systems and ecosystems.

Into the 1980s, systems tended to support given business functions.  They typically stayed within the bounds of a given process or organization and rarely went outside it.  Virtually none were conceived and delivered as the business itself, the interaction with buyers, suppliers, and competitors.  In today’s world, some systems, apps, or applets are aimed at a known set of users in a known organization or set of organizations and markets. For these, prior experience is rich. Performance expectations are understood and regularly have been met. Such systems have a different risk profile than a revenue-generating product’s user experience.

Even targeted, small revenue-generating systems can quickly go viral and, in the process, break the system’s responsiveness and availability.  Specific skills and experience in technologies and vendors are needed to mitigate this potential. Conversely, efforts intended to be transformative can flop, and engineering insight is needed to design a system that can be seamlessly and economically scaled down.

A system of significant impact must often deal with multiple jurisdictions on intellectual property, personal data and related security, standards of customer service, rights of access and more.  A project with few of these issues and in known jurisdictions for the issues it has is far less risky than one with many, often in new jurisdictions, and where the growth of the consuming population is faster than planned. Certain vendors are prepared for this, others merely claim it.  The development team must be able to determine the difference.

Market Risk (new)

The original determinants remain relevant, but when McFarlan developed his approach to risk, systems were internally focused.  They supported business efficiency and, on occasion, had impact on revenues.  Today, systems are often an automated reflection of our business strategies. They are the method with which we interact with customers and suppliers, do transactions, and win in the marketplace. In cases like eBay or Amazon, the systems ARE the marketplace. As a result, 21st-century risks include issues around intellectual property, competitors’ activities, market disruptors, and so-called “whole product” challenges (see “Crossing the Chasm”). Many systems today define how, when, and where we do business and with whom.  Modern systems often carry market or business risks.

If a project is part of market-facing capabilities, or closely tied to them, it bears the “strategic risk” of becoming irrelevant if the business model doesn’t work out, the needed revenues don’t appear, or unexpected competitors arise.  Managing these risks requires participation, insight, and direction from the company’s resources that own or are responsible for defining and attacking markets.

Most current systems are designed with “compliance risk” issues in mind.  A lender checks to ensure all the disclosures are made.  A trucker checks to ensure that drivers get sufficient rest.  A chemical company reports that all outputs, useful and waste, are accounted for.  This changes the makeup of teams (they require people versed in relevant regulations and laws, current and likely), changes the testing that must be done, and adds to future maintenance and enhancement obligations.

Because modern systems are either the business itself or directly enable it, they bring scrutiny regarding “operational risk.”  Contingency plans for what to do if the system doesn’t scale, or is sporadic in its performance or availability, are more urgently needed and more stringently examined.  These factors increase the need for efficiency in design.  As companies pursue total quality initiatives, quality must be designed in, which leads to changes in development methods and team composition.

Lastly, by being part of the product, modern systems bear “reputational risk” and reputational hurdles.  If a competitor adds a useful feature to its e-commerce experience, others must follow suit.  You can’t enter a market if you can’t reasonably mimic the features and functions provided by the dominant player; Amazon in retailing is the most obvious example. In judging a market-facing project or portfolio’s risk, you must judge your ability to play at all, and for the long haul.

Assessing and Managing Risk

Many companies conduct reviews of projects wherein the IT and business managers independently evaluate the project and then discuss how to improve next time. They also develop questionnaires to be used as part of deciding to fund a project.  In all cases keeping and analyzing the history and trends of risk scores and results is key to developing customized knowledge of how your company/group deals with project risk and what to do about it.
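As a hedged sketch of that history-keeping, suppose each finished project’s pre-launch risk score and outcome are recorded (the field names and numbers below are invented for illustration); even a simple comparison starts to calibrate your questionnaire:

```python
from statistics import mean

# Hypothetical project history: composite risk score at approval time
# and whether the project ultimately delivered on schedule.
history = [
    {"project": "A", "risk": 6.5, "on_time": False},
    {"project": "B", "risk": 3.0, "on_time": True},
    {"project": "C", "risk": 7.0, "on_time": False},
    {"project": "D", "risk": 4.0, "on_time": True},
]

def avg_risk(records, on_time):
    """Average approval-time risk score for a given outcome."""
    return mean(r["risk"] for r in records if r["on_time"] == on_time)

print(f"avg risk, on-time projects: {avg_risk(history, True):.2f}")   # 3.50
print(f"avg risk, late projects:    {avg_risk(history, False):.2f}")  # 6.75
```

A gap like this between the two averages suggests the scoring is predictive; no gap suggests the questionnaire needs rework.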

Systems management suites often have risk assessment components.  An ITIL v3 Change Management process must evaluate an array of risks before it accepts any change, and supporting software has support for this. These risk assessment modules should be customizable to the kinds of risk you intend to measure and manage.  If not, you’ll need to decide whether to develop a process outside the tool.

Many project management techniques and methods assume waterfall development:  business case, business design, technical design, coding, testing, implementation.  This was, essentially, the only way systems were built when McFarlan did his study, which lists sixteen methods of either planning projects or integrating the efforts and expectations of delivery and user teams.  Items like milestone phases, systems specification standards, and post-audit procedures are widespread now, as are user project managers, steering committees, progress reports, and user ownership.

Often better than waterfall are DevOps and agile approaches. A delivery team that practices DevOps fully produces completed, debugged, implementable code as it goes.  This has an enormous, beneficial impact on project and cumulative portfolio risk. Any organization that is properly using agile approaches has “product owner” reviews of deliverables at the end of each sprint and a team retrospective.  Together these establish whether the output met specifications, whether the process worked properly, and what will be done to improve both.  As sprint follows sprint, these retrospectives have a Kaizen-like effect, continuously reducing risk.  Similar retrospectives occur during, and more exhaustively at the end of, release trains.  Many companies are having broader success applying Scaled Agile-like frameworks across release trains, programs, and strategic initiatives.

Warren McFarlan’s model still resonates.  If you only ensure that you measure and manage project size, structure, and technology, you will have made important strides in project and portfolio risk.  But today’s systems have a wider array of risks to manage.  Understanding the ones you face, and developing an analytic sense across projects of risk trends and mitigation success, will lead to better systems in place and the confidence to do more impactful things in the future.

Monday, February 23, 2015

State-by-State Doctor Shopping Prevention Silos

By Doug Brockway
February 23, 2015

Prescription Monitoring Programs (PMPs) as currently designed and governed cannot look beyond state lines and thus cannot stop doctor shopping across them.

Because PMPs do not operate across state lines, multiple prescriptions can be written in one state and filled a very short time later in another state with no-one the wiser.

In the North East this is a big problem.  In Eastern Massachusetts, where I am at this moment, I can be in New Hampshire in 20 minutes, in Rhode Island in less than an hour, and in Connecticut, Vermont or Maine in less than two hours.  New York State is less than three hours away.

But even in the middle of the US, the lack of inter-state controls over doctor shopping endangers us all.  Take St. Louis, for example.  According to the firm Inbound Logistics, items shipped by truck from the bi-state region reach 70 percent of the U.S. population within 48 hours. A doctor shopper can do the same.  A doctor shopper with an airline ticket is faster.

Of all the reasons that PMPs cannot fully respond to doctor shopping, this one requires inter-state cooperation.  It requires that an easy-to-use, fast method be developed for a doctor or pharmacist to see all the PMP records of an individual.

This means a cross-state method of identifying patients (for personal security reasons it cannot be the social security number) and a way to use that identifier to collect from and deliver doctor shopping relevant information to doctors and pharmacists. 

The credit card networks do this today, and a similar capability exists as an overlay for existing PMPs.  Solutions like these are needed to integrate PMPs well enough that interstate doctor shopping is slowed, if not prevented.

Thursday, February 5, 2015

In Opiate Abuse Prevention, PMP Timeliness is Godliness

By Doug Brockway
February 5, 2015

PMP data are collected monthly, or at best in some states weekly.  This is not sufficient to capture diversion, which can be done intraday with patients getting prescriptions from multiple doctors and filling them at multiple pharmacies in very short periods of time.

An enterprising “patient” can visit a doctor and get a prescription for a pain in the back, leave that office, go to another doctor, perhaps just doors down the hall, and get a prescription for a pain in the leg, and so on.  Perhaps a more frequent situation is the phantom patient who approaches a series of emergency rooms and clinics in an attempt to gain medication for phantom pains.

Then, or alternately, the enterprising patient can visit many pharmacies, a Walgreens at 10:00 AM, a CVS at noon, each with a different prescription, and keep doing this for some time, at least a week, before being found out.

Even if one believes that the only role of PMPs is aiding physicians in properly advising patients regarding their use of opiates there is something of a functional gap in the systems.  By not collecting activity in real-time the PMPs are forced to assist doctors and pharmacists with incomplete and thus inaccurate data.

From a process engineering perspective, the penalty we all pay is not just that we miss the opportunity to stop the last attempt at diversion, the one that allows a patient to take the fateful step into addiction.  By allowing any such attempts, because we haven’t made data collection (and use) real time, we have permitted a diversionary environment.  This helps make PMPs, as currently implemented, inadequate to the tasks of preventing doctor shopping and diversion.

Friday, January 30, 2015

Closing the In-state PMP Loop

By Doug Brockway
January 30, 2015

A key weakness of Prescription Monitoring Programs (PMPs) is that they do not capture and immediately make use of the act of writing a prescription. There is no PMP record of a prescription as it is written.  With PMPs, data are collected only by the pharmacist, for prescriptions filled.  The pharmacy often has a week, usually more, to enter the data.

As a result a PMP cannot detect potential diversion prior to dispensing.  One doctor hears you have back difficulties, the next is told that your shoulder hurts.  Each can write you a prescription without knowing of the other’s action. You can see doctor after doctor for at least a week before anyone notices. 

If in your state the pharmacist has more than a week to submit information on a prescription, the window for mischief is wider. Sometimes the doctor shopping is done by an individual trying to satisfy an addiction or generate some cash.  More insidious are the diversions done by coordinated groups intent on the resale of prescription pain killers to the innocent and the unsuspecting.

As noted elsewhere, PMPs are not notable for their user experience.  They cause significant process delay even when used only for look-ups by doctors.  What is needed is a very efficient-for-the-user method to capture prescriptions in real time and feed that data to a central database, presumably a PMP, for use in closing the in-state PMP loop.

This means either a wholesale re-write/replacement of PMPs or the use of a “surround strategy”: putting a superior data collection capability, a “wrapper,” around the existing systems[1].  If this is done, PMPs will be far more suitable to our goals of opiate abuse prevention than they are now.

[1] Capabilities of this sort, CMS-tested, are available.

Thursday, January 29, 2015

Design Prescription Monitoring for Operability

By Doug Brockway
January 29, 2015

A key reason that Prescription Monitoring Programs (PMPs) are inadequate to the task of preventing opiate abuse is a matter of consistent, enthusiastic participation and use.  People flock to technology-based products or services with superior user experiences.  The Apple suite is a commonly cited example that takes advantage of what is known as “design thinking.”  In contrast, PMPs are user-experience clunkers, designed for data analysis, not for quick, easy, universal use in an active medical office.

When considering whether to prescribe prescription pain killers, doctors and pharmacists are obliged by PMPs to manually examine a database. According to recent testimony from the Massachusetts Medical Society (MMS), each lookup takes 3 to 7 minutes. This doesn’t sound like much unless you’re in the middle of a busy day at a medical practice, or in an emergency room, or any one of a number of situations where speed is important.

It’s as if in your day-to-day life when buying clothes at a store the sales clerk stops the sale to check your credit history.  They’d have to be sure you’re the person referred to on the screen, so they’d ask you a set of authentication and validation questions, then they’d look at your credit history and make their decision whether you’re able and willing to pay for those pants.

Instead they swipe a card (or use a mobile service like Apple Pay) and all that is done in seconds by an independent, objective, consistent third party. If on-average each retail purchase was 5 minutes longer than it is today sellers and buyers would be unhappy, grumpy, and un-cooperative with the process and each other.

For many uses of pain killers, the 5-minute investment in the PMP is easily seen as too burdensome. The hospice setting is one.  So are small-amount prescriptions, especially for patients well known to the physician. Emergency care, many inpatient settings, and so-called “immediate treatment” might also fall into this category.  Policy makers and the medical community spend much time discussing how to manage the efficiency of the process for such situations.

A process engineer will tell you that carving out exceptions is not advisable.  You want to collect all uses of pain killers and do your analyses from there.  You can’t, and shouldn’t, presume to know where the patterns are.  That same process engineer will tell you that putting a multi-step, multi-minute process in front of all data collection will sink it under its own weight. This is a big part of the reason that, in the case of retail, the data collection is just the swipe of a card.  For PMPs to be widely used and widely accepted in all prescription writing and filling, they need a similar capability.

Monday, January 26, 2015

Prevention Effectiveness Requires Transactional Consistency

By Doug Brockway
January 26, 2015

As discussed in this related post, the minority of doctors and pharmacists who are consciously and intentionally involved in the diversion of prescription drugs will not turn themselves in based on what is shown in a PMP database.  On the other side of the transactions are patients who intentionally take advantage of inconsistencies in our prevention systems and processes to gain access to drugs they should not have.  These are the doctor shoppers that PMPs are trying to stop.

Key goals of PMPs include the ability to review a patient’s prescription history, avoid duplication of drug therapy or possible drug interactions, and enable appropriately coordinated care across providers. But PMP data are subject to errors of interpretation by myriad physicians and pharmacists, each applying individual judgment with different levels of diligence each time. This contributes to PMPs, as currently engineered, being inadequate to the task of preventing opiate abuse.

One key issue with PMP data in this regard is that there are spelling errors, keying errors, and missing information.  The PMP may have data for John Q. Public, John Public, J.Q. Public, and more.  The underlying records for the patient are not likely the same.  In systems security terms, the physician must authenticate and verify the identity of the patient, decide which records to use, and then make a series of interpretations before deciding whether a patient is at risk for doctor shopping.
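The name-variant problem described above is a record-linkage problem. As a hedged sketch, Python’s standard-library difflib is a crude stand-in for the probabilistic matching a real PMP would need; the threshold and names here are illustrative only:

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.6):
    """Crude string-similarity match; a real system would use
    probabilistic record linkage, not raw character overlap."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Variants of one patient plus an unrelated record.
records = ["John Q. Public", "John Public", "J.Q. Public", "Jane Doe"]
query = "John Q Public"
matches = [r for r in records if similar(query, r)]
print(matches)  # the three John Public variants; Jane Doe is excluded
```

Even this toy version shows why the physician is left making judgment calls: the candidate set must be assembled before any interpretation of the history can begin.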

Another key issue is the delay, or “float,” in PMP data.  The most ambitious states require data to be submitted by pharmacists within a week of fulfillment; most are less strict.  When a patient who is not doctor shopping visits a doctor, the visit will most likely not fall within that one-week window, so their PMP record will be up to date.  A doctor shopper is more pressed by time and will see many providers within a week; their history of opiate use will be out of date.

Combined, these weaknesses create a system that either is not repeatable in its application or, since it uses incomplete data, is repeatably evaluating incorrectly.  We need a system and process that is both repeatable and accurate.

Having a doctor or pharmacist interpret PMP data to evaluate if a patient is doctor shopping can be viewed as akin to a clothing salesperson or store manager interpreting your credit history to see if you should be able to buy that sweater you like.  The same salesperson will react differently to different customers, even to the same customer at different times and under different circumstances.  In the case of retail the basic analysis is done via computer separate from and agnostic to the sale, not by the person doing the selling.  Crucially, this is done in real-time with the swipe of a credit card or, more recently, with the presentation of an Apple Pay or similarly enabled mobile payment device.  Real time is important because if the checking is too time consuming and burdensome then the checker will often put in a less than stellar effort.

We’re all familiar with and depend on second opinions from doctors when we seek care.  We want to be sure that all possible diagnoses and prescriptions are considered and the best course of action is available to us.  A patient who is doctor shopping also relies on this variance, but for dark purposes.  They want to use or sell pain killers. These doctor shoppers use the inconsistencies in doctors’ attention, thoroughness, and interpretations of the need for care to see multiple doctors until they get the outcome they want: additional opiates.  PMPs will be more effective when they tighten these loopholes, make data (and thus interpretations) more consistent and up-to-date, and provide basic analysis of PMP data in real time at the point of care.