
Recommendation Phobia

There’s a condition suffered by many CARF accredited organizations and well known to those of us who help those organizations get accredited – CARF Recommendation Phobia. It can strike even the most seasoned organization. The symptoms include:

  • Allergic-like reactions to a CARF surveyor using the word “Recommendation” during a survey, typically followed by near-endless arguing with the surveyor.
  • Comparing recommendation counts with other organizations (e.g., “how many did you guys get?”).
  • Contracts with surveyors that specify the maximum number of recommendations the organization can receive.

While there is no cure yet, there are promising treatments now available to all organizations! The first and most common treatment is a form of talk therapy in which the therapist (or consultant or CARF employee) gently reminds the organization that recommendations are actually code for “opportunities for improvement”. This therapy generally works well for mild to moderate presentations of the phobia.

The “opportunities for improvement” therapy can be combined with another commonly used psycho-education treatment where the therapist (or consultant or CARF employee) educates the organization about the fact that there is only a loose correlation between the number of recommendations and the final survey outcome. Recommendations, when written, often combine multiple elements of a standard, and their importance can vary dramatically in terms of how they impact survey outcomes. Put another way, you can get a lot of recommendations and still receive a full three-year accreditation because recommendations are – you guessed it! – opportunities for improvement.

The final treatment approach, generally reserved for more severe presentations of this heinous condition, is called “just get over it”. This treatment should be delivered carefully but firmly by a trained and trusted professional able to competently deliver the appropriate dose. Although related to the other two treatments, it is intended to address the underlying anxiety that is caused by having a neutral third party point out where the organization could do better. A common phrase used during the delivery of this treatment is “it’s just a recommendation”. Another phrase that can be used in instances where borderline or questionable recommendations are given by a survey team is “accept that both CARF and the surveyors can and will get it wrong”, preceded or followed by “get over it”.

If your organization or an organization you love is suffering from this condition, don’t hesitate to reach out to us for help! Our consultants are highly skilled at delivering all of these treatments and are available to help.

Succession Planning

Succession planning in the context of global recruitment and retention challenges across health and human services is difficult, to say the least. According to Mercer’s study, the United States alone will need to hire 2.3 million new healthcare workers by 2025 to keep up with the population. Unfortunately, CARF’s succession planning standards don’t make things any easier. So let’s walk through them and arrive at a common sense approach.

CARF addresses succession planning in four different standards within Section One (ASPIRE) of all standards manuals. The first reference shows up in standard 1.A.3.m, requiring that ‘identified leadership’ guides succession planning for the organization. Although the intent statement for that standard describes some of the ways organizational leadership might guide the process, there is no clarity around what, if anything, needs to be in writing.

Succession planning shows up again in standards 3 and 11 of the Workforce Development and Management standards (Section I). Standard 1.I.3 includes succession planning as one of seven areas that the organization’s workforce planning should include. The intent statement says that succession planning should identify actions to be taken by the organization in the event that key staff members are unable to perform their duties due to a wide range of possible reasons. So the focus here is on a plan of action in response to loss of service. However, the standard does not require that anything be put in writing. Standard 1.I.11 provides the most detail regarding expectations, outlining seven specific aspects or elements that succession planning should address. These include identifying key positions and their competencies; reviewing future needs, current talents and readiness of the organization’s workforce; and conducting a gap analysis and strategic development. While the level of detail suggests that this would need to be documented, documentation is not currently required to meet this standard.

The final reference to succession planning in the standards, and the one and only place where documentation is required, is governance standard 1.B.5.b. It requires an executive leadership succession plan that is to be reviewed at least annually and updated as needed. However, governance standards remain optional for CARF accredited organizations. In short, CARF’s requirements regarding succession planning are overlapping and don’t offer a clear roadmap for organizations.

So what’s a common sense approach? Given the level of detail outlined in the standards and associated intent statements, fully meeting these standards without some documentation would be difficult. And while documentation other than a formal plan (e.g., policies/procedures, management meeting minutes, supervision records and employee performance evaluations) could theoretically meet the standards, that could be difficult and complex to manage. It makes sense to develop a written succession plan for key positions regardless of whether you are applying the Governance standards. One or two pages would suffice. The focus should be on the elements in standard 1.I.11 and leadership should drive the development process. If you’re looking for some helpful hints, consider this list of the Top Ten Best Practices for Succession Planning from the insurance and consulting firm Gallagher. And don’t hesitate to reach out to an ACG consultant if you need help!

Goals! Goals! Goals!


Topic Area: Evaluation

I’ve gone into a few rants over the past few months about the use of client goal achievement or goal attainment (loosely described) as a measure of program outcomes. So I decided it’s time to move from ranting to writing!

The overwhelming majority of programs and services I’ve been involved in evaluating over the past several years use some form of client goal achievement to measure their success. I get the attraction. It’s a ‘two-fer’ for many programs! Staff have to define goals for the work they do with clients as part of case management and program accountability expectations (e.g., accreditation), so why not get some extra mileage by using them for outcomes measurement? But the devil is in the details. Most of the programs I’ve worked with have taken advantage of software that has some form of goal scaling built in. Many software programs (and most in-house solutions) simply require users to indicate whether a goal has been fully achieved, partly achieved, or not achieved at some point in time after the goal is set. Some provide an opportunity to indicate why it was achieved or not achieved. There are few (if any) parameters around what achievement means or what a reasonable timeframe for full achievement might be. The system then produces a report counting how many goals are achieved (or not) and links that to program-level outcome statements based on categories of goal type that the worker chooses when entering the goal.

So what’s wrong with all of that? To begin, there are a lot of untested assumptions built into that approach. For example, it assumes that all goals are roughly equal in terms of their importance and the amount of time or effort required to achieve them. My experience in working with clients to set goals is that they often aren’t equal. This approach also assumes that all goals have a direct and meaningful link to the program’s goals. The problem here is that goals can often be small stepping stones towards some larger end. So, even if we trust that these individual goals bear some connection to the program’s goals, we end up counting several ‘successes’ (or failures) rather than simply counting the achievement of the real change or benefit we’re hoping for. And in the end, are those successes or failures a true reflection of our efforts and the efforts of our clients?

Using client goals to measure program success could also have unintended consequences for how our staff practice.  By counting up the number of goals that are achieved or not achieved, we send the message to staff that this highly personal process has meaning at another level – evaluation of whether the program is working or not.  The unintended consequence could be that staff focus their efforts on what is easily achievable (i.e., the low hanging fruit).

A good friend and colleague of mine often reminds me of an important principle in measuring program success: ‘measure me’. In other words, measure whether I, as a whole person, benefited from the program. Reporting on the percentage of goals that are achieved or not achieved is different than reporting on the percentage of clients who experienced a positive change in their life. Somewhere in that mess of goals are numerous clients, each with one or more goals of differing importance or significance, usually reflecting many steps towards some desired end. A good evaluation system should be able to measure and report on the changes that each unique client experiences.
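To make the ‘measure me’ distinction concrete, here is a minimal sketch with invented data (four hypothetical clients and their goal results) showing how a goal-level rate and a client-level rate can tell very different stories:

```python
# Illustrative only: hypothetical clients, each with a list of goal
# results (True = achieved). The data and the success rule are invented
# for this example, not taken from any real program.
clients = {
    "A": [True, True, True, True],   # many small goals, all achieved
    "B": [False],                    # one significant goal, not achieved
    "C": [False],
    "D": [False],
}

# Goal-level view: what most goal-counting reports produce.
total_goals = sum(len(goals) for goals in clients.values())
goals_achieved = sum(sum(goals) for goals in clients.values())
goal_rate = goals_achieved / total_goals        # 4 of 7 goals ≈ 57%

# Client-level view: count a client as a "success" only if they achieved
# at least one goal (a deliberately simple stand-in for "benefited").
clients_benefited = sum(1 for goals in clients.values() if any(goals))
client_rate = clients_benefited / len(clients)  # 1 of 4 clients = 25%
```

Here a majority of goals are achieved even though only one of the four clients benefited, because one client’s many small goals dominate the count.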

The good news is that there are university-tested and validated approaches to measuring program-level outcomes through client goal achievement. These approaches, usually referred to as Goal Attainment Scaling (GAS), are more rigorous and require staff training. They are able to produce a standardized score for the individual that accounts for variation in the number of goals that clients have chosen to work on. They also define clear time limits and parameters for goal achievement. It is unfortunate that the versions I frequently see used are not based on the Goal Attainment Scaling model.
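For readers curious what that standardized score looks like, the classic Kiresuk-Sherman T-score formula behind GAS can be sketched in a few lines. The -2 to +2 attainment scale and the conventional inter-goal correlation assumption (rho = 0.3) come from the GAS literature; this is an illustration, not any particular vendor’s implementation:

```python
import math

def gas_t_score(attainments, weights=None, rho=0.3):
    """Kiresuk-Sherman Goal Attainment Scaling T-score.

    attainments: per-goal scores on the standard -2..+2 scale,
                 where 0 means the expected level of attainment.
    weights:     optional per-goal importance weights (default 1 each).
    rho:         assumed average inter-goal correlation; 0.3 is the
                 conventional default in the GAS literature.
    """
    if weights is None:
        weights = [1] * len(attainments)
    numerator = 10 * sum(w * x for w, x in zip(weights, attainments))
    denominator = math.sqrt((1 - rho) * sum(w * w for w in weights)
                            + rho * sum(weights) ** 2)
    return 50 + numerator / denominator

# A client who exactly meets the expected level on every goal scores 50,
# no matter how many goals they set -- which is exactly the property the
# simple "count the goals achieved" reports lack.
print(gas_t_score([0, 0, 0]))  # 50.0
```

Because the denominator grows with the number of goals, a client working on many goals is not automatically scored higher (or lower) than a client working on one.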

The bottom line?  Goal planning is, and should be, a highly personal affair. Done correctly and with thoughtfulness, it is a fluid and reflexive process that grounds our day-to-day work. The fact that a goal isn’t achieved may be a good thing – perhaps a turning point in the client coming to terms with what their capacity is, or our staff realizing that they’re barking up the wrong tree.  Likewise, the achievement of a goal may have had little to do with our efforts. Some things simply improve with time and sometimes people get better or solve their problems despite us! Adding up the results of this highly personal and reflexive process in the belief that it tells us something about program outcome achievement is problematic unless you take the time to build a very rigorous process. In the end, no system of outcomes measurement is perfect.  All approaches have their pitfalls.  But if you choose to use goal achievement, make sure you use a reliable and valid approach and provide training and support to staff so that they use it correctly and they understand that not achieving a goal can be a good thing!

Your success is our success!


The Accreditation Consulting Group (ACG) is proud to announce that in 2018 we worked with approximately 10% of all organizations seeking their initial CARF accreditation. All of those organizations achieved a positive accreditation outcome, saving money by not requiring another survey the following year and saving staff time and expense through efficient preparation for the accreditation survey. ACG worked with both large and very small providers across the various divisions of CARF. Fees and services have been customized to meet the needs of organizations seeking accreditation in order to be cost effective.

Why Hire a Consultant if You Had a Three-Year Accreditation Last Time?

Many organizations still hire a consultant even though they have been successful in the past with their CARF surveys – even when the last few surveys have resulted in a three-year accreditation outcome. The form of consulting under these circumstances is typically short-term. It may make sense to do a mock survey in order to get ready. In other cases, it may just be a matter of a few hours of consulting to verify that changes that have occurred within the organization since the last CARF survey still meet the standards.

The purpose of CARF is to emphasize and encourage best practices within the operations of the organization as they relate to both business practices and service delivery. In some cases, these practices are developed through the creativity and ingenuity of the organizations themselves. Other times, best practices are taught at conferences or through professional relationships with other providers. Another excellent resource is CARF consultants and CARF surveyors. These professionals have visited many providers and have witnessed firsthand practical and effective ways to do business and meet the standards.

Another reason an organization might hire a consultant even though it is a strong organization is that the standards change each year. That is the result of the ever-changing demands and expectations placed on providers. As new concepts are introduced, they are integrated into the CARF standards.

In many cases, there has been turnover in key positions between surveys. This is especially true when there has been a change in leadership. The new replacements may not be aware of the CARF standards or may not be able to anticipate the type of questions that might be asked during the survey. Practice interviews will help prepare these individuals and give them confidence.

During the survey itself, it is sometimes advantageous to have a consultant available to answer questions as they arise. It is not uncommon for there to be miscommunication between what the CARF surveyor is seeking and what an organization thinks they are looking or asking for. Such misunderstandings can easily be addressed, thus avoiding a potential recommendation.

At the Accreditation Consulting Group (ACG) all of our consultants are currently CARF surveyors who each have at least 10 years of experience as a surveyor.  They are excellent resources for examples of best practices and are knowledgeable of efficient ways of meeting the standards. 

CARF Appeal Process


There are times when an organization receives an outcome of less than three years. In some cases, the organization disputes an accreditation decision that resulted in a one-year or non-accreditation outcome. Other times, the decision was very close. CARF has a very detailed appeal process that addresses such situations when the organization feels the outcome is incorrect. If your organization finds itself in this situation, it is highly advisable to hire a consultant to guide you through the process.

Appeals must be made in writing within 30 calendar days of the date of the accreditation letter. If the appeal is of a one-year accreditation outcome, a resurvey will be scheduled within 60 days of receipt of the written request. If the appeal is of a non-accreditation outcome, then the review will be scheduled within 30 days of receipt of the payment for the resurvey. For more specific information about timelines and CARF’s policies regarding appeals of disputed decisions, please refer to the first section of the standards manual.

One of the first things the consultant will do is review your survey report to understand why the survey team recommended a particular outcome. The consultant will pay particular attention to the paragraph that outlines the rationale for the decision, as well as the types of recommendations that were made. Not all standards are equal. Certain standards are more important than others, especially when it comes down to the health and safety of persons served and whether they are benefiting from services. A consultant can best guide you and give you feedback on what your options are and the potential to change the accreditation outcome.

If you make the decision to appeal an accreditation outcome, then the consultant will let you know what your course of action ought to be in order to potentially change the outcome.  It is important that an organization aggressively addresses areas that were of concern and the cause of the accreditation decision.  The consultant will guide you with these activities so that they are targeted.  Examples of forms, plans and policies may be provided. 

It is generally cost effective to hire a consultant and appeal a decision if the outcome was close and if changes can be made to address recommendations. The best approach is to talk to a consultant as soon as possible and seek an opinion on whether your situation might result in an improved accreditation decision.