There’s a condition suffered by many CARF-accredited organizations and well known to those of us who help them get accredited: CARF Recommendation Phobia. It can strike even the most seasoned organization. The symptoms include:

  • An allergic-like response to a CARF surveyor using the word “Recommendation” during a survey, typically followed by near-endless arguing with the surveyor.
  • Comparing recommendations with other organizations (i.e., “how many did you guys get?”).
  • Contracts with surveyors that specify the maximum number of recommendations the organization can receive.

While there is no cure as yet, there are promising treatments now available to all organizations! The first and most common treatment is a form of talk therapy in which the therapist (or consultant or CARF employee) gently reminds the organization that recommendations are actually code for “opportunities for improvement”. This therapy generally works well for mild to moderate presentations of the phobia.

The “opportunities for improvement” therapy can be combined with another commonly used psychoeducation treatment, in which the therapist (or consultant or CARF employee) educates the organization about the fact that there is only a loose correlation between the number of recommendations and the final survey outcome. Recommendations, as written, often combine multiple elements of a standard, and their impact on survey outcomes can vary dramatically. Put another way, you can get a lot of recommendations and still be fully three-year accredited because recommendations are – you guessed it! – opportunities for improvement.

The final treatment approach, generally reserved for more severe presentations of this heinous condition, is called “just get over it”. This treatment should be delivered carefully but firmly by a trained and trusted professional able to competently deliver the appropriate dose. Although related to the other two treatments, it is intended to address the underlying anxiety that is caused by having a neutral third party point out where the organization could do better. A common phrase used during the delivery of this treatment is “it’s just a recommendation”. Another phrase that can be used in instances where borderline or questionable recommendations are given by a survey team is “accept that both CARF and the surveyors can and will get it wrong”, preceded or followed by “get over it”.

If your organization or an organization you love is suffering from this condition, don’t hesitate to reach out to us for help! Our consultants are highly skilled at delivering all of these treatments and are available to help.

Making sense of CARF’s succession planning standards

Succession planning, in the context of global recruitment and retention challenges across health and human services, is difficult, to say the least. According to a study by Mercer, the United States alone will need to hire 2.3 million new healthcare workers by 2025 to keep up with the population. Unfortunately, CARF’s succession planning standards don’t make things any easier. So let’s walk through them and arrive at a common sense approach.

CARF addresses succession planning in four different standards within Section One (ASPIRE) of all standards manuals. The first reference shows up in standard 1.A.3.m, which requires that ‘identified leadership’ guide succession planning for the organization. Although the intent statement for that standard suggests some ways organizational leadership might guide the process, there is no clarity around what, if anything, needs to be in writing.

Succession planning shows up again in standards 3 and 11 of the Workforce Development and Management standards (Section 1.I). Standard 1.I.3 includes succession planning as one of seven areas that the organization’s workforce planning should address. The intent statement says that succession planning should identify actions to be taken by the organization in the event that key staff members are unable to perform their duties for any of a wide range of possible reasons. So the focus here is on a plan of action in response to loss of service. However, the standard does not require that anything be put in writing. Standard 1.I.11 provides the most detail regarding expectations, outlining seven specific elements that succession planning should address. These include identifying key positions and their competencies; reviewing the future needs, current talents, and readiness of the organization’s workforce; and conducting a gap analysis and strategic development. While this level of detail suggests documentation would be needed, documentation is not currently required to meet this standard.

The final reference to succession planning in the standards, and the only place where documentation is required, is governance standard 1.B.5.b. It requires an executive leadership succession plan that is reviewed at least annually and updated as needed. However, the governance standards remain optional for CARF-accredited organizations. In short, CARF’s requirements regarding succession planning are overlapping and don’t offer a clear roadmap for organizations.

So what’s a common sense approach? Given the level of detail outlined in the standards and associated intent statements, fully meeting these standards without some documentation would be difficult. And while documentation other than a formal plan (e.g., policies/procedures, management meeting minutes, supervision records and employee performance evaluations) could theoretically meet the standards, that could be difficult and complex to manage. It makes sense to develop a written succession plan for key positions regardless of whether you are applying the Governance standards. One or two pages would suffice. The focus should be on the elements in standard 1.I.11 and leadership should drive the development process. If you’re looking for some helpful hints, consider this list of the Top Ten Best Practices for Succession Planning from the insurance and consulting firm Gallagher. And don’t hesitate to reach out to an ACG consultant if you need help!

Topic Area: Evaluation


I’ve gone into a few rants over the past few months about the use of client goal achievement or goal attainment (loosely defined) as a measure of program outcomes. So I decided it’s time to move from ranting to writing!


The overwhelming majority of programs and services I’ve been involved in evaluating over the past several years use some form of client goal achievement to measure their success. I get the attraction. It’s a ‘two-fer’ for many programs! Staff have to define goals for the work they do with clients as part of case management and program accountability expectations (e.g., accreditation), so why not get some extra mileage by using them for outcomes measurement? But the devil is in the details. Most of the programs I’ve worked with have taken advantage of software that has some form of goal scaling built in. Many software programs (and most in-house solutions) simply require users to indicate whether a goal has been fully achieved, partly achieved, or not achieved at some point in time after the goal is set. Some provide an opportunity to indicate why it was or wasn’t achieved. There are few (if any) parameters around what achievement means or what a reasonable timeframe for full achievement might be. The system then produces a report counting how many goals are achieved (or not) and links that to program-level outcome statements based on categories of goal type that the worker chooses when entering the goal.


So what’s wrong with all of that? To begin, there are a lot of untested assumptions built into that approach. For example, it assumes that all goals are roughly equal in terms of their importance and the amount of time or effort required to achieve them. My experience in working with clients to set goals is that they often aren’t equal. This approach also assumes that all goals have a direct and meaningful link to the program’s goals. The problem here is that goals can often be small stepping stones towards some larger end. So, even if we trust that these individual goals bear some connection to the program’s goals, we end up counting several ‘successes’ (or failures) rather than simply counting the achievement of the real change or benefit we’re hoping for. And in the end, are those successes or failures a true reflection of our efforts and the efforts of our clients?


Using client goals to measure program success could also have unintended consequences for how our staff practice.  By counting up the number of goals that are achieved or not achieved, we send the message to staff that this highly personal process has meaning at another level – evaluation of whether the program is working or not.  The unintended consequence could be that staff focus their efforts on what is easily achievable (i.e., the low hanging fruit).


A good friend and colleague of mine often reminds me of an important principle in measuring program success: ‘measure me’. In other words, measure whether I, as a whole person, benefited from the program. Reporting on the percentage of goals that are achieved or not achieved is different than reporting on the percentage of clients who experienced a positive change in their lives. Somewhere in that mess of goals are numerous clients, each with one or more goals of differing importance or significance, usually reflecting many steps towards some desired end. A good evaluation system should be able to measure and report on the changes that each unique client experiences.


The good news is that there are university-tested and validated approaches to measuring program-level outcomes through client goal achievement. These approaches, usually referred to as Goal Attainment Scaling (GAS), are more rigorous and require staff training. They are able to produce a standardized score for each individual that accounts for variation in the number of goals that clients have chosen to work on. They also define clear time limits and parameters for goal achievement. It is unfortunate that the versions I frequently see used are not based on the Goal Attainment Scaling model.
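For readers curious what that “standardized score” looks like, the GAS literature typically uses the Kiresuk–Sherman T-score: each goal is rated on a five-point scale (-2 to +2, where 0 is the expected outcome), optionally weighted by importance, and the ratings are combined using a conventional inter-goal correlation of 0.3. As a rough sketch (the function name and defaults below are mine, for illustration):

```python
import math

def gas_t_score(scores, weights=None, rho=0.3):
    """Kiresuk-Sherman T-score for Goal Attainment Scaling.

    scores:  per-goal attainment ratings, each in {-2, -1, 0, +1, +2},
             where 0 means the expected outcome was achieved.
    weights: optional per-goal importance weights (defaults to all 1s).
    rho:     assumed correlation between goal scores (0.3 by convention).
    """
    if weights is None:
        weights = [1] * len(scores)
    weighted_sum = sum(w * x for w, x in zip(weights, scores))
    sum_w_sq = sum(w * w for w in weights)
    sum_w = sum(weights)
    # Standardization: mean 50, standard deviation 10.
    return 50 + (10 * weighted_sum) / math.sqrt(
        (1 - rho) * sum_w_sq + rho * sum_w ** 2
    )
```

A client whose goals all land at the expected level scores 50; scores above 50 indicate better-than-expected attainment overall, regardless of how many goals the client set.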


The bottom line? Goal planning is, and should be, a highly personal affair. Done correctly and with thoughtfulness, it is a fluid and reflexive process that grounds our day-to-day work. The fact that a goal isn’t achieved may be a good thing – perhaps a turning point in the client coming to terms with their capacity, or our staff realizing that they’re barking up the wrong tree. Likewise, the achievement of a goal may have had little to do with our efforts. Some things simply improve with time, and sometimes people get better or solve their problems despite us! Adding up the results of this highly personal and reflexive process in the belief that it tells us something about program outcomes is problematic unless you take the time to build a very rigorous process. In the end, no system of outcomes measurement is perfect. All approaches have their pitfalls. But if you choose to use goal achievement, make sure you use a reliable and valid approach, and provide training and support to staff so that they use it correctly and understand that not achieving a goal can be a good thing!

The Accreditation Consulting Group (ACG) is proud to announce that in 2018 we worked with approximately 10% of all organizations seeking their initial accreditation.

CARF Appeal Process

There are times when an organization receives an accreditation outcome of less than three years. In some cases, the organization disputes the one-year or non-accreditation outcome it received.