
Why Get a Three-Year Accreditation

Topic Area: Accreditation

 

Going through the CARF survey process is challenging.  As if you didn’t have enough on your plate!  While there is no magic bullet, here are some practical tips that could make your life a LOT easier!

 

Have Your Ducks All Lined Up

If you’re smart about it, getting ready shouldn’t cause too many grey hairs.  As Louis Pasteur said, chance favors the prepared mind.  I strongly suggest that agencies start with a gap analysis: a standard-by-standard review to determine where (and how) they meet the standards as well as where the gaps are.  Although tedious, it makes your life much easier down the line.  The challenge is that many standards have a logical connection to other standards.  Organizations that start at the beginning of the CARF manual and make changes as they work through it, or assign responsibility for different sections of the manual to different people, will run into problems.  They fail to see the interconnections and overlap until it’s too late.  Seeing the bigger picture up front is critical.  I also strongly recommend embedding standards requirements in existing agency systems or processes.  Ask yourself, “Can this requirement be met by adjusting an existing form or adding an element to an existing client or team meeting process?”  Think ‘Two-Fers’!  Minimize the impact on the front line by using what you’ve already got!

 

Know Thy Survey Team

About two months prior to the survey, you’ll get an email from CARF letting you know who is on your survey team.  Although the members of the team can change right up until the survey start date, it’s worth checking out who they are.  We would like to believe that all surveyors are created equal, but that simply isn’t the case.  They are professionals in the field who bring their own perspectives, experiences, and biases.  So check them out.  Google them.  Find out where they work and what they do there.  Ask other accredited agencies whether they know them.  When the administrative surveyor calls you to discuss the survey (roughly a month prior to your survey start date), ask lots of questions about their background and their approach to surveying.  Although a prepared organization should do well regardless of who the surveyors are, having a sense of what to expect from the team can make a world of difference to how smoothly things go.

 

The Secret to All Good Events: Planning!

Remember that a survey, in essence, is an event.  It follows a schedule, has specific elements, and involves different groups of people with different roles.  Surveys go best when they are well planned.  Work with the survey team to develop a detailed survey schedule.  Do your best to make sure each part of the survey happens as scheduled.  Have point people who act as liaisons to the different survey team members.  Although there are bound to be some small glitches, you want it to go as smoothly as possible.

 

Make It Easy

Surveying can be grueling!  You fly to a place you’ve never been before to meet up with people you’ve likely never worked with before to spend several intense days at an organization you know little about.  While surveyors are paid by CARF, it’s a pittance compared to what they make at their day jobs.  They do it because it’s an opportunity to give back, to learn from others, and to see how things are done in other places.  So make their lives easy!  Part of that is being prepared and planning for the survey, which I’ve already discussed above.  In addition, make sure that the materials you provide are clearly marked and, ideally, referenced to the standard to which they apply.  Provide a nice space to work in.  Make sure they have the necessities of life: coffee and a clean washroom.  Help them figure out arrangements for lunch and give them some recommendations for dinner.  Make sure you recommend a decent hotel that isn’t too far away.  Although leaving a welcome basket at their hotel isn’t required, I always appreciate when an agency leaves something to welcome me – a note, or some information about the local community.  The little things can truly make a difference.

 

Remember, It’s Your Survey!

I can’t stress this one enough: this is YOUR survey!  You are paying for it (directly or indirectly, depending on the jurisdiction).  You should expect good service, both from CARF’s staff at headquarters in Tucson and from the survey team.  They should respond to your questions and be open about the process.  They should be professional and courteous.  You should expect them to be fair and balanced in giving feedback.  They should strive to add value to your organization by offering good advice and pointing you in the direction of additional resources wherever possible.  They should also acknowledge your strengths and give you the opportunity to show off what makes you proud of your agency.  While CARF does its best to match surveyor skill sets to agency needs, the process isn’t perfect.  You may simply end up with someone who isn’t a good match for your organization.  Or, if you happen to live in a city that is a sought-after tourist destination, you can end up with team members who are more interested in a ‘survey-cation’ (thankfully, that’s rare).  Bottom line: if you’re not happy, let the survey team know about it!  And if that doesn’t work, let CARF know about it.

 

Topic Area: Evaluation

 

I’ve gone into a few rants over the past few months about the use of client goal achievement or goal attainment (loosely defined) as a measure of program outcomes. So I decided it’s time to move from ranting to writing!

 

The overwhelming majority of programs and services I’ve been involved in evaluating over the past several years use some form of client goal achievement to measure their success. I get the attraction. It’s a ‘two-fer’ for many programs! Staff have to define goals for the work they do with clients as part of case management and program accountability expectations (e.g., accreditation), so why not get some extra mileage by using them for outcomes measurement? But the devil is in the details. Most of the programs I’ve worked with have taken advantage of software that has some form of goal scaling built in.  Many software programs (and most in-house solutions) simply require users to indicate whether a goal has been fully achieved, partly achieved, or not achieved at some point in time after the goal is set.  Some provide an opportunity to indicate why it was achieved or not achieved.  There are few (if any) parameters around what achievement means or what a reasonable timeframe for full achievement might be.  The system then produces a report counting how many goals are achieved (or not) and links that to program-level outcome statements based on categories of goal type that the worker chooses when entering the goal.
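To make that concrete, here is a minimal sketch (with invented client names, categories, and field names) of the kind of report these systems typically generate: raw counts of goal status, rolled up by whatever category the worker happened to choose at data entry.

```python
from collections import Counter

# Hypothetical export from a goal-tracking system (data entirely invented)
goals = [
    {"client": "A", "category": "housing",    "status": "achieved"},
    {"client": "A", "category": "housing",    "status": "achieved"},
    {"client": "B", "category": "employment", "status": "partly achieved"},
    {"client": "C", "category": "employment", "status": "not achieved"},
]

# The typical report: count goals by status...
by_status = Counter(g["status"] for g in goals)

# ...and link them to program-level outcome statements via the
# worker-chosen category
by_category = Counter((g["category"], g["status"]) for g in goals)
```

Notice that every goal counts exactly once, no matter how big or small it is, how long it took, or how many goals the same client has on file.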

 

So what’s wrong with all of that? To begin, there are a lot of untested assumptions built into that approach. For example, it assumes that all goals are roughly equal in terms of their importance and the amount of time or effort required to achieve them.  My experience in working with clients to set goals is that they often aren’t equal. This approach also assumes that all goals have a direct and meaningful link to the program’s goals. The problem here is that goals can often be small stepping stones towards some larger end.  So, even if we trust that these individual goals bear some connection to the program’s goals, we end up counting several ‘successes’ (or failures) rather than simply counting the achievement of the real change or benefit we’re hoping for.  And in the end, are those successes or failures a true reflection of our efforts and the efforts of our clients?

 

Using client goals to measure program success could also have unintended consequences for how our staff practice.  By counting up the number of goals that are achieved or not achieved, we send the message to staff that this highly personal process has meaning at another level – evaluation of whether the program is working or not.  The unintended consequence could be that staff focus their efforts on what is easily achievable (i.e., the low hanging fruit).

 

A good friend and colleague of mine often reminds me of an important principle in measuring program success: ‘measure me’. In other words, measure whether I, as a whole person, benefited from the program. Reporting on the percentage of goals that are achieved or not achieved is different from reporting on the percentage of clients that experienced a positive change in their lives. Somewhere in that mess of goals are numerous clients, each with one or more goals of differing importance or significance, usually reflecting many steps towards some desired end. A good evaluation system should be able to measure and report on the changes that each unique client experiences.
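The ‘measure me’ distinction is easy to show with a toy example (the caseload below is invented): the very same data produces very different success rates depending on whether you aggregate over goals or over clients.

```python
# Hypothetical caseload: client -> outcome of each goal (True = achieved)
clients = {
    "A": [True, True, True, True],   # many small stepping-stone goals
    "B": [False],                    # one big, hard goal
    "C": [False],
}

# Per-goal view: what share of all goals were achieved?
total_goals = sum(len(g) for g in clients.values())
goals_achieved = sum(sum(g) for g in clients.values())
pct_goals = goals_achieved / total_goals          # 4 of 6

# Per-client view: what share of clients achieved what they set out to do?
clients_benefited = sum(all(g) for g in clients.values())
pct_clients = clients_benefited / len(clients)    # 1 of 3
```

One client with several easily achieved stepping-stone goals makes the program look twice as successful in the per-goal view as in the per-client view.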

 

The good news is that there are university-tested and validated approaches to measuring program-level outcomes through client goal achievement.  These approaches, usually referred to as Goal Attainment Scaling (GAS), are more rigorous and require staff training.  They produce a standardized score for the individual that accounts for variation in the number of goals that clients have chosen to work on.  They also define clear time limits and parameters for goal achievement. It is unfortunate that the versions I frequently see used are not based on the Goal Attainment Scaling model.
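For readers curious what that standardization looks like, here is a sketch of the classic Kiresuk and Sherman GAS T-score. The -2 to +2 attainment scale, the equal default weights, and the 0.3 inter-goal correlation are the conventional choices from the GAS literature; treat this as an illustration of the model, not as any particular vendor’s implementation.

```python
import math

def gas_t_score(scores, weights=None, rho=0.3):
    """Kiresuk & Sherman Goal Attainment Scaling T-score.

    scores  -- attainment per goal on the -2..+2 scale (0 = expected level)
    weights -- relative importance of each goal (defaults to equal weights)
    rho     -- assumed average inter-goal correlation (0.3 by convention)
    """
    if weights is None:
        weights = [1.0] * len(scores)
    numerator = 10 * sum(w * x for w, x in zip(weights, scores))
    denominator = math.sqrt((1 - rho) * sum(w * w for w in weights)
                            + rho * sum(weights) ** 2)
    return 50 + numerator / denominator

# A client who reaches the expected level (0) on every goal scores exactly
# 50, no matter how many goals they set -- unlike a raw achieved-goal count,
# which rewards having lots of small goals.
```

The weighting and the denominator are what let the score compare a client with two hard goals to a client with six easy ones on the same footing.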

 

The bottom line?  Goal planning is, and should be, a highly personal affair. Done correctly and with thoughtfulness, it is a fluid and reflexive process that grounds our day-to-day work. The fact that a goal isn’t achieved may be a good thing – perhaps a turning point in the client coming to terms with what their capacity is, or our staff realizing that they’re barking up the wrong tree.  Likewise, the achievement of a goal may have had little to do with our efforts. Some things simply improve with time and sometimes people get better or solve their problems despite us! Adding up the results of this highly personal and reflexive process in the belief that it tells us something about program outcome achievement is problematic unless you take the time to build a very rigorous process. In the end, no system of outcomes measurement is perfect.  All approaches have their pitfalls.  But if you choose to use goal achievement, make sure you use a reliable and valid approach and provide training and support to staff so that they use it correctly and they understand that not achieving a goal can be a good thing!