
Should I Build or Buy My 360-Degree Feedback Survey?

It’s easy to write a survey, but difficult to design a good one. When it comes to sensitive instruments like 360-degree feedback surveys, poor design can cause issues such as survey abandonment, measurement errors, confidentiality concerns, and ultimately a loss of confidence in the data and the overall program it supports. Further, individual participants may draw incorrect conclusions and develop in ways that make them less effective. Making decisions with badly generated data is often worse than making decisions with no data at all.

Using a vendor-designed instrument with clear competencies, such as the Mandala 360°, is the best way to mitigate these risks for an organization looking to refresh or begin a 360-degree feedback program. Whether you ultimately decide to build your own instrument or engage a vendor to provide it, here are six important points to consider in your design/selection process.

1) A clear competency model

There are numerous competency models in the market, but many are designed with particular talent management processes in mind, and not all are appropriate for 360-degree feedback. A competency model for a 360-degree feedback survey should have a limited number of items, yet still cover the full range of core outcomes and be easy for a novice reader to interpret. The competency model serves as the foundation of your survey: if it is flawed, the survey built on it will be as well.

2) A well-designed rating scale that can be interpreted consistently

Seemingly simple words commonly used in individual feedback rating scales are wide open to interpretation. What is “Average” or “Above Average”? Compared to whom, or to what standard? Across which levels of the organization, or which degrees of experience?

What are the ‘Expectations’ that someone can “Meet” or “Exceed”?  

What is a “3.5” out of “5”?  


In working with an organization with dozens of operating units across the U.S. on their performance review process, our consultants found each unit used three or four primary numeric ratings – but the specific ratings each unit was comfortable using differed, even though they all used the exact same rating scale. Employees in office A carried ratings of 2.5, 3, and 3.5, while in office B they carried 3s, 4s, or 5s. Office C had 2s, 3s, and 4s. There were a number of other combinations, but the result was data the company could not use to make decisions, because the scale was interpreted differently by each team that used it.

There are two established methods to limit bias in a rating scale. 

    • First are Behaviorally Anchored Rating Scales (BARS), where specific observable behaviors are listed for each rating level within each rating area. Although this increases objectivity, in practice BARS create huge volumes of text that increase survey fatigue and become unwieldy to use in other processes within the organization. They also assume that raters will become highly versed in this language, and that because a behavior is being exhibited, the sought-after results will automatically follow (which is not always a safe assumption). Lastly, depending on how they are written, they are still subject to the same interpretation issues as other rating scales – just with more words.

    • Second are Frequency-Based Rating Scales (FBRS). Rather than rating the level at which a competency is demonstrated, or selecting a specific set of behaviors as in a BARS, the competency is rated based on how often it is observed. The rating options may be expressed as percentages or through words. FBRS are short and easy for raters to interpret, making them the more effective choice for 360-degree feedback surveys, where survey fatigue is a real risk. A sample scale is sketched below.
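
To make the contrast concrete, here is a minimal sketch (in Python) of what a frequency-based item can look like. The five labels and the example competency are illustrative assumptions, not a prescribed standard:

    # Minimal sketch of a frequency-based rating scale (FBRS).
    # The five labels below are illustrative assumptions, not a prescribed standard.
    FREQUENCY_SCALE = {
        1: "Never",
        2: "Rarely",
        3: "Sometimes",
        4: "Often",
        5: "Almost Always",
    }

    def render_item(competency: str) -> str:
        """Render one survey item with its frequency options."""
        options = " / ".join(f"{v} = {label}" for v, label in FREQUENCY_SCALE.items())
        return f"How often does this person {competency}? ({options})"

    print(render_item("communicate priorities clearly"))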

3) Appropriate framing to limit biases

Because 360s involve human raters, they are naturally subject to the same biases as the overall human population. Rating an entire area based on a single instance of performance (‘halo or horns’), overweighting recent behavior (recency bias), individual tendencies toward leniency, strictness, or moderation (especially if there are concerns about anonymity, or about consequences for the rating subject), affinity bias, and false attribution of results – these and more can skew results in a variety of directions. Great 360 programs frame their invitations, kick-off emails, and landing pages to raise raters’ awareness of these biases and encourage raters to recognize their influence while completing the survey.

4) Opt-out responses

Most raters have limited visibility into the day-to-day performance of the person being rated. Accordingly, they may not have enough information to rate every category, or at least not enough to feel they can rate particular categories accurately. In certain surveys you may want to force responses to certain questions, but not in 360-degree feedback – forced guesses will heavily pollute results. Options like “Don’t know” or “Do not have enough exposure to accurately rate” are therefore important to include.
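
To see why forced responses pollute results, consider how ratings are averaged into a competency score. The Python sketch below, with invented example data, treats opt-outs as missing values rather than forcing a guess:

    # Minimal sketch: averaging 1-5 ratings while excluding opt-outs.
    # None marks a "Don't know" response; the data is invented for illustration.
    responses = [4, 5, None, 3, None, 4]

    rated = [r for r in responses if r is not None]
    clean_average = sum(rated) / len(rated)        # 4.00 from the four informed ratings

    # If the two uninformed raters were instead forced to guess (say, the midpoint 3),
    # their answers would drag the average down to about 3.67.
    forced = [r if r is not None else 3 for r in responses]
    polluted_average = sum(forced) / len(forced)

    print(f"clean = {clean_average:.2f}, forced = {polluted_average:.2f}")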

5) Length, repetition, and confusion

Humans get bored and frustrated with surveys. Research yields a variety of conclusions on the topic of survey fatigue, but generally:

    • After approximately three minutes, a first group of raters (those with low interest and those having difficulty with the survey) will drop out

    • After ten minutes, if the survey is frustrating, repetitive, or seems like it will go on much longer, a second group of moderately interested raters will drop out

    • After twenty minutes, only your most interested and engaged participants will continue

The good news is that most 360-degree feedback raters are interested and engaged (provided they have been invited to the process effectively – meaning personally, by the subject of the feedback, through a high-touch communication channel), so issues with 360 survey abandonment are generally rooted in length, repetition, and confusion.

For length, keep your survey to 30 total responses or fewer. If your survey includes a number of thinking questions or free-text responses, you may need to limit the question count even further.
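
As a rough sanity check, you can translate a draft question count into an estimated completion time and compare it with the drop-off points above. The per-item timings in this Python sketch are assumptions for illustration, not research findings:

    # Rough completion-time estimate for a draft 360 survey.
    # Per-item timings are illustrative assumptions, not research benchmarks.
    SECONDS_PER_RATING_ITEM = 10   # read the item, pick a frequency rating
    SECONDS_PER_FREE_TEXT = 90     # reflect on the question and type a comment

    def estimated_minutes(rating_items: int, free_text_items: int) -> float:
        seconds = (rating_items * SECONDS_PER_RATING_ITEM
                   + free_text_items * SECONDS_PER_FREE_TEXT)
        return seconds / 60

    # 27 rating items plus 3 free-text questions = 30 total responses
    print(f"{estimated_minutes(27, 3):.1f} minutes")   # 9.0 -- near the ten-minute drop-off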

For repetition, avoid asking raters about substantially similar items multiple times.

For confusion, do not use quirky controls or unfamiliar rating scales.

6) Labor

Developing your own 360-degree feedback program is extremely time intensive. If you are up for the investment, having a custom tool is a definite luxury – but vendors can often provide this service more effectively, and at a much lower price than the labor investment you would be making. Your team would need to execute the tasks below, which can involve several thousand labor hours.

    • Developing success profiles/a competency model

    • Selecting a survey tool

    • Drafting a survey instrument and email templates

    • Developing a report format

    • Administering individual surveys, analyzing data, and compiling reports

    • Delivering report feedback to participants and aiding in the creation of development plans

The decision of how to add a 360-degree feedback process to your talent management program is a significant one. Ensure you’ve weighed all of the options first.