Level 1 Evaluation: Deciding on the Survey Format

Level 1 Evaluation relates to the participants' feedback on the training event. Typically the participants are given "smiley sheets" to indicate their level of satisfaction with the program. As discussed in the earlier post, many categories are evaluated during Level 1 evaluation, including feedback on the facilitator, training materials, training contents, training design (including duration), training facilities, coordinators, etc. Feedback can also be taken from the facilitators about the participants, coordinators, administrative support, etc.

The responses on Level 1 evaluation help to redesign and improve the training program for the next round. The most common format for collecting feedback is a survey, generally a "Satisfaction Survey."

Many times I have been asked how one can start designing a survey. What needs to be done? Designing a survey instrument is an interesting and creative process. But before I describe how one starts on the design, let us consider the different scales that can be used in measurement and incorporated into the survey.

Scales on a Survey: Surveys predominantly use one type of scale, most often the rating scale. However, a survey can include many scales. Thanks to computerization, it is easy to collect responses on these items, and the data is not difficult to analyze even if different scales are used in the same instrument.

Here are examples of each type of scale commonly used in a survey:

  • Dichotomous Scale. Example: "Have you attended a training on Safety before?" Answer options: Yes/No.
  • Multiple Choice Scale. Example: "Indicate what Quality training you have attended." Answer options: Kaizen, Six Sigma, ISO.
  • Rating Scale. Example: "To what extent did the course meet your needs?" Answer options: Did not meet my needs…To some extent…Most needs…All my needs.
  • Open Ended Questions. Example: "How many trainings have you attended this quarter? Name them." Answer format: blank lines for the number and the names.
  • Rank Order. Example: "Rank the sessions according to your liking, with 1 as most liked to 6 as least liked." Answer options: ___ Session I, ___ Session II, ___ Session III, etc.

There are other types of scales used for data collection, for example semantic differentials and constant sum questions. However, the scales described above are the most frequently used in a feedback survey.
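Since different scale types can coexist in one instrument, it can help to see them side by side as data. Below is a minimal sketch of how the scale types above could be modeled in code; the class and field names are illustrative, not from any survey library.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    scale: str  # "dichotomous", "multiple_choice", "rating", "open_ended", "rank_order"
    options: list = field(default_factory=list)  # empty for open ended questions

# One survey instrument mixing all five scale types from the examples above.
survey = [
    Question("Have you attended a training on Safety before?", "dichotomous",
             ["Yes", "No"]),
    Question("Indicate what Quality training you have attended.", "multiple_choice",
             ["Kaizen", "Six Sigma", "ISO"]),
    Question("To what extent did the course meet your needs?", "rating",
             ["Did not meet my needs", "To some extent", "Most needs", "All my needs"]),
    Question("How many trainings have you attended this quarter? Name them.",
             "open_ended"),
    Question("Rank the sessions according to your liking (1 = most liked).",
             "rank_order", ["Session I", "Session II", "Session III"]),
]

for q in survey:
    print(q.scale, "-", len(q.options), "options")
```

Because every question carries its scale type explicitly, a computerized system can route each response to the right analysis (counts for dichotomous and multiple choice, averages for ratings, text analysis for open ended answers).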

Whatever scale you use, the data you get from your survey depends on how well the survey was designed in the first place. I have come across surveys that made no sense and surveys that were excellently designed! Let us now consider some of the decisions we need to make about questionnaire design.

Decision points while designing a survey instrument:

  1. Goal of the Data Collection: Why are you collecting the data? What is going to be critical for you? What, according to you, is the most important criterion for getting feedback? In the previous post I mentioned different criteria on which you need to get data. Ideally, you need data on all of them. However, for your own reasons, you may choose to get more data on some criteria than on others. For example, if a new program is being designed and you already know that the facilitator is effective, you may want more data on the content and pedagogy than on the facilitator.
  2. Qualitative or Quantitative: How are you going to format it? Should it be qualitative or quantitative? In an earlier post, I mentioned that it would be a good idea to get some qualitative inputs on each of the measures, so it's a good idea to incorporate qualitative questions into your instrument. The balance between quantitative and qualitative will depend mainly on these criteria:
    • Resources you have to analyze the qualitative elements: For example, what is the volume of respondents? Can you analyze the data manually? If you have a computerized system, has it been programmed to analyze responses using keywords? Depending on these factors, you can decide the extent of qualitative input.
    • Goals of your analysis: To what extent do you need the qualitative content? What are you going to do with it? For example, if a routine training has been in place for a while and is generally highly successful, you may not need much qualitative content. On the other hand, if there is a change in, say, the facilitator or the contents, or the program is being delivered through a different medium (like online) or to a different participant profile, you would definitely require qualitative content to see what worked and what did not.
    • Willingness to work on the survey feedback forms: Many times organizations design feedback forms which are then used over and over again; they are automated, and sometimes participants even fill them out online. This is easy. But as a meticulous facilitator, training designer or manager, it would be beneficial to redesign the feedback forms themselves once in a while!
  3. Decide on the Types of Scales: Once you have decided to gather quantitative data, you need to decide what type of scale to use. Most of the time, rating scales are used in a feedback survey. But as mentioned above, depending on the data you need, you can use a combination of scales. For example, suppose you want to know if the participants liked the program. This question can simply be Yes/No. But if you want to know to what extent the participants liked the program, it needs to be a rating scale.

    You can use different scales in the same survey. For example, you can have multiple choice, rank order, open ended or other formats. If you want data on how many programs the participant has attended in the quarter, this needs to be an open ended question. Personal data would also be open ended.
  4. Decide the Levels of the Scales: This again depends on the amount and kind of data you want to gather. A Likert scale can have many intervals (3, 5, 7 or more), for example Never…Sometimes…Always or Very dissatisfied…Very satisfied. However, the tricky question is: how do you decide how many levels of rating to use? Should it be 3 points, 5 points, 7 points or 10 points? Recently I answered an online customer satisfaction survey that had 15 points on the rating scale! So how will you decide how many points should be on the rating scale?

    The answer to this question lies in the following considerations:

    To what extent do you want to differentiate the answers?
    Will finer discrimination be useful?

    For example, on a 3 point scale you will get less discrimination in the answers than on a 5 point scale. Consider the question:

    Was the facilitator successful in answering participant questions?

A 3 point scale would give the following options:

Not at all
Sometimes
All the time

A 5 point scale would give more options:

Not at all
A few times
Sometimes
Most of the time
All the time

A 7 point scale would give even more options:

Not at all
Very few times
A few times
Sometimes
Frequently
Most of the time
All the time

Considering this, we see that a 3 point scale gives us very coarse data, while a 7 point scale may not be necessary here. So a 5 point scale would probably be most suitable in this case.

However, consider another question:

To what extent are you satisfied with the content of the training?

A 5 point scale would give the following options:

Dissatisfied
Somewhat dissatisfied
Neutral
Somewhat satisfied
Satisfied

Compared to a 7 point scale:

Very dissatisfied
Dissatisfied
Somewhat dissatisfied
Neutral
Somewhat satisfied
Satisfied
Very satisfied

In this case a 7 point scale could be used instead of a 5 point scale. You need to decide the degree to which you want to differentiate the answers. For example, would it be enough to distinguish Dissatisfied from Somewhat dissatisfied? Or would you want further discrimination between Very dissatisfied and Dissatisfied? If you want to discriminate to a greater degree, use a 7 point scale.
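The effect of scale length shows up as soon as you convert the responses to numbers for analysis. Here is a sketch, using the 5 point and 7 point labels from the examples above; the pilot responses are made up for illustration.

```python
# Convert Likert labels to numeric scores; a longer scale recovers
# finer differences between respondents.
five_point = ["Dissatisfied", "Somewhat dissatisfied", "Neutral",
              "Somewhat satisfied", "Satisfied"]
seven_point = ["Very dissatisfied", "Dissatisfied", "Somewhat dissatisfied",
               "Neutral", "Somewhat satisfied", "Satisfied", "Very satisfied"]

def score(response, scale):
    """Map a label to 1..len(scale); higher means more satisfied."""
    return scale.index(response) + 1

# Hypothetical responses to "To what extent are you satisfied with
# the content of the training?" on the 7 point version.
responses = ["Satisfied", "Very satisfied", "Somewhat satisfied",
             "Satisfied", "Neutral"]

scores = [score(r, seven_point) for r in responses]
mean = sum(scores) / len(scores)
print(f"mean satisfaction: {mean:.1f} on a {len(seven_point)}-point scale")
# prints "mean satisfaction: 5.6 on a 7-point scale"
```

On the 5 point scale, "Satisfied" and "Very satisfied" would both collapse to the top score of 5, which is exactly the loss of discrimination discussed above.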

  5. Decide on the Length of the Survey: The length of the instrument needs to be just right so that you get all the critical data. If it is too short, you will miss out on gathering important data; if it is too long, participants will not be motivated to complete it. So you need to decide what is in and what is out. You can do this by piloting your survey: get people to answer it, see if it makes sense, and get them to give you feedback on it.
  6. Other Considerations: Think of limitations that the respondents are going to face. For example,
    • What would be the best time to take feedback? Do you have time included in your session plan for feedback?
    • If it is online, does everyone have access?
    • Is there a language constraint, do you have to translate into different languages?
    • What would be the cost of the survey?
    • Can you meet the special needs of participants, if any, so that they can answer the survey?

There can be many other similar factors you may need to consider before deciding how to design your survey. The important thing is that you consider all possible and probable limitations before you design, so you won't hit roadblocks when you administer the survey.
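The keyword-based analysis of qualitative feedback mentioned under the qualitative/quantitative decision can also be sketched simply. The categories, keywords and comments below are illustrative, not from any real system.

```python
# Sketch: tagging open ended feedback comments by keyword so that
# qualitative input can be summarized alongside quantitative scores.
KEYWORDS = {
    "facilitator": ["trainer", "facilitator", "instructor"],
    "content": ["content", "material", "examples"],
    "logistics": ["room", "schedule", "food", "timing"],
}

def tag_comment(comment):
    """Return the set of categories whose keywords appear in the comment."""
    text = comment.lower()
    return {category for category, words in KEYWORDS.items()
            if any(word in text for word in words)}

comments = [
    "The trainer was engaging but the room was too cold.",
    "More real-life examples would help.",
]

for c in comments:
    print(sorted(tag_comment(c)))
```

Even a simple tagger like this lets you count how often each feedback category comes up, which helps decide how much qualitative content your resources can realistically handle.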

Once you make these decisions, you can start building your survey instrument. In the next post, we will discuss how to select items for a survey.
