Step 5: Presenting the Results

5.1 Using the results

Caution is needed when developing and relying on survey information. While surveys will be a main source of information, they can be blunt instruments. For example, suppose an EO is seeking to find out whether access to finance is a major concern for members. Accessing a loan may not emerge as an issue in the responses that come back, but this could be because the wrong question was asked: firms may not request a loan in the first place if they have no expectation of getting one.

5.2 Using case studies

A case study is a detailed conversation with a member and an excellent way to supplement a survey. It provides detailed information on the topic and a 'real life' example that readers can relate to. A firm's story can supply the extra detail that helps explain the numbers and what they mean in practice.

In undertaking a case study:

  • prepare your questions in advance;
  • be clear on what you want the firm to answer;
  • ask the firm what information they are comfortable sharing in the public domain and what they do not want made public;
  • stay on topic;
  • try to record the interview (but always seek permission first);
  • interview the right person;
  • ask for a specific time and a location where you will not be disturbed.
Design aspect: comments/suggestions (1)
Open or closed questions?
  • Closed questions generate quicker information that is easy to process.
  • Providing scales (e.g. from 1 to 5) for answers can enrich information.
  • Open questions are time-consuming to respond to and difficult to process.
Amount of time that can be requested from respondents to fill out questionnaires
  • A good survey should not take more than 30 minutes for people with a stake in the evaluation, and 15 minutes for those who are indirectly involved.
  • Language should be clear, simple and gender sensitive.
  • Think about translations in local languages.
Digital or paper
  • Digital surveys are easy to process and can be distributed via the Internet.
  • In many situations, digital surveys will not be possible because of a lack of access to technology.
Piloting and testing
  • Customized surveys should be tested to assess if they can generate sufficient information and to make sure questions are understandable for respondents. Also, the amount of time needed to fill out surveys should be assessed.
Control questions
  • A good survey should contain some control questions to ensure that the information collected is sufficiently reliable.
Number of desired respondents
  • The reliability of results of surveys increases with the number of respondents.
  • If it is not possible to disseminate surveys widely, they can still be used, provided that results can be cross-checked with other methods of data collection.
Survey structure
  • Randomising the order of answer options in electronic data collection can affect outcomes. Randomisation means that each respondent sees the answer options in a different order, so two people completing the same survey may respond differently than they would if both saw the options in the same fixed order.
Timing of questionnaires
  • One should try to disseminate questionnaires at moments when respondents are ready and willing to invest time on them – for example, the end of the financial year is generally a poor time to survey entrepreneurs.
Level of effort that is realistic for analyzing and reporting data
  • When digital means for processing surveys are available, analysing and reporting on data will usually not require significant time and effort.
  • Open questions can only be processed when significant time is available for analysis.
Rolling out the surveys
  • If surveys are sent to people without proper follow-up, the non-response rate can be high. A response rate of at least 60 per cent is generally considered successful.
  • Responsiveness can be greatly increased when surveys are collected manually, or when they are distributed at events and collected before participants leave.

(1) Adapted from The PPD Handbook: A Toolkit for Business Environment Reformers (page 143).
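The randomisation of answer options described in the table above can be sketched in a few lines of code. This is an illustrative sketch only: the question and answer options are hypothetical, and any survey platform would implement this differently.

```python
import random

# Hypothetical question and options, for illustration only.
QUESTION = "What is the biggest obstacle your firm currently faces?"
OPTIONS = ["Access to finance", "Taxation", "Skills shortages", "Regulation"]

def presented_options(options, seed=None):
    """Return the answer options in the order one respondent would see them."""
    order = list(options)  # copy, so the master list is untouched
    random.Random(seed).shuffle(order)
    return order

# Two respondents may see the same options in different orders:
print(presented_options(OPTIONS, seed=1))
print(presented_options(OPTIONS, seed=2))
```

Each respondent still sees the same set of options; only the order varies, which is why results can differ from a fixed-order survey.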
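The 60 per cent rule of thumb for response rates can be checked with a quick calculation. The figures below are hypothetical and simply restate that threshold.

```python
def response_rate(returned, distributed):
    """Percentage of distributed questionnaires that were returned."""
    return 100 * returned / distributed

# Hypothetical figures: 72 questionnaires returned out of 120 distributed.
rate = response_rate(72, 120)
print(f"Response rate: {rate:.0f}%")
print("Successful" if rate >= 60 else "Follow up with non-respondents")
```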