
Authors: Marianne Franklin


Analysing responses

More on these points in the next chapter. Suffice it to say that the basic steps for analysing the responses, and then working with the results of larger surveys where statistical analysis and random selection govern the design, proceed as follows:

  • Collate the results by producing tables/graphs (a sketch of this step follows the list).
  • Make sense of (and double-check) the numbers.
  • Write up the analysis – at least one or two paragraphs per table.
  • This is also the point at which to write up exactly how you went about the survey, and why: the methodological/data-gathering methods section.
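By way of illustration only, here is a minimal sketch of the first two steps – collating closed-ended responses into a frequency table and double-checking the totals – assuming the responses have been exported from a web-based survey tool to a CSV file with one row per respondent; the filename and column name are hypothetical.

```python
import pandas as pd

# Hypothetical export from a web-based survey tool:
# one row per respondent, one column per question.
responses = pd.read_csv("survey_responses.csv")

# Collate one closed-ended question into a frequency table with percentages.
question = "q1_main_news_source"  # hypothetical column name
counts = responses[question].value_counts(dropna=False)
table = pd.DataFrame({
    "count": counts,
    "percent": (counts / counts.sum() * 100).round(1),
})
print(table)

# Double-check: the counts should add up to the number of respondents.
assert counts.sum() == len(responses)
```

The resulting table is then the basis for the written analysis – at least one or two paragraphs per table, as noted above.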
Survey work across the divide

Formalized ways of gathering personal data from human subjects also provide invaluable information for qualitatively inflected research inquiries. The basic rules and procedures follow those outlined above. Incorporating surveys can be suitable for:

  • Research projects that need to generate baseline knowledge as a preparatory phase in setting up focus groups or recruiting (self-)selected interview candidates; an initial survey can cover a lot of ground.
  • Pilot research: to clarify whether the object of research, and the related research question, are feasible, i.e. doable in the time and with the resources available. The difference lies, however, in the relative weight given to the statistical analysis of the survey results, questionnaire design, relative size of the sample, and sampling technique used.

In light of the downsides of survey work covered above, take note that:

  • For research projects that produce or focus on textual material, for example responses provided to open-ended questions, the time needed to carry out a survey, and its eventual results, may need to take a back seat in the final analysis.
  • If survey results are a lead-in to in-depth interviews and/or focus-group work then the collating of these transcripts and how their analysis is woven into the presentation of your findings and analysis are more important than a lengthy account of the survey itself; the usual requirements of your methodological explication notwithstanding.
  • Even when web-based survey tools and services do the work for you here (as noted above), a common misjudgment is to lean too heavily on what may well be relatively modest findings. The end result can be that the more substantive, and for these projects more interesting, material (for example, transcripts of people talking about complex personal issues) is left under-analysed and less well presented.

Figure 6.3 ‘I can prove or disprove it . . .’ (Source: http://Vadlo.com)

  • Statistical analysis is more than the final percentages spat out by the program you may be using here; the numbers are not in themselves a conclusion and percentages are just that, percentages (see Berg 2009: 181, Eberstadt 1995).

If a survey is in fact the bulk of the research then the sampling technique, questionnaire design, size of the sample, and eventual findings and analysis thereof move you into more specialized, statistically-based forms of research. This then requires you to be well enough versed in the finer points of survey design and execution to make the point relative to your research question, aims and objectives.

Summing up

You can still carry out survey-based research, or at least make use of survey work available in the public domain, if you keep in mind the following ground rules for achieving and evaluating good-quality quantitative analysis:

  1. The first step is to keep a record and then clearly report the process by which the data are generated. In this respect, you need to be sure that you understand the procedures required to get this information; for example, how to construct a standardized questionnaire, or the basics of sampling and statistical analysis (even if the actual number-crunching is done for you by automated tools such as Survey Monkey).
  2. Aim to collect data on as many observable implications as possible within the timeframe and scope of the investigation; extrapolating unawares from one, or a limited set of, observations could undermine the strength of your eventual conclusions if not taken into account in the set-up and execution; for example, omitting to consider age, occupation, or gender in the survey yet drawing conclusions which assume, or could be influenced by, these indicators.
  3. This means that you need to aim at maximizing the validity of the measures by ensuring that you have ‘good indicators’, meaning indicators that actually measure what you intend to measure with respect to your inquiry and governing hypothesis.
  4. Maximize the reliability of the measures; for instance, make sure that all measures are consistent in themselves (e.g. measures of frequency are different from total measures) and consistently applied. That is to say, applying the same procedure in the same way should always produce the same measure; double-check or retrace your steps before drawing conclusions.
  5. Make sure that all data produced is replicable; others may wish, or need, to follow the procedures used to arrive at the same data and thereby engage with, or reconsider, your findings.
  6. For smaller samples (for example, a limited survey), beware of over-using percentages in order to suggest a larger sample and more significance than may be the case. In short, a percentage value is not necessarily more legitimate than a more qualitative assessment when the group interviewed or surveyed is a small one; if you have responses from only ten people, for instance, it makes more sense to say ‘one out of ten surveyed responded in the negative’ rather than ‘10 per cent . . .’. Percentages need only be cited to the first decimal point for medium-sized samples, and rounded to whole numbers for smaller ones (illustrated in the sketch below). Basically, a percentage does not tell the whole story; statistical values provided to the ‘nth’ decimal point are not in themselves an indicator of accuracy or efficacy.
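To illustrate this last point only, here is a minimal sketch of reporting a result in line with the sample size; the function name and the thresholds are assumptions for the sake of example, not fixed rules.

```python
def report_result(count, sample_size):
    """Express a survey result in line with the sample size:
    raw counts for very small samples, whole percentages for
    small samples, one decimal place for medium-sized samples.
    The thresholds are illustrative assumptions only."""
    share = count / sample_size * 100
    if sample_size <= 20:        # very small: percentages overstate precision
        return f"{count} out of {sample_size} respondents"
    elif sample_size <= 100:     # small: round to a whole percentage
        return f"{share:.0f} per cent (n={sample_size})"
    else:                        # medium-sized: first decimal point at most
        return f"{share:.1f} per cent (n={sample_size})"

print(report_result(1, 10))     # '1 out of 10 respondents'
print(report_result(23, 80))    # '29 per cent (n=80)'
print(report_result(212, 640))  # '33.1 per cent (n=640)'
```

The point is simply that the form of the reported figure should not suggest more precision than the sample can support.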

Finally, as is the case across the board, trial and error – ‘pilot research’, in other words – before embarking on the formal data-collection phase is indispensable; standardized questionnaires need trying out, interview questions need revising and trying out on willing guinea-pigs, and software programs need practising and applying in try-out examples.

Further reading

Useful overviews are in Fowler (2002), Harkness et al. (2003). A review of web-based surveys is in Couper (2000); principles for asking sensitive questions are covered by Tourangeau and Smith (1996); Baehr (1980) is an example of surveys using in-depth interviewing; Lewis (2001) is a look at mass surveys and their role in public opinion research from the ‘other side’; for a cultural studies perspective see Deacon (2008); Deacon et al. (2007) address surveys for communications research.

INTERVIEWS

Whilst some surveys are based on interviews, this section looks at interviews as a generic form of data-gathering in their own right.

Interviews conducted without depending on a standardized questionnaire are a well-known and productive way to gain insights from people, those in the know, ordinary people, or particular groups. A derivative of interviews – focus groups – is particularly popular in media and communications, business studies, and marketing. Some of the same rules of thumb laid out in the previous section apply to setting up and designing interviews and interview questions.

Interview and focus-group research generates large amounts of textual (as well as visual and audio) material, the content of which requires time and attention when sorting and analysing the data, either in quantifiable form (word/phrase frequencies or volume) or by other, interpretative means; see Chapter 7. A combination of analytical interventions is possible; interpreting and then making use of interview material in strictly qualitative inquiries is heavy on text, on citations, as well as on the commentary on them. In both cases, transcripts of interviews are your raw data.
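As an illustration of the quantifiable route mentioned above, the following is a minimal sketch of counting word frequencies in a single transcript, assuming it is saved as a plain-text file; the filename and stopword list are hypothetical and would need adapting to the project.

```python
from collections import Counter
import re

def word_frequencies(path, top_n=20):
    """Count the most frequent words in a plain-text interview transcript."""
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    # Split on anything that is not a letter or apostrophe.
    words = re.findall(r"[a-z']+", text)
    # Drop very common function words so substantive terms stand out.
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "in", "i", "you",
                 "it", "that", "is", "was", "on", "for", "with", "as", "but"}
    words = [w for w in words if w not in stopwords]
    return Counter(words).most_common(top_n)

if __name__ == "__main__":
    # Hypothetical filename; substitute your own transcript.
    for word, count in word_frequencies("interview_01.txt"):
        print(f"{word}: {count}")
```

Counts of this kind are only a starting point for the interpretative work discussed in Chapter 7.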

General principles

As Berg puts it, interviewing is ‘simply conversation with a purpose’ (2009: 101). As such, interviews are a core ingredient of much social research, the bread and butter of news and entertainment journalism, and used extensively in various forms in market research.

In distinction to large-scale and medium-sized surveys based on anonymous questionnaires or polling, a research interview is generally held between two persons. It can be highly structured, using standardized questionnaires; relatively formal, following a set of prepared questions; or very informal, where interviewer and interviewee engage in an undirected conversation. The form an interview takes, its role in the data-gathering phase, and thereby decisions about how many interviews are sufficient for any project, depend once again on the aims and objectives of the inquiry; what sort of research question is at stake and what sort of information is being sought?

There are various ways of demarcating different sorts of interviews, along with related techniques and reasons for undertaking each approach (see Berg 2009: 105, Gray 2009: 371). This discussion clusters them into three broad approaches, all of which are open to adaptation and the unexpected in real-life research scenarios. Three concerns are shared by all interview formats and their methodological implications:

  1. Research interviews take many forms and can be carried out in various ways (for example, face-to-face, by phone, or using VOIP services). They will produce middling to large amounts of text (the transcripts or notes), not all of which may be pertinent to the inquiry, or able to be analysed as part of the findings. Individual differences in insight, experience, and knowledge amongst interview subjects provide concurring and conflicting ideas and perceptions of the topic at hand. This diversity is what non-standardized interviews are made of; in contrast, standardized formats conducted as part of larger-scale surveys are, by definition, designed to elicit consistent sets of answers available for statistical analysis.
  2. How many interviews it is ‘right’ to carry out is a question of degree level as well as research design; some projects conduct one to several interviews to add ‘flavour’ or nuance to other findings, whilst others (particularly at Ph.D. level) rest on more substantial numbers of interviews, or on more in-depth explorations requiring varying amounts of time to complete and producing corresponding amounts of text per session. This material – the interview data – then needs sorting, coding, and eventual analysis.
  3. Interviews, whilst popular and convenient, are not mandatory for every qualitative research project looking to engage with people yet not undertake surveys. Moreover, having contacts or knowing people in the organization or activity you want to study does not presuppose that they will make suitable interviewees for the project in themselves. Interviewees do not provide ready-made conclusions; far from it. An individual’s view is partial by definition, although some individuals may have more overview than others. Nominating interviews as a primary source of data may not be sufficient as an articulation of your methodology in itself, unless they form the core research component. Even then they need explication in design terms.

As is the case with survey design, interview questions require time and thought, and try-outs should be done before the research proper can begin. This is important because the working assumption in these sorts of interviews is that what respondents say they do is in fact what they do, or what they feel, so questions need to elicit an appropriate response (see Berg 2009: 105). Below is an overview of the different forms interviews can take, with accompanying pros and cons.

Standardized formal interviews

The working assumption here is that all respondents need to understand and respond to questions in a similar enough way so that responses can be collated in quantifiable terms and then statistically analysed; the inferences drawn will then depend on the size and composition of the sample. ‘How often’, ‘when’, and ‘what’ sorts of questions, single-answer or multiple-choice, characterize this category, used when the researcher is looking for clear answers from a (usually but not exclusively) relatively large group of respondents – for example, on news and entertainment preferences, shopping habits, voting behaviour, or public opinion polling. Standardized interviews are particularly suitable for research questions interested in gaining information, or in ascertaining the views, attitudes, or behaviour of respondents in a controlled manner; hence their use in survey work such as polling and market research. Telephone-based research uses standardized questionnaires for one-to-one interviews with randomly selected respondents.

This format is not suitable for interviews premised on the interviewer playing a substantial role in the conversation and, in that respect, jointly generating the ‘raw’ material to be analysed. Open-ended, narrative-based or autobiographical sorts of inquiry do not lend themselves to standardized interview formats. This is because the aim with this degree of formality in the interview is to obtain a standardized response, cover a particular territory, or ascertain a trend or set of behaviours across the sample; political pollsters use this format to good effect. Here the findings, and the quality of the analysis, stand or fall on correctly formulated questions. You cannot collate or compare results across (groups of) respondents if the questions are too ‘fuzzy’, open to many sorts of answers, involve question-and-answer forms such as the interviewer asking ‘why?’ or ‘tell me more about that’, or if a crucial question has been omitted.

Semi-standardized formal interviews

Most interviews are formalized, somewhat artificial interactions in that the researcher has approached their interlocutor with a clear aim in mind, then assumed the lead by steering the conversation. However, working with human subjects always brings with it a degree of the unexpected; this too is accounted for in standardized formats through the ‘don’t know’ or ‘neutral’ response category. In practice, most interview situations are a subtle interplay between interviewer and interviewee; both an explicit agreement between the two parties, based on informed consent, about the reason for undertaking the interview, and an implicit sort of ‘meaning-making occasion’ for both (see Chapters 3 and 5; Berg 2009: 104). This is why interviewees are known to go on record, or comment afterwards, about how the interview generated new ideas, insights or even memories for them as well.

The most popular sort of interview is the semi-standardized, or semi-structured, one, to which the researcher-interviewer brings a set of predetermined questions, or a line of questioning, to the meeting. Sometimes the interviewee requests, or can be sent, these questions in advance. In any event the interviewer comes prepared for the respondent to digress, or for the direction of the interview to change along the way. How far the interviewer allows the interview to develop in various directions, judges when it is time to return to the main theme, or lets the interviewee take control of the conversation (this can be conscious or unconscious) depends not only on the sort of information the researcher wants to elicit but also on the underlying rationale of the project.

In less structured settings, interviewers also need to contend with those moments when a line of questioning becomes too sensitive, a question is misunderstood, or tensions develop between the interviewer and interviewee; some questions may be out of bounds for official reasons (as with government officials) or for emotional or cultural ones (traumatic memories or inappropriate references by the interviewer). Sometimes the conversation generates less ‘significant’ material; but just how insignificant it is may well require closer analysis at a later date.

Non-standardized, informal interviews

At the other end of the spectrum from the standardized interview/questionnaire format, and moving away from the more directive even if more open-ended approach of the semi-structured interview, are scenarios that consciously forego a preconceived set of questions, or expectations about the response.

  • Narrative research is one approach where the flow and content of the interviewee’s contribution is left to them, their words not directed or interrupted (more or less) by the interviewer.
  • Another example is during fieldwork in which participant-observation is a key element in the data-gathering (see below). Here researchers often find themselves undertaking informal interviews when they ask people questions ‘on the fly’; this approach (when handled ethically) allows people to talk about what interests them in a less stilted way than a more formal interview moment. These are research interactions that can provide interesting insights, add nuance to proceedings, or provide another standpoint on the research topic.

Sometimes, however, informal interviews do not start out as such; semi-formal interviews can often morph into one-on-one conversations. Whether this is desirable or not is a moot point, and many first-time research interviews can end up like this. Either way the net result is that the more informal the interview, the greater the input the researcher will have into the material generated. This affects how the same researcher handles the eventual material in the analysis phase. Indeed, it often takes a supervisor or examiner to notice that these sorts of interview transcripts provide important insight into the researcher’s underlying motivations, and into the assumptions and goals predicating the interviewee’s input.

When applied consciously, the unstructured approach is also suitable for certain sorts of settings: with people who may not feel comfortable or respond well to a more formal interview format, for example, those in unfamiliar cultural contexts, or disadvantaged groups. That said, the principles of informed consent and anonymity for these sorts of interactions still need to make sense within the working principles of research codes of ethics (see Chapter 3, sixth section).

Cutting across these generic types is another designation: in-depth interviews. By definition these involve interviews with fewer people, often (albeit not necessarily) over a longer period of time, or spread out over several meetings rather than one single comprehensive interview (see Berg 2009: 119–21). In normal parlance, when a research project undertakes ‘in-depth’ interviews the researcher is working at the semi-structured to unstructured end of the spectrum.

Administration

A number of the practical considerations when preparing and carrying out interviews, whatever the eventual format you opt for, overlap with those for surveys and questionnaire design. Others are specific to the more open-ended structure of interviews and to expectations about the sort of material they will generate.

(a) Preparation matters: Preparing a set of questions, including the key topics you want to discuss as well as basic information about your interviewees, is highly advisable for all but the most open-ended sorts of research interview. Even the latter requires preparation; more on this below.

That said, note that for most interviews even the most well-crafted questions can and will produce all sorts of responses; from different people, and sometimes from the same person as they contradict or correct themselves during or after the interview.

The output of the interview – recordings and/or notes – can also vary; some highly relevant for the research question, some digressing, and some challenging as interviewees take the researcher and the topic on. Sometimes they render little material because they digress, or because interviewees – as humans are wont to do – find ways of avoiding the topic.

At the outset of an interview the interviewer needs to establish rapport. This can be done through small talk or by way of preliminary information-eliciting questions (see Berg 2009: 112–17) before the more substantial questioning begins. Later, as you analyse the outcome, note how silences or diversions in the conversation can also speak volumes; comparable to a statistically significant result in standardized questionnaires in the ‘not applicable’, ‘no opinion’, ‘neutral’, or ‘don’t know’ category.

(b) Sorts of questions: Again, these practicalities overlap with those for questionnaire design. For interviews based on qualitative sorts of questioning (i.e. open-ended rather than closed-ended questions, and fewer of them), working on how to formulate as well as order the questions you want to ask helps you focus and refine your core questions; it gets you to consider closely why exactly it is you are engaging in this sort of interaction. Bearing in mind general pointers about question formulation, pitfalls to avoid when conducting interviews across a range of formats and contexts include:

  • Asking double-barrelled, or overly complex questions.
  • Over-lengthy preambles when in fact you are looking for a specific response to a specific question.
  • Culturally inappropriate or ‘red flag’ questions or formulations; for example, some settings require preambles to questions (i.e. the reverse of the above). In others being too abrupt can create barriers. In others certain sorts of questions can be red flags; for example, Berg talks about how Americans can respond negatively to the question ‘Why?’ (see Berg 2009: 118).
  • Lack of facility with group lingo or jargon, which means you miss cues, sub-texts, or non-verbally conveyed information, or misinterpret these later on.
  • Not being up to speed with key issues of concern for your respondents – revealing your ignorance of the terrain does not endear you to busy respondents.
  • Losing your orientation when interviewing experienced people (for example, politicians, corporate executives, media professionals) who can obstruct the flow, take over the interview (by asking you questions instead), or use the range of human ways to avoid answering the question, (un)consciously (see Goffman, in Berg 2009: 128).
  • Sometimes persistence is not a good thing. Showing respect for a person’s non-response may be wiser. You can try again later by gently probing but don’t insist.
  • Taking everything that is said as the primary message when in fact this could be simply what is for the record; note any differences between on-the-record and off-the-record comments when talking to officials (government or corporate representatives).

(c) Sorts of settings – which is best? I often get asked by student researchers setting out to conduct interviews about the ‘right’, correct setting for an interview. As research becomes increasingly predicated on information and communication technologies, web-based media, products and services, the convenience of VOIP services (Skype being by far the most popular at the time of writing) for interview purposes appears a given, yet creates uncertainty about their validity.

  • (i)   Interviews conducted over the phone, Skype included, are perfectly legitimate, particularly if time and resources are limited for a face-to-face appointment. If the interview is to be recorded, then this practicality needs to be prepared for in advance; otherwise it is down to the researcher to take notes as they ask and listen. Phone interviews are acceptable in their own right; sometimes people respond better in mediated rather than face-to-face settings.
  • (ii)  Emails have become a cost-effective shortcut for generating interview material. Strictly speaking this is not an interview. However, a Q&A session conducted by email for those willing or preferring this form of interaction can be productive too. Note here, though, that the material produced is already written (versus spoken) text. Some email interactions see the respondents spending a lot of time writing and editing their responses. This is a gift for the researcher, who ‘only’ needs to read and analyse, code and interpret.
  • (iii) Written responses to written interview questions do require a different level of engagement with the text, though, than with the transcript or interview notes of a spoken session where memory of the voice, setting and other factors can add nuance (and this is why taking side-notes is useful). Not worse necessarily, just different. That said, the point of an interview for many is the spoken word, spontaneity and human interactive aspect.

But rather than set up premature hierarchies of value, the setting and manner of conducting the interviews, or the mixture thereof, bears noting and considering in the set-up phase and in the eventual analysis and methodological explication. In this respect the variety of computer-mediated means of facilitating interviews (including video-conferencing or web-conferencing) is there to use. There is also research in which interviews and focus groups (see the section on focus groups) are conducted in purely online settings: in virtual worlds (for example, Second Life) and between avatars. Again, this depends on the inquiry.

Wherever the interview takes place, these architectural aspects do play a part in the flow, pacing, and outcome of the interview; the significance of which is up to the researcher to consider.

(d) Dealing with tensions and disappointments: A brief word on tensions and disappointments, some of which have been mentioned already in passing. Some of these can be avoided, even by the novice research interviewer, by preparing the ground beforehand and by expecting interviews to differ. Preparation is your best means of avoiding major disappointments in projects with short time-spans.

Issues can arise beforehand, during, and afterwards.
