Performance appraisal

Cynthia Mathieu, in Dark Personalities in the Workplace, 2021

Social desirability bias

Social desirability bias in selection or performance assessment is the tendency to rate employees according to socially (or, in this case, organizationally) desired achievements or traits instead of using objective performance criteria. For example, if a manager is extremely goal-oriented, that manager's superior may assign higher scores on interpersonal leadership behaviors than are warranted, influenced by the fact that the manager is able to bring in contracts and create profit, thereby achieving desired task-related goals. Bellizzi & Bristol (2005) report that sales managers are more lenient in disciplining sales representatives’ ethical infractions when representatives achieve top sales. Further, the authors found that this leniency toward top sellers persists even in the presence of a pattern of previous unethical behaviors and an explicit organizational policy proscribing these unethical acts. This phenomenon is based not only on social desirability but also on profit desirability.

“The devil is more devilish when respected,” a line attributed to Elizabeth Barrett Browning, captures the notion of social desirability bias. Indeed, supervisors conducting employee performance appraisals are as strongly influenced by socially desirable traits and cues as the professionals in charge of employee selection. We have discussed external cues and personality traits socially associated with success and how they influence decisions and behaviors toward individuals who display them. Within an organization, I would say that social desirability becomes “organizational desirability,” which refers to behaviors and results that are in line with an organization’s values. In an organization driven by profit, performance appraisal systems will be biased toward individuals who bring in a lot of business or are goal-oriented.
Indeed, in profit-oriented organizations, negative behaviors that would normally decrease performance appraisal results will be disregarded if the employee or manager contributes to the organization’s profitability. This may explain, in part, how individuals who use unethical, abusive, or disrespectful interpersonal behaviors may receive positive performance evaluation reviews.

URL: https://www.sciencedirect.com/science/article/pii/B9780128158272000041

HRM as a strategic business partner

S. Ananthram, in Asia Pacific Human Resource Management and Organisational Effectiveness, 2016

Social desirability bias

Qualitative studies using interview techniques are subject to social desirability bias. In addition, social desirability bias becomes more prevalent in collectivist societies (Robertson and Fadil, 2009) like India. Six strategies were adopted to reduce social desirability bias. Firstly, MNEs and interviewees volunteered to participate in the interviews, and their names were kept anonymous so as not to place undue pressure on them to respond in a socially acceptable way. Secondly, interviewees were only provided a brief overview of the study at the outset. Steenkamp et al. (2010: 1–2) noted that this strategy helped ‘avoid priming respondents to answer in particular socially acceptable ways but creates scope to explore their values and priorities unfettered, before homing in on the main topic of interest’. A third strategy involved employing a ‘committee of experts’ (Kvale, 2007) that provided advice on the relevance and sensitivity of the interview questions that were incorporated into the final interview schedule. Fourthly, the interviews were conducted by an experienced interviewer using Kvale’s (1996) strategies for a successful interviewer. In addition, it was ensured that there was no power relationship between the interviewer and the interviewees (Nederhof, 1985). A fifth strategy was in line with Brunk (2010: 257), who suggested that one-on-one interviews should be conducted, wherever possible, in ‘the familiar and comfortable surroundings of their home’; thus, this study conducted one-on-one interviews with the TMT members at a time and place convenient to them. Finally, interviewees were briefed that there were no right or wrong answers and were encouraged to use anecdotes and experiential evidence to support their views. Combined, these strategies provided confidence that social desirability bias could be reduced.

URL: https://www.sciencedirect.com/science/article/pii/B9780081006436000051

Measures of Concerns with Public Image and Social Evaluation

Mark R. Leary, ... Kate J. Diebels, in Measures of Personality and Social Psychological Constructs, 2015

Results and Comments

Research interest in individual differences in approval motivation has emerged in two distinct traditions. Perhaps the best known involves the social desirability response bias – the tendency for people to answer self-report questions in ways that portray them in a positive light (Holden & Passey, 2009). Concerns regarding the effects of socially desirable responding on the validity of personality measurement led to the development of several measures of social desirability, some of which were developed as ‘lie scales’ for specific personality measures and some of which were designed as free-standing measures (the best known of which is the Marlowe–Crowne Social Desirability Scale). A second line of research has focused on the implications of individual differences in approval motivation for cognitive, emotional, and interpersonal phenomena outside the domain of response biases in personality measurement. Some of that work has used various measures of social desirability, but other research has relied on other measures of approval motivation such as the MLAMS.

The fact that correlations between the MLAMS and measures of social desirability bias (including the Marlowe–Crowne scale) are negative and that the MLAMS and MCSDS correlate quite differently with a variety of other measures confirms an essential distinction between these two scales. Martin (1984) asserted that the Marlowe–Crowne measure assesses ego-defensiveness (high scorers possess an idealized view of themselves that must be maintained and defended), whereas the MLAMS directly assesses the desire for social approval – to please others, receive positive evaluations and approval, and avoid negative evaluations and rejection. Given that the MLAMS’s patterns of correlations diverge from the MCSDS’s, researchers should consider conceptually which construct they wish to measure.

Martin-Larsen Approval Motivation Scale

1 = Disagree Strongly

2 = Disagree

3 = No Opinion

4 = Agree

5 = Agree Strongly

1. Depending upon the people involved, I react to the same situation in different ways.

2. I would rather be myself than be well thought of. (R)*

3. Many times I feel like just flipping a coin in order to decide what I should do.

4. I change my opinion (or the way that I do things) in order to please someone else.*

5. In order to get along and be liked, I tend to be what people expect me to be.*

6. I find it difficult to talk about my ideas if they are contrary to group opinion.*

7. One should avoid doing things in public which appear to be wrong to others, even though one knows that he is right.

8. Sometimes I feel that I don’t have enough control over the direction that my life is taking.

9. It is better to be humble than assertive when dealing with people.

10. I am willing to argue only if I know that my friends will back me up.*

11. If I hear that someone expresses a poor opinion of me, I do my best the next time that I see this person to make a good impression.

12. I seldom feel the need to make excuses or apologize for my behavior. (R)*

13. It is not important to me that I behave ‘properly’ in social situations. (R)*

14. The best way to handle people is to agree with them and tell them what they want to hear.

15. It is hard for me to go on with my work if I am not encouraged to do so.

16. If there is any criticism or anyone says anything about me, I can take it. (R)*

17. It is wise to flatter important people.

18. I am careful at parties and social gatherings for fear that I will do or say things that others won’t like.*

19. I usually do not change my position when people disagree with me. (R)*

20. How many friends you have depends on how nice a person you are.

Notes:

(R) Reverse scored item.

*Items in short form.

Reproduced with permission.
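Scoring instruments of this kind follows a standard pattern: reverse-scored items on a 1–5 Likert scale are flipped as 6 minus the response before summing. The sketch below applies that convention to the MLAMS items marked (R) above; the function name and the interpretation of the total are illustrative, not taken from Martin (1984).

```python
# Illustrative MLAMS scoring sketch (not the published scoring key's code).
# Items marked (R) in the scale above are reverse-scored: 6 - rating.

REVERSE_ITEMS = {2, 12, 13, 16, 19}  # the (R) items listed above

def score_mlams(responses):
    """responses: dict mapping item number (1-20) to a 1-5 rating.
    Returns the total score; higher totals indicate stronger
    approval motivation."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"item {item}: rating must be 1-5")
        total += (6 - rating) if item in REVERSE_ITEMS else rating
    return total

# A respondent answering 3 ("No Opinion") to all 20 items scores 60,
# since reverse-scoring leaves the midpoint unchanged (6 - 3 = 3).
neutral = {i: 3 for i in range(1, 21)}
print(score_mlams(neutral))  # 60
```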

URL: https://www.sciencedirect.com/science/article/pii/B9780123869159000164

Emotions of odors and personal and home care products

C. Porcherot, ... I. Cayeux, in Emotion Measurement (Second Edition), 2021

21.9 Implicit processes of feelings

Explicit measurements have largely demonstrated high validity for evaluating consumers’ affective response to odors and fragrances. As previously mentioned, these methods present some drawbacks, including the social desirability bias and the inability to access cognitive processes occurring outside of conscious awareness (Gawronski & De Houwer, 2014; Greenwald & Banaji, 1995; Nosek, Hawkins, & Frazier, 2011). It is therefore questionable whether the feelings evoked by smells and expressed by consumers through these questionnaires are the result of high-level introspective processes, which would necessarily require an explicit awareness of this feeling. Recent data seem to indicate that at least some of these feelings are also implicitly processed. Implicit methods emerged in the field of social psychology with the intent to address the issue of accessing cognitive processes occurring outside conscious awareness. Introduced in 1998 by Greenwald and colleagues, the Implicit Association Test (IAT) is by far the most popular method. It is a computer-based speeded discrimination task that measures the relative association strength between two targets (e.g., “flowers”, “insects”) and two attributes (e.g., “pleasant”, “unpleasant”). In a typical IAT, participants are required to sort stimuli belonging to these four categories as quickly and accurately as possible by pressing two response keys of a computer keyboard. According to the central assumption underlying the IAT procedure, participants are likely to respond faster and more accurately when two strongly associated concepts are mapped onto the same response key (compatible block; e.g., “flowers” and “pleasant”) than when they are not (incompatible block; e.g., “insects” and “pleasant”). Although the IAT has been largely used in the visual modality, its application in other sensory modalities is still underinvestigated.
A brilliant exception comes from the studies by Demattè, Sanabria, and Spence (2006) that demonstrated that the IAT can be successfully adapted to explore cross-modal associations between odors and colors.

More recently, we tested the ability of the IAT procedure to explore automatic associations between odors and relaxing/energizing feelings (Lemercier-Talbot et al., 2019). The sensitivity of the IAT was tested by using simple compounds (menthol and vanillin; Experiment 2) and fine fragrances (Perfume 1 and Perfume 2; Experiment 3), which were selected a priori for their ability to evoke energizing (menthol; Perfume 1) and relaxing feelings (vanillin; Perfume 2) at an explicit level. Following the typical structure, our olfactory IAT was composed of five successive blocks (Fig. 21.9). In Block 1, subjects were asked to sort two odors (vanillin and menthol in Experiment 2; Perfume 1 and Perfume 2 in Experiment 3) into two categories neutrally labeled “odor 1” and “odor 2”. In Block 2, subjects were asked to sort eight words that originated from the Swiss version of EOS (Chrea et al., 2009; Ferdenzi, Delplanque et al., 2013) into two emotional categories, i.e., “relaxing” and “energizing”. In Block 3 – the first critical phase of the IAT – the former tasks were superimposed: Subjects were required to categorize odors and emotions by using the sorting rules learned in Block 1 and Block 2, respectively. Block 4 was identical to Block 1, except that the position of the response keys was reversed. Finally, Block 5 represented the second critical combined task in which subjects were required to categorize odors and emotional words, this time by using the sorting rules learned in the second and in the reversed fourth block. In both experiments, we observed an IAT effect, revealing that participants responded faster in compatible trials than they did in incompatible trials. Specifically, participants responded faster when the menthol odor (Experiment 2) and Perfume 1 (Experiment 3) were paired with energizing emotions and when the vanillin odor (Experiment 2) and Perfume 2 (Experiment 3) were paired with relaxing emotions compared with the reverse mapping (Fig. 21.10).

Fig. 21.9. Design of the IAT for Experiments 2 and 3.

Fig. 21.10. Averaged reaction times and confidence interval of the mean at 95% for Experiments 2 and 3 depending on block and stimulus type.

These results not only support the existence of automatic associations between odors and feelings, but also highlight the ability of the IAT procedure to capture them. The sensitivity of the IAT has been successfully verified by using both simple, familiar compounds (menthol, vanillin) and complex, less familiar fragrances, selected on the basis of previous explicit tests. In addition, these results confirmed the feelings reported with questionnaires by revealing similarities in the results obtained from both implicit and explicit methods, the difference being that it is much quicker to collect data on feelings evoked by smells through questionnaires.
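The IAT effect described above (faster responses in compatible than in incompatible blocks) can be illustrated numerically. The sketch below uses invented reaction times and a simplified effect score (raw difference of means, plus a D-like score dividing by the pooled standard deviation); it is not the authors' analysis code.

```python
# Illustrative IAT-effect computation with made-up reaction times (ms).
from statistics import mean, stdev

def iat_effect(compatible_rts, incompatible_rts):
    """Return (raw effect in ms, D-like standardized score).
    Positive values mean faster responses in compatible blocks,
    i.e., the expected odor-emotion association."""
    raw = mean(incompatible_rts) - mean(compatible_rts)
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return raw, raw / pooled_sd

# Hypothetical trials: e.g., menthol + "energizing" on one key
# (compatible) vs. menthol + "relaxing" on one key (incompatible).
compatible = [612, 580, 655, 598, 640]
incompatible = [720, 695, 758, 704, 731]
raw, d = iat_effect(compatible, incompatible)
print(f"IAT effect: {raw:.0f} ms, D = {d:.2f}")
```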

URL: https://www.sciencedirect.com/science/article/pii/B9780128211243000211

Response Bias

Timothy R. Graeff, in Encyclopedia of Social Measurement, 2005

Social Desirability Bias

People naturally want others to view them favorably with respect to socially acceptable values, behaviors, beliefs, and opinions. Thus, answers to survey questions are often guided by what is perceived as being socially acceptable. For example, even if a person does not donate money to charity, they might report that they have donated. Donating money to charity is the socially acceptable behavior. Social desirability bias can affect responses to questions about whether or not people spank their children, whether or not they recently purchased any fur coats, or even whether or not they voted in recent elections. Research on topics about which there are socially acceptable behaviors, views, and opinions is very susceptible to social desirability bias.

Social desirability bias is by far the most studied form of response bias. Social desirability bias can result from (1) the nature of the data collection or experimental procedures or settings, (2) the degree to which a respondent seeks to present themselves in a favorable light, (3) the degree to which the topic of the survey and the survey questions refer to socially value-laden topics, (4) the degree to which respondents' answers will be viewed publicly versus privately (anonymously), (5) respondents' expectations regarding the use of the research and their individual answers, and (6) the extent to which respondents can guess what types of responses will please the interviewer or sponsor of the research.

Social desirability bias is often viewed as consisting of two factors, self-deception and impression management. Self-deception refers to the natural tendency to view oneself favorably. Self-deception has been linked to other personality factors such as anxiety, achievement motivation, and self-esteem. Impression management refers to the situationally dependent desire to present oneself in a positive light. This can manifest itself in the form of false reports and deliberately biased answers to survey questions.

There is no standard statistical procedure for measuring the amount of social desirability bias across varying situations, contexts, and survey topics. Nonetheless, researchers have developed scales, such as the Marlowe–Crowne 33-item Social Desirability Scale, and shorter versions of the scale, to identify and measure the presence of social desirability bias in survey results. When a social desirability scale is added to a survey, significant correlations between the social desirability scale and other survey questions indicate the presence of social desirability bias due to respondents' desire to answer in socially desirable ways. Low correlations between the social desirability scale and other survey questions suggest the lack of social desirability bias.
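The screening step described above can be sketched concretely: correlate respondents' social desirability scale scores with each substantive survey item, and flag items whose correlation is large. The code below is an illustration with invented data and an arbitrary flagging threshold, not a standardized procedure.

```python
# Sketch: flag survey items that correlate strongly with a social
# desirability scale (e.g., Marlowe-Crowne-style totals). All data
# and the 0.5 threshold are invented for illustration.
from statistics import mean

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

sd_scores = [10, 14, 18, 22, 26, 30]        # desirability scale totals
charity_item = [2, 3, 3, 4, 5, 5]           # self-reported donations
age_item = [51, 34, 62, 28, 45, 39]         # a neutral item

for name, item in [("charity", charity_item), ("age", age_item)]:
    r = pearson_r(sd_scores, item)
    flag = "possible social desirability bias" if abs(r) > 0.5 else "ok"
    print(f"{name}: r = {r:.2f} ({flag})")
```

Here the charity item tracks the desirability scale closely and gets flagged, while the neutral item does not, mirroring the interpretation of high versus low correlations described above.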

Unfortunately, such social desirability scales cannot be used as standardized measures of social desirability bias across varying situations, settings, data collection procedures, and research topics. Along with the obvious disadvantage of making the survey longer, such social desirability scales often contain questions that respondents might perceive as inappropriate and unrelated to the fundamental purpose of the survey.

Other attempts to measure and reduce social desirability bias include the use of a pseudo lie detector called the bogus pipeline. With a bogus pipeline, researchers tell respondents that the data collection procedures and/or measurement apparatus are capable of identifying when someone is lying. Thus, respondents are more motivated to respond truthfully. Of course, such procedures would be inappropriate for many social research procedures, techniques and data collection methodologies.

The amount of social desirability bias in a survey can vary by (1) mode of contact (anonymous versus face-to-face interviews or signed surveys), (2) differences in a respondent's home country and culture (respondents from lesser developed countries have been found to be more likely to respond to personality surveys in a manner consistent with existing cultural stereotypes), and (3) the amount of monetary incentive provided to the respondent (respondents receiving larger monetary incentives have been found to exert greater effort in completing a survey and were more likely to respond in a manner that was favorable toward the survey sponsor).

URL: https://www.sciencedirect.com/science/article/pii/B0123693985000372

Overview of Nutritional Epidemiology

Adriana Villaseñor, ... Ruth E. Patterson, in Nutrition in the Prevention and Treatment of Disease (Fourth Edition), 2017

3 Measurement Error Specific to Food Records/Recalls

Unlike FFQs, food records and recalls are open-ended, do not depend on long-term memory, and allow for measurement of portion sizes. In addition to the sources of error that are common across self-reported dietary assessment methods, food records are also prone to bias that results when participants change their eating habits during the assessment period. This may occur as the result of social desirability bias (e.g., respondent decreases intake of unhealthy foods to avoid having to report these items) or due to burden and fatigue (e.g., respondent begins eating fewer foods or more simply prepared dishes to make it easier to record intake). Like food records, food recalls are typically open-ended. However, recalls are usually collected without advance notification. Therefore, participants cannot change what they eat retroactively and the instrument itself should not affect food intake, although misreporting due to social desirability bias is still possible. Both recalls and records are subject to coding errors because scannable forms are not typically used.

URL: https://www.sciencedirect.com/science/article/pii/B9780128029282000072

Health Psychology

Dimitri M.L. Van Ryckeghem, Geert Crombez, in Comprehensive Clinical Psychology (Second Edition), 2022

8.05.2 Questionnaires

The most frequently used methodology to assess key constructs in health psychology is questionnaire assessment. Typically, questionnaires consist of closed-ended questions, which are answered using a Likert scale. Thereby, they aim to provide insight into a person's psychological state (e.g., level of anxiety), health behavior (e.g., frequency of exercising), or underlying mechanisms (e.g., attention vigilance). The use of questionnaires is highly popular due to their ease of administration (whether paper-and-pencil or online), which can be achieved without the presence of an assessor and at a very low cost. Online assessment has further increased this popularity. Persons can fill out questionnaires online at home, and summary scores can be immediately provided showing how well or poorly a person is doing in a certain health domain. Online assessment has furthermore been facilitated by the availability of easy-to-use open source survey apps (e.g., LimeSurvey at https://www.limesurvey.org/, Redcap at https://www.project-redcap.org/software/, Qualtrics at https://www.qualtrics.com/). These survey apps allow researchers to administer questionnaires to a large number of people without much effort. Yet, despite the many advantages, assessing health outcomes via questionnaires is not without risks. The use of questionnaires has limitations, and answers can be systematically distorted by response bias, an individual's tendency to respond inaccurately or incorrectly to a question.

The best known response bias is the social desirability bias, or the tendency to answer questions in a manner that will be viewed favorably by others. Explicitly asking individuals to answer questions may result in the over-reporting of “good” behaviors, beliefs, or attitudes, and the under-reporting of “bad” behaviors, beliefs, or attitudes. To overcome the presence of a social desirability bias, psychologists have developed implicit measures that do not require reflection and introspection and reduce the ability of participants to control their answers. These measures are assumed to be less sensitive to social desirability and positive self-presentation and, hence, to have better validity. Examples are the implicit association test (IAT), affective priming tasks, the go/no-go association test, and the implicit relational assessment procedure (see Gawronski and De Houwer, 2014). This is an active area of research, also in health psychology (Sheeran et al., 2016). However, as yet, in most situations self-report measures outperform implicit measures in terms of reliability and validity (Meissner et al., 2019). Another strategy is to overcome the conditions that facilitate social desirability and positive self-presentation. Indeed, social desirability bias occurs when individuals seek to avoid judgment by others and feelings of shame. Therefore, an empathic and non-judgmental approach, which also assures confidentiality and trust, may be key to reducing social desirability bias. Likely, this is easier to achieve in an applied setting with health-care providers than in a national health and/or illness survey performed by a private company. All in all, social desirability bias reminds us that self-report is always an act of communication in a relational context. This is no different for self-report questionnaires.

A second type of response bias relates to one's tendency to agree with questionnaire items, without considering the specific content of the question (Messick, 1967). This acquiescence bias may be more present in some cultural subgroups (Rammstedt et al., 2017). It is most prominent when respondents are asked to confirm a statement or if the question is answered with opposite answer options (e.g., “agree/disagree”; Kuru and Pasek, 2016). Several techniques have been proposed to overcome acquiescence biases, such as the use of balanced scales (Cloud and Vaughan, 1970), item-specific questions (Höhne et al., 2018), and statistical correctives (Kuru and Pasek, 2016). Although these techniques may reduce the bias, they may also complicate assessment (Kuru and Pasek, 2016). Furthermore, they can reduce user-friendliness, or bring along new problems (e.g., decreased content validity when using reverse-scored items).

Third, most often questionnaire items ask people to reflect over a long time window (e.g., over the last 2 weeks, how much pain did you experience; over the past 3 months, how much has pain interfered with your life activities?) or do not specify a time window at all, allowing memory processes to play a role. This is particularly true when people report on experiences that fluctuate highly over time and contexts (e.g., emotions, bodily symptoms). Recall bias has been well-investigated in the context of pain, whereby it has been suggested that recall of pain is disproportionately affected by the most recent and the highest pain levels within the recall period (i.e., peak-end effect; Kahneman et al., 1993). In addition, research suggests that people who recall pain tend to overestimate their symptom severity (Broderick et al., 2008) and indicate that the association between retrospective data and daily recall is only modest (Stone et al., 2005). Recall bias is, however, not unique to pain. Topp et al. (2019) investigated recall bias in the measurement of health-related quality of life and found that recall bias (report of past 4 weeks) was considerable on the individual patient level and could impact upon decision-making in clinical practice. Generally, the length of the recall period is inversely related to the accuracy of recall (Broderick et al., 2010; Stull et al., 2009). Yet, shorter recall periods can lead to the under-reporting of symptoms in some conditions (Norquist et al., 2011). As such, it has been suggested that the recall period of questionnaire items should be well-considered and take into account respondent burden and their ability to easily and accurately recall the information, the attributes of the construct of interest (e.g., variability over time), and the needs of the administering clinicians/researchers (Batterham et al., 2017a,b).
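The peak-end effect mentioned above can be made concrete with a toy computation: a retrospective rating that tracks the average of the most intense and the final moments will overshoot the true average when pain spikes late in the recall window. The numbers below are invented daily pain scores (0–10 scale), not data from the cited studies.

```python
# Toy illustration of the peak-end effect (Kahneman et al., 1993):
# recalled pain approximates the mean of the peak and the final
# moment rather than the true average. Data are invented.
from statistics import mean

daily_pain = [2, 3, 2, 8, 4, 3, 6]  # peak = 8, final day = 6

true_mean = mean(daily_pain)
peak_end_estimate = (max(daily_pain) + daily_pain[-1]) / 2

print(f"actual mean pain:    {true_mean:.1f}")
print(f"peak-end 'recalled': {peak_end_estimate:.1f}")
# The peak-end estimate exceeds the true mean here, mirroring the
# overestimation of recalled symptom severity described above.
```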

Finally, questionnaire items are frequently interpretable in multiple ways and unclear to respondents. This increases the risk of obtaining different interpretations of the same question by different people. The latter issue demands careful attention while developing questionnaire items to assess health constructs (see also the Content Validity section). The presence of an administrator while completing the questionnaire might help, as it allows unclear items to be explained. Yet, explaining the meaning of every question is cumbersome and is only possible for a limited number of items. The presence of an examiner may itself also impact a person's answers on a questionnaire, as a responder may not feel comfortable selecting extreme or unconventional choices.

URL: https://www.sciencedirect.com/science/article/pii/B978012818697800193X

Parenting Styles and their Effects

M.H. Bornstein, D. Zlotnik, in Encyclopedia of Infant and Early Childhood Development, 2008

Measurement of Parenting Styles

Parenting styles have been measured using direct observational techniques, but more commonly data are obtained through interviews and questionnaires of parents and children. However, the accuracy of reports is debatable because parent and child responses tend to differ. Some researchers suggest that reports from children may be more accurate because children are less influenced by social desirability biases (the desire on the part of the person answering to ‘look good’ to the interviewer). Moreover, parenting style in the eyes of children may have more significance. Additionally, SES may affect the accuracy of parenting style reports because some argue that parents of different SES tend to be more or less prone to social desirability biases, which could undermine the validity of these assessments.

The Parental Authority Questionnaire (PAQ) developed by John Buri is a commonly used instrument to categorize parenting styles. The PAQ is a 30-item assessment consisting of three 10-item scales that correspond with the authoritative, authoritarian, and permissive parenting styles. The PAQ assesses children’s perceptions of a parent’s parenting style and is completed by children about each parent independently. On the questionnaire, the children indicate how well each statement describes a parent based on a 5-point scale, 1 indicating “I strongly disagree that this statement relates to my mother or father” and 5 indicating “I strongly agree that this statement applies to my mother or father.” An example of an item that corresponds with authoritative parenting is: “As I was growing up, once family policy had been established my mother discussed the reasoning behind the policy with the children in the family.” An example of a statement corresponding with authoritarian parenting is: “Whenever my mother told me to do something as I was growing up, she expected me to do it immediately without asking questions.” An example of a statement corresponding with permissive parenting is: “My mother has always felt that what children need is to be free to make up their own minds and to do what they want to do, even if this does not agree with what their parents might want.”
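Given the structure described above (three 10-item subscales on a 1–5 scale), scoring reduces to summing each subscale, yielding a 10–50 score per style. The sketch below assumes an illustrative item-to-scale mapping; it is not Buri's published scoring key.

```python
# Hypothetical PAQ scoring sketch. The assignment of item numbers to
# subscales below is invented for illustration, not Buri's actual key.

PAQ_SCALES = {
    "permissive":    range(1, 11),    # items 1-10 (assumed)
    "authoritarian": range(11, 21),   # items 11-20 (assumed)
    "authoritative": range(21, 31),   # items 21-30 (assumed)
}

def score_paq(responses):
    """responses: dict mapping item number (1-30) to a 1-5 rating.
    Returns one 10-50 score per parenting style."""
    return {style: sum(responses[i] for i in items)
            for style, items in PAQ_SCALES.items()}

# A child who rates every statement 4 ("agree") gives each parenting
# style a subscale score of 40.
ratings = {i: 4 for i in range(1, 31)}
print(score_paq(ratings))
```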

The PAQ has proved to be a reliable and valid measure of Baumrind’s parenting style typologies. In a test–retest reliability study in which participants completed the PAQ twice over a 2-week period, high reliabilities were found for mother’s authoritativeness, authoritarianism, and permissiveness, and for father’s authoritativeness, authoritarianism, and permissiveness. Cronbach coefficient alphas were used to calculate internal consistency reliability for the measure, and high values were obtained for each of the PAQ scales. A third study that measured the discriminant validity of the PAQ indicated that mother’s authoritarianism was inversely related to mother’s permissiveness and authoritativeness, and father’s authoritarianism was inversely related to father’s permissiveness and authoritativeness. Also, mother’s permissiveness was not related to mother’s authoritativeness. A fourth study assessed criterion validity to examine whether parental nurturance is correlated with authoritative, authoritarian, and permissive parenting styles. Authoritative parents (mother and father) were found to be highest in parental nurturance; authoritarian parenting was inversely related to nurturance for both mothers and fathers; and parental permissiveness was unrelated to nurturance for both mothers and fathers. A final study examined whether the PAQ is influenced by social desirability biases by looking at correlations with the Marlowe–Crowne Social Desirability Scale. For example, it would be problematic if people agreed with more authoritative and with fewer authoritarian statements because they wished to appear more socially desirable. PAQ scores did not correlate with the Marlowe–Crowne Social Desirability Scale; therefore, the PAQ does not appear to be vulnerable to social desirability response biases. These studies show that the PAQ is a reliable and valid measure for categorizing parenting styles according to Baumrind’s typologies.
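The Cronbach coefficient alpha mentioned above has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), for k items. A minimal sketch with invented response data (not the PAQ validation data):

```python
# Sketch of an internal-consistency check via Cronbach's alpha,
# using stdlib tools and invented responses from five respondents.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, aligned across
    respondents. Returns Cronbach's alpha."""
    k = len(item_scores)
    item_var = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent sums
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Three items that respondents answer very consistently, so alpha
# comes out high (close to 1), as reported for the PAQ scales.
items = [
    [5, 4, 2, 3, 1],
    [5, 5, 2, 3, 1],
    [4, 4, 1, 3, 2],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```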

URL: https://www.sciencedirect.com/science/article/pii/B9780123708779001183

Self-Reported Metrics

Tom Tullis, Bill Albert, in Measuring the User Experience (Second Edition), 2013

6.2.5 Biases in Collecting Self-Reported Data

Some studies have shown that people who are asked directly for self-reported data, either in person or over the phone, provide more positive feedback than when asked through an anonymous web survey (e.g., Dillman et al., 2008). This is called the social desirability bias (Nancarrow & Brace, 2000), in which respondents tend to give answers they believe will make them look better in the eyes of others. For example, people who are called on the phone and asked to evaluate their satisfaction with a product typically report higher satisfaction than if they reported their satisfaction levels in a more anonymous way. Telephone respondents or participants in a usability lab essentially want to tell us what they think we want to hear, and that is usually positive feedback about our product.

Therefore, we suggest collecting post-test data in such a way that the moderator or facilitator does not see the user’s responses until after the participant has left. This might mean either turning away or leaving the room when the user fills out the automated or paper survey. Making the survey itself anonymous may also elicit more honest reactions. Some UX researchers have suggested asking participants in a usability test to complete a post-test survey after they get back to their office or home. This can be done by giving them a paper survey and a postage-paid envelope to mail it back or by e-mailing a pointer to an online survey. The main drawback of this approach is that you will typically have some drop-off in terms of who completes the survey. Another drawback is that it increases the amount of time between users’ interaction with the product and their evaluation via the survey, which could have unpredictable results.

URL: https://www.sciencedirect.com/science/article/pii/B9780124157811000066

Self-Report Instruments and Methods

Timo Lajunen, Türker Özkan, in Handbook of Traffic Psychology, 2011

4.2.3 How to Cope with Socially Desirable Responding in Self-Reports of Driving

The literature on socially desirable responding and self-reports of driving seems to be mixed. Whereas driver behavior scales seem to have significant correlations with socially desirable responding, quasi-experimental studies do not seem to indicate any serious bias in self-reports of driving. One possibility is to stop using self-reports of driving and accidents and to rely only on observed behavioral data and official accident records, as some researchers seem to suggest (Af Wåhlberg, 2010; Af Wåhlberg et al., 2009). The other possibility is to let the use of self-reports of driving go unchallenged and accept the small social desirability bias as an innate characteristic of self-reports. The first alternative would limit behavioral traffic research immensely because many fields, especially social psychology, require the use of self-reports. For example, driver attitudes, opinions, and attributions cannot be measured “objectively” but only with self-reports. Moreover, the objective measures also have serious methodological limitations, as studies using an instrumented vehicle, simulator, or laboratory tests show. Official accident records suffer from their own sources of bias. Studies conducted by Af Wåhlberg (2010) and Af Wåhlberg et al. (2009) show that the second alternative is not an option: Traffic researchers cannot continue ignoring bias in self-reports of driving and outcomes.

Self-report research methodology offers various ways of coping with socially desirable responding. First, an emphasis on anonymity and confidentiality in instructions and procedures (e.g., sealed envelopes and large-group data collection) reduces the effect of socially desirable responding. Second, scales for socially desirable responding, such as the DSDS, can be included in studies and their effect statistically controlled. Scales for controlling impression management, self-deception, careless answering style, and so forth can be easily designed and embedded in such instruments as the DBQ. It is surprising that traffic psychologists have ignored these biases while the use of control scales is common in mainstream psychological tests (e.g., the validity scales of the MMPI-2). Third, objective measures of accidents and behavior should be used whenever possible.
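One common way to "statistically control" for a desirability scale, as suggested above, is to regress the self-report measure on the desirability score and analyze the residuals, which are by construction uncorrelated with the control. The sketch below uses invented scores and variable names; it is not taken from the DSDS or DBQ studies.

```python
# Illustrative residualization sketch: remove the linear effect of a
# social desirability score from self-reported driving violations.
# All data are invented for illustration.
from statistics import mean

def residualize(y, control):
    """Remove the linear effect of `control` from `y` via simple
    least-squares regression; returns the residuals."""
    mx, my = mean(control), mean(y)
    slope = (sum((c - mx) * (v - my) for c, v in zip(control, y))
             / sum((c - mx) ** 2 for c in control))
    return [v - (my + slope * (c - mx)) for v, c in zip(y, control)]

dsds = [12, 15, 18, 21, 24]   # hypothetical desirability scores
violations = [9, 8, 6, 5, 3]  # hypothetical self-reported violations

adjusted = residualize(violations, dsds)
# Residuals sum to ~0 and have zero covariance with the control scale,
# so remaining variation is free of the linear desirability effect.
print([round(r, 2) for r in adjusted])
```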

URL: https://www.sciencedirect.com/science/article/pii/B9780123819840100049