Type of participant | Medical | Non-medical |
---|---|---|
Undergraduate | Langenfeld et al. [50], Rabow et al. [84], Brasher et al. [100], Metcalfe and Matharu [106], Blue et al. [117], Iqbal and Khizar [119], Solomon et al. [127], Johnson and Chen [129], Windish et al. [130], Ramsey et al. [135], Tochel et al. [144], Fallon et al. [152], Stritter et al. [153], Shellenberger and Mahan [154], Cohen et al. [155], Dolmans et al. [156], Donnelly and Wooliscroft [157], Irby and Rakeshaw [158], Parikh et al. [159], Wilson [160], De et al. [161], Duffield and Spencer [162], Tiberius et al. [163], Gil et al. [164], Pfeifer and Peterson [165] | Al Issa and Sulieman [9], Bernardin [19], Crittenden and Norr [20], Adams and Umbach [21], Wolbring [22], Remedios and Lieberman [24], Chen and Hoshower [25], Worthington [26], Kember and Wong [27], Marsh [28], Marsh [29], Marsh and Roche [30], Rowden and Carlson [31], Goos et al. [32], Davies et al. [33], Blackhart et al. [34], Dwinell and Higbee [35], Burdsal and Bardo [36], Theall and Franklin [38], Feldman [39], Sojka et al. [40], Berk [41], Greenwald and Gillmore [42], Gigliotti and Buchtel [43], Doyle and Crichton [44], Aleamoni [46], Kember and Leung [52], Roch and McNall [67], Atwater et al. [73], Redman and McElwee [74], Chan and Ip [76], Henderson et al. [77], Brugnolli et al. [78], Midgley [80], Per Palmgren [82], Olson et al. [87], Braine and Parnell [88], Perli and Brugnolli [89], Heffernan et al. [90], Kelly [91], El Ansari and Oskrochi [102], Berber [110], Robbins and DeNisi [134], Govaerts et al. [146], Surratt and Desselle [149], Cardy and Dobbins [166], Henzi et al. [167], Parker and Carlisle [168], Cooke et al. [169], Myall et al. [170] |
Postgraduate | Archer et al. [10], Barrow and Baker [12], Coats and Burd [13], Arah et al. [47], Schneider et al. [51], Scott et al. [53], Ahearn et al. [55], Grava-Gubins and Scott [63], Owen [64], Fiander [65], Risucci et al. [66], Kolarik et al. [85], Smith et al. [86], Ranse and Grealish [92], O’Connor et al. [95], Luks et al. [96], Turnball et al. [97], Biller et al. [98], Carpenter et al. [99], Busari et al. [101], Basu et al. [103], Whang et al. [104], Devlin et al. [105], Barrett et al. [107], Steiner et al. [108], Getz and Evens [109], Girard et al. [111], Antiel et al. [112], Lin et al. [113], Ratanawongsa et al. [114], Thangaratinam et al. [115], Kanashiro et al. [116], Watling et al. [118], Watling et al. [120], Pearce et al. [122], Conigliaro et al. [123], Dech et al. [124], Yarris et al. [125], Sargeant et al. [126], Sender Lieberman et al. [128], Tortolani et al. [131], O’Brien et al. [132], Claridge et al. [133], Hayward et al. [136], Sargeant et al. [137], Paice et al. [141], Ryland et al. [142], Rose et al. [145], Bing-You et al. [151], Kjaer et al. [171], Hrisos et al. [172], Beckman et al. [173], Mattern et al. [174], Kendrick et al. [175], Keitz et al. [176], Moalem et al. [177], Sargeant et al. [178], Schuh et al. [179], Vasudev et al. [180], Ellrodt [181], Harrison and Allen [182], Dola et al. [183], Cohn et al. [184], Fisher et al. [185], Pankhania et al. [186], Welch et al. [187], Greysen et al. [188], Mailloux [189], Buschbacher and Braddom [190], Cooke and Hutchinson [191], Holland et al. [192], Sabey and Harris [193], Nettleton et al. [194], Chamberlain and Nisker [195], Verhulst and Distlehorst [196], Guyatt et al. [197], Barclay et al. [198] | McCarthy and Garavan [1], Hall et al. [14], Caskie et al. [15], Smith and Fortunato [16], Kudisch et al. [17], Mullen and Tallant-Runnels [37], Tews and Tracey [48], Tews and Tracey [49], Smither et al. [54], Antonioni and Park [56], Tsui and Barry [57], Ryan et al. [59], Antonioni [61], Goodwin and Yeo [62], Antonioni [68], Bettenhausen and Fedor [69], Westerman and Rosse [70], Mathews and Redman [71], Reid and Levy [72], Redman and Snape [75], Cohan [81], Raikkonen et al. [83], Beecroft et al. [93], Sit et al. [94], Brett and Atwater [138], Barclay et al. [139], Tourish and Robson [148], Dipboye and de Pontbriand [199], Copp et al. [200], Bratt and Feizer [201], Smither and Walker [202], Becker et al. [203] |
Both undergraduate and postgraduate | Gross et al. [23], Schum et al. [45], Albanese [58], Eva et al. [60], Cannon et al. [121], Irby [140], Watling and Lingard [143], Williams et al. [147], Mcleod et al. [204], Bennett et al. [205] | Ilgen et al. [150], Henzi et al. [206], Henzi et al. [207] |
Proforma categories | Further information |
---|---|
1. Number | Each article was allocated a number to allow easy identification. |
2. Study method | What type of study was it? |
3. Profession | What profession were the participants? |
4. Type of participant | Undergraduate or postgraduate or both? |
5. Geographical location | Which continent was the article from? |
6. Purpose of study | Was the study for summative (for promotional/reward purposes) or formative (for improvement/development) purposes? |
7. Feedback subject | Feedback on training, trainer or learning environment? |
8. Quality of feedback | Quantitative or qualitative? |
9a. Were controls used? | Controls may be used to compare the efficacy of different interventions. |
9b. Type of interventions | |
10a. Type of evaluation | What type of feedback method was used (e.g., paper survey, focus groups)? |
10b. Quality of questions | What types of questions were used (e.g., closed, open, or a mixture)? |
11. Duration of study | Measured in months |
12. Number of participants | Total number of participants giving upward feedback |
13. Response rates | Measured in percentages |
14. Types of bias | Split into implied and overt: overt bias would be explicitly mentioned by the authors within the study; implied bias would be identified by the reviewer as potential bias but was not mentioned within the study. |
15. Action plans | Did the authors address the outcomes/consequences of the article? Was an action plan devised to address this? |
16. Kirkpatrick levels | Which level was reached? [18] (1) Reaction: what do the raters think about their trainer/training/environment? (2) Learning: was the ratee able to learn from this feedback? This can be identified through mechanisms such as feedback reports or receiving results. (3) Behavior: did the ratee change their behavior due to this feedback? This can be reflected in repeat ratings. (4) Results: was there any improvement in teaching after receiving the feedback, and did others benefit from this improvement? For example, did exam rates improve? Did this change improve company profits? |
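The proforma above amounts to a fixed data-extraction record applied to each reviewed article. As a purely illustrative sketch (not the authors' actual tooling), the 16 categories could be represented as a small data structure; all class and field names below are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class KirkpatrickLevel(Enum):
    """Kirkpatrick levels as described in proforma category 16 [18]."""
    REACTION = 1   # What do raters think about their trainer/training/environment?
    LEARNING = 2   # Was the ratee able to learn from the feedback?
    BEHAVIOR = 3   # Did the ratee change their behavior?
    RESULTS = 4    # Did teaching or downstream outcomes improve?


@dataclass
class ProformaRecord:
    """One reviewed article, coded against the 16 proforma categories."""
    number: int                         # 1. Unique identifier for the article
    study_method: str                   # 2. Type of study
    profession: str                     # 3. Profession of participants
    participant_type: str               # 4. Undergraduate, postgraduate, or both
    continent: str                      # 5. Geographical location
    purpose: str                        # 6. Summative or formative
    feedback_subject: str               # 7. Training, trainer, or learning environment
    feedback_quality: str               # 8. Quantitative or qualitative
    controls_used: bool                 # 9a. Were controls used?
    interventions: Optional[str]        # 9b. Type of interventions, if any
    evaluation_type: str                # 10a. e.g., paper survey, focus groups
    question_types: str                 # 10b. e.g., closed, open, or a mixture
    duration_months: Optional[int]      # 11. Duration of study in months
    n_participants: Optional[int]       # 12. Number of participants giving upward feedback
    response_rate_pct: Optional[float]  # 13. Response rate (%)
    overt_biases: List[str] = field(default_factory=list)    # 14. Biases named by the authors
    implied_biases: List[str] = field(default_factory=list)  # 14. Biases inferred by the reviewer
    action_plan: Optional[str] = None                         # 15. Action plan, if described
    kirkpatrick: Optional[KirkpatrickLevel] = None             # 16. Highest level reached


# Example record with hypothetical values, for illustration only:
example = ProformaRecord(
    number=1, study_method="survey", profession="medicine",
    participant_type="postgraduate", continent="Europe", purpose="formative",
    feedback_subject="trainer", feedback_quality="quantitative",
    controls_used=False, interventions=None, evaluation_type="paper survey",
    question_types="mixture", duration_months=12, n_participants=120,
    response_rate_pct=72.0, kirkpatrick=KirkpatrickLevel.REACTION,
)
```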
Type of bias | Further information |
---|---|
1. Affect/leader-member relationship | The relationship between ratee and rater [57,134]. Liking someone may lead to inaccurate ratings. |
2. Motivation | Low response rates may not be representative of the sampled population, potentially due to a lack of motivation. Prior interests, including prior subject interest [4,30], could also affect participation and enthusiasm (for example, did students volunteer themselves to enter the study?). A response rate of 60% or more is perceived as acceptable [208]. Articles that explicitly mention rater motivation, enthusiasm, or prior subject interest were also included. |
3. Fear and retaliation, career progression | The fear that honest ratings could lead to retaliation and affect career progression could potentially affect upward feedback outcomes [12]. |
4. Self-efficacy, lack of understanding/knowledge of upward feedback, role appropriateness | Do raters feel suitable, appropriate, and confident enough to rate their superiors [11,17]? |
5. Cynicism and trust, perceived usefulness | Raters may not feel their voice will be heard and may be skeptical that changes will be made according to their feedback [16]. |
6. Ingratiation, yea-saying, leniency, reward anticipation/incentives | Raters may rate leniently as a means of showing ingratiation or to receive a reward in return [11]. |
7. Method of feedback | This includes how the survey was implemented (e.g., paper or online), the location of survey implementation [115], whether any reminders were used and the method of reminders [55], and whether the survey was conducted over a period of time or only on a single day/session [115]. |
8. Voluntary/compulsory | Whether all members had to participate or could choose not to participate. |
9. Frequency/timing, opportunity to observe | The timing of the survey: was it done straight after the rotation, many months after the rotation, or in the middle of the rotation [201]? |
10. Cultural/gender | Cultural differences may affect survey accuracy [78,119]. Gender could also affect survey results, e.g., in nursing, where the survey population is predominantly female [83]. |
11. Halo effect | Raters tend to give similar ratings to all aspects of a survey [11,57] and are unable to differentiate between different traits. |
12. End aversion/extreme response | End aversion: the avoidance of extreme ratings [11]. Extreme response: always rating very high/very low scores [11]. |
13. Survey fatigue | If there were multiple surveys to complete in the study, or if the survey was very long, this could affect survey accuracy. |
14. Survey purpose | Was the survey for administrative or developmental purposes [11,41]? Why was the survey done? |
15. Others | Other potential biases not mentioned above, e.g., recall bias [201]. |
Type of feedback bias | Implied | Overt |
---|---|---|
Affect, leader-member relationship | 76 | 39 |
Motivation | 42 | 14 |
Fear and retaliation | 31 | 32 |
Self-efficacy, lack of understanding/knowledge of upward feedback, role appropriateness | 56 | 28 |
Cynicism and trust, perceived usefulness | 67 | 32 |
Accountability and confidentiality | 54 | 117 |
Ingratiation, yea-saying, leniency, reward anticipation/incentives | 30 | 52 |
Method of feedback | 104 | 39 |
Voluntary/compulsory | 35 | 102 |
Frequency/timing, opportunity to observe | 37 | 31 |
Cultural or gender bias | 68 | 23 |
Halo effect | 8 | 10 |
End aversion/extreme response | 14 | 5 |
Survey fatigue | 50 | 8 |
Survey purpose | 66 | 37 |
Others | 13 | 11 |
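For orientation, the implied and overt counts above can be tallied to see which bias types were encountered most often overall. A minimal sketch, assuming the figures are simply the counts transcribed from the table:

```python
# Implied and overt mention counts transcribed from the table above.
bias_counts = {
    "Affect, leader-member relationship": (76, 39),
    "Motivation": (42, 14),
    "Fear and retaliation": (31, 32),
    "Self-efficacy, understanding of upward feedback, role appropriateness": (56, 28),
    "Cynicism and trust, perceived usefulness": (67, 32),
    "Accountability and confidentiality": (54, 117),
    "Ingratiation, yea-saying, leniency, reward anticipation/incentives": (30, 52),
    "Method of feedback": (104, 39),
    "Voluntary/compulsory": (35, 102),
    "Frequency/timing, opportunity to observe": (37, 31),
    "Cultural or gender bias": (68, 23),
    "Halo effect": (8, 10),
    "End aversion/extreme response": (14, 5),
    "Survey fatigue": (50, 8),
    "Survey purpose": (66, 37),
    "Others": (13, 11),
}

# Rank bias types by total (implied + overt) mentions, most frequent first.
for name, (implied, overt) in sorted(
    bias_counts.items(), key=lambda kv: sum(kv[1]), reverse=True
):
    print(f"{name}: implied={implied}, overt={overt}, total={implied + overt}")
```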