Institution: | Slovak University of Technology |
Technologies used: | C#.NET |
Inputs: | User (programmer) characteristics, User activity log |
Outputs: | Pairs of users for code review or help |
Addressed problem
Code review is an important part of quality software development. A chunk of new code must typically pass review before it is accepted into the project’s trunk in the source control system. Picking a suitable code reviewer who can accomplish this task in a competent and timely manner is a non-trivial problem. Similarly, when a programmer hits an impasse and needs help, a suitable helper must be found.
Description
In our work we aim to create a novel method for selecting a suitable peer for code review that takes the user’s activity (e.g., correct/failed tests), the user’s characteristics (e.g., personality) and reviewing abilities into account, observes previous reviewer assignments, and produces better assignments over time. The proposed approach can also be used when help with code development is needed. In the method, we model the probability of a peer’s success using a Rasch model.
Assuming the users’ parameters – their reviewing abilities – for a particular task are known, the method selects the helper expected to deliver effective help with the highest probability of success. The method works in two phases: a calibration phase and a performing phase.
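As an illustration, the Rasch success probability and the resulting helper selection can be sketched as follows. The project itself uses C#.NET; Python is used here only for brevity, and the user names, ability values and task difficulty are hypothetical:

```python
import math

def success_probability(ability, difficulty):
    """Rasch model: probability that a helper with the given reviewing
    ability succeeds on a task with the given difficulty."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def select_helper(abilities, task_difficulty):
    """Pick the peer with the highest expected probability of success.
    `abilities` maps user names (hypothetical) to calibrated abilities."""
    return max(abilities,
               key=lambda user: success_probability(abilities[user], task_difficulty))

reviewers = {"alice": 1.2, "bob": -0.3, "carol": 0.5}  # illustrative values
print(select_helper(reviewers, task_difficulty=0.0))   # → alice
```

Because the Rasch probability is monotonic in ability, selecting the most able helper for a fixed task difficulty maximizes the expected probability of success.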
Initially, in the calibration phase, the reviewing abilities of individual users are still unknown and only their personality traits are known (obtained from a personality questionnaire administered beforehand). Helpers (reviewers) are therefore assigned randomly in order to provide performance data (which interactions are successful) for maximum likelihood estimation of the reviewing parameters. Once the reviewing-ability estimates fall below a given measurement-error threshold, the collected performance data enable us to correlate personality traits with these reviewing abilities – providing default values for the reviewing abilities of a new user given his or her personality traits.
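The maximum likelihood estimation step can be sketched as below (again a Python illustration, not the project's C#.NET code). It assumes task difficulties are known and that the user's record mixes successes and failures, since an all-success or all-failure record has no finite ML estimate; the returned standard error serves as the measurement-error criterion for ending calibration:

```python
import math

def estimate_ability(outcomes, difficulties, iters=50):
    """Maximum likelihood estimate of a user's reviewing ability under the
    Rasch model, given binary success outcomes (1/0) on tasks of known
    difficulty. Returns (ability, standard_error)."""
    theta = 0.0
    for _ in range(iters):
        probs = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        score = sum(y - p for y, p in zip(outcomes, probs))  # gradient of log-likelihood
        info = sum(p * (1.0 - p) for p in probs)             # Fisher information
        theta += score / info                                # Newton-Raphson step
    return theta, 1.0 / math.sqrt(info)
```

For example, three successes out of five equally difficult tasks (difficulty 0) yield an ability of ln(0.6/0.4) ≈ 0.405 with a standard error of about 0.91; more observations shrink the error toward the threshold.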
Reviewing abilities are calibrated separately for different tasks (or problems). In the evaluation, we are interested in how consistently the personality traits model the reviewing abilities for individual tasks, and to what extent the reviewing abilities are independent of the task or problem they were calibrated on.

In the second phase, the performing phase, once the reviewing abilities have been determined, helpers are assigned to help (review) requests according to their reviewing abilities. The transition from calibration to performing phase can be made smoother by iteratively lowering the percentage of random assignments while observing the changing reviewing characteristics of users. When an ability estimate still fluctuates, more calibration data (random assignments) can be collected.
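This gradual transition can be sketched as an assignment policy whose random fraction decays round by round. This is a hedged Python illustration; the 0.5 standard-error threshold, the decay schedule and the user names are assumed values, not taken from the method itself:

```python
import math
import random

def assign_helper(abilities, errors, task_difficulty, random_fraction, rng=None):
    """One assignment during the transition from calibration to performing phase.
    `abilities` and `errors` map users to ability estimates and their standard
    errors. With probability `random_fraction`, or whenever every estimate is
    still too uncertain, assign randomly to keep collecting calibration data;
    otherwise pick the helper with the highest Rasch success probability."""
    rng = rng or random.Random()
    still_calibrating = all(se > 0.5 for se in errors.values())  # assumed threshold
    if still_calibrating or rng.random() < random_fraction:
        return rng.choice(sorted(abilities))
    prob = lambda u: 1.0 / (1.0 + math.exp(-(abilities[u] - task_difficulty)))
    return max(abilities, key=prob)

# Iteratively lower the random fraction over successive assignment rounds.
schedule = [max(0.0, 1.0 - 0.1 * r) for r in range(12)]
```

With a random fraction of 1.0 the policy reduces to the pure calibration phase; at 0.0 it reduces to the pure performing phase, falling back to random assignment only when every ability estimate still fluctuates.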