From manual scoring to AI-based screening
How we developed an AI-based assessment model backed by decades of scientific research & I/O Psychology.
Hiring the best person for the job has never been a simple task. The hiring process often consists of a series of stages, and the final decision is made after considering how well applicants have done at each one. How candidates are rated and ranked at each stage is an important consideration when developing and evaluating a company's hiring process.
This task of assessing how an applicant performs at each stage is easier if the hiring tool currently being used comes with a pre-established way of measuring applicant performance.
Marian Pitel, PhD in I/O Science at University of Guelph
For example, a published and validated cognitive ability test will likely come with a specially tailored rubric that can be used to identify which of the candidates who complete the test perform better than others. However, if the hiring tool is new or otherwise does not have a complementary scoring guide, then it can be difficult to determine how candidates should be assigned ratings or rankings based on their performance on the tool. Nonetheless, this is an important task to take on as it has direct implications for which candidates end up being hired by companies.
We developed and tested a scoring model used to manually assign scores to candidates based on their performance on the assessment; we called these scoring aids. We focused on determining which indicators would be prioritized and used to judge the holistic success of a scoring aid. The success criteria for the manual scoring model were decided a priori.
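To make the idea of a scoring aid concrete, here is a minimal sketch of how one might be represented in code. The dimensions, behavioural anchors, and weights below are illustrative assumptions, not nugget.ai's actual rubric.

```python
from dataclasses import dataclass

@dataclass
class ScoringAid:
    """A simple behaviourally anchored scoring aid for one assessment dimension."""
    dimension: str
    anchors: dict[int, str]  # score -> behavioural description the rater matches against

# Illustrative dimension and anchors (assumptions, not the actual scoring aid)
problem_solving = ScoringAid(
    dimension="Problem solving",
    anchors={
        1: "Restates the problem without proposing a solution",
        3: "Proposes a workable solution with limited justification",
        5: "Proposes and justifies a solution, noting trade-offs and next steps",
    },
)

def overall_score(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Combine per-dimension ratings into a weighted overall score."""
    return sum(ratings[d] * weights[d] for d in ratings)

# Example: a rater assigns 4/5 on problem solving and 3/5 on communication
print(overall_score({"Problem solving": 4, "Communication": 3},
                    {"Problem solving": 0.6, "Communication": 0.4}))  # 3.6
```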
After several iterations of the manual scoring model and rounds of pilot testing, we evaluated the scoring model and drew our conclusions.
Not too bad for a first try... Despite the advantages of the manual scoring model, we also found critical issues that turned our attention to further improvement.
After comparing the strengths and weaknesses of the manual scoring model, we wanted to develop a scoring model that was less reliant on human decision-making. Because our goals were to standardize the scoring process and use a single scoring model across multiple types of assessments, we determined that an
artificial intelligence (AI)-based scoring model was a better fit in our situation, given the more objective markers and level of sophistication that such a model can provide.
Marian Pitel, PhD in I/O Science at University of Guelph
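As a hedged sketch of the general approach, rather than a description of nugget.ai's production system, an AI-based scorer can be framed as a supervised model that learns to map assessment responses to calibrated human scores. The data, text features, and regressor below are illustrative assumptions.

```python
# A minimal sketch of a supervised, text-based scoring model.
# Data, features, and model choice are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training data: free-text assessment responses and calibrated human scores
responses = [
    "I would first clarify the client's goal, then compare two options...",
    "I don't know, I would just ask my manager what to do.",
    "I'd prototype both approaches, measure results, and pick the better one.",
]
human_scores = [4.0, 1.5, 4.5]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(responses, human_scores)

# Score a new, unseen response on the same scale as the human ratings
new_response = ["I would gather requirements and weigh the trade-offs before deciding."]
print(model.predict(new_response))
```

Once trained, such a model applies the same criteria to every candidate, which is what makes the scoring process easier to standardize across assessments.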
Our discussions around developing and testing the manual human scoring model led us to conclude that this kind of model is most useful and effective in two situations: (1) when there are ample opportunities and resources to run several briefing sessions so that raters can be calibrated on what to expect from each aspect of the scoring, and (2) when the scoring model can be tailored to individual assessments and made concrete and specific enough to leave little room for subjective interpretation. Otherwise, an AI-centered evaluation model is often the more efficient and effective approach.
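On the calibration point: a common way to check whether briefing sessions have worked is to measure inter-rater agreement on a shared set of candidate responses, for example with Cohen's kappa. This is a generic illustration with made-up ratings, not our exact procedure.

```python
# Check rater calibration with Cohen's kappa on a shared set of responses.
# The ratings below are made-up example data.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 2, 5, 3, 4, 1, 3]
rater_b = [3, 4, 3, 5, 3, 3, 1, 3]

# Quadratic weighting treats near-misses (3 vs 4) as better than large disagreements (1 vs 5)
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Quadratic-weighted kappa: {kappa:.2f}")  # values near 1.0 suggest well-calibrated raters
```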
Nearly 12 months later, our data science and research team is in constant pursuit of improvement. Each data point we collect reinforces our model's predictive power, allowing us to measure it against reliability and validity standards. We continuously backtest our models to guard against bias and monitor fairness across diverse candidate groups.
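One standard fairness backtest in selection contexts is the four-fifths (80%) rule for adverse impact: the selection rate for any group should be at least 80% of the rate for the most-selected group. A minimal sketch, with made-up counts rather than real data:

```python
# Four-fifths (80%) rule check for adverse impact; counts are illustrative, not real data.
selected = {"group_a": 45, "group_b": 30}   # candidates advanced by the model
assessed = {"group_a": 100, "group_b": 80}  # candidates who completed the assessment

rates = {g: selected[g] / assessed[g] for g in assessed}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```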
To learn more about our research, contact Marian here.
At nugget.ai, we continuously work towards gathering insights and perspectives to foster a global culture around workplace performance and development using big data and machine learning. Visit us to learn more about what we do.