Staffing can be framed as a sequence of decisions rather than a sequence of processes. The Staffing Cycle Framework highlights a sequence of seven high-level decisions that occur in staffing every position in an organization. These decisions, listed in Table 1, cover the time period from the initial intent of individuals and organizations to enter into employment relationships, through the matching processes associated with making and accepting job offers, to the decision by individuals or organizations to end these employment relationships. In staffing, these decisions are not seen as joint hiring decisions but as a sequence of decisions in which control shifts between job seekers and the organization. In Table 1, decision events D1, D3, D5, and D7 are controlled by job seekers, while decision events D2, D4, and D6 are controlled by organizational decision-makers. When not in control of a decision, the job seeker or organizational decision-maker acts as an influencer of that decision.
Table 1: Decision Events and Descriptions (Seven Core Decisions in the Staffing Cycle, Carlson & Connerly)
D1. The job seeker's decision to enter the workforce (to begin actively seeking employment). In the United States, just over half of the population is part of the workforce (employed or actively seeking employment).

D2. The organization's decision to create a position that it wants to hire an individual to fill. A key aspect of this decision is how the job will be designed, compensated, incentivized, located, and supervised. In many cases, these choices substantially affect the success of subsequent staffing outcomes.

D3. The job seeker's decision to apply for the organization's position. In the United States, applicants must make an affirmative decision to seek a specific position within an organization. This decision is likely the most critical in staffing because it determines who can potentially be hired. Influencing higher-quality recruits to apply increases the potential impact of the cycle (achieving a high-quality hire). If high-quality applicants do not apply, they cannot be hired, and no subsequent action in the staffing cycle can replace this lost potential value. Recruiting is the organization's effort to influence these job seekers' decisions.

D4. The organization's decision to extend an individual a job offer. This is the domain of selection. Organizations increase value by using more valid and cost-effective selection procedures.

D5. The job seeker's decision to accept a job offer. Organizations can offer individuals positions, but not all of them will accept. Top candidates who decline job offers represent lost value; the size of that loss is determined by the difference in potential contribution between the top individuals who decline offers and the lower-rated candidate who eventually accepts.

D6. The organization's decision to retain an employee. Framed in the negative, this is the organization's decision to dismiss an employee, or involuntary turnover. This may happen if the organization no longer needs the position (the opposite of D2) or the individual is unable to perform at a level acceptable to the organization. The decision is framed in the positive to acknowledge that the organization's evaluation of the individual is ongoing throughout their employment.

D7. The job seeker's decision to remain in a position. Framed in the negative, this is a person's decision to leave a position (but not necessarily the organization), or voluntary turnover. Retention programs are efforts by organizations to influence these decisions.
This framework is useful for guiding workforce analytics efforts in staffing because it identifies key intermediate outcomes in the sequence of staffing decisions that can be evaluated and helps identify the critical component processes (and roles of the key players) in influencing these outcomes. For example, consider the outcomes of decisions D3, D4, and D5.
Evaluating Recruitment Effectiveness (D3)
D3 is the decision by job seekers to apply for a position. The outcome of that decision, from the organization's perspective, is the creation of an applicant pool. Applicant pools have attributes that can be used to determine how good the outcome of D3 is for the organization. Traditionally, this has been evaluated by examining the number of applicants attracted. Having enough applicants to ensure that the position can be filled is an important outcome of recruitment. But the organization does not simply want the process to result in a hire; it wants to hire an employee who, through their work, will maximize the value contributed to the organization. Thus, the organization wants to attract not just applicants, but high-quality applicants. Further, because every application requires at least some expense to process, the organization does not want large numbers of low-quality applicants. Table 2 offers an example of a workforce analysis that provides insight into the quality of recruitment outcomes for a position in an organization.
Table 2: Analysis of Quality of Applicants Attracted by Requisition ID. [Columns: Req_ID; the number of applicants in each quality-score band (<10, 10s, 20s, ..., 110s); and Total Apps. Rows report these counts for each of several requisitions, including 22473 (Total Apps = 319), 23549, 27158, and 27160.]
The analysis includes information about the number of applicants attracted for each job requisition and an estimate of each applicant's quality (e.g., capacity to contribute in this job).
These data highlight substantial differences in recruitment outcomes across requisition IDs and show that the number of applicants attracted to a job listing (requisition) may not be strongly associated with the number of high-quality applicants in the pool. For instance, Requisition 22473 attracted the most applicants (n = 319) but generated slightly fewer high-quality applicants and substantially more low-quality applicants than requisitions 23549, 27158, or 27160. These types of data can be used to guide decisions about recruiting processes, particularly with respect to how organizations might alter the content of their recruiting messages and channels to shift the distribution of quality scores in future requisitions. For instance, an organization may seek to replicate recruiting outcomes like those for 27158, or even improve upon them. Guidance is available for helping organizations that do not currently generate quality scores for all applicants to do so.
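To make this concrete, a tabulation like Table 2 can be generated directly from applicant records. The following Python sketch is a minimal example assuming records that contain a requisition ID and a numeric quality score; the sample data, column names, and band labels are illustrative, not taken from Table 2.

import pandas as pd

# Hypothetical applicant records: one row per applicant, with the
# requisition applied to and a numeric quality score (0-119 scale).
applicants = pd.DataFrame({
    "req_id": [22473, 22473, 22473, 27158, 27158, 27160, 27160, 27160],
    "quality_score": [8, 23, 41, 67, 88, 95, 102, 55],
})

# Bucket scores into the same bands used in Table 2 (<10, 10s, ..., 110s).
band_labels = ["<10"] + [f"{lo}s" for lo in range(10, 120, 10)]
bands = pd.cut(applicants["quality_score"], bins=range(0, 130, 10),
               right=False, labels=band_labels)

# Count applicants per requisition and score band, plus row totals.
pool_quality = pd.crosstab(applicants["req_id"], bands, dropna=False)
pool_quality["Total Apps"] = pool_quality.sum(axis=1)
print(pool_quality)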
Evaluating the Effectiveness of Job Offer Decisions (D4)
D4 is the organization's decision regarding who among those who have applied will receive job offers. As noted above, the outcomes of D3 represent the starting point for D4 selection processes; consequently, the outcomes of D3 have downstream effects on the outcomes of selection decisions. The objective of selection is to identify the applicants who will be the best performers; however, because selection activities have costs, the practical objective is to optimize selection decisions in light of those costs. We know from selection research that an optimal set of selection devices can be identified for any job (though that optimal set will not guarantee perfect selection decisions). The strategy that maximizes validity (i.e., makes the most correct hiring decisions) is to administer all useful selection devices to all applicants and then aggregate scores optimally across these devices. Offers should then be made first to the individuals with the highest scores.
Although this approach maximizes validity, it also maximizes cost. Therefore, organizations seek an optimal combination of validity and cost. One common approach is the multiple hurdle selection system. In multiple hurdle selection, organizations administer one or a few devices at a time, identify high scorers (and dismiss low scorers), then administer the next device to those who remain, retain high scorers, and so on until all useful devices have been used. This minimizes costs because not every device is administered to every applicant. However, validity is lost because not every device is equally valid, and individuals who score high on some devices may not score high on others. Consequently, applicants who may ultimately be top performers get dismissed during the process. This is exacerbated by the incentive to use lower-cost devices early in the sequence, when there are many applicants to process: lower-cost selection devices also typically have lower validity, which increases the likelihood of losing high-quality applicants early in the process.
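The trade-off between validity and cost can be made tangible with a small Monte Carlo sketch. The Python simulation below compares a full battery, where every applicant completes both devices, with a two-stage multiple hurdle; the applicant counts, noise levels, and 20 percent screen-through rate are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_applicants = 10_000
n_hires = 500

# Latent true job performance, plus two noisy selection devices.
# The cheap screen is assumed to be less valid than the costly one.
perf = rng.standard_normal(n_applicants)
cheap = perf + 1.50 * rng.standard_normal(n_applicants)   # e.g., resume screen
costly = perf + 0.75 * rng.standard_normal(n_applicants)  # e.g., work sample

# Full battery: everyone takes both devices; scores are combined with
# inverse-noise-variance weights (an optimal linear composite here).
combined = cheap / 1.50**2 + costly / 0.75**2
full_battery = np.argsort(combined)[-n_hires:]

# Multiple hurdle: the cheap screen keeps the top 20 percent, then the
# costly device is administered only to the survivors.
survivors = np.argsort(cheap)[-n_applicants // 5:]
hurdle = survivors[np.argsort(costly[survivors])[-n_hires:]]

print("mean hire performance, full battery:   ", perf[full_battery].mean())
print("mean hire performance, multiple hurdle:", perf[hurdle].mean())
# The full battery yields somewhat better hires, but requires scoring
# the expensive device for 10,000 applicants rather than 2,000.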
The objective of workforce analysis in support of selection decisions is to help organizations first understand and then improve the validity of their selection practices. Validity refers to the association between scores on a predictor (selection device) and future job performance. The validity of a selection practice is typically evaluated by examining how individuals' scores on the selection device (e.g., a resume review, standardized test, or interview) correlate with future job performance scores. Consider, for example, a situation where the predictor and future job performance are correlated rxy = .50 (Figure 3).
Figure 3 uses an oval to represent where within the plot area the greatest density of points will occur with a correlation of rxy = .50. The vertical line divides the X axis into low versus high scores on the predictor, and the horizontal line divides the Y axis into low versus high scores on the outcome, with high scores falling to the right of or above these lines, respectively. The intersection of these lines divides the diagram into four quadrants. Quadrant I represents people who scored high on the predictor, were hired, and were high performers on the job. Quadrant III represents people who did not score well on the predictor and therefore were not hired, but who would have been poor performers had they been hired. Thus, Quadrants I and III represent correct hiring decisions. Quadrant II represents individuals who scored well on the predictor but will not be high performers on the job; these are false positives. Quadrant IV represents people who did not score well on the predictor and were not hired, but who would have been high performers had they been hired; these are false negatives. Both Quadrants II and IV represent hiring mistakes. The proportion of hiring mistakes is indicated by the proportion of the area in blue that falls in Quadrants II and IV. Higher selection validity tightens the distribution of points, reducing the number of instances falling in Quadrants II and IV.
FIGURE 3: Stylized Scatter Plot Depicting the Distribution of Data Points for a Selection Process That Has a Validity for Predicting Future Job Performance of rxy = .50
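The geometry of Figure 3 can also be checked numerically. The short Python sketch below simulates predictor and performance scores correlated at rxy = .50 and counts the share of cases falling in the two mistake quadrants; the cutoffs at zero and the sample size are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(1)
n, r = 100_000, 0.50

# Bivariate normal scores: x is the predictor, y is job performance.
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

hired = x > 0        # scored high on the predictor
high_perf = y > 0    # high performer on the job

false_pos = np.mean(hired & ~high_perf)   # Quadrant II
false_neg = np.mean(~hired & high_perf)   # Quadrant IV
print(f"hiring mistakes: {false_pos + false_neg:.1%}")  # about 33% at r = .50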
To evaluate selection device validity, the organization requires data on the correlation between applicant scores on selection devices and those applicants' future job performance. As readers may recognize, organizations are unlikely to hire all individuals in an applicant pool, so the organization will not have performance scores for all applicants. There are several imperfect solutions to this challenge. First, organizations can examine the magnitudes of relationships between predictor scores and outcomes for the data they do have (i.e., job performance for hires only); because low scorers are screened out, such estimates are attenuated by range restriction, but this can be a workable solution when the organization hires large numbers of individuals for a given position. Second, organizations can rely on selection devices developed by outside organizations for which large-scale validation studies have been conducted. Here, evidence of validity generalization can be used to estimate the validity of devices for positions in a given organization. Schmidt and Hunter (1998) provide evidence of the validity of a number of common selection devices. While methods for estimating the validity of selection devices may yield imperfect results, organizations should not be dissuaded from developing the best data they can to help improve the validity of their selection procedures.
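As a minimal sketch of the first approach, the Python snippet below correlates predictor scores with later performance ratings for hires only and then applies Thorndike's Case II correction for direct range restriction; all score values and the assumed applicant-pool standard deviation are invented for illustration.

import numpy as np

# Hypothetical records for one position: predictor scores for the
# people hired, and performance ratings collected a year later.
predictor = np.array([72, 85, 64, 90, 78, 81, 69, 88, 75, 93])
performance = np.array([3.4, 3.8, 3.1, 3.6, 2.9, 4.2, 3.3, 3.9, 3.0, 4.4])

r_restricted = np.corrcoef(predictor, performance)[0, 1]

# Thorndike Case II correction: inflate the hires-only correlation by
# the ratio of applicant-pool SD to hires-only SD on the predictor.
sd_pool = 15.0                        # assumed, from applicant records
u = sd_pool / predictor.std(ddof=1)
r_corrected = (r_restricted * u) / np.sqrt(1 + r_restricted**2 * (u**2 - 1))

print(f"observed validity (hires only): {r_restricted:.2f}")
print(f"corrected for range restriction: {r_corrected:.2f}")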
Evaluating Job Acceptance Performance (D5)
Finally, organizations want to maximize the acceptance rates of applicants. Job acceptance performance refers to the extent to which the organization is able to influence its preferred candidates to accept job offers. In our staffing example, an outcome of D4 is a list of individuals to whom the organization is willing to make job offers. If all preferred candidates accept the offers extended, job acceptance performance is maximized. Often that is not the case. A traditional means of assessing job acceptance is the yield ratio, the ratio of offers accepted to offers extended. For example, an organization that extends five job offers for a particular position and has three of them accepted would have a yield ratio of .60, or 60%. Organizations seek to maximize yield ratios.
A yield ratio does have limitations, though. Specifically, yield ratios assume that every job offer that is accepted and, likewise, every job offer that is declined, has the same impact on the organization. That is rarely true. Not everyone who is extended an offer is expected to produce the same on-the-job performance. Further, if the organization has a given number of positions to fill, failing to gain acceptance of an offer often means that an offer must be extended to the next-highest-scoring applicant in the pool who, by definition, is perceived to have lower potential. The difference in performance potential between the first-choice applicant and the person who eventually accepts the offer reflects the loss that occurs by not gaining an acceptance from the preferred candidate.
The magnitude of the opportunity for improving job acceptance results is gauged by the number of individuals who do not accept offers and the difference in job performance potential between initial offerees and the individuals who ultimately accept positions. If an organization experiences few rejected offers, or recruits sufficient numbers of highly rated applicants that there is little difference in performance potential between original offerees and accepters, then there may be little opportunity to substantively improve job acceptance practices. On the other hand, if job acceptance results are poor and poor recruiting yields few high-scoring applicants, then improving job acceptance results may be an important opportunity for the organization.
The following example illustrates these effects. The data in Table 5 represent applicant scores for the top 10 applicants for a position from two different job requisitions. The three top-scoring individuals from each applicant pool will receive offers. Now consider the following scenarios. First, assume that the top applicant in each pool does not accept the offer, while Candidates 2 and 3 do. In response to the nonacceptance, the organization makes an offer to the fourth-best candidate, who then accepts. The amount of regret in each case can be initially scaled as the difference in applicant scores between the non-accepting top-scoring applicant and the fourth-best applicant who accepts. In Pool 1, the presence of several high-scoring applicants results in a modest loss of six points (i.e., [1st − 4th] 108 − 102). In Pool 2, which has the highest-scoring applicant overall, the smaller number of top-scoring applicants results in a more substantial loss of 25 points (i.e., [1st − 4th] 110 − 85).
Consider the alternate scenario where job acceptance performance is worse: the first, second, and fourth best applicants do not accept offers, but the third, fifth, and sixth do. In Pool 1, this results in a loss of 20 points (i.e., ([1st − 4th] 108 − 102) + ([2nd − 5th] 107 − 99) + ([4th − 6th] 102 − 96) = 20). However, in Pool 2, the result is a more substantial loss of 52 points (i.e., ([1st − 4th] 110 − 85) + ([2nd − 5th] 102 − 81) + ([4th − 6th] 85 − 79) = 52).
Table 5: Job Acceptance Performance Analysis

Score Range    Pool 1 Applicant Scores    Pool 2 Applicant Scores
110–120                                   110
100–110        108, 107, 105, 102         102
90–100         99, 96, 95, 92, 90         93
80–90          87                         85, 81
70–80                                     79, 76, 73
60–70                                     67, 65
50–60
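The cascading-offer logic behind these calculations is easy to automate. Below is a minimal Python sketch that reproduces the second scenario from the Table 5 scores; the function name and interface are illustrative, and offers are assumed to cascade down the ranked list until all positions are filled.

# Top 10 applicant scores from Table 5, best first.
pool_1 = [108, 107, 105, 102, 99, 96, 95, 92, 90, 87]
pool_2 = [110, 102, 93, 85, 81, 79, 76, 73, 67, 65]

def acceptance_loss(scores, declined, n_positions=3):
    """Score points lost when some offerees decline.

    scores: applicant scores sorted best first (rank 1 = index 0).
    declined: set of 1-based ranks that decline their offers.
    """
    intended, accepted, rank = [], [], 1
    while len(accepted) < n_positions:
        if len(intended) < n_positions:
            intended.append(scores[rank - 1])  # the preferred hires
        if rank not in declined:
            accepted.append(scores[rank - 1])  # who actually accepts
        rank += 1
    return sum(intended) - sum(accepted)

# Scenario: ranks 1, 2, and 4 decline; ranks 3, 5, and 6 accept.
print(acceptance_loss(pool_1, {1, 2, 4}))  # 20 points
print(acceptance_loss(pool_2, {1, 2, 4}))  # 52 points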
Analyses like these can be useful for every organization. Ideally, organizations would attempt to estimate the value of differences in scores more precisely, in dollar increments, though in many cases this may not yet be feasible for at least some positions. But the value of working toward such estimates is easily seen in these examples, particularly when gauging the amount of investment an organization should be willing to make to intervene to capture opportunities of different magnitudes. Even in the absence of dollar-valued estimates of score differences, these analyses are very useful. They provide guidance that is more conceptually correct than commonly used alternative metrics; score differences are directionally correct, and their magnitudes have at least an ordinal interpretation: bigger differences in scores represent bigger opportunities for improvement.
Assessing the Financial Impact of Staffing Decisions: Utility Analysis
Thus far, the staffing analyses described here examine changes in intermediate staffing outcomes, such as increases in applicant quality, increased acceptance rates among first-choice job offerees, and retention of high-performing employees. Although improving these outcomes is important, these metrics do not produce an outcome that is readily interpreted in dollars and directly comparable to changes in costs. Estimating the contribution of better performance on intermediate outcomes can be challenging. Utility analysis provides an initial step toward estimating the value of the greater contributions better employees make to organizational effectiveness. Utility analysis requires three pieces of information. The first is an estimate of where applicants fall in the distribution of potential employee performance. This can be estimated, imperfectly, by the relative location of an applicant's quality score in the distribution of all applicant quality scores. The second is an estimate of how imperfect that estimate of applicant quality is likely to be, which is provided by the estimate of the validity of the selection procedure. The third is an estimate of the value of differences in job performance. Jobs with high autonomy, where individuals have greater capacity to determine what they will do and how it will be done, have greater potential variability in outcomes: done really well, those decisions create the potential for very good outcomes, but done poorly, for very poor ones. Low-autonomy positions tend to produce more consistent results. High responsibility increases the potential impact of each decision, perhaps because it involves more dollars or affects more people, further increasing the difference in value between high and low performance. These values can be estimated by subject matter experts or, in the absence of such data, roughly approximated using salary data as shown below.
In utility analysis, differences in the value of better employees can be determined by estimating the difference between the locations of two employees in the distribution of all employees. This is done by calculating a standardized difference in applicant scores (i.e., ∆Z = [Score of Applicant 1 − Score of Applicant 2] / standard deviation of applicant scores). Once standardized differences are calculated, their value can be estimated if we know the difference in contribution we might expect for a one standard deviation difference in job performance. In utility analysis, this is known as the standard deviation of job performance in dollars (SDy). This value varies across jobs according to a number of factors, including the amount of autonomy and responsibility assigned to the job. In the absence of more specific information, an initial estimate of SDy can be developed by multiplying salary by .4. So, for a job with a salary of $50,000, this approach yields an estimate of SDy = $20,000. Given these inputs, an initial rough estimate of the difference in job performance can be computed using the following formula:
Utility = ∆Z * rxy * SDy
Therefore, for two applicants with scores of 110 and 90 on a device for which the standard deviation of applicant scores is SD = 20, a selection device with validity of rxy = .50, and a managerial position with an annual salary of $50,000, the estimated difference in job performance per year would be calculated as follows:
Utility = (110 − 90)/20 * .50 * (.40 * $50,000) = $10,000
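Wrapped up as a small Python function, the same calculation is easy to reuse; the function name and the .4-times-salary default are conveniences, with the formula exactly as given above.

def utility_per_year(score_1, score_2, sd_scores, r_xy, salary,
                     sdy_multiple=0.4):
    """Rough annual dollar value of hiring applicant 1 over applicant 2.

    Implements Utility = dZ * r_xy * SDy, approximating SDy as
    0.4 * salary when no better estimate is available.
    """
    delta_z = (score_1 - score_2) / sd_scores
    return delta_z * r_xy * (sdy_multiple * salary)

# The worked example from the text: scores 110 vs. 90, score SD = 20,
# validity .50, salary $50,000.
print(utility_per_year(110, 90, 20, 0.50, 50_000))  # 10000.0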
Thus, when triaging selection analysis opportunities, greater opportunity comes from (a) a high volume of hires, (b) low validity of current selection processes, and (c) a large standard deviation of performance in dollars for the position. These data can then be evaluated in conjunction with data on the validity and cost of alternative selection processes. In this way, workforce analytics can attach a tangible cost or benefit to a hiring decision based on the score on a selection device.