Less ammo, more hits – targeting an employee development program

In his last blog post, my dear colleague Luděk Stehlík promised that we would bring a concrete example of a simple tool that can help us identify reliable pieces of knowledge with real impact on the business results of the company. The list of universally “hot” business topics includes job performance, employee engagement, retention rates, and so on. Let’s be ambitious and take a look at something that is directly related to performance!

As always, we have to be wise when selecting and preparing data for analysis. In most cases, our statistical calculations will not technically fail or issue a warning when we feed them the wrong data. Keeping this in mind, we always have to make sure that our data sources are as reliable as possible, that our variables are comparable to each other (and if not, that we make them comparable), and, last but not least, that they match the data types required by the calculations we want to apply.

While we can certainly try to formulate assumptions about what influences Performance, the best case is when we have access to actual Performance measurements. In that case, Performance can serve as a “dependent” or “target” variable, which opens up many interesting possibilities for statistical analysis.

Just one remark before getting to the point: we should always take the time to think about the inherent properties of our data. No, not (only) its statistical properties (range, type, etc.) but those details about the origin of the data that no statistical analysis can show us: How old is the data? Does it cover the whole population (every employee), or does it have significant gaps that might distort our results? Moreover: is the same data for one part of the population really comparable with another part of the population?

To use another military example: let’s say we required every officer in our regiment to report the merits of their subordinates on a given scale. The data came in, and it looks good. When, however, we think about how the merits of a rifleman might have been measured versus those of a medic, or, for another contrast, how the efficiency of a rifleman might be measured versus that of a sniper, we often realize that what seems to be uniform data can have an inherent structure that influences every calculation based on it.

Let’s suppose we accept our Performance data as reliable and fit for analysis, and we also have a nice set of Competency measurements at hand. Competencies can be virtually anything that someone can “be good at”. Most companies already have a Competency set designed according to their strategy, possibly more than one set – one for each job type. If we don’t have such a system (and data), we can always calculate competency measures from “raw” psychometric data (but that is the topic of another blog post), or we can run a 360-degree survey to obtain similar information.

The question we want to answer is: which competencies in our company need the most development, from the perspective of increasing (or maintaining) performance?

We can break this question down into two parts, each tangible enough to be answered by statistical methods: 1. Which competencies seem the most related to Performance? 2. Which competencies are the most underdeveloped? Once we have the answers to both questions, we can combine them to identify those competencies that have the strongest relation to Performance and the lowest (average) measurements at the same time. After all, we don’t want to spend time and money developing competencies that have no effect whatsoever on our success (and we also might not want to develop important, but already highly developed, competencies), do we?
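The combination step above can be sketched in a few lines of code. This is only an illustration: the competency names, correlations, mean development scores, and the cutoff of 3.0 on a 1–5 scale are all hypothetical, not taken from the analysis in this post.

```python
# Hypothetical inputs: per-competency correlation with Performance
# (importance) and mean development level on an assumed 1-5 scale.
correlations = {"Organizing": 0.46, "Planning": 0.52, "Managing & Measuring Work": 0.49}
mean_levels  = {"Organizing": 2.4,  "Planning": 3.1,  "Managing & Measuring Work": 3.0}

# Rank by importance (correlation), then flag underdevelopment; here we
# simply take "underdeveloped" to mean a mean score below 3.0.
priorities = sorted(correlations, key=correlations.get, reverse=True)
for comp in priorities:
    flag = "underdeveloped" if mean_levels[comp] < 3.0 else "developed"
    print(f"{comp}: r={correlations[comp]:.2f}, mean={mean_levels[comp]:.1f} ({flag})")
```

In practice, the two criteria can also be combined into a single priority score (e.g. correlation weighted by development gap), but a simple ranked list with flags is often enough to start a discussion.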

Let’s take a look at our first try. Never mind that the correlations between Performance and each Competency are all negative (especially if you don’t know exactly what that means); the bigger problem is that the correlations are very low. As a rough rule of thumb, a correlation should reach at least 0.5 to be practically meaningful in a context like ours, and the p-value (a measure of how reliably the connection between the two variables differs from chance) should be below 0.05. What did we do wrong? Did we employ the wrong method? Is our data unusable or incomparable (see above)? I don’t think so. It might just be that different job families work in different ways: different Competencies might drive Performance for managers than for back office employees. Does that sound plausible? Let’s test the idea.
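The correlation screen itself is straightforward to compute. Below is a minimal sketch, assuming a pandas DataFrame with a Performance column and one column per competency; the column names and the tiny sample data are purely hypothetical stand-ins for a real HR dataset.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical toy data: Performance plus a few competency scores.
df = pd.DataFrame({
    "Performance":  [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 4.2, 3.6],
    "Organizing":   [2.1, 3.5, 2.0, 3.1, 3.8, 2.4, 3.6, 2.9],
    "Planning":     [2.8, 3.9, 2.5, 3.4, 4.1, 2.7, 3.8, 3.2],
    "Perseverance": [3.0, 3.7, 2.6, 3.5, 4.0, 2.9, 3.9, 3.3],
})

rows = []
for comp in df.columns.drop("Performance"):
    r, p = pearsonr(df["Performance"], df[comp])  # correlation and its p-value
    rows.append({"competency": comp, "r": round(r, 2), "p": round(p, 4)})

# Rank competencies by their correlation with Performance.
report = pd.DataFrame(rows).sort_values("r", ascending=False)
print(report)
```

With real data, this is the table we would inspect for the “at least 0.5, p below 0.05” rule of thumb before drawing any conclusions.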

This time, we have used only a specific segment of our data: managers and executives. Lo and behold… the same analysis yields usable and statistically significant results. What does this figure tell us? We have our competencies ranked, from top to bottom, according to their apparent potential to influence (“drive”) performance. For additional caution, we have not only measured the correlations but also reported the famous p-values. Then, we have visualized the current level of development of each Competency by shading: the darker bars represent the more underdeveloped areas, while a pale color signifies that a competency is already rather well developed.
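The segmentation step is worth illustrating, because opposite effects in two job families can cancel each other out in the pooled sample. The sketch below uses deliberately constructed hypothetical data (a “JobFamily” column and made-up scores) where the overall correlation is near zero, yet the managers-only correlation is strong.

```python
import pandas as pd

# Hypothetical data where Organizing drives Performance for managers
# but not for back office staff - pooled together, the signal vanishes.
df = pd.DataFrame({
    "JobFamily":   ["Manager", "Manager", "Manager", "Manager",
                    "BackOffice", "BackOffice", "BackOffice", "BackOffice"],
    "Performance": [3.0, 3.5, 4.0, 4.5, 4.4, 3.9, 3.4, 2.9],
    "Organizing":  [2.0, 2.5, 3.0, 3.5, 2.1, 2.6, 3.1, 3.6],
})

overall = df["Performance"].corr(df["Organizing"])          # whole sample
managers = df[df["JobFamily"] == "Manager"]                 # one segment
segment = managers["Performance"].corr(managers["Organizing"])

print(f"overall r = {overall:.2f}, managers-only r = {segment:.2f}")
```

This is the same pattern we saw in our data: nothing changed in the method, only in the population we applied it to.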

Interpreting the graph is easy: Organizing is the most underdeveloped among the more important competencies, while Planning and Managing & Measuring Work might need less development, but their importance (when it comes to influencing Performance) is higher, so they definitely deserve attention too.


A possible bonus of this analysis – which we conducted primarily for developmental purposes – is that the results can easily be reused in the context of employee selection or promotion: they tell us which criteria to use when selecting or promoting new or current employees.

You might notice that the direction of the correlation became positive once we found the right segmentation. This is not a necessity, as a negative correlation (the less “A”, the more “B”) can be just as meaningful as a positive one (the more “A”, the more “B”), but in our case it is intuitively “good”: we are typically happier to see Perseverance correlating positively with Performance (the more Perseverance, the more Performance) than negatively.

In our analysis, we simply examined the correlation of Performance with a set of Competencies. There are, however, many more advanced statistical techniques (linear regression, mediation analysis, or structural equation modeling) that enable us to draw stronger conclusions about the connections between the variables, possibly even about causality (“Does higher Perseverance result in higher Performance?”).
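As a first step beyond plain correlation, a simple linear regression already adds an interpretable slope (“how much Performance changes per unit of Perseverance”). A minimal sketch with hypothetical scores (note that regression alone still does not prove causality):

```python
from scipy.stats import linregress

# Hypothetical individual-level scores on a 1-5 scale.
perseverance = [2.5, 3.0, 3.5, 4.0, 4.5, 2.8, 3.8, 4.2]
performance  = [2.9, 3.2, 3.8, 4.1, 4.6, 3.0, 4.0, 4.3]

# Fit Performance as a linear function of Perseverance.
fit = linregress(perseverance, performance)
print(f"slope={fit.slope:.2f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")
```

Mediation analysis and structural equation modeling go further by testing an explicit causal structure among several variables at once, but they also demand more data and stronger assumptions.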

One might argue that it makes limited sense to use Competency measures summarized at the group level. Another possibly valid point is that, regardless of the current level of development of a certain Competency, it may be advisable to develop it (further) as long as its correlation with Performance is high. I leave these decisions to you. Although Competency development happens at the level of individuals, for logistical reasons it may still make sense to examine average development levels at the group level. It is also a question of logistics (and strategy) whether the company focuses on developing underdeveloped Competencies or concentrates its efforts on the continued development of key Competencies that are already strong. Statistics won’t make people management decisions for you! But it can help a lot in making wiser ones.

András Murányi
HR R&D Specialist

Assessment Systems International
