Are you benchmarking the compensation of your executives and something doesn't smell right? Try a hybrid approach for an unbiased solution.
Benchmarking of reward data for the C-Suite has become a big issue for Boards since the advent of the two-strikes rule, which allows shareholders to exert at least some power over executive pay. Many Boards across the country were forced to fundamentally re-evaluate the remuneration packages of their executives and were left with a challenge: how can they know whether they are writing a bad contract or paying their executives what they deserve?
Well, the answer is complex because there are many elements at play when it comes to CEO compensation, making CEOs’ salaries vary dramatically. According to a report by the Australian Council of Superannuation Investors, the average CEO of ASX 100 companies in Australia earned $1.86 million in 2015 (add bonuses and shares and they took home $5.54 million) while the highest-paid individual CEO had a realised pay of $19.39 million.
CEOs' packages have to be attractive enough to entice top talent, and the role of the Board and remuneration committee is to balance the interests of all concerned while bringing rigour and integrity to the process. That process begins with a robust job of benchmarking executive salaries.
Traditionally, benchmarking of reward packages has been a straightforward affair, with comparisons undertaken using position matching from salary surveys. In this approach, job titles, along with brief descriptors and an analysis of various dimensions (such as revenue, headcount, geography, job family/discipline and industry/sector), are used to match an organisation's job with similar jobs in the market.
The overall effectiveness (both validity and reliability) of this approach can vary dramatically with factors such as sample size (how many jobs are in the particular survey) and how precisely title and dimensions are matched. In most instances, validity increases with the number of dimensions used, but each added dimension shrinks the data sample. And the smaller the sample, the less likely it is to be consistent over time, so reliability declines.
One form of position matching increasingly in use is to create comparator groups using only the remuneration data disclosed in annual reports. The biggest issue with this approach, though, is that two jobs may look the same on titles and dimensions, yet one may in fact be far less complex than the other, requiring less skill, experience and competence to perform. The person with the skills to perform the more complex role is inevitably going to be worth more than the person who can only perform the less complex one.
Is it reasonable, then, that the reward packages of two such jobs sit in the same benchmarking data?
Take the example of two companies in the same sector selling into the same market, with similar market capitalisation and even similar profits. One, however, owns the full value chain, from R&D through production, supply, marketing/sales and distribution to servicing; the other only marketing/sales onwards. Without taking into consideration the relative complexity of managing the full value chain, it would not be possible to understand the difference in complexity between the respective CEO roles.
This is where size-based pay data provides unique power: it ensures that jobs requiring similar skill levels are compared to each other, and that those requiring higher or lower levels of skill are compared to jobs at their own skill level.
Once jobs have been formally sized, it is a relatively straightforward process of statistical regression of size against rewards data to create a remuneration database based on job complexity. Job sizing is a more effective way to benchmark salaries than position matching because it accounts for complexity factors. However, even this method can have reliability issues. As in the position-matching approach, job sizing also uses dimensions such as geography, industry/sector and discipline/job family to cut the database appropriately, and as samples get progressively smaller, reliability suffers.
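To make the regression step concrete, here is a minimal illustrative sketch in Python. The job-size scores, pay figures and the log-linear model choice are all assumptions for illustration, not data or methodology from any particular survey or paper.

```python
import math

# Hypothetical survey sample: (job_size_score, annual total remuneration, AUD).
# Figures are invented purely to illustrate the technique.
sample = [
    (800, 450_000),
    (1000, 700_000),
    (1200, 1_100_000),
    (1500, 1_900_000),
    (1800, 3_200_000),
]

def fit_log_linear(points):
    """Ordinary least squares of ln(pay) on job size: ln(pay) = a + b * size."""
    n = len(points)
    xs = [size for size, _ in points]
    ys = [math.log(pay) for _, pay in points]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

def benchmark_pay(size, a, b):
    """Predicted market pay for a role of the given job size."""
    return math.exp(a + b * size)

a, b = fit_log_linear(sample)
print(f"benchmark for size 1300: ${benchmark_pay(1300, a, b):,.0f}")
```

The point of the exercise is that once every role carries a size score, pay for an unusual role can be read off the fitted curve rather than matched against a thin sample of look-alike titles.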
That’s why it’s important to bring an extra element to the table. Once the job has been sized or matched, the Board or Remco members apply a practical “smell” test, bringing their own experience and view of the market to the equation. But what happens when something just doesn’t smell right?
This is when a hybrid solution comes into play. By triangulating the data from the job matching, job sizing and smell test, the Board is assured that it has been as objective as possible under the circumstances.
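The triangulation logic can be sketched as a simple consistency check. The function name and the 15% tolerance below are hypothetical choices made for illustration; the idea is only that when the two data-driven benchmarks diverge, the smell test deserves extra weight.

```python
def triangulate(match_estimate: float, sizing_estimate: float,
                tolerance: float = 0.15) -> tuple[float, bool]:
    """Combine two benchmark estimates and flag divergence for review.

    Returns the midpoint of the position-matching and job-sizing
    estimates, plus True when they disagree by more than `tolerance`
    (relative to that midpoint), signalling that the Board's "smell
    test" should be applied with particular care.
    """
    midpoint = (match_estimate + sizing_estimate) / 2
    spread = abs(match_estimate - sizing_estimate) / midpoint
    return midpoint, spread > tolerance

# Example: position matching says $1.0m, job sizing says $1.5m.
pay, needs_review = triangulate(1_000_000, 1_500_000)
print(f"midpoint ${pay:,.0f}, review needed: {needs_review}")
# → midpoint $1,250,000, review needed: True
```

In practice a Board might weight the two estimates unequally or use a different divergence threshold; the sketch only shows where judgement enters the process.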
By taking a hybrid approach, Boards and Remcos combine science with expert opinion to build a clear understanding of the role and apply accurate, unbiased judgement, ensuring that bosses are neither overpaid nor underpaid.
In the paper C-Suite Benchmarking Revisited, I review the different benchmarking methods in more detail and provide examples to help you benchmark your executive salaries more effectively. You can download the paper here.