The long-awaited rising rate environment is upon us. In fact, we’re well into it now.
According to the FDIC, in the first quarter of 2018, US insured deposits rose by 2.9% over the fourth quarter of 2017. Most banks are now looking at promotional efforts to encourage savers who have previously been reluctant to put money in low-interest-bearing deposits.
Of course, as every banker knows, in a rising rate environment you have to be careful not to raise rates too quickly; holding back as long as possible has been the traditional path to profitability. Or is it?
The last time we had a rising rate environment, there were no internet-only banks. Today, a number of alternative players are willing to offer seemingly sky-high rates to attract new customers – regardless of profitability.
And so today’s banker is caught between a rock and a hard place. If the rate is too low, you’ll be beaten by competitors. If the rate is too high, you’ll be giving up much-needed margin.
Years ago, the answer was to find a rate position among competing banks’ CD rates and “stick it out.” I recall one very large bank’s marketing position: be somewhere between number 2 and 5 on the list of highest CD rates. In another case, one bank simply pegged their CD rate to their competitor’s. “We’ll be 5 basis points higher than so-and-so.”
That kind of static interest rate-setting won’t work anymore. The banks you’re competing against have become much smarter than in the last rising rate environment. Maybe you have too. Whether you have or not, you need to realize that your interest rate-setting process has to change.
In the end, you need to understand how your rate setting affects new deposit inflow. It’s a complex mix of internal decisions and external factors that will determine how much deposit growth you’ll see and how profitable that growth will be. Finding a balance between the two can be tricky.
Here are three metrics you should be measuring and managing as you determine your deposit rate decisions. I’ll present them in order of increasing complexity. As these metrics become harder to measure, they also become more powerful, helping your bank stand out in its ability to predict and control profitable deposit inflows.
It’s All About that Beta
This one should be familiar to you. Beta – the proportion of a rate increase or decrease that is passed along to a customer – has traditionally been the gold standard. Keeping betas “just right” has been the goal of every deposit portfolio manager.
In one sense, betas provide a great summary of the potential for margin. In fact, many banks have tried to model deposit inflow based on the beta. Creating a statistical formula that will predict new money based on beta has a distinct advantage. It eliminates the external rate environment as a factor in deposit growth. By resetting the “sea level” each time a rate increase occurs, you can compare historic deposit growth in periods where the rate environment was very different.
That’s not to downplay the effect of other macroeconomic factors. Beta does, however, provide a normalization of interest rates across different periods when trying to understand your rates’ effect on new deposits.
We know that in a rising rate environment, the correlation between deposit inflows and beta is positive and strong. The more of the overall rate you pass along, the more you’ll attract customers.
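The beta calculation itself is straightforward. Here is a minimal sketch; the function name and rate figures are illustrative examples, not data from any particular bank:

```python
# A minimal sketch of computing deposit beta for a single rate move.
# All names and figures here are illustrative assumptions.

def deposit_beta(deposit_rate_change, market_rate_change):
    """Beta: the share of a market rate move passed through to depositors."""
    if market_rate_change == 0:
        raise ValueError("market rate did not move; beta is undefined")
    return deposit_rate_change / market_rate_change

# Example: the Fed moves 25 bps and the bank raises its deposit rate 10 bps.
beta = deposit_beta(0.10, 0.25)  # -> 0.4
```

Computing beta per rate move, rather than cumulatively, is what lets you reset the “sea level” at each increase and compare deposit growth across very different rate environments.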
Simple, right? Well, not so much.
What measuring inflows alone won’t gauge is how much of that new deposit money you’re simply “repricing.” Knowing how much is new to the bank and how much is coming in from lower-priced or even non-interest-bearing accounts is critical to overall bank profitability.
Measuring and managing at the product level alone will not do.
In addition, how much inflow is good? If deposits rise by 2%, is that good? Did your competitors do better? There’s more complexity to the question than simply measuring inflows.
So, maybe it’s not all about that beta.
When It’s Not Good to Share
It’s very difficult to get a good handle on the potential for deposit growth in your markets without understanding your market share. There are a number of different vendors that can provide you metrics for your market share of deposits. As you do that, make sure you’re also collecting good data on your competitors’ interest rates.
Many banks have a rate survey process that collects competitive rates. I’ve seen literally hundreds of spreadsheets sitting on a server somewhere that got brought into a pricing committee meeting and then ignored.
Combining market share with competitor rates provides a rich area for modeling. Connecting the dots between not only your position relative to the overall rate environment, but to your competitors as well, will drive additional insight into your impact on new deposit growth. To do that well, you’ll need historical market share and competitor rate information.
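One simple way to connect those dots is to regress deposit growth on your rate’s position versus the local competitor average. A hypothetical sketch follows; all data values are invented for illustration, and a real model would add market share, seasonality, and macro controls:

```python
# A hypothetical sketch: regress weekly deposit growth on your CD rate's
# position versus the competitor average. All figures are invented.
import numpy as np

# rate_gap: your CD rate minus the competitor average (percentage points)
rate_gap = np.array([-0.30, -0.10, 0.00, 0.15, 0.25, 0.40])
# growth: observed weekly deposit growth (%) at each rate position
growth = np.array([0.1, 0.4, 0.6, 0.9, 1.1, 1.5])

# Fit growth = intercept + slope * rate_gap by ordinary least squares.
slope, intercept = np.polyfit(rate_gap, growth, 1)

# Expected growth if you price 5 bps above the competitor average.
predicted = intercept + slope * 0.05
```

Even this toy version makes the point: once the relationship is fit on historical data, a rate position stops being a guess and becomes a forecast you can argue about in the pricing committee.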
So liberate those forgotten spreadsheets and databases of competitive rates and start using them to model their impact. That may require some work, as historically most banks did not collect this data in a form suitable for statistical modeling. But the one-time conversion effort will pay dividends as you deepen your understanding of your competitive position.
Imagine understanding exactly what rate position, with respect to your competitors, will provide the maximum profitable deposit growth. That seems like a better approach than guessing, as most bankers still do.
Robbing Peter to Pay Paul
They say sometimes our biggest enemy is ourselves. That’s definitely the case with deposit pricing. Many a banker will tell you the war stories of running a CD promotion only to find that the vast majority of the money came from money market accounts with lower interest rates. You decreased your margin and didn’t even earn a customer. Sure, duration increased, but did the decreased margin compensate for that? Probably not.
The effect is called cannibalization and it’s a critically important, but often ignored, factor in deposit pricing. Most bankers pay lip service to its effects but cannot measure it. And, as they say, if you can’t measure it, you can’t manage it.
But how do you measure a complex effect like cannibalization? It turns out that simplifying the effect into a single number just doesn’t tell the whole story.
One vertical that understands cannibalization very well is consumer retail. Grocery stores, department stores, and other retailers have been measuring the effect of cannibalization for over a decade now. The largest retailers understand how the price on one product will affect the sales of another.
They quantify this effect through statistical modeling. Every pair of products within a class of “substitutable products” has a cannibalization coefficient. That coefficient measures the impact of pricing of one product on the other. You can imagine the complexity at a grocery store where every different type of cheese might be substituted for another.
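In the deposit world, the analogous coefficient can be estimated from history: how many dollars leave money market accounts for each point of CD rate offered. Here is a hypothetical sketch with invented figures:

```python
# A hypothetical sketch of estimating a pairwise cannibalization
# coefficient: how much a CD promotion pulls balances out of money
# market accounts. All figures are invented for illustration.
import numpy as np

cd_rate = np.array([1.00, 1.25, 1.50, 1.75, 2.00])    # promo CD rate (%)
mma_outflow = np.array([0.2, 0.9, 1.8, 2.6, 3.5])     # $MM moved MMA -> CD

# The cannibalization coefficient is the fitted slope: dollars shifted
# out of MMAs per percentage point of CD rate offered.
coef, _ = np.polyfit(cd_rate, mma_outflow, 1)
```

With one such coefficient per product pair, you can net the expected internal transfers out of a promotion’s projected inflows before declaring it a success.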
Fortunately for bankers, the problem is much simpler. The number of deposit products that could be substituted for one another is quite low in comparison. That doesn’t undersell the challenge of modeling the effect. Tracking funds between accounts and identifying which money is “new to bank” and which is coming from an existing account can be a data management challenge. Further complicating matters, a single deposit from an existing customer may be a mix of both: money that is genuinely new to the bank and money transferred from another account.
But solving these complexities and understanding the dynamics of the money moving back and forth between accounts will put your bank in a better position to set deposit rates and be confident in the decision. Not only will you know it’s “right”, but you’ll be able to predict the net inflow, net margin, and overall change in customer-level profitability with every single rate change.
Imagine the power of understanding the impact of deposit pricing on the inflow of funds over the next 8, 12, 26, or even 52 weeks.
Where do you start?
If the idea of collecting the data and measuring these metrics seems overwhelming, have no fear. There are a number of software vendors in the space that can automate the process.
Choosing the right vendor and having a strategy for rollout that is optimized for your bank’s size, data collection processes, and strategic goals can be tricky. Getting that wrong can cause you to waste time, overspend, and – in the worst case scenario – completely fail to achieve your objective.
Start with a banking analytics consultant who can help you craft a go-forward plan. Such a plan will keep you from biting off more than you can chew while capturing the lowest-hanging fruit first.
Plus, software vendors in this space offer complex analytic solutions built on advanced quantitative methods and big data infrastructure. Many bankers I speak with feel intimidated when comparing these vendors because the science seems to go over their heads. Banks that have successfully partnered with such vendors most often start with their own consultative bench strength. With someone at their side who can cut through the statistical and analytical complexity and get to the core business value proposition, bankers can be confident that their technology rollout plan has the maximum chance of success.
As rates continue to rise and alternative players make the competitive field more complex, don’t be left behind because you can’t mine your own deposit data for much-needed insights. Make deposit analytics a priority in your institution.