Server memory FAQ: What to know before you upgrade
Upgrading server memory is a balancing act: between what you need now and what you’ll need in a year, and between the need for speed and the cost of those extra gigabytes. Buy too much memory and you can, paradoxically, pay for it in both cost and performance. Buy too little and your users will slog through sluggish applications.
We’ve helped many organizations figure out what works best for their server architecture, workload, and applications, and during those conversations we hear many of the same questions. Here are five of the biggest ones organizations ask when considering a server memory upgrade.
1. How much server memory should I buy?
It’s probably the most basic question in any server upgrade, and there’s no straightforward answer: it depends greatly on what you’re trying to accomplish. Organization size doesn’t matter. There are large firms with extraordinarily light workloads and small memory footprints, and small firms running dozens of memory-hungry applications. Free tools can show you how much memory you currently use, but you should also discuss memory usage with your application experts. Plan for future growth, and balance your needs against your budget.
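As an example of what those free tools report: on Linux, current usage is exposed in /proc/meminfo. Here’s a minimal sketch of reading it, assuming the standard Linux field names (MemTotal, MemAvailable); it parses a text snippet so it runs anywhere:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of values in kB."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            info[key.strip()] = int(rest.split()[0])  # kernel reports kB
    return info

# Sample snippet in the format the Linux kernel emits; on a real host
# you would read open("/proc/meminfo").read() instead.
sample = """MemTotal:       65843012 kB
MemAvailable:   31275480 kB"""

info = parse_meminfo(sample)
used_pct = 100 * (1 - info["MemAvailable"] / info["MemTotal"])
print(f"{used_pct:.0f}% of RAM in use")
```

A sustained reading well below total capacity suggests you have headroom; sustained readings near the top are the signal to size the next upgrade with your application owners.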
2. What are other companies buying?
Just 18 months ago, most companies were putting 8 GB DIMMs in their servers. That’s changed as the price of larger server memory DIMMs has come down. Right now, 16 GB DIMMs sit in a sweet spot where you get a lot of computing power for a great price. Larger 32 GB DIMMs are looking attractive too, and though their price is almost certain to come down, some firms should go with 32 GB DIMMs now: buying fewer, larger DIMMs leaves empty slots to easily accommodate future growth. You’ll pay a premium for 64 GB DIMMs, but it’s safe to say that kind of computing power isn’t science fiction anymore.
3. Should I fill every slot in the server?
A 24-slot server filled with 32 GB DIMMs is a lot less expensive than it used to be, but it’s probably more capacity than most organizations need. The short answer is no, you probably don’t need to fill every slot in a server. There are exceptions, of course, such as financial services, where larger memory (or application) size can be a competitive advantage. But it’s a balance between performance and capacity: if you fill three memory banks per processor instead of two, you can lose more than 20 percent of your memory speed. Worse, an unbalanced memory configuration can lead to unpredictable response times. Think of memory channels as tables in a restaurant. If one waitress (a processor) has four tables (four populated memory channels) and a waiter (a second processor) has only two, the waiter has to seat twice as many diners at each of his tables to serve the same crowd. The restaurant (the server) as a whole ends up with uneven wait and service times because the workload isn’t balanced.
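The balance rule can be sketched in a few lines of code. This is an illustrative check only; the channel counts and DIMM layouts below are hypothetical examples, not any particular server’s population guide:

```python
def is_balanced(dimms_per_cpu):
    """Check that every CPU carries the same total memory, spread
    identically across its channels.

    dimms_per_cpu: one list per CPU, each containing the DIMM size
    in GB installed on each channel (0 for an empty channel)."""
    totals = [sum(channels) for channels in dimms_per_cpu]
    layouts = [sorted(channels) for channels in dimms_per_cpu]
    return len(set(totals)) == 1 and all(l == layouts[0] for l in layouts)

# Balanced: both CPUs have one 16 GB DIMM on each of four channels
print(is_balanced([[16, 16, 16, 16], [16, 16, 16, 16]]))  # True

# Unbalanced: CPU 0 has four populated channels, CPU 1 only two
print(is_balanced([[16, 16, 16, 16], [16, 16, 0, 0]]))    # False
```

The second configuration is the restaurant scenario above: one processor’s channels carry twice the traffic of the other’s, so response times become uneven.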
4. Can I mix memory and use different-size DIMMs in the same server?
Can you mix 8 GB DIMMs with 16 GB DIMMs? Absolutely, as long as the configuration stays balanced between the two processors. The one thing to avoid is too big a gap in capacity: combining 8 GB with 16 GB, or 16 GB with 32 GB, is fine, but combining 8 GB with 32 GB isn’t recommended and may not be supported.
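The rule of thumb above, adjacent capacities only, can be expressed as a quick sanity check. The one-step limit is drawn from the guidance in this post, not from any vendor specification, so treat it as a sketch:

```python
def can_mix(size_a_gb, size_b_gb):
    """Allow mixing DIMM capacities only if they are at most one
    power-of-two step apart (e.g. 8 GB with 16 GB)."""
    ratio = max(size_a_gb, size_b_gb) / min(size_a_gb, size_b_gb)
    return ratio <= 2

print(can_mix(8, 16))   # True  (adjacent sizes)
print(can_mix(16, 32))  # True
print(can_mix(8, 32))   # False (two steps apart; may be unsupported)
```

Always confirm against your server’s memory population guide, since supported combinations vary by platform.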
5. What’s the best ratio of DIMMs to processors, and how do I balance the system?
Every processor needs memory, and for the best results every processor should have the same amount. Beyond that, figuring out the ratio of DIMMs to processors gets technical and can vary based on workload. Your best bet is to consult an expert.
As you can see, there aren’t many cut-and-dried answers. Organizations are increasing their memory capacity, and when doing so, it’s important to keep everything balanced. But first you have to find out how much memory your specific applications need.
What other questions do you have about server memory? Let us know in a comment below or contact your Account Executive.
Joe Murphy, Lenovo Senior Systems Engineer, contributed to this post. Joe supports Lenovo’s complete data center portfolio, including hyperconverged offerings. He joined Lenovo in 2014 after more than 25 years at IBM and has been supporting SHI for over a year.