Hi all,
I’ve been running Arch Linux with 128GB of RAM and have experienced some issues with Btrfs filesystem corruption when I overclocked my RAM to 6600 MT/s. After a few instances of crashes and filesystem issues, I reverted the RAM speed back to 3600 MT/s, and everything seems stable now.
However, I want to make sure I don’t encounter this corruption again, and I’d like to know:
What RAM speed would be optimal for a 128GB configuration to avoid system instability and Btrfs corruption?
Is 3600 MT/s a safe choice for large memory configurations, or should I stay below that?
Are there any specific BIOS settings or best practices I should follow to ensure stability?
I’d appreciate any advice or recommendations on memory speed or stability, especially for large-capacity setups like mine.
Thanks in advance!
What RAM speed would be optimal
The minimum of two maximums: the speed specified by your memory vendor and the speed your CPU/motherboard can support.
I’d appreciate any advice or recommendations on memory speed or stability
Test it with memtest86+ for at least a few hours. There are packages for EFI, legacy BIOS, and ISO boot.
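In case it helps, a minimal sketch of getting it set up on Arch (package names as they appear in the repos at the time of writing, so double-check with pacman -Ss memtest; how the boot entry is created depends on your bootloader, so treat the last step as a pointer rather than a recipe):

    # UEFI systems
    pacman -S memtest86+-efi
    # legacy BIOS systems
    pacman -S memtest86+
    # then add a boot entry for the installed memtest binary
    # (GRUB, systemd-boot, etc. each document their own way of doing this)

Boot into that entry and let it run several complete passes rather than stopping after the first one.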
“Btrfs corruption” is an odd measure of hardware stability, and an odd way to frame the goal of achieving it.
Make sure the hardware is working correctly. Then the data in RAM isn’t going to be corrupted, and as a secondary consequence the filesystem data is also going to remain correct.
There is no way to make sure that hardware used out of spec is going to run stably.⁽¹⁾ Just set a configuration, stress test it, and look at the results. If it comes out right, there is a good chance (not certainty!) it’s OK. The Stress testing article on the wiki lists some tools. In your case I’d point towards memtest86+ or memtest86+-efi (depending on whether you boot via legacy BIOS or UEFI), which you may run for 24 hours, and mprime (you need to build it; graysky offers a PKGBUILD in the AUR). Consult overclocking fora and resources for details on how to use them best. There is also memtester, which can be run while the OS is running, but for an overclocked system it is not a replacement for proper, full-scale testing; it rather serves as a routine, relatively cheap check.
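To make the memtester bit concrete, a rough sketch (the size and the AUR package name are assumptions on my part; run memtester as root and keep the size comfortably below your free RAM so the rest of the system stays usable):

    # quick in-OS check, not a replacement for full offline testing
    memtester 8G 3        # test 8 GiB for 3 passes
    # mprime via the usual AUR/makepkg flow
    git clone https://aur.archlinux.org/mprime.git
    cd mprime && makepkg -si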
Make sure your system is correctly cooled and mind that stress testing may reveal latent hardware malfunctions, some of which may brick the computer.
If you have any signs of system instability, like unexplained filesystem corruption or misbehavior of programs using large amounts of memory, switch back to running the RAM within its specs. There is nothing else you can do about it.
To answer the final question: no, there is no magical switch in a computer to make things run faster. And it’s not the 2000s, when we could easily score a win in the overclocking roulette. Business has since seized that opportunity and industrialized the gambling.
Raising voltages often improves signal quality, provided the hardware can deliver enough power. But even a small increase puts a lot of additional stress on the RAM and, if involved in any way, on the CPU and the power converters on the mobo. Heat alone rises quadratically with voltage, and heat-induced failure risk rises even faster.
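As a rough illustration (dynamic power scales approximately with the square of voltage; the concrete numbers depend entirely on your parts): raising DRAM voltage from 1.35 V to 1.50 V gives (1.50/1.35)² ≈ 1.23, i.e. roughly 23 % more heat from the voltage change alone, before any frequency increase is counted.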
____
⁽¹⁾ True for any hardware, but the risk grows fast outside specs.
Last edited by mpan (2026-03-02 05:22:54)
Just wanted to say thank you to the Arch community for the quick replies and guidance.
I finally stopped being lazy and decided to do things the proper way. Instead of relying on EXPO/XMP, I used MemTest86 to manually validate the best stable speed for my 128 GB RAM configuration.
With 64 GB, EXPO profiles usually work fine, but once you go beyond that capacity you’re pretty much on your own and need to tune things manually.
This time I validated everything with MemTest86 first, so I could find a stable configuration without risking my OS or filesystem.
Really appreciate the help and the fast responses. Thanks again!