Just got a new workstation with 2x RTX 3090 GPUs (Gigabyte OC rev. 1). Hard to get hold of these GPUs at the moment!!
I’ll update this post with some benchmarks soon, but initial tests are running very nicely. Update: see benchmarking results here. I’ve also managed to run simulations 4 times bigger than anything I’ve done before, thanks to the 24 GB of memory per GPU.
This workstation was purchased for a new multi-GPU project (more than 1 GPU per program instance), so expect some exciting news soon. For simulations without long-range interactions (dipole-dipole or demag fields, e.g. atomistic Monte Carlo simulations) I expect near-linear scaling of performance with the number of GPUs. The more interesting case is when long-range interactions are included: there it will be tricky to squeeze out significantly more performance, but I have some ideas I need to test, so fingers crossed. Even without a performance increase, two GPUs will at least allow simulations with double the memory (48 GB total).
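A rough way to see why the two cases differ is a simple Amdahl-style model (a sketch, not a measurement — the communication fractions below are made-up illustrative numbers): with only short-range interactions, the work splits almost evenly between GPUs with just a small halo exchange at the boundary, whereas long-range interactions (e.g. demag computed via FFTs) require global data exchange every step, which does not parallelise across GPUs.

```python
def speedup(n_gpus, comm_fraction):
    """Estimated speedup on n_gpus if a fraction comm_fraction of the
    runtime is inter-GPU communication/serial work that does not
    parallelise, while the rest splits evenly across GPUs."""
    return 1.0 / (comm_fraction + (1.0 - comm_fraction) / n_gpus)

# Short-range only (halo exchange is a tiny fraction of the work):
print(speedup(2, 0.02))  # ~1.96x on 2 GPUs, i.e. near-linear scaling

# Long-range interactions (hypothetical 40% global-communication cost):
print(speedup(2, 0.4))   # ~1.43x, well below linear scaling
```

The model is crude (it ignores overlap of communication with computation, which is exactly the kind of trick that might claw back performance), but it captures why the long-range case is the hard one.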