
2023-05-26

GPUs in CFD



In a chat recently, I heard that computational fluid dynamics (CFD) can't take advantage of GPUs. That seemed doubtful to me, so I looked it up. It turns out there has been recent work showing that GPUs can greatly accelerate CFD workloads.

This press release on OpenACC's website describes how a private company, AeroDynamic Solutions, Inc. (ADSCFD), used OpenACC to add GPU support to their proprietary CFD solver, Code LEO, with very good speedups.

By using OpenACC to GPU-accelerate their commercial flow solver, ADSCFD achieved significant value. They realized dramatically improved performance across multiple use cases with speed-ups ranging from 20 to 300 times, reductions in cost to solution of up to 70%, and access to analyses that were once deemed infeasible to instead being achieved within a typical design cycle.
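
For context, OpenACC is directive-based: you annotate existing loops with pragmas, and the compiler generates the GPU offload code, which is why it suits retrofitting a large legacy solver. As a rough illustration, here is a minimal, self-contained sketch of the idea using a generic Jacobi-style stencil (my own toy example, not anything from Code LEO):

/* toy_jacobi.c -- an illustrative sketch, not production CFD code.
 * Compile with an OpenACC-capable compiler, e.g.: nvc -acc toy_jacobi.c
 * A compiler without OpenACC support simply ignores the pragmas and
 * runs the same loops on the CPU. */
#include <stdio.h>

#define N     512
#define ITERS 100

static float a[N][N], b[N][N];

int main(void)
{
    /* Initial condition: a hot left edge, everything else cold. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = (j == 0) ? 100.0f : 0.0f;

    /* Keep both arrays resident on the GPU for all iterations,
     * instead of shuffling data over PCIe on every sweep. */
    #pragma acc data copy(a) create(b)
    {
        for (int it = 0; it < ITERS; it++) {
            /* Each point averages its four neighbors -- the kind of
             * data-parallel stencil that maps well to GPU threads. */
            #pragma acc parallel loop collapse(2)
            for (int i = 1; i < N - 1; i++)
                for (int j = 1; j < N - 1; j++)
                    b[i][j] = 0.25f * (a[i-1][j] + a[i+1][j]
                                     + a[i][j-1] + a[i][j+1]);

            #pragma acc parallel loop collapse(2)
            for (int i = 1; i < N - 1; i++)
                for (int j = 1; j < N - 1; j++)
                    a[i][j] = b[i][j];
        }
    }

    printf("center value after %d sweeps: %f\n", ITERS, a[N/2][N/2]);
    return 0;
}

The appeal for a solver like Code LEO is that the numerics stay in ordinary C or Fortran; the directives are portable hints, so the same source tree can serve both CPU-only and GPU builds.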

Similar blog posts from Nvidia and from ANSYS and Nvidia last year also show large speedups (between 12x and 33x), along with significant savings in power consumption.

Nvidia's blog post shows results from a beta version of ANSYS Fluent and from Simcenter STAR-CCM+.

Figure 2 shows the performance of the first release of Simcenter STAR-CCM+ 2022.1 against commonly available CPU-only servers. For the tested benchmark, an NVIDIA GPU-equipped server delivers results almost 20x faster than over 100 cores of CPU.

For Ansys Fluent 2022 beta1, the CPU-only servers (Intel Xeon, AMD Rome, and AMD Milan) were all within about 1.1x of one another, while the NVIDIA A100 PCIe 80GB delivered speedups from 5.2x (one GPU) to an impressive 33x (eight GPUs).

ANSYS's blog post covers the same results as Nvidia's, showing a 33x speedup using 8 A100 GPUs. They also compare the cost of two equal-performance clusters, one using GPUs and the other CPUs only:

1 NVIDIA A100 GPU ≈ 272 Intel® Xeon® Gold 6242 Cores

Comparing the older V100 GPUs with the Intel® Xeon® Gold 6242, a 6x V100 GPU cluster would cost $71,250, while the equivalent CPU-only cluster would cost $500,000; that makes the GPU cluster about one seventh the price ($71,250 / $500,000 ≈ 1/7).

2012-05-21

NVIDIA Nsight Eclipse Edition

One of the new products announced along with CUDA 5 at the recent GPU Technology Conference was NVIDIA Nsight Eclipse Edition, which runs on Linux and Mac OS X. Previously, the only IDE available was Nsight Visual Studio, which ran only on Windows.

I attended the demo talk for Nsight Eclipse, and it seemed a well-thought-out product. It gives access to all running threads on all cores, optimization suggestions, a debugging interface, and so on, plus the usual Eclipse features like refactoring, builds, and version control. Watch the video:


Nsight Eclipse Edition is distributed as a pre-built binary, i.e. you can't just point an existing Eclipse install at a new software source. And you have to be in the registered developer program to get access to the download.

Once you install the CUDA Toolkit, say in CUDAHOME=/usr/local/cuda, the nsight executable is in ${CUDAHOME}/libnsight.

2012-04-02

Using NVIDIA drivers in Fedora 16

UPDATE 2 (2012-08-23): The Nvidia installer now makes use of DKMS, so the Nvidia kernel module is rebuilt automatically whenever the kernel is updated. No need to go through this rigamarole at every kernel update.
UPDATE: There was a typo in my lspci command line. Should have been VGA and not CGA.

Installing the latest NVIDIA drivers under Fedora (or really, any distribution) is a little roundabout. Here is how I do it, which is a mix and match of several howtos on the net. Part of the reason the process is complicated is the open source Nouveau driver: it has to be disabled before NVIDIA's driver can be installed.

The canonical reference for all things Linux+NVIDIA is if-not-true-then-false.com. Their write-up on NVIDIA and Fedora 16 gives directions to use the RPMFusion repositories, which provide non-free software (including the NVIDIA drivers).

We will follow their instructions for removing Nouveau, but install NVIDIA drivers downloaded from NVIDIA themselves. First, install the compiler and kernel headers needed to build the NVIDIA kernel module, and bring the kernel and SELinux policy up to date:

$ sudo -i
# yum install gcc kernel-devel
# yum update kernel* selinux-policy
# reboot

To get nouveau out of the initramfs, move the current image aside as a backup and build a new one:

# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
# dracut /boot/initramfs-$(uname -r).img $(uname -r)
# reboot

Next, edit the file /etc/default/grub. To the line that defines GRUB_CMDLINE_LINUX, append the following (rdblacklist keeps nouveau from being loaded in the initramfs stage, and nouveau.modeset=0 disables its kernel mode setting):

rdblacklist=nouveau nouveau.modeset=0

Mine looks like:

GRUB_CMDLINE_LINUX="rd.md=0 rd.lvm.lv=vg_johnny/lv_swap rd.dm=0  KEYTABLE=us quiet  SYSFONT=latarcyrheb-sun16 rhgb rd.luks=0 LANG=en_US.UTF-8 rdblacklist=nouveau  nouveau.modeset=0"

Then, make the grub2 config:

# grub2-mkconfig -o /boot/grub2/grub.cfg

Next, find the model number of your GPU card, so you can pick the appropriate driver from NVIDIA:

> lspci | grep VGA

Mine shows:

01:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 9800 GT] (rev a2)

Then, go to NVIDIA's Linux driver page and pick the appropriate version. For me, it was Linux x86_64/AMD64/EM64T, version 295.33 -- the driver installer is a file named NVIDIA-Linux-x86_64-295.33.run

The installer will build a kernel module, but to do so, you must be in runlevel 3 (i.e. no GUI, but with networking), since the X server must not be running:

# telinit 3

You will drop down to the console prompt. Log in as root, and then do:

# sh NVIDIA-Linux-x86_64-295.33.run

and answer the prompts along the way. You should be able to just do "telinit 5" to get back to the GUI login, but I usually just reboot.

At the final step of the installation, the installer will ask whether you want to enable DKMS, which rebuilds the Nvidia kernel module automatically whenever the kernel is updated. Say "yes"; otherwise, you will have to rebuild the module after every kernel update by repeating the last step.