http://www.youtube.com/watch?v=r1sN1ELJfNo - 1 year ago
http://www.youtube.com/watch?v=ebCou7hnol0 - 4 months ago
Note that the latest video cards are around 4-5 times more powerful than the hardware used in these demos!
Executable
http://galaxy.u-aizu.ac.jp/trac/note...0/R700demo.zip
This works a treat on my ATI 5850 video card! - a galaxy with 40,000 stars flies around beautifully.
http://galaxy.u-aizu.ac.jp/trac/note...screenshot.png
* * * * * * * * * * * * * * *
This is a pretty nifty example of a simulation of 10,000 to 40,000 stars interacting - very fast, in real time.
Previously this sort of demonstration required either a lot of time or access to a node on a Beowulf cluster supercomputer. But next-generation graphics cores (ATI's are already released; NVidia's are due in late Q1 2010) offer a massive number of powerful processors - ATI's mid-range cards have 1,440 stream processors, the higher end 1,600 and the top of the line 3,200 - and you can glue multiple graphics cards together.
Up to now it was pretty challenging to spread compute-bound loads between a CPU and GPU. With Windows 7 and either DirectX 11's DirectCompute, the Khronos Group's OpenCL, ATI's Stream or NVidia's CUDA, it is now possible to load-balance compute work across both the CPU and GPU highly effectively.
What this demo shows is a compute-bound problem being solved on the fly. 3D gravity models are O(n squared) in complexity: 30,000 stars means approximately 900,000,000 pairwise interactions (at roughly 18 floating-point operations each) per simulation step. GPUs have now reached over 3 teraflops of performance - that's 3,000,000,000,000 floating-point operations per second - an order of magnitude or two more than CPUs alone.
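To make the n-squared cost concrete, here is a minimal sketch in plain Python of the all-pairs force calculation a demo like this runs on the GPU (the real demo is GPU shader/kernel code; this is just the same math on the CPU). The gravitational constant, masses and the softening term are illustrative values I've assumed, not taken from the demo.

```python
import math

G = 1.0           # gravitational constant in simulation units (assumed)
SOFTENING = 1e-3  # assumed small constant to avoid a blow-up when two stars nearly touch

def accelerations(positions, masses):
    """Return the gravitational acceleration on each star from every other star.

    Every pair (i, j) is visited, so for n stars the inner loop runs
    n*(n-1) times - the n-squared cost discussed above. Each interaction
    is a handful of multiplies, adds and one square root, which is where
    the "roughly 18 operations per interaction" estimate comes from.
    """
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            dz = positions[j][2] - zi
            r2 = dx * dx + dy * dy + dz * dz + SOFTENING * SOFTENING
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            f = G * masses[j] * inv_r3
            acc[i][0] += f * dx
            acc[i][1] += f * dy
            acc[i][2] += f * dz
    return acc

# Two equal-mass stars one unit apart: each is pulled toward the other.
pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
m = [1.0, 1.0]
a = accelerations(pos, m)
```

The GPU version parallelises the outer loop - one thread per star - which is why thousands of stream processors make such short work of it.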
NVidia and ATI will shortly begin releasing scientific demonstrations of this kind of software. I for one would love to run galaxy simulations as my screen saver.
Matthew