Jensen Huang, Paris, June 2025: https://www.youtube.com/watch?v=X9cHONwKkn4
Each market, each domain of application, becomes accelerated. Each one of these libraries opens up new opportunities for developers, and it opens up new opportunities for growth for us and our ecosystem partners.

Computational lithography, probably the single most important application in semiconductor design today, runs in a factory at TSMC, Samsung, and the large semiconductor fabs. Before the chip is made, it runs through an inverse physics algorithm called cuLitho, computational lithography.
Direct sparse solvers. Algebraic multigrid solvers. cuOpt, which we just open sourced, is an incredibly exciting application library.
This library accelerates decision making to optimize problems with millions of variables and millions of constraints, like traveling salesperson problems.
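To give a feel for the problem class cuOpt targets, here is a tiny traveling-salesperson instance solved with a plain nearest-neighbor heuristic in pure Python. This is not the cuOpt API, and the function name and city coordinates are made up for illustration; cuOpt's point is solving such routing problems at the scale of millions of variables and constraints on GPUs.

```python
# Toy nearest-neighbor heuristic for a tiny TSP instance (illustrative
# only; NOT the cuOpt API).
import math

def nearest_neighbor_tour(points):
    """Greedy tour: start at points[0], repeatedly visit the closest
    unvisited point. Returns the visit order as a list of indices."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (2, 0), (2, 1)]
print(nearest_neighbor_tour(cities))  # -> [0, 1, 3, 2]
```

A greedy heuristic like this gives no optimality guarantee; production solvers such as cuOpt use far stronger optimization techniques, which is where the GPU acceleration pays off.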
Warp, a Pythonic framework for expressing geometry and physics solvers, is really important. cuDF and cuML: structured databases, DataFrames, classical machine learning algorithms. cuDF accelerates Spark with zero lines of code change. cuML accelerates scikit-learn with zero lines of code change.
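To make "zero lines of code change" concrete, here is a sketch of the invocation pattern as I understand it from the RAPIDS documentation. Running it requires an NVIDIA GPU and the RAPIDS packages, so the snippet is comments only; treat the exact module names as assumptions to verify against the current docs.

```python
# Zero-code-change acceleration pattern (illustrative sketch; needs an
# NVIDIA GPU plus RAPIDS, so nothing is executed here).
#
# For pandas-style code, the cuDF entry point is a module runner:
#   python -m cudf.pandas my_script.py
# or, at the top of a script, before importing pandas:
#   import cudf.pandas
#   cudf.pandas.install()
#   import pandas as pd   # unchanged pandas code now dispatches to the GPU
#
# cuML exposes the analogous accelerator for scikit-learn:
#   python -m cuml.accel my_script.py
# The script's own "from sklearn import ..." lines stay exactly as
# written, which is the sense of "zero lines of code change".
#
# (Spark acceleration is a separate plugin, the RAPIDS Accelerator for
# Apache Spark, configured on the Spark side rather than in user code.)
```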
Dynamo and cuDNN: cuDNN is probably the single most important library NVIDIA has ever created. It accelerates the primitives of deep neural networks. And Dynamo is our brand-new library that makes it possible to dispatch, orchestrate, and distribute extremely complex inference workloads across an entire AI factory.
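A "primitive of deep neural networks" of the kind cuDNN accelerates is, for example, convolution. The sketch below writes a 1-D valid convolution naively in pure Python (strictly, a cross-correlation, as deep learning frameworks define convolution). It shows only the mathematical operation, not cuDNN's API; cuDNN's value is highly tuned GPU kernels for exactly these operations.

```python
# Naive 1-D valid convolution (cross-correlation), pure Python, for
# clarity only. Libraries like cuDNN implement this primitive with
# heavily optimized GPU kernels.
def conv1d_valid(signal, kernel):
    """Slide `kernel` over `signal`.
    Output length is len(signal) - len(kernel) + 1."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

print(conv1d_valid([1, 2, 3, 4], [1, 0, -1]))  # -> [-2, -2]
```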
cuEquivariance and cuTensor: tensor contraction algorithms. Equivariance is for neural networks that obey the laws of geometry, such as proteins and molecules. Aerial and Sionna: a really important framework to enable AI to run 6G. Earth-2 is our simulation environment for foundation models of weather and climate, at kilometer scale, incredibly high resolution.
MONAI, our framework for medical imaging, is incredibly popular. Parabricks, a solver for genomics analysis, is incredibly successful. cuQuantum and CUDA-Q, which I'll talk about in just a second, are for quantum computing. And cuPyNumeric is acceleration for NumPy and SciPy. As you can see, these are just a few examples of the libraries; there are 400 others.
Each one of them accelerates a domain of application. Each one of them opens up new opportunities. Well, one of the most exciting is CUDA-Q. CUDA-X is this suite of libraries, a library suite for accelerating applications and algorithms on top of CUDA. We now have CUDA-Q. CUDA-Q is for quantum computing, for quantum-classical computing based on GPUs.
CUDA-Q
We've been working on CUDA-Q now for several years. And today I can tell you there's an inflection point happening in quantum computing.