Previously, writing software applications to run on GPUs meant programming in the language of the GPU, disguising the code as image transformations and matrix operations. This was described on the Dr. Dobb's website as ‘a process similar to pulling data out of your elbow to get it to where you could look at it with your eyes!’
Helpfully, Nvidia’s CUDA was created to provide an easy-to-use API that lets programmers work in familiar programming languages while developing software that runs on GPUs. Yet even with CUDA’s support for the common C/C++ and Fortran languages, and for newer languages as well, we still haven’t seen a surge of applications that support GPUs; progress is best described as slow. Why is this?
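To make the contrast concrete, here is a minimal sketch of what CUDA’s ‘familiar language’ approach looks like in practice: ordinary C/C++ with a kernel qualifier and a launch syntax, rather than computations disguised as graphics operations. This is a generic vector-addition example of my own, not code from any particular application; the names and sizes are illustrative.

#include <cstdio>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // one million elements (illustrative)
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the host-side code close to ordinary C.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();          // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);      // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Aside from the kernel qualifier, the triple-angle-bracket launch, and the memory-management calls, this is recognizably plain C, which is precisely the accessibility CUDA was created to offer.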