Error executing LLM model: Cuda error: no kernel image is available for execution on the device

If you receive an error like this when running an LLM model:

Error executing LLM model: Cuda error: no kernel image is available for execution on the device

The first things to check:

- NVIDIA graphics drivers are up to date

- The installed CUDA toolkit is the latest stable version

Often these are installed together, by downloading the appropriate installer or scripts from the NVIDIA website.
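The two checks above can be done from a terminal. A minimal sketch, assuming `nvidia-smi` (which ships with the driver) and `nvcc` (which ships with the CUDA toolkit) are on your PATH; each falls back to a message if the command is missing:

```shell
# Driver version and GPU name (nvidia-smi ships with the driver)
nvidia-smi --query-gpu=driver_version,name --format=csv,noheader 2>/dev/null \
  || echo "nvidia-smi not found: the NVIDIA driver may not be installed"

# CUDA toolkit version (nvcc ships with the CUDA toolkit)
nvcc --version 2>/dev/null \
  || echo "nvcc not found: the CUDA toolkit may not be installed"
```

Note that the driver and the toolkit are versioned separately; `nvidia-smi` also reports the highest CUDA version the driver supports, which should be at least the toolkit version `nvcc` reports.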

If the error still occurs, the issue may be that the client library you are using was not built against your installed CUDA version, or that its precompiled CUDA kernels do not target your GPU's compute capability — that is exactly what "no kernel image is available for execution on the device" means.

Example: the ctransformers Python library.


To build ctransformers locally:

```
pip3 uninstall ctransformers
pip3 install ctransformers --no-binary ctransformers
```

The --no-binary option prevents pip from installing a pre-built wheel, forcing a local build from source. The resulting binaries are compiled against the CUDA toolkit and NVIDIA driver installed on your machine, so the kernels match your GPU.
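The ctransformers README also documents a `CT_CUBLAS=1` environment variable to enable cuBLAS/CUDA support when building from source (`CT_CUBLAS=1 pip install ctransformers --no-binary ctransformers`). After reinstalling, you can confirm where the package landed and which native shared libraries it is carrying — a minimal sketch, assuming a recent ctransformers package layout (the `lib/` directory name may differ between versions):

```python
import importlib.util
import pathlib

spec = importlib.util.find_spec("ctransformers")
if spec is None:
    print("ctransformers is not installed")
else:
    pkg_dir = pathlib.Path(spec.origin).parent
    print("installed at:", pkg_dir)
    # Prebuilt wheels ship their native libraries under the package
    # directory; a local build replaces them with freshly compiled ones.
    for lib in sorted(pkg_dir.rglob("*.so")):
        print(lib.relative_to(pkg_dir))
```

If the listing still shows libraries from a prebuilt wheel (or nothing CUDA-related), the local build may have silently fallen back to a CPU-only configuration.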
