MATLAB Answers


CUDA_ERROR_UNKNOWN when using floats instead of double precision

Asked by Francesco on 18 Sep 2013


I have two otherwise identical .cu files: in one the variables are declared as float, and in the other as double.

The double-precision version works perfectly when launched as a kernel, whereas the float version does not. I get an error when I gather() the output variables:

Error using gpuArray/gather
An unexpected error occurred during CUDA execution. The CUDA error was: CUDA_ERROR_UNKNOWN

From the documentation, it appears that feval() will automatically cast my input arrays to the correct type. However, I have also tried converting each input/output array to single precision with single(), and I get a similar error.

Here are the kernel signatures:

__global__ void SegForceNBodyCUDA(double const *SoA,
                    double const a, double const MU, double const NU,
                    int const S,
                    double *f0x, double *f0y, double *f0z,
                    double *f1x, double *f1y, double *f1z);


__global__ void SegForceNBodyCUDA(float const *SoA,
                    float const a, float const MU, float const NU,
                    int const S,
                    float *f0x, float *f0y, float *f0z,
                    float *f1x, float *f1y, float *f1z);

Both .cu files compile correctly without errors/warnings.

Please advise.

Thank you,



So, basically I just convert the inputs/outputs that were double precision to single precision using single().

Also, although you say that it won't automatically cast the inputs to the correct type, the documentation says "All scalar inputs in C (double, float, int, etc.) must be scalars in MATLAB, or scalar (i.e., single-element) gpuArray data. They are passed (after being cast into the requested type) directly to the kernel as scalars. All const pointer inputs in C (const double *, etc.) can be scalars or matrices in MATLAB. They are cast to the correct type, copied onto the card, and a pointer to the first element is passed to the kernel. No information about the original size is passed to the kernel. It is as though the kernel has directly received the result of mxGetData on an mxArray." So, my interpretation is that it does cast them...?



Casting is unlikely to be the issue here - you see the error even when you have forced the types to be correct. CUDA_ERROR_UNKNOWN usually means that the card has crashed, and I've previously seen this when there is a bad memory access (e.g. off the end of an array, or an invalid pointer etc.).

Unfortunately that means that the problem may be somewhere in your kernel or in the way the output arrays are being handled in CUDAKernel. Without your code it's hard to identify which of those two it might be. If you could supply a minimal example that is sufficient to demonstrate the problem, I might be able to help work out where the problem lies. You can send it to me directly if you can't (or don't want to) post it here.

Finally, about the casting: if the input is plain MATLAB data, it is cast to the correct type and copied to the GPU. If the input is already on the GPU (i.e. it is a gpuArray), the type and complexity must exactly match and a specific error is thrown if they do not. Since you do not see this error, your inputs are being cast for you.



Hi Ben,

Thank you for your reply.

The fact is that I directly modified the original double-precision code by simply declaring variables as "float" instead of "double". I haven't made any changes to the algorithm, and I don't allocate any memory within the kernel file, so I am not sure why it crashes.

The file is rather long, about 1000 lines of code, separated into several functions, so it's difficult for me to whittle it down to a minimal working example.

I can send it to you to see if you can reproduce the error, and in the meantime I will also look for it myself.

Thank you,

Kind Regards,



1 Answer

Answer by Ben Tordoff on 25 Sep 2013
 Accepted Answer

Thanks for sending the code.

I’ve done some initial investigation and it looks like you have an illegal memory access somewhere. Here is what cuda-memcheck reports:

Running CUDA Single Precision, Optimised...
warning: Cuda API error detected: cuModuleGetGlobal_v2 returned (0x1f4)
warning: Cuda API error detected: cuModuleGetGlobal_v2 returned (0x1f4)
[Launch of CUDA Kernel 102 (SegForceNBodyCUDA<<<(4,1,1),(256,1,1)>>>) on Device 0]
Memcheck detected an illegal access to address (@local)0xfff830
Program received signal CUDA_EXCEPTION_1, Lane Illegal Address.
[Switching focus to CUDA kernel 102, grid 103, block (0,0,0), thread (5,0,0), device 0, sm 12, warp 2, lane 5]
0x0000000010052d98 in SegForceNBodyCUDA(float const*, float, float, float, int, float*, float*, float*, float*, float*, float*) ()

I couldn't see anything obviously wrong in the kernel, but it's quite a lot of code. The most likely culprit is reading/writing past the end of an input/output array. However, the fact that the illegal memory address includes "@local" may indicate a problem with how data is being passed around internally to the kernel (i.e. in thread-local memory). I don't think there is any problem with the way the kernel is being called by MATLAB.

Best of luck debugging this - these kinds of problems can be tricky to isolate.


  1 Comment

Thanks a lot Ben!

I'll try my best :)
