How to speed up code using GPU?

khan on 10 Apr 2015
Commented: Greg Heath on 20 Apr 2015
Hi all, I have a general question. I have a neural network where the input is 80x60x13x2000.
In the current setup I take one sample (80x60x13) at a time and process it through to the final output. In the first hidden layer it becomes 76x56x11x3, in the second 38x28x9x3, and in the third 34x24x7x3.
Can anybody tell me how I can use the GPU at the first and third layers so that the network runs faster? Previously I converted all of the data to gpuArray, but that made it slower.
Can anybody guide me on how to utilize the GPU better?
With best regards,
khan
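For reference, here is a minimal sketch of the kind of batched GPU filtering I have in mind (the 5x5 filter, the conv2 call, and the batch size of 32 are only assumptions for illustration, not my actual layer code):

X = rand(80, 60, 13, 32, 'single');      % stand-in data: one mini-batch of 32 samples
K = rand(5, 5, 'single');                % assumed example 5x5 filter
Xg = gpuArray(X);                        % one transfer for the whole batch
Kg = gpuArray(K);
Y = zeros(76, 56, 13, 32, 'single', 'gpuArray');
for n = 1:32
    for c = 1:13
        Y(:,:,c,n) = conv2(Xg(:,:,c,n), Kg, 'valid');  % 80x60 -> 76x56, runs on the GPU
    end
end
Yh = gather(Y);                          % copy the results back to the CPU once

The idea is one gpuArray transfer per mini-batch and one gather at the end, instead of moving each 80x60x13 sample to the GPU separately.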
  1 Comment
Greg Heath on 20 Apr 2015
The sizes of inputs, targets and outputs are 2-dimensional. I have no idea how your description relates to 2-D matrix signals and a hidden-layer net topology.
Typically,
[ I N ] = size(input)
[ O N ] = size(target)
[ O N ] = size(output)
The corresponding node topology is
I-H-O for a single hidden layer
I-H1-H2-O for a double hidden layer
Please try to explain your problem in these terms.
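For example, a minimal sketch of what I mean, assuming each 80x60x13 sample is simply vectorized into one column (the rand call is only a stand-in for your real data):

data  = rand(80, 60, 13, 2000, 'single');   % stand-in for your 80x60x13x2000 input
input = reshape(data, 80*60*13, 2000);      % each sample becomes one 62400x1 column
[I, N] = size(input)                        % I = 62400, N = 2000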


Answers (0)
