How do you change the output dimension of a NN layer?
I am attempting to implement the NeRF2 architecture from this paper. So far, I have the layout of the network working properly, as well as the attenuation network and the radiance network separately. At the moment, my network looks like this:

However, the outputs from relu_8 have shape CB, while the other two inputs to radianceConcat have shape SCB. The AttnReshape layer was supposed to fix this, but I can't figure out how to actually relabel the dimensions of the data. I've attached the code I'm using below. I am completely at a loss here; any help would be greatly appreciated.
8 Comments
Cris LaPierre
on 5 Mar 2025
Your attached code returns errors:
- Unrecognized function or variable 'ReshapeLayer'.
- Unable to find file or directory 'reshapeLayer.mat'.
Alex B
on 5 Mar 2025
Matt J
on 5 Mar 2025
However, the outputs from relu_8 have shape CB, while the other two inputs to radianceConcat have shape SCB.
That alone is not the problem. When you concatenate arrays, they must have the same size along every dimension except the one along which the concatenation is intended. You have configured radianceConcat to concatenate along the 3rd dimension, which means all three inputs need the same size(Input,1:2). However, the sizes of the input activations are
size(headingDirInputs_Activation)=[3,20,1]
size(rxPosInputs_Activation)=[2,8,1]
size(relu_8_Activation)=[1,1]
It is not at all clear how any reshaping is supposed to let you concatenate along the 3rd dimension, and particularly not relu_8 which is just a scalar.
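The size constraint can be checked directly on plain arrays. The following sketch uses the activation sizes reported above as illustrative stand-ins, not the poster's actual network:

```matlab
% Concatenation along dim 3 requires dims 1 and 2 to agree.
A = rand(3,20,1);   % stand-in for headingDirInputs_Activation
B = rand(2,8,1);    % stand-in for rxPosInputs_Activation
try
    C = cat(3, A, B);      % errors: sizes differ in dims 1 and 2
catch err
    disp(err.message)
end

% This succeeds, because dims 1 and 2 match:
C = cat(3, rand(3,20,2), rand(3,20,5));
size(C)                    % [3 20 7]
```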
Alex B
on 5 Mar 2025
so the concat layer throws an error saying the dimensions aren't consistent.
They are not consistent, but it is not just that some inputs are CB and some are SCB. You can see in the output of analyzeNetwork(nerfNet) that all the activations going into radianceConcat are of different sizes in all dimensions. You cannot freely concatenate arrays of arbitrary sizes even if all of them were SCB.
You need to tell us what dimensions the activations, both going into radianceConcat and coming out, are expected to have.

I tried using the ReshapeLayer to cast the inputs to a dlarray with the proper size, but that gave me an error as well.
The only place the ReshapeLayer is applied is the output of relu_8, but the output of relu_8 is just a scalar. A scalar cannot be reshaped into anything larger, because reshaping must preserve the number of elements.
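To see why, note that MATLAB's reshape only rearranges existing elements, so the total element count cannot change; a 1-by-1 output like relu_8's cannot become, say, 3-by-20. A hedged illustration on a plain scalar, not the poster's code:

```matlab
x = 5;                       % a scalar, like the relu_8 output
y = reshape(x, 1, 1);        % allowed: still exactly one element
try
    z = reshape(x, 3, 20);   % errors: 1 element cannot fill 60
catch err
    disp(err.message)
end
```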
Alex B
on 5 Mar 2025
Alex B
on 5 Mar 2025
Answers (1)
Matt J
on 5 Mar 2025
0 votes
Probably. See if the attached network does what you want.