SqueezeSegV2 Network returns "NaN" for "MeanAccuracy" and "MeanIoU"

Hello everyone,
I am trying to train and validate the SqueezeSegV2 network from the MATLAB page (https://ch.mathworks.com/help/deeplearning/ug/lidar-semantic-segmentation-using-squeezesegv2.html) with my own dataset, but I am getting "NaN" values for the 'MeanAccuracy' and 'MeanIoU' parameters.
My initial point clouds were unorganized, and I converted them to organized point clouds using the same parameters as for the Velodyne sensor (https://ch.mathworks.com/help/lidar/ug/unorgaized-to-organized-pointcloud-conversion.html), except that I set the vertical FoV to different values: VerticalFoV = [26.8 -24.8].
Regarding the labels, I have a set of 'xlsx' files where each cell holds the class name corresponding to the color information of each point in each point cloud (my point clouds contain color information but, unlike the dataset from the MATLAB page, no intensity information, so I filled that channel with values of 0). Based on these xlsx files I created further tables that store the ID value of each class, which I then used to create the PNG label files.
Should I leave the Velodyne parameters at their predefined values? Or could this be related to the missing intensity information in my dataset?
I appreciate any information regarding this issue.
Thanks!

Accepted Answer

Shubh on 17 Jan 2024
Hi Serban,
When you train and validate the SqueezeSegV2 network with your own dataset, "NaN" values for 'MeanAccuracy' and 'MeanIoU' can come from several sources. Here's a breakdown of potential causes and solutions:
1. Vertical Field of View (FoV) Adjustment: Changing the VerticalFoV to [26.8 -24.8] is fine as long as it accurately represents your sensor's field of view. The key is that the network receives data reflecting the actual distribution and characteristics of the input it will encounter in deployment. Make sure, however, that the conversion from unorganized to organized point clouds is done correctly and consistently for every frame (see the input-pipeline sketch after this list).
2. Lack of Intensity Information: The absence of intensity information can matter, especially if the SqueezeSegV2 model you are using was pretrained on, or designed to expect, intensity data; the network may rely on that channel for certain feature extractions. A few approaches to try:
  • Add Intensity as a Feature: If possible, add real intensity information to your dataset. If it is not available, consider using a placeholder value (other than 0) that may work better with the network; the input-pipeline sketch after this list shows where such a placeholder channel slots into the five-channel input.
  • Modify the Network: If adding intensity data is not feasible, consider modifying the network architecture to work without it. This would likely require retraining the network from scratch.
3. Data Preprocessing: Ensure that your data preprocessing steps (such as normalization, scaling, etc.) are correctly applied and consistent with the requirements of the network. Incorrect preprocessing can lead to ineffective training and validation.
4. Labeling and Annotation Quality: The way you convert the 'xlsx' files to PNG labels is crucial. Make sure this conversion accurately reflects the class of each point and produces label images in the format and size the network expects; any mismatch in labeling can lead to poor training outcomes (see the label-conversion sketch after this list).
5. Training Hyperparameters: Check your training hyperparameters. Sometimes, inappropriate learning rates or optimization algorithms can lead to NaN values during training.
6. Network Initialization: If you're using a pretrained model, ensure that it's appropriately adapted to your dataset. If training from scratch, ensure the network is initialized correctly.
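For points 1 and 2, here is a minimal sketch of the input pipeline. It assumes the lidarParameters syntax that takes a vertical resolution, a two-element vertical FoV, and a horizontal resolution, and that you follow the five-channel (x, y, z, intensity, range) input layout used in the MathWorks example; the resolution values and file name are placeholders for your own.

% Organize one unorganized point cloud with your sensor's actual geometry.
ptCloud = pcread('frame0001.pcd');               % placeholder file name
vResolution = 64;                                % number of vertical channels (placeholder)
hResolution = 1024;                              % horizontal angular resolution (placeholder)
vFoV = [26.8 -24.8];                             % your vertical FoV in degrees
params = lidarParameters(vResolution, vFoV, hResolution);
ptCloudOrg = pcorganize(ptCloud, params);

% Build the five-channel (x, y, z, intensity, range) input image.
xyz = ptCloudOrg.Location;                       % M-by-N-by-3 organized coordinates
rangeCh = sqrt(sum(xyz.^2, 3));                  % range of each point
intensityCh = zeros(size(rangeCh), 'like', xyz); % placeholder channel (no intensity recorded)
im = cat(3, xyz, intensityCh, rangeCh);          % M-by-N-by-5 network input

% Organized projections may contain invalid (NaN) points; handle them before training,
% otherwise they propagate straight into the loss and the metrics.
fprintf('Non-finite values in this input: %d\n', nnz(~isfinite(im)));
im(~isfinite(im)) = 0;                           % one simple way to handle them

Use the same params object for every frame so that all inputs share the same M-by-N grid.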
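And for point 4, a sketch of the xlsx-to-PNG conversion, assuming each spreadsheet is an M-by-N grid of class names aligned with the organized projection; the class names, IDs, folder, and file names below are placeholders for your own.

% Hypothetical class list and numeric IDs; replace with your own classes.
classNames = ["road", "vegetation", "car", "background"];
labelIDs   = 1:numel(classNames);

% One xlsx file per frame, each cell holding a class name (assumption).
names = readcell('frame0001_labels.xlsx');       % M-by-N cell array of class names
ids = zeros(size(names), 'uint8');
for k = 1:numel(classNames)
    ids(strcmp(names, classNames(k))) = labelIDs(k);
end

% The PNG must have exactly the same M-by-N size as the five-channel input image,
% otherwise pixels and points will not line up.
imwrite(ids, 'frame0001_labels.png');

% Pixels that received no ID stay 0 and are treated as undefined by the datastore
% unless 0 is listed in labelIDs.
pxds = pixelLabelDatastore('labelFolder', classNames, labelIDs);   % placeholder folder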
Debugging Strategy:
  • Start with a Small Dataset: Try training and validating on a small, well-understood subset of your data where you can manually verify the inputs and expected outputs.
  • Monitor Gradients and Losses: During training, monitor the gradients and loss values to check for exploding or vanishing gradients.
  • Use a Validation Set: A validation set that the network has not seen during training gives a better picture of how well the network is generalizing; a minimal check-and-evaluate sketch follows below.
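As a concrete starting point for these debugging steps (and for point 5 above), the sketch below checks a few samples for non-finite values, turns on the training-progress plot, and inspects the per-class metrics after evaluation. dsTrain, dsVal, imdsVal, pxdsValTruth, and net are placeholder names for your own datastores and trained network, and the option values are only illustrative.

% 1) Verify that the network inputs contain no NaN/Inf values.
reset(dsTrain);                                  % dsTrain: datastore returning {input, label} pairs
for i = 1:5
    sample = read(dsTrain);
    assert(all(isfinite(sample{1}(:))), 'Training sample %d contains NaN or Inf.', i);
end

% 2) Monitor losses and validation metrics while training.
options = trainingOptions('adam', ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 30, ...
    'MiniBatchSize', 8, ...
    'ValidationData', dsVal, ...                 % held-out {input, label} datastore
    'GradientThreshold', 1, ...                  % guards against exploding gradients
    'Plots', 'training-progress');

% 3) After training, inspect the per-class metrics, not only the means.
pxdsResults = semanticseg(imdsVal, net, 'WriteLocation', tempdir);  % imdsVal: inputs only
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsValTruth);
disp(metrics.DataSetMetrics)
disp(metrics.ClassMetrics)   % a single NaN here (e.g. a class with no pixels at all) makes MeanAccuracy and MeanIoU NaN

If ClassMetrics shows NaN only for specific classes, that usually points back to the label PNGs (point 4) rather than to the FoV settings or the intensity channel.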
In summary, the issue is not necessarily due to the FoV parameters or the absence of intensity information alone; it is often a combination of factors related to data preparation, network architecture, and the training process. Careful examination and systematic debugging of each component should help you identify and resolve it.
Hope this helps!

