I have some 2D images - reconstruction or transaxial slices of tomography -
that I want to "put together" so I can build a 3D volume.
Does anyone know how to do this?
You could use the CAT command, e.g.,
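A minimal sketch of what that looks like, assuming the slices are same-size 2D arrays (the variable names here are just placeholders for your actual images):

```matlab
% Stack N same-size 2D slices along the third dimension.
% slice1..slice3 stand in for your reconstructed tomography slices.
slice1 = rand(256, 256);
slice2 = rand(256, 256);
slice3 = rand(256, 256);

V = cat(3, slice1, slice2, slice3);   % V is 256-by-256-by-3
size(V)
```

Each 2D slice then lives at `V(:,:,i)`.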
It would generate a 3-dimensional matrix, right?
Do you know how to create a 3D image from that matrix?
Isn't that what I just showed you?
cat() can be used to build the image as Matt shows. The problem is to display the image.
I was able to obtain a 3-dimensional matrix: a stack of N slices of M-by-K data in a single variable.
However, now I want to DISPLAY the 3D image, the way I displayed each M-by-K slice in 2D.
Do you know how, Walter Roberson?
Here's a pretty popular 3D viewer
There are many others to browse on the FEX as well.
I don't know if I read it wrong, but it seems this function - and several others - needs a volume.
I tried to find something, but all I found was functions that use volume data to obtain some kind of information about 2D data, e.g., slices.
What I need is the opposite: something that takes my 2D images (= slices), or my 2D data, and builds a 3D volume from them, or at least produces some kind of visualization of that 3D volume, even if it does nothing else.
You've already been shown how to combine a set of 2D slices into a 3D volume. That's what you've already done using the CAT command. Now that you have your volume, displaying it should just be a matter of feeding it to a 3D viewer.
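For example, assuming you have the Image Processing Toolbox (R2017a or later), the built-in volumeViewer accepts a plain 3D array directly, and the built-in slice function gives a quick cut-plane check with no toolbox at all (the volume V below is synthetic, just for illustration):

```matlab
% Build a synthetic 64x64x32 volume to stand in for stacked slices.
[x, y, z] = meshgrid(1:64, 1:64, 1:32);
V = exp(-((x - 32).^2 + (y - 32).^2) / 200) .* sin(z / 4);

% Cut-plane visualization with the built-in SLICE function:
slice(V, 32, 32, 16);
shading interp;
colorbar;

% Or, with the Image Processing Toolbox (R2017a+):
% volumeViewer(V)
```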
Is it that easy?
I tried to run the function before, but it wasn't doing what I wanted.
I thought that maybe it would not work since I didn't have the appropriate files, like:
Matlab Files (.mat)
Dicom Files ( .dcm , .dicom )
V3D Philips Scanner ( .v3d )
GIPL Guys Image Processing Lab ( .gipl )
HDR/IMG Analyze ( .hdr )
ISI Files ( .isi )
NifTi ( .nii )
RAW files ( .raw , .* )
VMP BrainVoyager ( .vmp )
XIF HDllab/ATL ultrasound ( .xif )
VTK Visualization Toolkit ( .vtk )
Insight Meta-Image ( .mha, .mhd )
Micro CT ( .vff )
The images are in .bmp, btw.
Is it possible to just feed in the data and have the program make a 3D image? And what about the colormap - which one will it use?
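For reference, reading a numbered series of .bmp slices into one 3D array could look like the sketch below. The file-name pattern 'slice_%03d.bmp' and the slice count are assumptions; adjust them to your actual files:

```matlab
% Read a numbered series of BMP slices into a single 3D volume.
% 'slice_%03d.bmp' is a hypothetical naming pattern - change to match your files.
numSlices  = 50;                                  % assumed number of slices
firstSlice = imread(sprintf('slice_%03d.bmp', 1));
V = zeros(size(firstSlice, 1), size(firstSlice, 2), numSlices);

for k = 1:numSlices
    img = imread(sprintf('slice_%03d.bmp', k));
    if ndims(img) == 3
        img = rgb2gray(img);   % BMPs may store RGB; reduce to a single intensity channel
    end
    V(:, :, k) = double(img);
end
```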
Since you said these were tomography slices, I assumed they were grayscale. They definitely started off that way. Tomography scanners don't reconstruct color images. I would make a grayscale volume and then use the viewer's user interface to set the color rendering you want.
Yes, they start as grayscale, but as I said, I did some math with those values and obtained new values and new images, with a new colorbar and, since the values changed, a new scale. The thing is, how do I go back to grayscale when my values are no longer in the 0-255 range? And if I do that, wouldn't I lose my reference, which is the whole point of what I've been doing so far?
Only you understand the purpose of any of this. I don't quite understand why it makes sense to assign a different color scale to different slices in a volume. If you switch to sagittal or coronal slices, it will look crazy.
The color scale is the same for all new slices.
My purpose is simple: I have a grayscale image, which is - in physics - directly related to the attenuation coefficient of my sample in relation to radiation, in this case, X-rays.
However, the attenuation coefficient depends on density and atomic number. So, using some equations, I can obtain new images, one related to density and another related to atomic number. The color scale is just so I know which colors correspond to which values of density or atomic number.
To sum it up, I need this new color scale so I can identify those tiny variations of my variables. I don't really care how strange it may look, I just want to obtain my slices and my volume. The slices I have already obtained. I just need my volume to finish it.
I can even change my scale to make things easier, as long as I can identify, through a new scale, which colors are related to which values.
I think the bottom line is that the 3D volume data that you feed to the 3D viewer code must not be pre-divided into color components. Each slice of the volume A(:,:,i) must represent an intensity distribution only. However, the intensity does not necessarily have to be as originally reconstructed from the tomograph. It sounds like you could apply all the post-transformations you mention to the 3D intensity distribution and then call that final 3D array of intensities your input volume to the viewer.
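One way to do that last step, as a sketch: rescale the transformed volume linearly to a displayable intensity range and record the mapping, so the physical values stay recoverable. This is what mat2gray (Image Processing Toolbox) does; the equivalent is spelled out below so the mapping is explicit:

```matlab
% Rescale an arbitrary-range volume to [0, 1] for display, keeping
% the linear mapping so the physical values remain recoverable.
V = randn(64, 64, 32) * 5 + 3;        % stand-in for a density/atomic-number volume

vmin = min(V(:));
vmax = max(V(:));
Vn   = (V - vmin) / (vmax - vmin);    % same result as mat2gray(V)

% To recover the physical value behind a displayed intensity g:
%   value = vmin + g * (vmax - vmin)
```

The rescaling is monotonic, so the colormap the viewer applies to Vn still orders your density or atomic-number values correctly.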
If I've understood it right, then the bottom-line solution is simply to avoid intermediately storing the data as bitmaps. The bitmap format splits the image into color channels, which it sounds like you don't need. So, store the transformed volume data in a .mat file instead.
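Saving and reloading the volume through a .mat file keeps the raw intensity values intact, with no color-channel splitting or 8-bit quantization (the variable and file names here are just examples):

```matlab
% Save the volume without any color-channel or quantization loss.
V = rand(64, 64, 32);                % stand-in for the transformed volume
save('myVolume.mat', 'V');           % writes V to myVolume.mat

% Later, restore it exactly as it was:
S = load('myVolume.mat');
isequal(S.V, V)                      % the values round-trip unchanged
```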