Hi Shane,
I’m probably not explaining this very well, so I thought I would try again with some illustrations. To begin, let’s start with a single structured scan in an e57 file:
https://master.dl.sourceforge.net/proje ... on018.e57
If we inspect the xml portion we can see what point attributes are present.
Code:
<points type="CompressedVector" fileOffset="48" recordCount="4067815">
<prototype type="Structure">
<cartesianX type="ScaledInteger" minimum="-536870912" maximum="536870911" scale="9.9999999999999995e-007"/>
<cartesianY type="ScaledInteger" minimum="-536870912" maximum="536870911" scale="9.9999999999999995e-007"/>
<cartesianZ type="ScaledInteger" minimum="-536870912" maximum="536870911" scale="9.9999999999999995e-007"/>
<intensity type="ScaledInteger" minimum="0" maximum="32767" scale="3.0518509475997192e-005"/>
<colorRed type="Integer" minimum="0" maximum="255"/>
<colorGreen type="Integer" minimum="0" maximum="255"/>
<colorBlue type="Integer" minimum="0" maximum="255"/>
<rowIndex type="Integer" minimum="0" maximum="2047"/>
<columnIndex type="Integer" minimum="0" maximum="8191"/>
<cartesianInvalidState type="Integer" minimum="0" maximum="2"/>
</prototype>
<codecs type="Vector" allowHeterogeneousChildren="1">
</codecs>
</points>
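If you want to pull that XML section out yourself, here is a rough sketch based on the published ASTM E57 layout: a 48-byte header gives the physical offset and logical length of the XML, and the file is divided into physical pages (normally 1024 bytes) that each end in a 4-byte CRC, so the checksum bytes have to be skipped when reading. This is my own illustration, not code from our tooling:

```python
import struct

CRC_LEN = 4  # each physical page ends in a 4-byte checksum

def read_e57_xml(path):
    """Return the XML section of an e57 file as a string (sketch)."""
    with open(path, "rb") as f:
        # 48-byte header: signature, versions, file length,
        # XML physical offset, XML logical length, page size.
        sig, major, minor, phys_len, xml_off, xml_len, page = struct.unpack(
            "<8sIIQQQQ", f.read(48))
        assert sig == b"ASTM-E57"
        out = bytearray()
        pos = xml_off
        while len(out) < xml_len:
            in_page = pos % page
            take = min(page - CRC_LEN - in_page, xml_len - len(out))
            f.seek(pos)
            out += f.read(take)
            pos = (pos // page + 1) * page  # jump past this page's CRC
        return out.decode("utf-8")
```

Libraries like libE57 do this (plus checksum validation) for you; the point is just that the attribute prototype is plain XML sitting inside the file.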
Note that this is a structured scan, so both rowIndex and columnIndex are defined. If your data is in a ptx file the row and column indexes aren’t given explicitly, but the shape of the 2D array is given in the header, and values for all cells must be given in a specific order in the body, so you can calculate the indexes. This isn’t very efficient from a storage standpoint because you need to store values even for points without returns. I’ve always seen those no-data points filled with coordinates of 0, 0, 0. It hasn’t been touched in a long time, but here is some old code that demonstrates this:
https://sourceforge.net/p/tlspy/code/ci ... es/ptx.py
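The index calculation is just integer arithmetic on the point’s position in the file. A minimal sketch (my own names, assuming the common ptx convention that points are listed column by column and no-return cells are written as 0 0 0 — check your exporter):

```python
def ptx_indexes(num_cols, num_rows, points):
    """Yield (columnIndex, rowIndex, point) for cells with a valid return.

    `points` is the flat sequence of (x, y, z) tuples from the ptx body,
    in file order; the grid shape comes from the ptx header.
    """
    for i, pt in enumerate(points):
        col, row = divmod(i, num_rows)   # column-major ordering assumed
        if pt != (0.0, 0.0, 0.0):        # skip the no-data filler points
            yield col, row, pt
```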
In any case, once you have the indexes for each point everything I describe below applies to structured scans in either e57 or ptx format.
If we load that e57 point cloud I linked above and inspect the attributes we can see that they are all present. Note that we’ve renamed and scaled some of the attributes to match our working software’s conventions; in particular, columnIndex == uv[0] and rowIndex == uv[1].
pt_cloud_data.JPG
Since the uv (aka rowIndex, columnIndex) coordinates define the scan point positions on a regular 2D grid, it is trivial to create a mesh by connecting each scan point to its neighbors in uv space. The top pane is the mesh in 3D space and the bottom two panes show the same mesh in uv space, with the lower right pane zoomed in. Note that the vertices of the mesh are still the original scan points; we haven’t changed them in any way, simply added faces connecting them.
mesh_uvs.JPG
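The connectivity step is simple enough to sketch. This is an illustration rather than our actual code: walk the uv grid, and wherever all four corners of a grid cell have valid points, emit two triangles.

```python
def grid_faces(grid, num_cols, num_rows):
    """Build triangle faces over a structured scan grid.

    `grid` maps (columnIndex, rowIndex) -> vertex id, with entries only
    for cells that have a valid return.
    """
    faces = []
    for u in range(num_cols - 1):
        for v in range(num_rows - 1):
            quad = [(u, v), (u + 1, v), (u + 1, v + 1), (u, v + 1)]
            if all(c in grid for c in quad):      # all four corners valid
                a, b, c, d = (grid[q] for q in quad)
                faces.append((a, b, c))           # split the uv quad
                faces.append((a, c, d))           # into two triangles
    return faces
```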
Technically, we don’t even need to generate a mesh for the next steps, but it is a good way to visualize the connectivity. Since the uv coordinates are a mapping to a 2D coordinate system we can use them directly as pixel indexes and render out the scan as an image.
comp_pano.jpg
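Rendering an attribute image really is as direct as it sounds: rowIndex selects the pixel row and columnIndex the pixel column. A minimal sketch (names are mine, and I’m using nested lists where you’d normally use a numpy array):

```python
def scan_to_image(points, num_cols, num_rows, background=0.0):
    """Rasterise one scan attribute using uv coordinates as pixel indexes.

    `points` is a sequence of (columnIndex, rowIndex, value) tuples;
    cells with no return are left at the background value.
    """
    img = [[background] * num_cols for _ in range(num_rows)]
    for u, v, value in points:
        img[v][u] = value   # rowIndex -> pixel row, columnIndex -> pixel column
    return img
```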
The points have multiple attributes so we aren’t restricted to just using color to create an image. We can render out images for any of the point attributes, including the x, y, and z coordinates. That also means that if we render images for all the attributes we can invert the process and losslessly convert those images back into scans.
pt_coords.jpg
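The inverse step follows the same mapping in reverse. A sketch under the assumption that each attribute was written to its own lossless image and a validity image marks which cells had returns:

```python
def images_to_points(x_img, y_img, z_img, valid_img):
    """Rebuild (columnIndex, rowIndex, (x, y, z)) tuples from attribute images.

    Each *_img is a row-major 2D array (list of rows); `valid_img` is
    truthy wherever the scan had a return at that cell.
    """
    points = []
    for v, row in enumerate(valid_img):
        for u, valid in enumerate(row):
            if valid:
                points.append((u, v, (x_img[v][u], y_img[v][u], z_img[v][u])))
    return points
```

Note this only works losslessly if the images are stored in a format and bit depth that preserve the attribute values exactly, which is why the coordinate images above look the way they do.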
Here is a full-resolution panorama generated using this method. You have to look closely to see it in this example, but if you zoom in on the exact center of the image you can find a vertical seam that splits the pano in half. That’s because you’re actually looking at the left and right halves of the scan sitting next to each other, and the seam in the image is where they would overlap in 3D space. The panorama I’ve created here is NOT equivalent to putting a virtual 360 camera at the scan position and rendering the view. Doing that would be even easier, but for our use case we’re most interested in maintaining the structure of the scans, so doing a direct conversion is better.
Station018_Color.jpg
The specifics of our workflow are a little specialized, but there’s no reason you can’t take the same approach with other tools. The core part of a hypothetical PDAL pipeline might be:
Code:
[
    {
        "type": "readers.e57",
        "filename": "Station018.e57"
    },
    {
        "type": "filters.assign",
        "value": "X = columnIndex"
    },
    {
        "type": "filters.assign",
        "value": "Y = rowIndex"
    },
    ... some other steps to save as a raster ...
]
The main blocker to doing that right now is that the attributes columnIndex and rowIndex aren’t read by readers.e57. Adding them should be a pretty small pull/feature request, which is why I suggest asking on the PDAL mailing list.