Basic 'workbench' for processing point cloud data

Re: Basic 'workbench' for processing point cloud data

Post by neeravbm »

"How do you find PCL scales with larger point clouds?"
Most of PCL's algorithms only work when the entire point cloud is loaded into memory. That is fine when you are dealing with individual scans, but loading an entire unified point cloud into memory before applying any processing to it will simply crash the computer. This means the algorithms need to be modified so that they operate on one section of the point cloud at a time.
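For illustration, here is a minimal per-scan sketch of that idea using PCL; the file names and the voxel leaf size are placeholders, and the voxel grid simply stands in for whatever processing step you actually need:

// Process one scan at a time instead of a single unified cloud,
// so that only one scan's points are ever held in memory.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

#include <string>
#include <vector>

int main()
{
    // Placeholder list of per-scan files; in practice you would discover these on disk.
    std::vector<std::string> scan_files = {"scan_001.pcd", "scan_002.pcd", "scan_003.pcd"};

    for (const std::string& file : scan_files)
    {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        if (pcl::io::loadPCDFile<pcl::PointXYZ>(file, *cloud) < 0)
            continue;  // skip scans that cannot be read

        // Example processing step: downsample with a 1 cm voxel grid.
        pcl::VoxelGrid<pcl::PointXYZ> voxel;
        voxel.setInputCloud(cloud);
        voxel.setLeafSize(0.01f, 0.01f, 0.01f);

        pcl::PointCloud<pcl::PointXYZ> filtered;
        voxel.filter(filtered);

        // Write the result, then release this scan before loading the next one.
        pcl::io::savePCDFileBinary("filtered_" + file, filtered);
    }
    return 0;
}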

PCL uses OpenMP, so it should scale well with the number of processor cores you have. One area for improvement is porting these algorithms to the GPU.
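As a concrete example on the OpenMP side, PCL ships *OMP variants of several algorithms. A minimal sketch using the multi-threaded normal estimator, where the input file, search radius, and thread count are placeholder values:

// Multi-threaded normal estimation using PCL's OpenMP variant.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d_omp.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ>("scan_001.pcd", *cloud) < 0)
        return 1;

    pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::Normal> ne;
    ne.setNumberOfThreads(8);    // placeholder; 0 lets OpenMP choose the thread count
    ne.setInputCloud(cloud);
    ne.setRadiusSearch(0.05);    // placeholder 5 cm neighborhood radius

    pcl::PointCloud<pcl::Normal> normals;
    ne.compute(normals);
    return 0;
}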
Neerav Mehta
CTO, Indoor Intelligence
Creators of http://scantobim.xyz and http://rep3d.com

Re: Basic 'workbench' for processing point cloud data

Post by neeravbm »

Jed, Qt Node Editor looks pretty cool! Thanks for sharing the link.

Out of curiosity, what processing steps/nodes do you require? I have created quite a few of my own processing APIs in code. As long as there is no proprietary algorithm in them, I can expose these steps and integrate them with Qt Node Editor.

Also, the developers here are not going to be able to make any money by working on this; it would essentially be a passion/hobby project. But that passion quickly runs out if there are not enough users. So if you are reading this thread and think you would want to use this, please comment on the thread and mention what kinds of processing steps/nodes/filters you yourself would be interested in.
Neerav Mehta
CTO, Indoor Intelligence
Creators of http://scantobim.xyz and http://rep3d.com

Re: Basic 'workbench' for processing point cloud data

Post by jedfrechette »

neeravbm wrote (Tue Oct 23, 2018 5:43 pm): "Also, the developers here are not going to be able to make any money by working on this."
I don't think that needs to be the case. Just look at GDAL, which PDAL was largely inspired by: a huge proportion of both the open and closed source GIS software ecosystem is built on top of it, and a lot of developers make money from that. More directly, I don't think there's anything stopping you from selling proprietary tools as closed-source PDAL plugins. Once again quoting from the docs:
PDAL allows application developers to provide proprietary extensions that act as stages in processing pipelines. These might be things like custom format readers, specialized exploitation algorithms, or entire processing pipelines.
PDAL is still relatively young, so offhand I don't know of any examples of people doing that publicly, but I'm sure it's happening at a consulting level. It's also worth noting that most of PDAL's development is done by a for-profit company.
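To make that concrete, here is a rough sketch of driving such a pipeline from C++. "filters.my_classifier" is an invented name standing in for a proprietary plugin stage that PDAL would discover as a shared library at runtime, and the reader/writer entries are placeholders; this assumes PDAL's PipelineManager API, so treat it as a sketch rather than tested code:

// Sketch: run a PDAL pipeline in which one stage is a hypothetical
// proprietary plugin ("filters.my_classifier") loaded by PDAL at runtime.
#include <pdal/PipelineManager.hpp>

#include <sstream>

int main()
{
    // Pipeline JSON: reader -> proprietary filter -> writer.
    // The bare filename is inferred as a reader; the filter and writer are placeholders.
    std::istringstream pipeline(R"(
    [
        "input.las",
        { "type": "filters.my_classifier" },
        { "type": "writers.las", "filename": "classified.las" }
    ]
    )");

    pdal::PipelineManager mgr;
    mgr.readPipeline(pipeline);  // parse the pipeline definition
    mgr.execute();               // run all stages in order
    return 0;
}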

I'll also point toward Houdini, the other software I mentioned in the last thread. Although it is a proprietary application, it is built on top of, and benefits from, a whole host of open source libraries. To a certain extent, the same could even be said for ESRI's ArcGIS. Personally, I'd be more than happy to buy a point cloud workbench built in the same mold, as long as it remains open enough that I can easily extend it and use it as a platform to build on. An open source core might be a different business model, but that doesn't mean there aren't ways for people to get paid for the valuable effort they put in.


From a production standpoint, the two additions to PDAL that would have the most immediate value to me would be:
  1. Reopen this old feature request https://github.com/PDAL/PDAL/issues/39. In particular, being able to losslessly round-trip a structured scan originating from an e57 file through a PDAL pipeline would be very useful.
  2. Estimate surface normals using the adjacency data from structured scans (a rough sketch of that kind of operation is included further down).
Both of those features are probably best integrated into the open source core. However, after that, various types of classification tools would be high on my wish list, and those are the kind of thing that I think would be much easier to offer as commercial add-ons. Interactive tools would also be very high on my list, but that's a bigger development effort, with different requirements than algorithm development.
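On point 2, here is a rough sketch of the kind of adjacency-based normal estimation I have in mind, using PCL's integral-image estimator on an organized (structured) cloud rather than anything currently in PDAL; the input file and the smoothing parameters are placeholders:

// Sketch: estimate normals from a structured (organized) scan using PCL's
// integral-image method, which exploits row/column adjacency instead of
// unordered nearest-neighbor searches.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/features/integral_image_normal.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr scan(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ>("structured_scan.pcd", *scan) < 0)
        return 1;
    if (!scan->isOrganized())
        return 1;  // this estimator requires the organized width x height structure

    pcl::IntegralImageNormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setNormalEstimationMethod(ne.AVERAGE_3D_GRADIENT);
    ne.setMaxDepthChangeFactor(0.02f);  // placeholder depth-discontinuity threshold
    ne.setNormalSmoothingSize(10.0f);   // placeholder smoothing window (in pixels)
    ne.setInputCloud(scan);

    pcl::PointCloud<pcl::Normal> normals;
    ne.compute(normals);
    return 0;
}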
Jed

Re: Basic 'workbench' for processing point cloud data

Post by jedfrechette »

neeravbm wrote (Tue Oct 23, 2018 5:43 pm): "...mention what kinds of processing steps/nodes/filters you yourself would be interested in."
Talking about specific operators is great, but I don't think that's what we're missing right now. Somebody needs to build the platform first.

ESRI doesn't dominate the GIS industry because they had the best widget for transforming raster maps. They are dominant because they built the platform that everybody else built their tools on top of.
Jed