We dream a magic button for 3-D point cloud processing

Re: We dream a magic button for 3-D point cloud processing

Post by Joon »

It is my pleasure to release

FindSurface SDK
for the recognition of plane / sphere / cylinder / cone / torus in point cloud data.

The following are provided:
- FindSurface runtime library for Android / iOS / Windows / Linux
- Source code for basic demo applications.

Further source code will be published later for the application examples shown on the CurvSurf YouTube channel.

The FindSurface Web Demo is a tool for learning the functionality of FindSurface.

The FindSurface runtime library is free for non-commercial use.
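
FindSurface itself is closed source, and its actual API is not reproduced here. As a rough, hypothetical illustration of the kind of primitive fitting the SDK performs, the Swift sketch below fits the simplest of the five primitives, a plane, to a point cloud by orthogonal least squares: the plane passes through the centroid, and its normal is the direction of smallest variance of the covariance matrix, found with a simple power iteration. This is a generic textbook fit, not the FindSurface algorithm.

```swift
import simd

// Hypothetical illustration only: a plain orthogonal least-squares plane fit,
// not the FindSurface API or algorithm.
func fitPlane(_ points: [SIMD3<Float>]) -> (centroid: SIMD3<Float>, normal: SIMD3<Float>)? {
    guard points.count >= 3 else { return nil }
    let centroid = points.reduce(SIMD3<Float>.zero, +) / Float(points.count)

    // 3x3 covariance matrix of the centered points.
    var cov = simd_float3x3()                       // zero matrix
    for p in points {
        let d = p - centroid
        cov += simd_float3x3(columns: (d * d.x, d * d.y, d * d.z))
    }

    // The plane normal is the eigenvector of the smallest eigenvalue of `cov`.
    // Shift the spectrum (m = trace * I - cov) so that this eigenvector becomes
    // the dominant one, then recover it by power iteration.
    let trace = cov[0][0] + cov[1][1] + cov[2][2]
    let m = simd_float3x3(diagonal: SIMD3<Float>(repeating: trace)) - cov
    var normal = SIMD3<Float>(1, 1, 1)
    for _ in 0..<50 {
        normal = simd_normalize(m * normal)
    }
    return (centroid, normal)
}
```

A real primitive detector additionally has to segment the inlier points around a seed region and decide between plane, sphere, cylinder, cone, and torus, which is what the SDK automates.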

Joon

Re: We dream a magic button for 3-D point cloud processing

Post by Joon »

Good news!

The source code of the iOS app combining the following two apps has been released:
1. Realtime geometry extraction: Finding Planes in depthMap - Apple iPad Pro LiDAR (auto, plane, sphere, cylinder, cone, torus)
2. Geometry extraction from a saved point cloud: As-Built Object Geometry from Point Cloud - Apple iPad Pro LiDAR (auto, plane, sphere, cylinder, cone, torus).

The iOS App runs on:
- iPad Pro LiDAR
- iPhone 12 Pro LiDAR
- iPhone 12 Pro Max LiDAR
- iPhone 13 Pro LiDAR
- iPhone 13 Pro Max LiDAR.
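
Not taken from the released source code: a minimal standalone sketch of how an iOS app on these devices obtains the LiDAR depth data through ARKit's public sceneDepth route, which is the input such realtime geometry extraction works from. The class and method names below are standard ARKit; the wrapper class itself is only illustrative.

```swift
import ARKit

// Minimal illustrative wrapper: request LiDAR scene depth and receive the
// per-frame depthMap that a geometry-extraction pipeline would consume.
final class DepthCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // sceneDepth is only available on LiDAR devices (the list above).
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        // 256x192 Float32 depth values in meters.
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        // Together with frame.camera (intrinsics and pose), these depths can be
        // unprojected into a 3D point cloud for surface detection.
        print("depthMap \(width)x\(height)")
    }
}
```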

Joon

Re: We dream a magic button for 3-D point cloud processing

Post by Joon »

Once an API for accessing the 576 raw laser depth points of Apple's mobile LiDAR becomes available, object surface detection and measurement will be possible even in total darkness, from the point of view of the moving LiDAR 3D camera.

The current depthMap is generated by interpolating the 576 points together with the RGB image. The trap is that an RGB image captured in total darkness leads to an empty depthMap and to stray motion tracking, even though the 576 raw laser depth points are still available from the point of view of the moving LiDAR 3D camera.

An RGB image captured in total darkness is therefore a trap for sceneDepth.

FYI:
The dToF (direct time-of-flight) 3D camera of the iPad Pro 2020/2021, iPhone 12/13 Pro, and iPhone 12/13 Pro Max physically has 64 VCSELs (vertical-cavity surface-emitting lasers), arranged as 16 rods of 4 cells each. A DOE (diffractive optical element) multiplies the 64 laser pulses by 3x3 into 576. The 576 laser pulses rebounding from object surfaces are detected, and their individual times of flight measured, by a SPAD (single-photon avalanche diode) image sensor. The 576 depth points are interpolated with the RGB images into the 256x192 depthMap at 60 Hz. Apple has released an API for accessing the 256x192 depthMap, but not for the 576 raw depth points.
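
For reference, a minimal sketch (not from the released apps) of reading the per-pixel values out of that 256x192 depthMap on the CPU; the 576 raw laser points themselves are not reachable through any public API, as noted above.

```swift
import ARKit

// Read the interpolated depth values (meters) from an ARFrame's depthMap.
func depthValues(from frame: ARFrame) -> [Float] {
    guard let depth = frame.sceneDepth else { return [] }
    // depth.confidenceMap (one UInt8 per pixel) indicates how reliable each
    // value is; low confidence in poor lighting relates to the trap above.
    let buffer = depth.depthMap                        // Float32 pixel buffer
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width = CVPixelBufferGetWidth(buffer)          // 256
    let height = CVPixelBufferGetHeight(buffer)        // 192
    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return [] }

    var values: [Float] = []
    values.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            values.append(row[x])
        }
    }
    return values
}
```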

Apple's patented logic for detecting the laser dots is wired underneath Sony's image sensor. As a result, Sony supplies this image sensor exclusively to Apple.

Re: We dream a magic button for 3-D point cloud processing

Post by Joon »

Why do we use measuring devices?

Human intelligence can visually identify a cup on a table, but we cannot measure the size, position, or orientation of the cup to millimeter accuracy.

Sensing & measuring devices play a role here.

We may want to know exactly:
- shape & size
- position & orientation
- materials, temperatures, ...
of objects in front of us.

Human intelligence can visually detect an object on a street, but we cannot accurately measure its size, position, or orientation.

Artificial intelligence based only on visual input data may not be able to become better than its creator, the human.

But artificial intelligence that additionally uses data from multiple measuring devices may become better than its creator, the human.

We humans cannot see in darkness or smoke, but measuring devices can.