Photogrammetry help

queirozcarlos
I have made <0 posts
Posts: 1
Joined: Tue Nov 08, 2022 5:09 pm
Full Name: Carlos Queiroz
Company Details: Orbitall 3D
Company Position Title: owner
Country: Brazil
Linkedin Profile: Yes

Photogrammetry help

Post by queirozcarlos »

Hi friends. How are you?

At the company I work for, we have a process where an AI analyzes images of oil tankers and marks areas showing oxidation.

We are using Reality Capture software to generate the mesh.

We would like to create a textured model from these images. The problem is that not all images that pass through the AI receive the same demarcation; that is, there is no repeatability.

We would like to keep the demarcation made by the AI (we change the color of the demarcation to purple). However, in some cases we don't get the same demarcation: some areas are not demarcated in similar/nearby pictures. So when I generate the texture, in some cases I lose the purple information.

Ideally, the demarcation would still be present in the generated texture.

I posted a photo of the model with the new color and also attached three nearby images demarcated by the AI, showing that there is no consistency in the demarcation. I marked only one point where the problem occurs, but there are many.

We are open to testing other software to get this working.

Steps to reproduce the problem:
- Alignment
- Mesh generation
- Texturing

Thanks.
VXGrid
V.I.P Member
Posts: 544
Joined: Fri Feb 24, 2017 10:47 am
Full Name: Martin Graner
Company Details: PointCab GmbH
Company Position Title: Research and Development
Country: Germany
Linkedin Profile: No
Has thanked: 160 times
Been thanked: 175 times

Re: Photogrammetry help

Post by VXGrid »

Dear Carlos,

I find the issue very intriguing, but I think the subforum you have chosen is not ideal.
Laserscanning Europe is a company that provides services, sells scanning accessories, and offers training.

Perhaps you'll have luck with somebody here having ideas, though ;)

Cheers
Martin
smacl
Global Moderator
Posts: 1409
Joined: Tue Jan 25, 2011 5:12 pm
Full Name: Shane MacLaughlin
Company Details: Atlas Computers Ltd
Company Position Title: Managing Director
Country: Ireland
Linkedin Profile: Yes
Location: Ireland
Has thanked: 627 times
Been thanked: 657 times

Re: Photogrammetry help

Post by smacl »

VXGrid wrote: Fri Nov 18, 2022 2:45 pm Dear Carlos,

I find the issue very intriguing, but I think the subforum you have chosen is not ideal.
Laserscanning Europe is a company that provides services, sells scanning accessories, and offers training.

Perhaps you'll have luck with somebody here having ideas, though ;)

Cheers
Martin
Well spotted, Martin; moved to the general forum.
Shane MacLaughlin
Atlas Computers Ltd
www.atlascomputers.ie

SCC Point Cloud module
MikeDailey
I have made 30-40 posts
Posts: 30
Joined: Wed Dec 12, 2018 5:56 pm
Full Name: Mike Dailey
Company Details: Survey work
Company Position Title: Office Tech
Country: United States
Been thanked: 6 times

Re: Photogrammetry help

Post by MikeDailey »

Very cool stuff. This is outside of my area of expertise so take what I say with a grain or two of salt.

I'm assuming you are running the drone photos through your AI tool and then feeding the modified images into the cloud/mesh generation suite. The cloud/mesh generation is choosing which picture to use for colorizing a given point, so you have two options:

1) Determine how it decides this and try to override that choice.
2) Try to get your AI to make more uniform decisions on its oxidation determinations.

I think #2 is your best bet. From looking at the sample you shared, it appears to me that your algorithm has a hard time with lighter areas. As you fly around and take pictures from different angles, those lighter colored surfaces get more or less washed out by the sun. You highlight one example in the picture, and you can see a similar example on the covered walkway opposite the crane. Maybe try adjusting the contrast and/or saturation levels of the project as a whole, of individual photos, or maybe even of specific areas in the photos if possible. Doing this, I believe, would allow your AI tool to identify the oxidized areas more consistently. I would assume that once this is done, the mesh generation tool would be more likely to carry these highlighted areas through into the texture.
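Just to illustrate what I mean, here's a rough Python/Pillow sketch of batch-normalizing the photos before the AI tool sees them. The folder names and enhancement factors are placeholders you'd have to tune for your own imagery, not anything from Carlos's actual pipeline:

```python
# Pre-normalize exposure/contrast/saturation of the drone photos so the
# oxidation detector sees more consistent input across overlapping views.
from pathlib import Path
from PIL import Image, ImageEnhance, ImageOps

SRC = Path("drone_photos")        # hypothetical input folder
DST = Path("normalized_photos")   # hypothetical output folder
DST.mkdir(exist_ok=True)

for photo in sorted(SRC.glob("*.jpg")):
    img = Image.open(photo)
    img = ImageOps.autocontrast(img, cutoff=1)        # clip 1% outliers, stretch histogram
    img = ImageEnhance.Color(img).enhance(1.2)        # mild saturation boost
    img = ImageEnhance.Contrast(img).enhance(1.1)     # mild contrast boost
    img.save(DST / photo.name, quality=95)
```

Whether a global adjustment like this is enough depends on how washed out the worst photos are; per-photo or per-region tweaks may still be needed.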
stevenramsey
Global Moderator
Posts: 1937
Joined: Sun Aug 12, 2007 9:22 pm
Full Name: Steven Ramsey
Company Details: 4DMax
Company Position Title: Technical Specialist Scanning
Country: UK
Skype Name: steven.ramsey
Linkedin Profile: Yes
Location: London
Has thanked: 30 times
Been thanked: 72 times

Re: Photogrammetry help

Post by stevenramsey »

First thought would be to weight the images that don't have the correct color to something like 0.1, so that the correct colors have a higher weighting.
Steven Ramsey

Home [email protected]
Work [email protected]
Mobile +44 7766 310 915
jedfrechette
V.I.P Member
Posts: 1237
Joined: Mon Jan 04, 2010 7:51 pm
Full Name: Jed Frechette
Company Details: Lidar Guys
Company Position Title: CEO and Lidar Supervisor
Country: USA
Linkedin Profile: Yes
Location: Albuquerque, NM
Has thanked: 62 times
Been thanked: 220 times

Re: Photogrammetry help

Post by jedfrechette »

I would treat the color textures and oxidized regions as separate texture maps and do your compositing after the photogrammetry rather than before:
  1. Generate a base color texture map using the original source photos and standard photogrammetry processes.
  2. Run the source photos through your oxidation detection algorithm and generate an oxidation mask image for each source photo where oxidized pixels have a value of 1 and all other pixels have a value of 0.
  3. Use the camera pose and distortion information from step 1 to generate an oxidation texture map for your model using the oxidation mask images. Depending on which types of errors you can tolerate most easily, there are a few different ways you could combine the oxidation masks.
    1. If Type I errors (false positives) are more tolerable than false negatives, set the pixel value of the oxidation texture map to the maximum pixel value of all images that project onto that point. That will give you a texture map with a value of 1 anywhere oxidation was detected in at least one source image.
    2. Alternatively, if Type II errors (false negatives) are more tolerable, use a minimum function to only mark pixels where oxidation was detected in all source images that project onto that point.
    3. An even better option might be to add up the pixel values of all source images that project onto a point and use that as a reliability metric, based on the assumption that the more overlapping source images oxidation is detected in, the more likely it is to be real.
  4. Finally, composite the oxidation texture map over the base color map for visualization (a rough sketch of steps 3 and 4 follows below).
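Here's a rough NumPy sketch of the combining and compositing I'm describing in steps 3 and 4. It assumes each photo's oxidation mask has already been reprojected into the model's UV space as a 0/1 array; the reprojection itself would come from whatever photogrammetry package you end up using, so it isn't shown, and the function names and purple overlay color are just placeholders:

```python
import numpy as np

def combine_masks(projected_masks, mode="max"):
    """projected_masks: iterable of HxW float arrays (0 = clean, 1 = oxidized),
    all already reprojected into the same UV/texture space."""
    stack = np.stack(list(projected_masks), axis=0)
    if mode == "max":   # tolerate false positives: any detection marks the texel
        return stack.max(axis=0)
    if mode == "min":   # tolerate false negatives: all views must agree
        return stack.min(axis=0)
    if mode == "sum":   # reliability score: fraction of views reporting oxidation
        return stack.sum(axis=0) / stack.shape[0]
    raise ValueError(f"unknown mode: {mode}")

def composite(base_rgb, oxidation, color=(0.6, 0.0, 0.8), alpha=0.7):
    """Overlay the oxidation map (HxW, 0..1) on the base color texture (HxWx3, 0..1)."""
    overlay = np.asarray(color, dtype=base_rgb.dtype)
    weight = (oxidation * alpha)[..., None]
    return base_rgb * (1.0 - weight) + overlay * weight
```

The nice part of keeping the two maps separate is that you can switch between the max/min/sum strategies, or tweak the overlay color and opacity, without rerunning the photogrammetry.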
EDIT to add: Also, as Mike said, while you're working on this part of the process, get the engineers working on the oxidation classification algorithm to improve the repeatability of their algorithm.
Jed
