Modifying E57 scalars while keeping data structured

VXGrid
V.I.P Member
Posts: 544
Joined: Fri Feb 24, 2017 10:47 am
Full Name: Martin Graner
Company Details: PointCab GmbH
Company Position Title: Research and Development
Country: Germany
Linkedin Profile: No
Has thanked: 160 times
Been thanked: 175 times

Re: Modifying E57 scalars while keeping data structured

Post by VXGrid »

jedfrechette wrote: Thu Nov 26, 2020 4:49 pm If we're just talking about a contrast adjustment to improve visualization I don't see why the software needs to know what scanner captured the data. Photoshop doesn't need to know what camera was used when it applies a contrast stretch to a photo, and as indicated by Autodesk, Faro Scene doesn't need to know the scanner used to adjust the contrast of a scan.

As a user, I'd much rather have measured attributes like intensity stored as relatively "raw" values, regardless of format, and not have any post-process effects baked in so we can do quantitative analysis of them if we want. Again I'd refer to the analogy of digital photography. I want to shoot in raw, convert the data to an open lossless format (images=exr, scans=e57) and have the software I'm using to view the data handle the display transform and expose any user adjustments that are necessary.

I firmly agree though that vendors should definitely be writing more metadata to e57 files to make the job of downstream users easier.
I think comparing cameras to scanners is somewhat a comparison of apples and oranges.
Intensity/reflectivity depends on the angle, the distance and the wavelength used, plus the hardware itself, while a camera produces its result depending on the existing lighting and the hardware.
If a hardware manufacturer gave you the complete raw data, with no noise reduction and no information about the hardware, you would spend quite some time just processing the raw data into somewhat "good looking" values. You don't know whether the values come on a logarithmic scale, range from 0 to 255, or something else entirely (judging from a single data set), because there is no such information. Without proper documentation this is an error-prone task.

Let's take Riegl as an example:
You can use the SDK to read the raw data and follow their documentation to apply all the calculations from raw values to processed ones.
If you read the E57 file instead, I'd expect the finished, processed data.
The reasons are:
  1. The software vendor needs to read the data twice if no bounds are given: I need to read the complete E57 once to find the min and max values, and a second time to apply my calculations from raw to processed data (since I don't know which hardware was used and I don't have a unified solution for that hardware). A sketch of this double pass follows the list below.
  2. If I adjust the contrast of a scan in my software (like, for example, Scene), then I will either only use the data in Scene, because the view is adapted with the applied contrast when I open a scan, or a data write is necessary if I want to use it in another software.
  3. When the hardware vendor writes the E57, they can apply these calculations specifically for their device, since E57 is an exchange format, not a raw format.
  4. A point cloud file is not the end product. We use the point cloud to process it further into CAD, BIM, ....
     With an image I have (in most cases) the end product, or the data which I enhance into my end product (Photoshop). With the point cloud I would need to do an extra step in between before I can start my enhancement.
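
To illustrate point 1, here is a minimal sketch of that double pass, assuming each scan's intensity is already available as a NumPy array (no particular E57 library is implied, and the data layout is made up for the example):

import numpy as np

def normalise_intensity_two_pass(scans):
    # Pass 1: walk every scan just to discover the global value range.
    lo, hi = np.inf, -np.inf
    for scan in scans:                      # each scan: a dict holding an "intensity" array
        lo = min(lo, float(scan["intensity"].min()))
        hi = max(hi, float(scan["intensity"].max()))

    # Pass 2: walk the same data again and apply the stretch to 0..1.
    span = (hi - lo) or 1.0                 # guard against a constant-intensity cloud
    for scan in scans:
        yield (scan["intensity"] - lo) / span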

To conclude with some points:
I'm no expert on Register360 / Cyclone, and I don't know whether you can apply contrast enhancement as a saved operation (like: import the RTC data, click enhancement, export as E57).
Furthermore, I don't know whether this simply wasn't specified by the client, along the lines of: "Hey, we'd like an E57, but please enhance the reflectivity values so it is not so dark", and they just received the E57 without such operations.

It's just my personal opinion that you would normally expect these values to be more on the processed side than the raw side, just as an end customer on the image side would expect a JPG/PNG rather than a RAW file.


Edit:
I think this is where we could agree: the hardware vendor should give the user the choice of which kind of E57 is generated, a processed or a raw one.
smacl
Global Moderator
Posts: 1409
Joined: Tue Jan 25, 2011 5:12 pm
Full Name: Shane MacLaughlin
Company Details: Atlas Computers Ltd
Company Position Title: Managing Director
Country: Ireland
Linkedin Profile: Yes
Location: Ireland
Has thanked: 627 times
Been thanked: 657 times

Re: Modifying E57 scalars while keeping data structured

Post by smacl »

jedfrechette wrote: Thu Nov 26, 2020 4:49 pm As a user, I'd much rather have measured attributes like intensity stored as relatively "raw" values, regardless of format, and not have any post-process effects baked in so we can do quantitative analysis of them if we want.
I'm very much of the same mind. The provision of raw data, including all the setup details, offers more opportunities for custom corrections, processing of noise and extraction.
VXGrid wrote: Fri Nov 27, 2020 11:45 am The software vendor needs to read the data twice, if no bounds are given, meaning I need to read the complete E57 once to find the max and min value, and a second time, where I apply my calculations from raw to processed data (since I don't know which hardware was used and I don't have a unified solution for that hardware).
If intensities have been normalised in the E57, it just means that this extra pass has been done at the E57 generation stage rather than the import stage. Same amount of processing as far as the end user is concerned. Ideally, if the hardware vendor is going through this extra pass I'd rather they simply store the min-max values and any other relevant information as attributes rather than applying them.
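
If I remember the format correctly, E57 already defines an intensityLimits record (intensityMinimum / intensityMaximum) in the scan header for exactly this. As a rough sketch of the idea, independent of any particular E57 library (the dictionary keys below are made up for the example), the producer records the range once and the viewer applies the contrast stretch at display time, leaving the measured values untouched:

import numpy as np

# Producer side: keep the measured intensities as they are, but record
# their range once as metadata so readers can skip the extra pass.
def describe_intensity(intensity):
    return {
        "intensity": intensity,                     # raw values, unmodified
        "intensity_min": float(intensity.min()),
        "intensity_max": float(intensity.max()),
    }

# Consumer side: a display-time contrast stretch using the stored limits;
# nothing is baked into the point data itself.
def display_stretch(record):
    span = (record["intensity_max"] - record["intensity_min"]) or 1.0
    return (record["intensity"] - record["intensity_min"]) / span
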
Shane MacLaughlin
Atlas Computers Ltd
www.atlascomputers.ie

SCC Point Cloud module
VXGrid
V.I.P Member
Posts: 544
Joined: Fri Feb 24, 2017 10:47 am
Full Name: Martin Graner
Company Details: PointCab GmbH
Company Position Title: Research and Development
Country: Germany
Linkedin Profile: No
Has thanked: 160 times
Been thanked: 175 times

Re: Modifying E57 scalars while keeping data structured

Post by VXGrid »

smacl wrote: Fri Nov 27, 2020 3:38 pm
jedfrechette wrote: Thu Nov 26, 2020 4:49 pm As a user, I'd much rather have measured attributes like intensity stored as relatively "raw" values, regardless of format, and not have any post-process effects baked in so we can do quantitative analysis of them if we want.
I'm very much of the same mind. The provision of raw data, including all the setup details, offers more opportunities for custom corrections, processing of noise and extraction.
[...]
But in this context, isn't an SDK from the hardware manufacturer a lot better, since you get a clear separation of import options simply by the file type?
In addition, an SDK is normally (hopefully) provided with proper documentation.

Let's assume all hardware manufacturers write E57 files with raw data. Now they need to standardize which values they are going to write in there, for example the value scale (logarithmic/linear/...), or they need to define fields for these. Then we need flags to distinguish between the manufacturers, and perhaps the scanner models themselves.

I have issues with that:
  • There are more software vendors than hardware manufacturers, and if a software vendor wants to use these values, they need to implement some sort of algorithm for every E57 file depending on which hardware manufacturer wrote it.
    Every software vendor(!) needs to write and maintain code for every piece of hardware they want to support, without proper documentation (see the sketch after this list).
  • The hardware manufacturers do not write standardized E57 files (in this context: providing similar information). Every E57 header I look at looks different (sometimes even from the same manufacturer).
  • An SDK-based solution might bring speed benefits due to tailored implementation details - no generalisation - while reducing data size (no need to transfer E57 files, just use the raw hardware manufacturer data).
  • If SDKs are available, E57 can be used to provide post-processed data, so if a software vendor wants to use the data, for example for visualisation, they don't need to implement anything besides an E57 reader.
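
To make the first point above concrete, this is roughly the kind of per-vendor dispatch every reader would end up writing and maintaining. The vendor names and conversion formulas are purely illustrative, not real specifications:

# One branch per manufacturer, each needing its own reverse-engineered
# conversion from whatever raw scale that vendor happens to write.
def raw_to_display(vendor: str, raw):
    if vendor == "VendorA":            # e.g. a logarithmic, dB-like scale
        return 10.0 ** (raw / 10.0)
    if vendor == "VendorB":            # e.g. plain 0..255 integers
        return raw / 255.0
    if vendor == "VendorC":            # e.g. already normalised floats
        return raw
    raise ValueError(f"No conversion known for vendor {vendor!r}")  # new scanner -> new code
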
smacl
Global Moderator
Posts: 1409
Joined: Tue Jan 25, 2011 5:12 pm
Full Name: Shane MacLaughlin
Company Details: Atlas Computers Ltd
Company Position Title: Managing Director
Country: Ireland
Linkedin Profile: Yes
Location: Ireland
Has thanked: 627 times
Been thanked: 657 times

Re: Modifying E57 scalars while keeping data structured

Post by smacl »

VXGrid wrote: Fri Nov 27, 2020 4:04 pm But in this context, isn't an SDK from the hardware manufacturer a lot better, since you get a clear separation of import options simply by the file type?
In addition, an SDK is normally (hopefully) provided with proper documentation.
In theory yes, but my experience of SDKs to date has been that in some cases they come with a nasty price tag and in others they have appalling performance characteristics. Some can also cause installation headaches when you come to deliver a product based on the SDK. If you're in any doubt about this, try acquiring a simple import/export SDK from each of the major manufacturers and test how long they each take to read and write, say, 1 billion points.
Shane MacLaughlin
Atlas Computers Ltd
www.atlascomputers.ie

SCC Point Cloud module
VXGrid
V.I.P Member
Posts: 544
Joined: Fri Feb 24, 2017 10:47 am
Full Name: Martin Graner
Company Details: PointCab GmbH
Company Position Title: Research and Development
Country: Germany
Linkedin Profile: No
Has thanked: 160 times
Been thanked: 175 times

Re: Modifying E57 scalars while keeping data structured

Post by VXGrid »

smacl wrote: Fri Nov 27, 2020 4:41 pm
VXGrid wrote: Fri Nov 27, 2020 4:04 pm But in this context, isn't an SDK from the hardware manufacturer a lot better, since you get a clear separation of import options simply by the file type?
In addition, an SDK is normally (hopefully) provided with proper documentation.
In theory yes, but my experience of SDKs to date has been that in some cases they come with a nasty price tag and in others they have appalling performance characteristics. Some can also cause installation headaches when you come to deliver a product based on the SDK. If you're in any doubt about this, try acquiring a simple import/export SDK from each of the major manufacturers and test how long they each take to read and write, say, 1 billion points.
I agree 100%. Been there, done that... every point you are mentioning: nice dependencies, DLL hell, no/bad documentation, issues on customer PCs.

But I'd still prefer a "not so well designed" SDK over overloading E57 files, because then I can decide whether I want to support importing from that hardware manufacturer, rather than my customer coming to me saying: "I tried to import this E57 from that hardware manufacturer/software. It doesn't look good in your software, please fix it."
jedfrechette
V.I.P Member
Posts: 1236
Joined: Mon Jan 04, 2010 7:51 pm
Full Name: Jed Frechette
Company Details: Lidar Guys
Company Position Title: CEO and Lidar Supervisor
Country: USA
Linkedin Profile: Yes
Location: Albuquerque, NM
Has thanked: 62 times
Been thanked: 219 times

Re: Modifying E57 scalars while keeping data structured

Post by jedfrechette »

I don't think relying on vendor SDKs is scalable. It might work OK in the short term, but it puts far too high a burden on 3rd-party developers and places them completely at the mercy of the dominant vendors being willing to provide those SDKs. Relying on vendor SDKs places a particularly high burden on small developers and individual users, which stifles innovation. History has shown that's a bad idea for everything from word processing to image to 3D model formats.

Since I didn't know, I thought I would look up what the e57 spec actually says about intensity. According to the spec:
8.4.4.5 The intensity element shall encode the strength of the signal for a point. The intensity value shall not include compensation for signal strength reduction as a function of distance, surface orientation, or other properties of the surface being sensed.
So it looks like raw values are what should be recorded and Leica probably is doing the right thing. However, I actually don't like that requirement being part of the e57 spec, and there's a reason I put "raw" in quotation marks in my original reply. I think what we really need are just better standards on what values like Intensity and Red, Green, Blue represent and how they should be handled.

Let me try to explain what I think would be a good model for solving all of these problems. Essentially, we're talking about color management, so I would just copy an existing color management system, in particular the ACES color management system that has more or less become the standard way to handle color in the film business. They are trying to solve a slightly more complicated version of the same problem we are discussing. Data comes from a variety of different sensors, each of which has a different spectral response. That data is then handled by countless different software packages during its lifetime. The data will also need to be displayed on a variety of output devices with widely varying performance characteristics, e.g. computer monitor, digital projector, HDTV, phone, whatever people are watching movies on 50 years from now.

Greatly simplified, the ACES system consists of:
  1. Input Transform
  2. Standard Color Space used for storing all working data.
  3. Look Transform
  4. Output Transform
The Standard Color Space is linear and scene-referred, so values are directly proportional to light and reflectivity values in the original scene that was captured. These stored values are independent of whatever output device they will eventually be displayed on. Note this is the opposite of how photographers and graphic designers usually handle color management. They usually work in display-referred color spaces that are relative to the output, e.g. an image in a web browser or printed on a specific type of paper by a specific type of printer. The obvious problem with display-referred color spaces is that you need to know what your output will be ahead of time. A scene-referred color space has the advantage of being directly related to the physical scene that it recorded. The focus is on recording physical data about the scene; worrying about making it look good can wait until later. Of course, you still need a convention for what your color values mean relative to the physical world. The convention used by ACES is that 0.0 is a hypothetical material that reflects no light, 0.18 is an 18% grey card, and 1.0 is a hypothetical 100% reflective diffuse material. Values greater than 1.0 are also common and represent light sources.

So how do you get from the raw data collected by a sensor to that standard color space? That's the job of the Input Transform, which is typically provided by the camera manufacturer. Similarly, it is the Output Transform's responsibility to display data in the Standard Color Space correctly on whatever output device is being used. The Look Transform is where things like color adjustments, contrast stretching, etc. happen. It's also important to note that color spaces are independent of file formats. Data in the Standard Color Space might be stored on disk in an .exr or .tif file, or it might just be an array in memory. Regardless, the user (or their software) needs to keep track of what color space the data is in.

A key feature of this system is that it's all lossless. Working data is stored in the Standard Color Space so you always have a predictable relationship between the values you're working with and the physical world without needing to worry about the specific performance characteristics of whatever sensor was used to capture that data. However, since you know what the Input Transform was, you can always invert it and get back to raw values if you want. Similarly the Look and Output Transforms are applied on top of the working data in the Standard Color Space so you aren't baking in those decisions about what the data should look like in a given context.
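
As a toy sketch of that pipeline (not ACES itself; the transforms below are made up just to show the structure), the key property is that the Input Transform is invertible, so the working data never loses its link to the raw capture:

import numpy as np

# Input transform: vendor-specific mapping from raw sensor counts to a
# linear, scene-referred working space (here just a made-up linear gain).
def input_transform(raw, gain=1.0 / 4096.0):
    return raw * gain

def inverse_input_transform(working, gain=1.0 / 4096.0):
    return working / gain                       # lossless: recovers the raw values

# Look transform: visual adjustments layered on top of the working data,
# never baked into the stored values.
def look_transform(working, contrast=1.2):
    return np.clip((working - 0.18) * contrast + 0.18, 0.0, None)

# Output transform: map the working space to a specific display
# (a crude gamma here stands in for a real display transform).
def output_transform(working, gamma=2.2):
    return np.clip(working, 0.0, 1.0) ** (1.0 / gamma)

raw = np.array([100, 737, 4096])                # raw sensor counts
working = input_transform(raw)                  # stored/working data
shown = output_transform(look_transform(working))
assert np.allclose(inverse_input_transform(working), raw)   # round trip back to raw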

The problem with our industry is that we haven't decided what "Intensity", or "Color" for that matter, means. Right now everybody uses their own definition, which makes it harder than it needs to be for both developers and users. If it were up to me, I'd make it simple and define our own linear scene-referred color space that allows us to maintain a direct link to the physical world we're measuring: 0.0 is 0% reflectivity and 1.0 is 100% reflectivity, regardless of whether we're measuring I, R, G, or B. Everything else follows from there.
Jed
snahta
I have made <0 posts
Posts: 1
Joined: Wed Jan 13, 2021 6:16 am
Full Name: surbhi nahta
Company Details: sevenmentor
Company Position Title: trainer
Country: india

Re: Modifying E57 scalars while keeping data structured

Post by snahta »

Hi,

Yes, I think that is one way to do it. I can't believe that Autodesk is seriously presenting this as a solution for the failure of their software to correctly read and perform a contrast stretch on the intensity values.

Regards
Last edited by smacl on Wed Jan 13, 2021 2:29 pm, edited 1 time in total.
Reason: Advertising link removed