jedfrechette wrote: ↑Thu Nov 26, 2020 4:49 pm
> If we're just talking about a contrast adjustment to improve visualization, I don't see why the software needs to know which scanner captured the data. Photoshop doesn't need to know what camera was used when it applies a contrast stretch to a photo, and, as indicated by Autodesk, Faro Scene doesn't need to know which scanner was used to adjust the contrast of a scan.
>
> As a user, I'd much rather have measured attributes like intensity stored as relatively "raw" values, regardless of format, without any post-process effects baked in, so we can do quantitative analysis of them if we want. Again I'd refer to the analogy of digital photography. I want to shoot in raw, convert the data to an open lossless format (images = EXR, scans = E57), and have the software I'm using to view the data handle the display transform and expose any user adjustments that are necessary.
>
> I firmly agree, though, that vendors should be writing more metadata to E57 files to make the job of downstream users easier.

I think comparing cameras to scanners is somewhat a comparison of apples and oranges.
Intensity/reflectivity depends on the incidence angle, the distance, and the wavelength used, as well as the hardware itself, while a camera's result depends on the existing lighting and the hardware.
If a hardware manufacturer gave you the complete raw data, with no noise reduction and no information about the hardware, you would spend considerable time just turning the raw values into somewhat "good looking" ones. You wouldn't know whether they come on a logarithmic scale, range from 0 to 255, or something else entirely (judging from a single data set), because that information simply isn't there. Without proper documentation this is an error-prone task.
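To illustrate the guesswork involved: lacking any documentation of the scale, a common fallback is a percentile-based contrast stretch that ignores the absolute range entirely. A minimal Python sketch; the 2%/98% cutoffs and the sample values are my own assumptions, not any vendor's specification:

```python
# Percentile-based contrast stretch for intensity values of unknown scale.
# The 2nd/98th percentile cutoffs are an arbitrary but common choice.

def contrast_stretch(values, lo_pct=2.0, hi_pct=98.0):
    """Map raw intensities of unknown scale to the range [0, 1]."""
    sorted_vals = sorted(values)
    n = len(sorted_vals)
    lo = sorted_vals[int(n * lo_pct / 100)]
    hi = sorted_vals[min(int(n * hi_pct / 100), n - 1)]
    span = hi - lo if hi > lo else 1.0
    # Clamp so values beyond the cutoffs don't fall outside [0, 1]
    return [min(max((v - lo) / span, 0.0), 1.0) for v in values]

raw = [13.0, 40.0, 55.0, 900.0, 62.0, 48.0]  # made-up values, unknown scale
print(contrast_stretch(raw))
```

The point is that this produces something viewable, but the result is purely relative: without knowing the sensor's actual response curve, no quantitative meaning survives the stretch.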
Let's take Riegl as an example:
You can use the SDK to read the raw data and follow their documentation to apply all the calculations from raw values to processed ones.
If you read the E57 file instead, I'd expect finished, processed data.
The reasons are:
- If no bounds are given, the software vendor needs to read the data twice: once through the complete E57 to find the minimum and maximum values, and a second time to apply the calculations from raw to processed data (since I don't know which hardware was used, I don't have a unified solution for it).
- If I adjust the contrast of a scan in my software (for example Scene), then either I use the data only within Scene, because the view is adapted with the applied contrast when I open a scan, or an extra data write is necessary if I want to use it in another piece of software.
- When the hardware vendor writes the E57, it can apply these calculations specifically for its device, since E57 is an exchange format, not a raw format.
- A point cloud file is not the end product. We use the point cloud to process it further into CAD, BIM, and so on.
With an image I have (in most cases) the end product, or the data I enhance into my end product (Photoshop). With a point cloud I would need an extra step in between before I can even start my enhancement.
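The double read from the first point above can be sketched like this. `read_intensity_chunks` is a hypothetical stand-in for an E57 reader; in practice a library such as pye57 or a vendor SDK would stream the points:

```python
# Two-pass normalization: when the E57 carries no intensity bounds,
# pass 1 reads the whole file just to find min/max, pass 2 rescales.
# read_intensity_chunks() is a hypothetical stand-in for an E57 reader.

def read_intensity_chunks():
    # Stand-in for streaming chunks of intensity values from an E57 file.
    yield [120.0, 340.0, 95.0]
    yield [410.0, 260.0]

def two_pass_normalize():
    # Pass 1: scan the entire file once just to establish the bounds.
    lo = float("inf")
    hi = float("-inf")
    for chunk in read_intensity_chunks():
        lo = min(lo, min(chunk))
        hi = max(hi, max(chunk))
    span = hi - lo if hi > lo else 1.0
    # Pass 2: read everything again and map each intensity to [0, 1].
    return [(v - lo) / span for chunk in read_intensity_chunks() for v in chunk]

print(two_pass_normalize())
```

Note that the E57 standard does define an optional intensity-limits entry in the scan header; when a writer fills it in, the first pass becomes unnecessary, which is exactly the kind of metadata this thread is asking vendors to provide.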
To conclude with some points:
I'm no expert in Register360/Cyclone, and I don't know whether you can apply contrast enhancement as a saved operation (e.g. import the RTC data, click enhance, export as E57).
Furthermore, I don't know whether this was simply not specified by the client, along the lines of: "Hey, we'd like an E57, but please enhance the reflectivity values so it isn't so dark", and they just received an E57 without such operations.
It's just my personal opinion that you would normally expect these values to be on the processed side rather than the raw side, just as an end customer of photography would expect a JPG/PNG rather than a RAW file.
Edit:
I think this is where we could agree: the hardware vendor should give the user the choice of which kind of E57 is generated, a processed one or a raw one.