Optimal processing hardware

Hello!
I have been wondering how to speed up data processing for GeoSLAM Connect. How does this software scale with faster cores, more cores, and memory (RAM) bandwidth and latency? Should one choose Intel or AMD processors, etc.?
What is your experience on this matter?
-Joona
Re: Optimal processing hardware - Joona Sakkinen
Thanks for the answer. I already contacted them, but they couldn't answer my question; they only stated that the number of cores used cannot be selected manually. It seemed like they didn't understand the question. Maybe I can find more technical people here who have tried things themselves.
Re: Optimal processing hardware - Martin Graner (PointCab GmbH)
Perhaps we can specify the question less broadly and ask GeoSLAM again.
Which workflow are you referring to?
- "Normal" SLAM - so just import + standard processing to LAS file
- Colorization with the camera
- Panorama placement
- Outlier removal
- Moving object removal
A good indicator is the following:
Open the Task Manager -> go to the second tab (Performance, the one with the graphs) -> CPU -> switch the view to logical processors (so you can see all your cores as separate graphs).
Then start running your Connect jobs.
If several of the graphs go to 70-100%, all of those cores are being used for the calculation.
You should not run any other software in the background, especially anything that also does heavy processing, because then you can't tell which load is which.
Keep an eye on the RAM usage as well, because it is another limiting factor.
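If you would rather log this than eyeball the graphs, here is a minimal sketch (Python, assuming the third-party psutil package; the one-second interval and the output format are arbitrary choices) that records per-core load and RAM usage while Connect is processing:

```python
import time

import psutil  # third-party: pip install psutil

# Log per-core CPU load and RAM usage once per second.
# Run this in a separate terminal while Connect is processing.
try:
    while True:
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        ram = psutil.virtual_memory()
        cores = " ".join(f"{p:5.1f}" for p in per_core)
        print(f"cores[%]: {cores} | RAM: {ram.used / 2**30:.1f}/"
              f"{ram.total / 2**30:.1f} GB ({ram.percent:.0f}%)")
except KeyboardInterrupt:
    pass  # stop with Ctrl+C
```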
If your computer has 128 cores and 8 GB of RAM, we developers effectively can't use more than one core, since the software just buffers data to the hard drive once the RAM is full.
Keep in mind that there are tasks which can't be parallelised, so single-core speed still matters.
If all cores are running with some kind of load at some point of the import/processing, it means that particular step could use more cores.
Then it depends on how long this step takes: if it is one short peak, never mind; if it is the complete import, upgrading could get you faster imports/processing.
Re: Optimal processing hardware - Joona Sakkinen
Thanks for the reply. I was asking them about the regular scan processing, and now I got a somewhat better answer, but it was still just an example of a setup instead of any real information about how it scales with the number of cores or how memory-speed limited it is. I already tested this on my own setup as much as I could: processing fully utilised all 6 of my cores, but memory was not fully utilised size-wise, and it is hard to tell whether the bandwidth was limiting or not. They also told me that part of the processing requires calculation, which uses the CPU cores, and another part is moving data around (RAM work), so both matter. I tried to ask more specifically how far things will scale: 8 cores, 32 cores, 128 cores? Is there any limit to the scalability? If there are diminishing returns at some point, then faster cores would become the priority.
Re: Optimal processing hardware - Martin Graner (PointCab GmbH)
Ah, the hard-to-tell stuff.
So this is now in no way connected to GeoSLAM or Connect, but generally speaking:
Normally one would write one's code in such a way that, if it uses multithreading, there is no hard-coded number; instead, the number of running worker threads depends on the available free RAM and cores.
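As a rough sketch of that pattern (Python again; the 2 GB-per-worker figure is an invented example, not anything from Connect or GeoSLAM):

```python
import os

import psutil  # third-party: pip install psutil

# Invented figure for illustration: assume each worker needs ~2 GB of RAM.
RAM_PER_WORKER = 2 * 2**30

def pick_worker_count() -> int:
    cores = os.cpu_count() or 1
    free_ram = psutil.virtual_memory().available
    # A 128-core machine with 8 GB of RAM ends up with very few workers:
    # RAM, not the core count, is the binding constraint.
    by_ram = max(1, free_ram // RAM_PER_WORKER)
    return min(cores, by_ram)

print(f"worker threads: {pick_worker_count()}")
```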
Regarding RAM:
A couple of years back I found out that Windows starts to page used RAM out to disk. At that time our software (not Connect) was written to leave 2 GB of RAM free and use everything else. This was fine for computers with 8 or 16 GB of RAM, but once we ran it on 32 GB or more, Windows started to page parts of the RAM out to the hard drive (so more free RAM became available), and our software filled it again (more RAM and less loading from the hard drive == faster).
This loop of behaviour continued until the C: drive was so full that no results could be written anymore and the software crashed. Windows then freed all the paged-out data, so there was space on the C: drive again.
There are three different speeds with regard to data access: cache, RAM, and hard drive (in that order; I think each step differs by roughly a factor of 100). (There was a thread somewhere in which we discussed that data loaded from a server can be faster than from an internal SSD, given the right server and network setup.)
But to answer your question:
You can't really tell. For our software I can say: this part is parallelised with these dependencies, that part is single-core, this one is limited by the loading speed from the hard drive, that one by the writing speed, another by a slow SDK/API we are using, ...
But then it depends on the project.
Example: let's take a simple task where we know every calculation takes the same time: applying a transformation matrix to a point cloud.
Now we need to take every point in the cloud and apply this multiplication, normally done as a 4x4 matrix times the 4x1 vector of a single point (X, Y, Z, 1).
The next step is implementation-dependent. If the cloud is completely in RAM, we can do this in parallel, lock-free, by distributing the points across different threads. Creating threads is expensive, though, so if there are not many points it is faster to do the calculation in the main loop; otherwise, spread it across threads, as in the sketch below.
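A minimal sketch of that idea (Python with NumPy; the 100,000-point cutoff is an invented threshold, not a measured one):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

SMALL_CLOUD = 100_000  # invented cutoff: below this, thread overhead wins

def transform_cloud(points: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Apply a 4x4 transform to an (N, 3) cloud via homogeneous coords."""
    ones = np.ones((points.shape[0], 1))
    homogeneous = np.hstack([points, ones])        # each row: (X, Y, Z, 1)
    return (homogeneous @ matrix.T)[:, :3]

def transform_cloud_parallel(points, matrix, workers=4):
    if len(points) < SMALL_CLOUD:
        # Few points: the main loop is cheaper than spawning threads.
        return transform_cloud(points, matrix)
    # Lock-free: each chunk of points is owned by exactly one worker.
    chunks = np.array_split(points, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda c: transform_cloud(c, matrix), chunks)
    return np.vstack(list(parts))

# Example: translate a random cloud by (1, 2, 3).
cloud = np.random.rand(2_000_000, 3)
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
moved = transform_cloud_parallel(cloud, T)
```

Note that NumPy's matrix multiply releases the GIL, so plain threads really do run in parallel here; with pure-Python per-point math you would need processes instead.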
Here comes the next part: caches come in different sizes (L1, L2, and L3 on your CPU, with the sizes depending on the PC spec). So when you load the data and process it, a smaller per-point footprint means more points fit into the cache, which results in faster execution of the code.
So whether a project has colour or not can affect the calculation speed, depending on the data size and how it is organised.
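One cheap way to see that footprint effect (Python with NumPy again; the sizes are arbitrary): do the same arithmetic on a packed XYZ array and on the XYZ columns of an interleaved XYZRGB array. The strided read drags the colour bytes through the caches as well, so it typically runs measurably slower.

```python
import timeit

import numpy as np

n = 2_000_000
packed = np.random.rand(n, 3)        # XYZ only: 24 bytes per point
interleaved = np.random.rand(n, 6)   # XYZRGB: 48 bytes per point

# Same arithmetic in both cases: sum the XYZ coordinates of every point.
t_packed = timeit.timeit(lambda: packed.sum(), number=20)
t_strided = timeit.timeit(lambda: interleaved[:, :3].sum(), number=20)

print(f"packed XYZ:        {t_packed:.3f} s")
print(f"XYZ out of XYZRGB: {t_strided:.3f} s")  # typically slower
```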
TL;DR
And this is what I would guess for the GeoSLAM Connect software: if more cores and other resources are available, it will use them, but it all depends on the project. Calculating SLAM with a lot of context will go a different route than calculating it with more focus on the IMU, so indoor and outdoor scans will process differently.
Perhaps somebody here can share their experience with a big machine, the GeoSLAM Connect software, and the resources it used?