MapInfo Pro Developers User Group

  • 1.  Optimal Hardware and Software settings for Interpolation?

    Posted 09-20-2022 15:33
    Hello, 

    I believe my question lies mostly around the processor utilization of Create Raster. We are looking to build a PC designed specifically for this task: creating rasters covering hundreds of square miles at 1-meter or finer resolution. 

    On the MapInfo side, is there any setting besides Concurrency that can influence this process? Is "Full" an increase over "Aggressive"? And finally, does the Create Raster tool make full use of multi-core threading? Watching the core clocks of our current Intel machines, the processing is isolated to a single core at a time on "Full" Concurrency. 
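
    (For reference, here is a minimal sketch of how per-core load can be sampled during a run. This is hypothetical Python using the third-party psutil package, nothing MapInfo-specific; it just logs which cores are busy while Create Raster runs.)

        # Minimal sketch: log per-core CPU utilization while a raster job runs.
        # Assumes Python 3 and the third-party psutil package (pip install psutil).
        import time
        import psutil

        try:
            while True:
                # One utilization percentage per logical core, sampled over 2 s.
                per_core = psutil.cpu_percent(interval=2, percpu=True)
                busy = sum(1 for p in per_core if p > 50)
                print(time.strftime("%H:%M:%S"),
                      f"cores >50%: {busy}/{len(per_core)}",
                      " ".join(f"{p:5.1f}" for p in per_core))
        except KeyboardInterrupt:
            pass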

    Our debate is between the faster single-core clock speeds of Intel CPUs and the better multi-core performance of AMD. There are a lot of new technologies, particularly in the consumer market, that make this question very complicated. We have also discovered that, beyond capacity, memory speed has a dramatic impact on interpolation, suggesting that early adoption of DDR5 memory via Intel CPUs may be best, even though there are few professional-grade components that support it yet. 

    I am hoping for any suggestions on designing a system for interpolation, or for help becoming aware of any limitations MapInfo might present to an over-engineered machine. 

    Thank you, 

    Brandon 


    ------------------------------
    Brandon Shepherd
    Knowledge Community Shared Account
    Shelton CT
    ------------------------------


  • 2.  RE: Optimal Hardware and Software settings for Interpolation?

    Employee
    Posted 09-21-2022 06:03
    Hi Brandon

    The Concurrency setting doesn't influence raster processing. It only influences how some of the vector processing tools, like Buffer, use multi-threading.

    Generally, raster processing does support multiple threads, but the amount of RAM and the disk speed are important as well.

    For raster, you need to look a bit further down on the Options page. Here you'll find the MapInfo Pro Raster preferences.

    And some details from the MapInfo Pro Raster Help file:

    Memory and Performance

    The Memory and Performance tab contains the following settings:

    • Memory Cache Size - Controls the amount of RAM reserved for caching raster data when performing any raster operation. The cache memory is used to keep recently used data in RAM so it is quicker to read and process. The cache size can be increased or decreased depending on the available system resources and processing needs. As a general rule, the system will perform better with a larger cache, as it will have to do less disk reading and writing when performing a data-intensive processing or analysis task. If the system memory is less than or equal to 2 GB, the cache size defaults to Normal, meaning 50% of the system memory will be used for any operation. If the system memory is greater than 2 GB, you can set the following options in the Memory Cache Size drop-down list (see the sketch after this list):
      1. Low = 1024 MB
      2. Normal = Max(50% System Memory, 2 GB)
      3. High = Max(50% System Memory, 4 GB)
      4. Maximum = Max(50% System Memory, 8 GB)
    • Automatic Interpolation Cache Size - Select this option if you want MapInfo Pro Advanced to automatically control memory cache size when creating rasters from points using the interpolation method. When disabled, you have to specify the Interpolation Cache Size.
    • Interpolation Cache Size - Controls the amount of system memory (RAM) reserved for caching data when creating a raster from point data using the interpolation method. This is specified in megabytes. The range is from 1024 MB up to 50% of the available memory on your computer.
    • Run Tasks in Sequential Mode - Runs the raster operations in sequential mode. When enabled, the Raster engine applies resources to complete one operation before moving to the next. While one operation is in progress, the rest of the operations wait in a queue.
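
    Here is a small sketch of those cache-size rules side by side (the formulas are quoted from the help text above; the function itself is illustrative only, not part of any MapInfo API):

        # Illustrative only: the Memory Cache Size rules from the help text,
        # expressed as a Python function. Values are in MB.
        def memory_cache_mb(system_memory_mb: int, setting: str) -> int:
            half = system_memory_mb // 2
            if system_memory_mb <= 2048:
                # At or below 2 GB of RAM the default is Normal: 50% of memory.
                return half
            if setting == "Low":
                return 1024                       # Low is a fixed 1024 MB
            floors = {"Normal": 2048, "High": 4096, "Maximum": 8192}
            return max(half, floors[setting])     # Max(50% of RAM, floor)

        # Example: on a 32 GB machine, 50% of RAM already exceeds every floor,
        # so Normal, High and Maximum all work out to 16,384 MB.
        for s in ("Low", "Normal", "High", "Maximum"):
            print(s, memory_cache_mb(32 * 1024, s), "MB")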


    ------------------------------
    Peter Horsbøll Møller
    Principal Presales Consultant | Distinguished Engineer
    Precisely | Trust in Data
    ------------------------------



  • 3.  RE: Optimal Hardware and Software settings for Interpolation?

    Employee
    Posted 09-21-2022 07:52
    Edited by Peter Møller 09-21-2022 07:54
    PS: I tried to run an interpolation process using 100 Point Cloud files in LAZ format.
    They had a total size of 3.2 GB.

    From the log file, I can see they contained a total of 768,986,460 "valid stations".

    The processing is constructed of multiple steps:
    • In the first step, the input files are read and gathered into a temporary cache. This step took around 6 minutes and seemed to use only a single core.
    • When the actual gridding or interpolation process started, I could see the processor usage rise to around 80% (across my 8 cores). This step took around 30 minutes.
    • In the next step, the overviews are updated, which took around 30 seconds.

    The total process took around 40 minutes, resulting in a 2.5 GB MRR covering a 10 by 10 km area at a resolution of 0.4 m.
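
    For a rough sense of throughput, the numbers above can be turned into a back-of-the-envelope calculation (all inputs come from this post; the arithmetic itself is just illustrative):

        # Back-of-the-envelope throughput from the figures quoted above.
        stations = 768_986_460
        read_s, grid_s, overview_s = 6 * 60, 30 * 60, 30

        cells = (10_000 / 0.4) ** 2   # 10 x 10 km at 0.4 m -> 625 million cells
        total_s = read_s + grid_s + overview_s
        print(f"read step:     {stations / read_s / 1e6:.1f} M stations/s (single core)")
        print(f"gridding step: {stations / grid_s / 1e6:.2f} M stations/s (~80% of 8 cores)")
        print(f"overall:       {stations / total_s / 1e6:.2f} M stations/s")
        print(f"output:        {2.5e9 / cells:.1f} bytes/cell (compressed MRR incl. overviews)")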

    I have a new Lenovo P1 2nd Generation laptop with 32 GB RAM and a 2 TB PCIe Gen4 NVMe SSD, which has excellent read/write performance.

    ------------------------------
    Peter Horsbøll Møller
    Principal Presales Consultant | Distinguished Engineer
    Precisely | Trust in Data
    ------------------------------



  • 4.  RE: Optimal Hardware and Software settings for Interpolation?

    Posted 09-21-2022 14:44
    Thank you for your responses, Peter. I will certainly start by changing the memory cache, as mine was somehow set to Low. The other interpolation cache settings were at their normal values. We run these processes on machines with 192 GB of RAM, often interpolating individual point files in text format anywhere from 1 or 2 GB up to 200 GB. We are always searching for the limit of what MapInfo and our computers can do. 

    Is it possible to increase the 50% threshold for memory usage? We understand it's there out of general concern over computer wear or damage; it's just expensive to buy enough RAM to compensate. 
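    (By the formulas quoted above, the Maximum setting on a 192 GB machine would work out to Max(50% of 192 GB, 8 GB) = 96 GB of cache, assuming the help-file rules apply unchanged at that scale.) 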

    I will make sure any build utilizes a Gen4 NVMe SSD as well. 

    Also, as an alternative to building a dedicated PC, is there a cloud computing option available through MapInfo or Pitney Bowes for this type of work? 

    Thanks,

    ------------------------------
    Brandon Shepherd
    Knowledge Community Shared Account
    Shelton CT
    ------------------------------