SET Parallelization Approach Explained

Virtually all well-structured, non-trivial application code follows the model-view-controller paradigm, even if its writer did not know those terms when writing the code. As described on the left side of Figure 1 below, every application has a main() or equivalent, as shown at the top. Every application has subroutines further down that actually do the hard work, shown here as the deposit(), push(), and updateField() subroutines. Often these applications have a subroutine that calls the low-level subroutines repeatedly, in a loop, such as doLoop() here, and that loop is called by main().
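This call hierarchy can be sketched in a few lines. The sketch below is illustrative only: the function names mirror those in the text, but the bodies are placeholders standing in for the real work of an application.

```python
# Minimal sketch of the call hierarchy described above. The names mirror
# the text (main, doLoop, deposit); the bodies are placeholders.

def deposit(state, value):
    """Low-level worker routine: does the actual hard work of the code."""
    state.append(value)          # placeholder for real computation
    return state

def doLoop(n):
    """Mid-level driver: calls the low-level routine repeatedly, in a loop."""
    state = []
    for i in range(n):
        deposit(state, i)
    return state

def main():
    """Top-level entry point: kicks off the loop and reports the result."""
    result = doLoop(5)
    print(len(result))           # prints 5

if __name__ == "__main__":
    main()
```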

Programmers recognize that the code itself expresses a hierarchy and connection between its components, starting with main() and so on down the calling chain. As shown in Figure 1 on the right, programmers recognize that these low-level routines, like deposit(), push(), and updateField(), are the “Back End” or Model of the application, while main() is the “Front End” or View of the application. The code interfacing between the Front End and Back End is the glue code, or Controller. Knowing this, programmers can apply the SET API and parallelize the application by placing the corresponding pieces of the application above and below the SET infrastructure. The Front End code goes at the top, while the Back End code is encapsulated at the bottom. The Glue Code is split and modified with the SET API, as needed, to interface to SET’s parallelization layer. The changes to the Glue Code are typically a very small fraction of the overall code, usually below 1% for real-world applications.

As shown in Figure 2, all the changes in the code example are in the Glue Code. For the typical application, the Front End and Back End codes are nearly unchanged and are simply "plugged" into the SET infrastructure via the provided API (SET automatically spawns as many instances of the Back End code as needed for any given number of cores). This translates into an enormous time savings for the application writer. Not only is the parallelization process simplified and accelerated, but the resulting code is both much easier to maintain and much easier to upgrade, because the Back End and Front End codes are isolated from the message-passing functionality and other details of the parallelization. Compared to other approaches, the ongoing life of this application as a parallel application is much more viable with SET.
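The division of labor can be illustrated with a small sketch. This is not the SET API: Python's standard multiprocessing module stands in for the parallelization layer, and all names here are hypothetical. What it shows is the structural point from the text: the Front End and Back End stay unchanged, and only the glue layer knows that the work is split across one worker instance per core.

```python
# Hypothetical illustration (NOT the SET API) of Front End / Glue / Back End
# separation. Python's stdlib multiprocessing stands in for the
# parallelization layer.
import multiprocessing as mp

def back_end(chunk):
    """Unchanged application Back End: does the real work on one piece."""
    return sum(x * x for x in chunk)

def glue_code(data):
    """The only layer aware of parallelism: splits the work and spawns one
    Back End instance per available core."""
    cores = mp.cpu_count()
    chunks = [data[i::cores] for i in range(cores)]   # round-robin split
    with mp.Pool(cores) as pool:
        return sum(pool.map(back_end, chunks))

def front_end():
    """Unchanged application Front End: asks for a result, prints it."""
    return glue_code(list(range(100)))

if __name__ == "__main__":
    print(front_end())  # sum of squares 0..99
```

Because the Back End and Front End never touch the pool, swapping the glue layer for a different parallelization mechanism would leave both untouched, which is the maintenance advantage the text describes.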

SET is the Best of the Bunch

The table below compares the main features of SET, OpenMP and MPI:

SET Provides a Powerful Cloud Computing Model

SET enables widespread, HPC-style exploitation of Cloud Computing’s available parallelism. Cloud infrastructures, particularly HPCaaS, provide both greater processing power and greater network throughput, which can benefit not only desktop computers but also mobile devices such as smartphones, laptops, and tablet computers.

As shown in Figure 3, a simple variation of SET provides a way to bridge these two worlds. Essentially, the network connection to the SET and Application Front End is extended by thousands of miles. The SET Back Ends and Application Back Ends reside in the cloud, just as for clusters and supercomputers, but the Front End components, instead of residing on hardware in the same room as the Back Ends, reside on geographically distant hardware of any kind; the behavior and structure of SET remain the same*.

This solution is useful because, in addition to addressing desktop computers, it addresses the understandably limited processing capabilities of laptops, tablet computers, and smartphones, which fall short of modern desktop computers and are tiny by comparison to the computational capabilities of the Cloud. Via SET, these mobile devices can fully direct and harness, above and beyond the usual Cloud services, the parallelization power of Cloud computing, creating a potent alliance advantageous for a wide range of applications. Naturally, the bandwidth, latency, and security limitations of a network connection over that distance would apply, so some adjustments to the application may be needed to accommodate such issues. However, the latest industry advances in fiber-based Gigabit Internet speeds help to minimize the impact of these issues.
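The cloud arrangement can be sketched with an ordinary network connection. The sketch below is a hypothetical illustration, not SET's protocol: a device-side Front End ships a request over TCP to a cloud-side Back End and receives the result, with localhost standing in for a link that, in the real scenario, is thousands of miles long.

```python
# Hypothetical sketch (NOT SET's protocol): a Front End on a local device
# reaches a Back End over TCP. Localhost stands in for the long-distance link.
import socket
import threading

def back_end_server(sock):
    """Cloud-side Back End: accepts one request, computes, replies."""
    conn, _ = sock.accept()
    with conn:
        n = int(conn.recv(64).decode())
        conn.sendall(str(sum(range(n))).encode())  # placeholder computation

def front_end(port):
    """Device-side Front End: sends the request across the network."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"100")
        return int(c.recv(64).decode())

def demo():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))       # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=back_end_server, args=(srv,))
    t.start()
    result = front_end(port)
    t.join()
    srv.close()
    return result

if __name__ == "__main__":
    print(demo())  # sum 0..99 = 4950
```

In practice the request and reply would carry application data rather than a single integer, and the bandwidth, latency, and security caveats noted above apply to every such round trip.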

Additional SET Technical Advantage

SET operates nearly the same in each of the following scenarios:

NOTE: The Red Arrow represents Gigabit Ethernet and Gigabit Internet (which already exist).

Scenario 1: A Multicore Workstation or Multicore Mobile Device


Scenario 2: A Single Core Mobile Device and a Local Cluster

(A local cluster can also be a group of PCs networked together)


Scenario 3: A Multicore Workstation or Multicore Mobile Device and a Local Cluster

(A local cluster can also be a group of PCs networked together)

Scenario 4: A Single Core Mobile Device and a Cluster in the Cloud

Scenario 5: A Multicore Workstation or Multicore Mobile Device and a Cluster in the Cloud

For more information, see SET FAQ here:


Copyright © 2016 Advanced Cluster Systems, Inc. All rights reserved.

Supercomputing Engine Technology™ and SET™ are trademarks of Advanced Cluster Systems, Inc.