Privatizing the Cloud

Last year, Nimbic put a lot of focus on their cloud implementation – to the point of changing the company name (erstwhile Physware). This year, part of their focus has been on implementing their tools on so-called “private clouds”: making use of the large server farms that some companies have. The drivers for this are the existence of these farms – why not use them? – as well as the usual security concerns that, while not universal, still dog the whole public cloud question.

But this now starts to sound a whole lot like an enterprise installation of the tools on a corporate farm, managed by LSF (Platform's Load Sharing Facility) – a trip back, oh, 20 years or so. Is that, in fact, the case?

Not really. The old model is one of letting LSF assign a particular job to some available server (perhaps one with specific required characteristics). But the key is that LSF schedules independent jobs. The cloud implementation actually makes use of two other levels of parallelism. One is the obvious ability to take advantage of the multiple cores within a single system. The other is the ability to distribute a single job over multiple systems, with those systems communicating via MPI (the Message Passing Interface).
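To make those two extra levels concrete, here's a minimal hybrid MPI/OpenMP sketch of the general idea – one job statically carved up across machines, with threads exploiting the cores inside each one. It's purely illustrative: the file name, the dummy work loop, and the equal partitioning are assumptions made for the example, and it says nothing about how Nimbic's tools are actually structured.

/* Illustrative sketch only – not Nimbic's code. A single "job" is split
 * statically across MPI ranks (one per server), and each rank then uses
 * the cores of its own machine via OpenMP threads.
 * Build:  mpicc -fopenmp hybrid_job.c -o hybrid_job
 * Run:    mpirun -np 4 ./hybrid_job
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000   /* total work items in the job (made-up stand-in) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which machine am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many machines in all? */

    /* Static partition: each rank gets an equal slice, decided up front. */
    int chunk = N / size;
    int start = rank * chunk;
    int end   = (rank == size - 1) ? N : start + chunk;

    double local_sum = 0.0;

    /* Second level of parallelism: threads across this machine's cores. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = start; i < end; i++)
        local_sum += (double)i * 0.5;       /* stand-in for real solver work */

    /* The machines coordinate their partial results over MPI. */
    double total = 0.0;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}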

This requires much more coordination than the old model, and it also requires that the server machines be of roughly the same class, since load balancing within a job is done statically – the work is divided up front rather than redistributed as the job runs.
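To see why the machines need to match (with made-up numbers): split a job evenly across four machines, and if one of them runs at half the speed of the others, its slice takes twice as long. The three faster machines finish and then sit idle for half the run, the job completes only when the straggler does, and more than a third of the available machine time is wasted. Dynamic load balancing could hand the slow machine a smaller slice; a static split, decided before the job starts, cannot.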

This adjustment is but one of several we’ll see over the next little while as companies refine their approach to the cloud.
