
Privatizing the Cloud

Last year, Nimbic put a lot of focus on their cloud implementation – to the point of changing the company name (formerly Physware). This year, part of their focus has been on implementing their tools on so-called “private clouds”: making use of the large server farms that some companies already have. The drivers for this are the existence of those farms – why not use them? – as well as the usual security concerns that, while not universal, still dog the whole public-cloud question.

But this now starts to sound a whole lot like an enterprise installation of the tools on a corporate farm, managed by LSF (Load Sharing Facility) – a trip back, oh, 20 years or so. Is that, in fact, the case?

Not really. The old model is one of letting LSF assign a particular job to some available server (perhaps one with specific required characteristics). But the key is that LSF schedules independent jobs. The cloud implementation actually makes use of two other levels of parallelism. One is the obvious ability to take advantage of multiple cores within a single system. But it also allows a single job to be distributed over multiple systems, with those systems communicating via MPI (the Message Passing Interface).

This requires much more coordination than the old model, and it also requires that the server machines be roughly of the same class, since intra-job load balancing is done statically: the work is carved up before the job starts, so one slower machine would hold back the entire job.
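To make that concrete, here is a minimal sketch – my own illustration, not Nimbic's code – of how a single job might be split statically across MPI ranks, one rank per server. The work-item count and per-item computation are hypothetical stand-ins; the point is that each rank is handed an equal slice up front, which is exactly why roughly equal-class servers matter.

/* Sketch: static partition of one job's work across MPI ranks. */
#include <mpi.h>
#include <stdio.h>

#define TOTAL_ITEMS 1000000   /* hypothetical unit of work, e.g. mesh cells */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Static partition: each rank takes a contiguous, equal-sized slice,
     * decided once at startup rather than rebalanced as the job runs. */
    int chunk = (TOTAL_ITEMS + size - 1) / size;
    int first = rank * chunk;
    int last  = (first + chunk < TOTAL_ITEMS) ? first + chunk : TOTAL_ITEMS;

    double local = 0.0;
    for (int i = first; i < last; i++)
        local += (double)i;          /* stand-in for real per-item computation */

    /* Combine partial results; in a real solver this would be field data. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f across %d ranks\n", total, size);

    MPI_Finalize();
    return 0;
}

Because every rank finishes only when its fixed slice is done, the job as a whole runs at the pace of the slowest machine – hence the requirement that the private-cloud servers be of comparable class.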

This adjustment is but one of several we’ll see over the next little while as companies refine their approach to the cloud.
