Allocating AI and other pieces of your workload placement puzzle

Commissioned: Allocating application workloads to the locations that deliver the best performance with the highest efficiency is a daunting task. Enterprise IT leaders know this all too well.

As applications become more distributed across multiple clouds and on-premises systems, they generate more data, which makes them both more costly to operate and harder to move as data gravity grows.

Accordingly, the applications that fuel enterprise systems must sit closer to the data, which means organizations must move compute capabilities closer to where that data is generated. This especially benefits data-hungry applications such as AI.

To make this happen, organizations are building out infrastructure that supports data needs both within and outside the organization - from datacenters and colos to public clouds and the edge. Competent IT departments cultivate such multicloud estates to run hundreds or even thousands of applications.

You know what else numbers in the hundreds to thousands of components? Jigsaw puzzles.

Workload Placement and... Jigsaw Puzzles?

Exactly how is placing workloads akin to putting together a jigsaw puzzle? So glad you asked. Both require careful planning and execution. With a jigsaw puzzle - say, one of those 1,000-plus piece beasts - it helps to first figure out how the pieces fit together, then assemble them in the right order.

The same is true for placing application workloads in a multicloud environment. You need to carefully plan which applications will go where - internally, externally, or both - based on performance, scalability, latency, security, costs and other factors.
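To make that planning concrete, here is a minimal, purely illustrative sketch in Python of how a team might weigh those factors across a few candidate locations. The factors, weights, candidate locations and scores below are assumptions invented for this example, not figures from Dell or IDC.

# Illustrative only: a toy weighted-scoring model for comparing placement
# options. Every number here is a made-up assumption for the sketch.

# Relative importance of each placement factor for a hypothetical workload.
WEIGHTS = {
    "performance": 0.30,
    "scalability": 0.20,
    "latency": 0.20,
    "security": 0.15,
    "cost": 0.15,
}

# How well each candidate location scores on each factor (1 = poor, 5 = excellent),
# again purely hypothetical numbers.
CANDIDATES = {
    "on_premises":  {"performance": 5, "scalability": 3, "latency": 5, "security": 5, "cost": 3},
    "public_cloud": {"performance": 4, "scalability": 5, "latency": 3, "security": 4, "cost": 2},
    "edge":         {"performance": 3, "scalability": 2, "latency": 5, "security": 3, "cost": 3},
}

def placement_score(scores: dict) -> float:
    """Weighted sum of factor scores for one candidate location."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

if __name__ == "__main__":
    ranked = sorted(CANDIDATES.items(), key=lambda kv: placement_score(kv[1]), reverse=True)
    for location, scores in ranked:
        print(f"{location}: {placement_score(scores):.2f}")

In practice the weights would shift per workload - a latency-sensitive inferencing app and a batch training job would rank the same locations very differently - which is exactly why placement is decided application by application.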

Putting the wrong application in the wrong place can have major performance and financial ramifications. Here are four workload types and considerations for locating each, according to findings from IDC research sponsored by Dell Technologies.

AI - The placement of AI workloads is one of today's hottest topics, given the rapid rise of generative AI technologies. AI workloads comprise two main components: training and inferencing. IT departments can run AI algorithm development and training, which are performance intensive, on premises, IDC says. The data is trending that way: 55 percent of the IT decision makers Dell surveyed cited performance as the main reason for running GenAI workloads on premises. Conversely, less intensive inferencing tasks can run in a distributed fashion at edge locations, in public cloud environments or on premises.

HPC - High-performance computing (HPC) applications also comprise two major components: modeling and simulation. Like AI workloads, HPC model development can be performance intensive, so it may make sense to run such workloads on premises, where the risk of latency is lower. Less intensive simulation can run reliably across public clouds, on-premises systems and edge locations.

One caveat IT leaders should consider for performance-heavy workloads: specialized hardware such as GPUs and other accelerators is expensive. As a result, many organizations may elect to run AI and HPC workloads in resource-rich public clouds. However, running such workloads in production can cause costs to soar, especially as the data grows and its attendant gravity increases. Moreover, repatriating an AI or HPC workload whose data has grown 100x while running in a public cloud is harsh on the IT budget; data egress fees alone may make it prohibitive.
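To see why egress fees loom so large, here is a rough, back-of-the-envelope sketch in Python. The starting dataset size, the growth factor and the per-gigabyte egress rate are assumptions chosen for illustration; real cloud egress pricing is tiered and varies by provider and region.

# Back-of-the-envelope sketch of repatriation cost driven by egress fees.
# The dataset size, growth factor, and per-GB rate are assumptions for
# illustration only; check your provider's actual tiered pricing.

initial_dataset_gb = 10_000        # assume the workload started with ~10 TB of data
growth_factor = 100                # the 100x growth scenario described above
egress_rate_per_gb = 0.09          # assumed list-price-style rate, USD per GB

final_dataset_gb = initial_dataset_gb * growth_factor
one_time_egress_cost = final_dataset_gb * egress_rate_per_gb

print(f"Dataset after growth: {final_dataset_gb / 1_000:.0f} TB")
print(f"Estimated one-time egress cost: ${one_time_egress_cost:,.0f}")
# -> roughly 1,000 TB and about $90,000 for a single full transfer at these
#    assumed rates, before any tiered discounts or private-link pricing.

Even at generous discounts, a bill of that order for one transfer explains why placement decisions are best made before the data piles up, not after.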

Cyber Recovery - Organizations today prioritize data protection and recovery, thanks to threats from malicious actors and natural disasters alike. Keeping a valid copy of data outside of production systems enables organizations to recover data lost or corrupted in an adverse event. Public cloud services generally satisfy organizations' data protection needs, but transferring data out becomes costly thanks to high data egress fees, IDC says. One option is to host the recovery environment adjacent to the cloud service - for example, in a colocation facility with a dedicated private network connection to the public cloud service. This eliminates egress costs while ensuring speedy recovery.

Application Development - IT leaders know the public cloud has proven well suited for application development and testing, as it lends itself to the developer ethos of rapidly building and refining apps to meet business needs. However, private clouds may prove a better option for organizations building software intended to deliver a competitive advantage, IDC argues. A private cloud affords developers greater control over corporate intellectual property while preserving much of the agility of a public cloud.

The Bottom Line

As an IT leader, you must assess the best place for an application based on several factors. Application requirements will vary, so analyze the total expected ROI of your workload placements before you make them.

Also consider: workload placement is not a one-and-done activity. Repatriating workloads from various clouds or other environments to better meet business needs is always an option.

Our Dell Technologies APEX portfolio of solutions accounts for the various workload placement requirements and challenges your organization may encounter as you build out your multicloud estate. Dell APEX's subscription-based consumption model helps you procure more compute and storage as needed - so you can reduce your capital outlay.

It's true: The stakes for assembling a jigsaw puzzle aren't the same as allocating workloads in a complex IT environment. Yet completing both can provide a strong feeling of accomplishment. How will you build your multicloud estate?

Learn more about how Dell APEX can help you allocate workloads across your multicloud estate.

Brought to you by Dell Technologies.
