Only a few weeks after Motorola® launched the Xoom™, Apple® launched the iPad 2™. These technological marvels, like their predecessors, illustrate several fundamental challenges that all design engineers face today: designs are getting more complex, and competition fiercer. As a result, the verification effort required to validate these designs is growing exponentially while schedules are shrinking. Consequently, design engineers' jobs are getting much harder.
Companies must dramatically grow their capital investment in IT infrastructure to support this exploding verification demand. Beyond the cost of the hardware itself, this is time consuming and often constrained by practical limits: physical space, power availability, cooling capacity, and IT support resources. And in these times of tight budgets, most CFOs balk at increasing capital expenses (CAPEX). It's no wonder that customers tell us verification is already the most expensive aspect of ASIC/SoC design. The dilemma from the compute infrastructure perspective is clear: support the growing verification demand with limited cost increases, or take the blame for lengthening verification schedules.
While it's obvious that larger, more complex designs need more verification throughput, the requirement is not static. Verification workloads have pronounced peaks and valleys, so even with some over-provisioning to protect schedules, compute resources sit idle during the valleys and are oversubscribed during the peaks. Early in a project, engineering is the limiting factor: more bugs are found than can be fixed immediately. Later, as finding the remaining bugs becomes harder, larger simulations, longer individual tests, and growing queues are inevitable. Infrastructure then hits 100% capacity and progress slows.
The above scenario is nothing new. For years, IT engineers have provisioned hardware with these variable demands in mind. While every company would like to cover its verification peaks, doing so is proving too costly and inherently inefficient, since provisioning for the worst-case peak means that for most of the project the servers are underutilized. And even the best-laid plans can fall apart when the unexpected occurs, like a last-minute bug. Even if the fix is simple, verification may require repeating a full regression, which can take days or even weeks; a schedule delay is virtually certain. So in addition to the normal peaks and valleys, engineers must plan for unexpected, last-minute problems.
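A toy model makes the waste concrete. The numbers below are illustrative assumptions, not measurements from any real project: a farm that needs 200 simulation slots on a typical day but 1,000 during regression pushes that total about a month per year.

```python
# Toy model of provisioning for the worst-case peak. All numbers are
# illustrative assumptions, not measurements from a real project.

avg_demand = 200      # simulation slots needed on a typical day
peak_demand = 1000    # slots needed during regression pushes
peak_days = 30        # days per year spent at peak demand
total_days = 365

# Provisioning for the peak: buy enough servers for the worst case.
slot_days_used = avg_demand * (total_days - peak_days) + peak_demand * peak_days
slot_days_bought = peak_demand * total_days
utilization = slot_days_used / slot_days_bought

print(f"Average utilization when provisioned for peak: {utilization:.0%}")
# -> 27%: nearly three quarters of the hardware sits idle most of the year.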
The ideal solution would combine baseline provisioning sized for average verification loads with elastic, scalable access to compute resources that can quickly ramp capacity to meet the peaks. In worst-case scenarios, rapid scalability would let engineers compress weeks of verification into a day or two. Equally important, the solution should scale back down when demand subsides to keep costs in check. Is cloud computing the answer?
Cloud computing certainly has the potential to satisfy these scalability requirements. It can bring hundreds of servers online extremely quickly, and it is flexible: the servers can be turned off the instant they are no longer needed. There are also several attractive economic benefits. Beyond absorbing last-minute bugs without schedule delays, access to virtually unlimited resources means engineers can deliberately compress schedules. If a week-long regression could be completed in a day, earlier market entry becomes possible, which should lead to more revenue and higher market share. And because cloud computing is an operating expense (OPEX), it requires no additional CAPEX, giving CAD managers the flexibility to spend that money on other needed resources.
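The schedule math behind that week-to-day claim is simple. Here is a back-of-envelope sketch with assumed (not measured) numbers, treating regression tests as independent jobs that can be farmed out freely; the longest single test sets the floor on wall-clock time.

```python
# Back-of-envelope regression compression. Assumes tests are independent
# and can run anywhere; all numbers are illustrative assumptions.

num_tests = 20000          # tests in the full regression (assumed)
avg_test_hours = 2.0       # average runtime per test (assumed)
longest_test_hours = 18.0  # the longest single test bounds wall-clock time

def wall_clock_hours(servers: int) -> float:
    """Ideal wall-clock time: total work divided across servers,
    but never less than the longest individual test."""
    return max(num_tests * avg_test_hours / servers, longest_test_hours)

for servers in (250, 500, 2000):
    print(f"{servers:5d} servers -> {wall_clock_hours(servers) / 24:.1f} days")
# 250 servers -> 6.7 days; 500 -> 3.3 days; 2000 -> 0.8 days.
```

Under these assumptions, the same regression that takes about a week on a 250-server farm finishes in under a day on 2,000 cloud servers, exactly the compression described above.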
But before a company opts for such a dramatic change to its EDA infrastructure, it must weigh the following key tradeoffs:
- Security: Cloud computing providers are likely more secure than the average enterprise customer. As part of their business model, they undergo regular independent security audits, so it is important to check for industry-accepted certifications such as ISO 27001/27002 and SAS 70 Type II. Cloud providers know they will have no business if customers' data is not secure.
- Liability: No provider will accept unlimited liability for data theft, and cloud customers must be prepared to accept this limitation. It is also why cloud providers focus so much attention on security.
- Corporate Policies: Many companies have policies governing the movement of corporate IP offsite. These policies must be reviewed, and updated where needed, before any move to the cloud.
- Licensing: Today's installed software licensing agreements generally do not cover cloud execution. New licensing agreements will come into play, and the time and attention needed to negotiate them must be factored into planning.
- Geography: Some countries restrict technology exports, which can constrain where design data may be processed. It is therefore important to work with cloud computing providers with sufficient global reach.
- Automation: Extending an EDA environment into the cloud can be very straightforward or, depending on the customer's requirements, may call for an experienced partner to make the initial transition faster; a minimal provisioning sketch follows this list.
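As one illustration of what such automation can look like, here is a minimal sketch that bursts extra simulation capacity into a public cloud for a regression push and releases it afterward. The article names no provider; this sketch assumes an AWS-style cloud via the boto3 SDK, and the image ID, instance type, and tags are hypothetical placeholders. A real flow would also dispatch jobs through the site's grid scheduler and handle license checkout.

```python
# Hedged sketch: burst extra simulation servers into the cloud, then
# release them. Assumes AWS EC2 via boto3; the image ID and instance
# type below are hypothetical placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def burst_capacity(count: int) -> list[str]:
    """Launch `count` instances from a pre-built simulation image."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical image with simulator and tools
        InstanceType="c5.4xlarge",        # hypothetical sizing choice
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "regression-burst"}],
        }],
    )
    ids = [i["InstanceId"] for i in resp["Instances"]]
    # Block until the instances are running; they can then join the job queue.
    ec2.get_waiter("instance_running").wait(InstanceIds=ids)
    return ids

def release_capacity(ids: list[str]) -> None:
    """Scale back down once the peak has passed, so costs stop accruing."""
    ec2.terminate_instances(InstanceIds=ids)
```

The point is not the specific API but the elasticity: capacity appears when the regression queue backs up and disappears, along with its cost, the moment the push is over.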
For EDA workloads like verification, cloud computing represents the next paradigm shift. It offers the potential to deliver dramatic increases in verification throughput while simultaneously optimizing long-term costs. It can even give a company the option of cost-effectively pulling in development schedules to meet more aggressive market and revenue goals. With the right partner and a mature cloud provider, companies will be prepared to verify the largest designs, today and in the future.