4 basic considerations in migrating to cloud-based EDA tools

As the benefits of Moore’s Law diminish and design complexity increases, chip makers need cost- and time-efficient solutions that deliver exceptional performance and functionality while lowering power consumption. That’s why cloud-based electronic design automation (EDA) solutions are gaining popularity among chip designers, and why the cloud is becoming key to furthering innovation and productivity.

A few of the many advantages cloud technologies offer chip designers include reduced system maintenance costs, advanced storage and compute resources, fast ramp-up, and flexible pay-as-you-go models that help not only during peak usage periods but throughout the entire chip design flow. As in-house compute resources continue to hit their limits, the cloud has proven it can provide the flexibility to scale design and verification capabilities for better quality, lower cost, and faster time to results.

Figure 1 Cloud computing is emerging as a viable platform for IC design and verification tasks. Source: Synopsys

However, as the journey to the cloud accelerates, designers will need to examine cloud technologies carefully to achieve optimal results. Below are four key factors designers should consider when adopting cloud-based technologies to produce better, faster, and cheaper semiconductors.

  1. Data transfer and management

When migrating EDA workloads to the cloud, a key consideration is determining what data is transferred in and out of the cloud. Solutions that reduce data transfer overhead will deliver accelerated time to results and increased productivity.

While there are a variety of models for managing both on-premises and cloud environments, the most straightforward is to migrate the required data to the cloud. In the data management process, the cloud environment must be capable of replicating the on-premises environment, and that starts with determining and cataloging the dependencies for a design.
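As an illustrative sketch of that cataloging step (the tool names and environment variables below are hypothetical placeholders, not tied to any particular EDA flow), a script can record where each required tool resolves on the current host and which environment variables the flow depends on, so the same setup can be replicated in the cloud:

```python
import json
import os
import shutil

def snapshot_environment(tools, env_vars):
    """Record the resolved path of each required tool on this host and
    the environment variables the flow depends on, so the cloud
    environment can replicate the on-premises setup."""
    return {
        "tools": {t: shutil.which(t) for t in tools},
        "env": {v: os.environ.get(v) for v in env_vars},
    }

if __name__ == "__main__":
    # Hypothetical tool names and variables; substitute your own flow's.
    snap = snapshot_environment(
        tools=["python3", "make"],
        env_vars=["PATH", "LM_LICENSE_FILE"],
    )
    print(json.dumps(snap, indent=2))
```

A manifest like this can be checked into version control alongside the design so that any divergence between the two environments is visible before jobs are launched.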

Cloud data transfer must be swift and resilient to ensure no data is lost when it’s transferred from on-premises storage to the cloud. Cloud storage will need to provide the flexibility to scale in and out based on the requirements of the design and verification tasks.
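One common way to confirm that no data was lost or corrupted in transit (a generic sketch; real transfer tools and cloud object stores typically provide their own checksumming) is to record a digest of each file before the transfer and compare it after:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MB chunks so large design
    databases never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(manifest, fetch_path):
    """Compare transferred files against digests recorded before the
    transfer; return the relative paths whose contents changed.

    manifest: dict mapping relative path -> expected hex digest.
    fetch_path: callable mapping a relative path to its local path
    on the receiving side.
    """
    return [rel for rel, expected in manifest.items()
            if sha256_of(fetch_path(rel)) != expected]
```

An empty result from `verify_transfer` means every file arrived intact; any listed path should be re-transferred.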

Storage efficiency is another component that must be considered when designing EDA solutions. Cloud providers also offer flexibility in the type of storage, with cost playing a critical role in engineering teams’ decision-making. EDA solutions are being engineered to leverage distributed storage, block storage, and in-memory compute to improve turnaround time while lowering the total cost of ownership.

  2. Cloud security

Data security concerns are one of the primary reasons why the semiconductor industry has been slow to adopt cloud technologies. To ensure sensitive data is protected, there must be a robust data governance plan that identifies who is given access to what type of data, accompanied by powerful access and identity management measures.
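As a toy illustration of the "who is given access to what type of data" part of such a governance plan (the roles, data classes, and policy table below are all hypothetical; production systems would use the cloud provider's identity and access management services), an access decision reduces to a policy lookup:

```python
# Hypothetical governance policy: which design-data classes each role may read.
POLICY = {
    "layout_engineer": {"rtl", "layout"},
    "verification_engineer": {"rtl", "testbench"},
    "contractor": {"testbench"},
}

def can_access(role, data_class):
    """Return True only if the governance policy explicitly grants this
    role access to this class of design data; unknown roles get nothing."""
    return data_class in POLICY.get(role, set())
```

The default-deny behavior for unknown roles is the important design choice here: access must be granted explicitly, never assumed.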

Cloud infrastructure vendors are experienced in building security into their infrastructure, applications, and operations. These providers have been employing the most modern security measures to protect their data centers while delivering on their commitments to redundancy and high system uptime. Working closely with cloud security vendors, EDA vendors can adapt their technologies to run workloads securely while mitigating the risk of data leakage.

Above all, chip and system designers should seek EDA vendors that have adapted to cloud environments while offering encryption, troubleshooting tools, and next-generation monitoring capabilities.

Figure 2 Design engineers must ensure that they use cloud platforms offering robust data transport and security capabilities. Source: Synopsys

  3. Accessibility and deployment

EDA workloads are defined by high-performance computing and NFS-heavy storage, with a strong dependence on system libraries, tool environments, and hardware. Chip designers accustomed to how their EDA flows function on-premises will benefit from faster turnaround times, on-demand near-real-time provisioning, and a much better, consumer-grade user experience when using cloud solutions.

It’s also important for chip designers to consider the time investment required to set up the cloud, from establishing network connectivity to managing their firewall. It is equally important to ask how the design team will access cloud-based tools, how they can best visualize which resources are being utilized and how, and how much faster certain tasks could be completed.

Today, artificial intelligence (AI) is also emerging as a significant player in the chip design process: it increases the efficiency of on-premises compute environments and scales compute on demand in the cloud to enhance power, performance, and area (PPA).

  4. Scalability and cloud architecture

The ability to scale compute infrastructure is one of the top reasons designers are migrating EDA workloads to the cloud. Many compute-intensive tasks perform best when broken into smaller parts spread across distributed compute and storage resources, which is very much a cloud-native approach.
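That fan-out pattern can be sketched in a few lines (a toy example: a real flow would hand the partitions to a distributed scheduler on cloud nodes, the pass criterion is invented, and threads stand in for distributed workers):

```python
from concurrent.futures import ThreadPoolExecutor

def run_partition(vectors):
    """Stand-in for one compute-intensive slice of work, e.g. simulating
    a subset of a regression suite; the pass criterion is hypothetical."""
    return sum(1 for v in vectors if v % 3 != 0)

def fan_out(vectors, workers=4):
    """Split one large job into smaller partitions and run them in
    parallel -- the same shape a cloud scheduler applies across nodes."""
    size = max(1, len(vectors) // workers)
    parts = [vectors[i:i + size] for i in range(0, len(vectors), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(run_partition, parts))
```

Because each partition is independent, the same decomposition scales from a handful of local workers to thousands of cloud cores; only the scheduler changes, not the job structure.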

EDA flows are supported by robust scheduling and streamlined storage to manage distributed workloads adequately. Many EDA tools have been re-architected to scale to thousands of cores, paired with distributed schedulers that can use these resources efficiently. Another advantage of the cloud is hybrid scaling, which allows workloads to run in the cloud or on-premises, depending on what the task requires.

When leveraging cloud technologies, re-architecting EDA solutions is imperative. Just as EDA products embraced multi-processing and multi-threading, they must now embrace cloud architecture. As chip designers begin their journey to the cloud, welcoming new technologies such as distributed storage and distributed computing will lead to greater innovation.

Arun Venkatachar is VP of AI, Cloud & Central Engineering at Synopsys.
