Magma

Magma acquisition will light a fire under the EDA industry

A Footwasher Media Analysis by Lou Covey, Editorial Director

The acquisition of Magma Design Automation by Synopsys was arguably the biggest story in the Electronic Design Automation (EDA) industry in 2011, and it will most likely end up being the biggest EDA story of 2012 as well.

Most observers were stunned at the news, and not because it was an unlikely fit.  It's actually a great fit and gives Synopsys a virtual stranglehold on the digital IC design market.  It was improbable because of the deep-set enmity between the two companies, especially between the two CEOs, Aart DeGeus and Rajeev Madhavan.  Both companies launched multiple lawsuits against each other over the past decade claiming patent infringement on a variety of technologies, all of which were resolved prior to the acquisition but left scars throughout both companies.  As one source who spent time as an employee at both companies said, "It was personal for some people and just business for others, but it was pervasive."

The acrimony between DeGeus and Madhavan often manifested itself publicly.  DeGeus would often be absent from CEO panels where Madhavan was present, and Magma employees and supporters made sure the industry noted that Madhavan was often not invited when the leadership of Cadence, Mentor Graphics and Synopsys was represented.

Madhavan also perennially accused Synopsys of deliberately undercutting prices in large package deals, a charge DeGeus just as perennially denied.  In Synopsys' defense, however, it was an unofficial industry-wide practice, according to Jeff Jussel, senior director of global technology marketing at element14.  Jussel is a former ASIC designer and EDA executive (including a stint as marketing director at Magma) and is leading element14's push into embedded and electronic design services.

"There was so much competition between the big four that it drove a never-ending spiral of ever cheaper prices.  Better results, better productivity, but less and less money for the developers of that technology.  The big guys could deal with it better because they had the big 'all you can eat' deals: $1 million for one year, $3 million for two years, $4 million for five years.  The deals looked bigger, but the terms were getting longer and the price per seat was coming down.  It started killing the innovation in start-ups because there was so much pressure on margins that there was nothing left to buy from the start-ups, and the start-ups were getting pushed to the side by the deals," said Jussel.  "The practice squeezed innovative start-ups out of the market because they couldn't compete and be profitable."

Synopsys bought many failed start-ups for the cost of their assets alone, eliminating competition and gaining valuable technology with little investment.  As this practice continued industry-wide, investors saw little upside in funding new start-ups and, at present, there is virtually no interest in funding new companies that have no chance of an IPO or of being acquired at a premium over the investment.  This is where the acquisition of Magma may have the greatest potential for energizing the moribund industry.

First, it consolidates the industry nicely.  Synopsys holds digital, Cadence leads in analog and mixed signal, and Mentor dominates embedded design.  The lawsuits and undercutting that decimated the start-ups will be a thing of the past.  Customers will have to pay what the vendors ask or be forced to build their own solutions...or look to start-ups. Which brings us to the second reason.

At $507 million, it is the single largest acquisition in the industry's history, eclipsing Cadence's acquisition of Cooper & Chyan Technologies for $428 million in 1997.  Combined with the Ansys purchase of Apache Design for more than $310 million and a handful of other smaller deals this year, it helps release nearly a billion dollars of cash into the pockets of investors and founders.  All of these deals will be concluded before the Design Automation Conference in San Francisco this July.  Conservatively, the industry could see $100 million of that invested in new technology before the end of 2012.

And who will be leading that charge?  None other than Rajeev Madhavan.

Madhavan could be called the single most successful entrepreneur in the EDA industry.  He was a founder of LogicVision, a company that was sucked up by Mentor Graphics for $13 million in Mentor stock.  He founded Ambit to attack the Synopsys logic synthesis hegemony, using guerrilla marketing techniques to grab market share and, in the end, sold out to Cadence for a quarter of a billion dollars.  Madhavan reinvested much of his take into founding Magma, which went from start-up to IPO in short order.  Combined with the Ambit valuation, Madhavan-founded companies account for over $700 million in corporate value.  No other single entrepreneur has those kinds of results on their resume.

The sale agreement precludes Magma and Synopsys representatives from speculating on who stays and who goes.  There are those who hope Madhavan stays put for some time, and he must make that commitment for the sake of the deal.  But no one believes that DeGeus will want his nemesis hanging around the office coffee bar any longer than necessary, and Madhavan will have followers as he goes out the door.

"I expect Rajeev to be gone within days of the deal being done," Jussel stated.

That is not to say there will be a mass exodus.  According to Jussel, Magma has "some of the most intelligent and best educated people in the industry who love creating technology for IC Design.  They're working for the customers, not the logo."  Those are the people Synopsys wants to keep, and it will be very generous to them.

DeGeus has stated that the talent of Magma was what was important to Synopsys, not the technology, so does that mean Magma's tools are going away?  Jussel laughed at that question. "We'll see how that works out.  The existing installed base loves the Magma tools so they will continue to support those product lines or lose the business altogether."

But there will be business minds with wads of cash in their pockets who won't be as welcome.  The doors of start-ups will be wide open for these people.  That is very good news for an industry that has limped along for much of the past two decades and whose lack of consistent innovation has held back the semiconductor industry as well.

Was the Synopsys-Magma deal good for the industry?  Tell us why at element14.com

 

Operational Transparency provides actionable intelligence for productivity

In this interview sponsored by Vpype, Magma Design Automation and Operational Transparency, we look at a new management tool, OpDots, that organizes an organization's information for instant visual analysis and facilitates action. The tool works not just for marketing, sales, operations, manufacturing, and social media measurement, but can consolidate and coordinate all of those disciplines centrally. Click here for the video.

Synthesis Needs to Change to Serve Modern Chip Design

By Tets Maniwa, senior contributing editor for New Tech Press.  Copyright March 2009 by Footwasher Media

As EDA tools evolve, the resulting products try to increase automation. Unfortunately, the last great advance was the move from schematics to language-based design, starting with the first synthesis tools in the mid-80s. Designs have grown more challenging and complex over the past 25 years as process geometries have shrunk from 0.5µm to 32nm and designs have grown from 100,000 gates to over 100,000,000 gates.

As process dimensions shrink below 90nm, synthesis tools must provide more information to the back-end tools than in previous generations of processes and tools.  While there was hope for the potential of C synthesis (and there are several such solutions in the market today), the vast majority of IC design teams still start by writing the RTL themselves because of quality-of-results concerns in high-end chips.

Gary Smith, principal analyst at GarySmithEDA, believes physical synthesis tools are needed for timing closure on all designs below 90nm, and they have been in place as necessary EDA tools for high-performance designs since 2002. As the industry moves to 65nm and below, however, everyone needs physically aware synthesis tools to get timing closure. The main players in physical synthesis are Cadence, Magma, and Synopsys; Synopsys leads with 54 percent of the market, with the rest split roughly evenly between Cadence and Magma. Currently, however, Synopsys holds only a slim technology lead, effectively making synthesis a commodity.

Pie in the sky?

The vendors claim their offerings do the job, pointing to advances in algorithms, databases, and internal architectures, plus a speed boost from CPUs, that make it possible to synthesize multi-million gate designs. The basic tools have changed from simply translating gates and registers into netlists to optimizing compilers that address logic optimization, timing, power, and even some physical effects.

"DC Ultra’s Topographical technology and Design Compiler Graphical improve overall flow predictability, which helps manage overall design flow and reduce iterations," said Gal Hasson, senior director of marketing, RTL synthesis and test at Synopsys.  "When the synthesis has physical knowledge, the result is a better start point for place and route, improvements in turn-around times, and increased productivity."  Hasson claims in the past 3 years, changes in synthesis have lead to much better estimates of loading and timing as the wireload models were replaced by physical information. The resulting improved netlist leads to reduced design time and fewer iterations.

Jonathan Smith, product marketing manager at Magma, observes that Magma has integrated the analysis of physical effects and routing to address the issues in the main flow rather than trying to fix the problems at the end. "This integration minimizes the creation of chaotic designs where a fix for one problem creates a new problem. Among the underlying technology advances is a gain-based model for gates, allowing the tool to adjust drive strengths as a function of the implied loading. Integrating RTL, physical, and other analyses helps to reduce overall run time and reduces the surprises that happen in physical implementation."
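
The gain-based approach Smith describes can be sketched in a few lines. This is a minimal illustration of the general idea, assuming a constant-gain target in the spirit of logical-effort sizing; the function name and the numbers are hypothetical and this is not Magma's actual algorithm.

```python
# Minimal sketch of gain-based sizing: hold the gain (load capacitance
# divided by input capacitance) roughly constant and let each gate's drive
# strength follow the load it actually sees after placement.

TARGET_GAIN = 4.0  # assumed constant-gain target, typical of logical-effort methods

def size_for_load(load_cap_ff: float, target_gain: float = TARGET_GAIN) -> float:
    """Return the input capacitance (a proxy for drive strength) needed so
    that load / input_cap is approximately target_gain."""
    return load_cap_ff / target_gain

# As the implied load changes during implementation, the size is re-derived
# rather than patched up at the end of the flow.
for load_ff in (8.0, 16.0, 48.0):  # hypothetical downstream loads in fF
    print(f"load = {load_ff:5.1f} fF -> input cap ~ {size_for_load(load_ff):.1f} fF")
```

The point of the sketch is that drive strength becomes a derived quantity, which is what lets the flow absorb placement-driven load changes without the fix-one-break-another churn the text describes.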

Starting points

Users, however, are struggling with many design challenges, not all of them tool related. Karl Pfalzer, principal engineer at AMCC, states that place-and-route effects are not addressed at the design level. In SoCs, designers focus on the architecture and micro-architecture levels to optimize system-level performance.

"Part of their problem is while working at optimizing the block they may not be aware of system-level problems such as  complex cross-block interfaces, " Pfalzer said.  "Another is the quality of the IP. They use a lot of different types of  IP.  Often the IP is measured (and purchased) solely on its functional merit."

However, a lot of this IP has poor implementation-level quality when it comes to internal clock-domain crossings, "lint" checks, and constraints. "The IP users don't always understand how the IP interacts with other logic, but they don't have the time to fully look into the IP blocks, so they use the IP as is," Pfalzer added.

Leon Stok, director of EDA at the IBM Systems and Technology Group, notes that, especially for high-performance, high-frequency internal designs with clocks greater than 3 GHz, his group uses IBM synthesis and timing tools. These tools allow them to mix gate- and transistor-level synthesis for fine-grained control in areas like data path circuits. This allows IBM to create a design flow based on design requirements and technology.

WYSIWYG?

Most power users follow a "standard RTL flow," a basic methodology in which RTL is run through synthesis with wire-load models to develop a gate-level model, which is then used to create a virtual place-and-route design. For larger designs, the problems include design size and complexity, library selection, hierarchy and partitioning, run times, and the need to incorporate changes in the design while moving toward tape-out.

The greatest challenge is design size. There are capacity limits at the block level, which are driven by costs. Companies try to reduce the number of design teams and designers for each design, but the limits of capacity require more blocks and more design teams. The primary way to overcome the size issue is to use a hierarchical design flow: teams synthesize lower-level modules, then assemble those modules into larger ones and synthesize the interfaces until the chip is complete. From synthesis they go to floorplans and then to physical design.
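
The bottom-up ordering implicit in that flow can be made concrete with a short sketch. The block names and the hierarchy below are hypothetical, and the synthesize function is a stand-in for a real synthesis run; the only point is that leaf modules are processed before the modules that instantiate them.

```python
# Bottom-up hierarchical synthesis ordering (illustrative only).
from graphlib import TopologicalSorter

# Each block maps to the sub-blocks it instantiates (hypothetical names).
hierarchy = {
    "top":         ["cpu_cluster", "io_subsys"],
    "cpu_cluster": ["core", "l2_cache"],
    "io_subsys":   ["usb_ctrl", "ddr_phy"],
    "core": [], "l2_cache": [], "usb_ctrl": [], "ddr_phy": [],
}

def synthesize(block: str) -> None:
    print(f"synthesizing {block}")  # placeholder for an actual synthesis job

# Leaves come out first; a parent is assembled and its interfaces synthesized
# only once every child netlist exists. After "top" closes, the flow hands
# off to floorplanning and physical design, as described above.
for block in TopologicalSorter(hierarchy).static_order():
    synthesize(block)
```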

The problem with hierarchical design is that the preliminary partitions may not reflect the designer's final design. "Blocks are not always well defined," Pfalzer noted. "Complex logic is often shared between blocks, and it can be hard to develop proper timing budgets and micro-architectures to achieve optimal performance."

Pfalzer added that they want to use all of the implementation tools as often and as early as possible.  Design closure is an iterative process, best done using successive refinement.  "We cannot wait until the RTL is functionally done before we start detailed implementation.  It is best to find even the smallest problems early and fix them then."

Stok agreed. "In many areas the problem is logic designer productivity, which is actually a combination of design and verification and can still stand a lot of changes and improvements. Most of the work is still at the RTL, and without a path to synthesis the higher-level languages just add another layer to the flow without improving productivity."

In comparison, physical design has made a lot of progress in increasing automation over the last 10 years. To a greater extent, physical design productivity has kept pace with the increases in design complexity and size due to process scaling.

Ameesh Desai, senior director of design tools and methodology at LSI Corporation, said their flow is a standard SoC methodology, but it does require them to first partition designs into reasonably sized pieces in order to complete synthesis in a reasonable amount of time. "Because the process is iterative to some degree, it's difficult to determine how much of the time is actually spent only in synthesis and not in other corrections. The problem is that if we open all the libraries, synthesis will generally choose only the fastest elements, without regard to power consumption. This bias in synthesis requires significant additional work to bring power back within budget."
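
Desai's point about library bias is easy to see with a toy cost function. The cells, delays, and leakage numbers below are invented for illustration; the sketch only shows why a timing-only objective drifts to the fastest, leakiest cells while a power-weighted objective does not.

```python
# Toy cell selection: (name, delay in ns, leakage in nW) -- invented values.
cells = [
    ("INV_X1_LVT", 0.020, 90.0),   # fast, leaky
    ("INV_X1_SVT", 0.028, 30.0),
    ("INV_X1_HVT", 0.040, 8.0),    # slow, low leakage
]

def pick(power_weight: float) -> str:
    """Choose the cell minimizing delay + power_weight * leakage."""
    return min(cells, key=lambda c: c[1] + power_weight * c[2])[0]

print(pick(0.0))    # timing-only cost -> "INV_X1_LVT", the fastest cell
print(pick(0.001))  # power-aware cost -> a slower, lower-leakage cell
```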

Desai adds that constraints may not be correct or valid when the design netlist goes to place and route. DC Topographical and DC Graphical seem to be better, but LSI is not using these tools to the full extent possible. In addition, the way they use the tools is still not solving all the problems that show up when the constraints and the synthesized final netlist prevent design closure.

The overall throughput is also a major barrier, according to Yoshi Inoue, chief engineer for the Design Technology Division of the LSI Product Technology Unit at Renesas.  Inoue asserts that their synthesis process goes through many iterations to incorporate engineering changes and the mismatches that arise between the design and the physical implementation. Not only does this take a lot of time, it also costs a lot of money. Their benchmark is 20 million gates in 20 hours with a large set of CPUs running Design Compiler. They don't like the high cost of the tools, especially when they have to invoke many licenses at a time. Generally this process takes about one week to complete, but it may take many iterations before design closure.
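
To put those figures in perspective, the arithmetic below works out the quoted throughput and how a roughly one-week cycle compounds across iterations; the iteration counts are assumptions for illustration, not Renesas data.

```python
# Quoted benchmark: 20 million gates synthesized in 20 hours.
gates = 20_000_000
hours_per_run = 20
print(f"{gates / hours_per_run:,.0f} gates per hour per synthesis pass")

# If one full pass through the flow takes about a week, repeated ECO and
# closure iterations stretch the schedule quickly (assumed counts).
for iterations in (3, 6):
    print(f"{iterations} iterations -> roughly {iterations} weeks to closure")
```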

Ken Saito, senior engineer for Renesas EDA advanced technology development, adds that another part of the problem is the effort it takes for designers and synthesis to fix errors. "We are not always positive that the design meets the constraints, and the amount of time it takes to process the design is a problem. One final challenge is that the bottom-up methodology cannot check all of the nets in a single pass, leading to multiple trials and iterations. We have to adjust the libraries per module to ensure a reasonable starting point and a match to all of the requirements: logic, timing, power, etc."

Wishing upon a star

Increased automation would enable a full chip flow and eliminate the synthesis bottleneck. Next generation tools will need better coupling between RTL and physical issues, larger capacity, multivariate analysis, and optimizations over a wider range of parameters. Obviously, much greater capacity and speed are a major part of the equation, but not necessarily sufficient for a tooling and methodology change. The increase in process technology constraints and design rules makes the synthesis job even harder.

Capacity increases, however, will be memory- and runtime-constrained. Memory is a limiting factor in multicore computers because the shared memory becomes a bottleneck and the total memory size is too small to hold the whole design at one time. For distributed computers in a CPU farm, non-shared memory removes the size limits, but the flow becomes communications-limited instead.
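
A rough estimate shows why shared memory runs out first. The bytes-per-gate figure below is purely an assumption for illustration; real tools vary widely, but the scaling argument is the same.

```python
# Assume roughly 1 KB of resident data per gate (netlist, timing, and
# placement annotations combined) -- a hypothetical figure.
BYTES_PER_GATE = 1024

for gates in (20_000_000, 100_000_000):
    gib = gates * BYTES_PER_GATE / 2**30
    print(f"{gates:>11,} gates -> ~{gib:.0f} GiB of resident data")

# At roughly 95 GiB for a 100-million-gate design, a single shared-memory
# machine is stretched, which is why CPU farms (and their communication
# overhead) enter the picture.
```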

Gary Smith suggests the next stage in synthesis tools will be power-aware capabilities, especially for power modes and level changes. Physically aware synthesis needs a trial placement mode for better loading estimates for design closure.

"Real physical synthesis would be a panacea to their current situation," Pfalzer opined. "Take RTL , a reasonable floorplan and top-level constraints  in, and then get a good physical design as an output. There are no tools to do this.  Instead intermediate levels are used to build the blocks and then work through design issues to develop good enough constraints, floor plans, integrate the IP, etc. to get to a decent place and route."

Pfalzer said Cadence’s Chip Estimate and InCyte tools may help in this regard, but he's not convinced.

Stok asks for greater capacity, which always helps. "IBM used to have flat flows that give the tools lots of leverage, because they are not hindered by many constraints and small design blocks. A higher-capacity tool would enable fewer constraints and larger blocks, which lets the tool reduce turnaround time while increasing quality of results."

Due to the capacity limits of existing tools, designers must create hierarchies and partitions. A tool with larger capacity would give designers less chance to go wrong in creating the hierarchies and partitions that result from conflicting requirements and targets at the block and chip levels. "We must have good tool support and much greater levels of automation to continue to get the most out of our technology processes," Stok stated.

The increasing complexity and quantity of technology rules for a process mean that the tools must evaluate much more information in their algorithms to get good results. A tool that can simplify this process while providing good density and yield would help tremendously, but it also needs to co-optimize many new and different parameters to address the requirements of the 30 nm node. We will need to start experimenting to find the optimal priorities in the mixture of parameters and libraries.

LSI's Desai would like some form of more accurate power optimization. Due to the wide range of parameters within the cells they use (T-max, channel scaling, power, speed, and others), they have to perform experiments to confirm which libraries and cells to use as a starting point for synthesis. "The problem is that if we open all the libraries, synthesis will generally choose the fastest parts without regard to power consumption, requiring significant additional work to bring power back within budget."

Due to the limitations of the tools, LSI has to use a hierarchical design process. They would like to see a tool that does not require as much partitioning and is able to handle much larger blocks in a single pass or, conversely, allows more flattening to minimize this level of effort. Some of the floorplanning tools are starting to address some of these inherent structural problems and the discrepancies between synthesis and physical implementation.

Another area that would be very helpful is the ability to evaluate multi-corner effects. The problem here is related to the library issue: synthesis can get you to a design closure point that may not be realizable, or that may be only locally optimized rather than globally efficient. And finally, there is the ability to confirm constraints that correlate with the back-end requirements. Currently, constraints may not be correct or valid when the design netlist goes to place and route.
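
A tiny example makes the multi-corner point concrete. The corner names, delays, and clock period below are hypothetical; the sketch only shows how a path that closes at the corner synthesis optimized for can still violate timing at another corner.

```python
# Path delay (ns) per process/voltage/temperature corner -- invented values.
corners = {
    "typ_25C":   0.95,
    "slow_125C": 1.12,
    "fast_m40C": 0.70,   # mainly a hold-time concern, ignored in this setup check
}
clock_period_ns = 1.0

for corner, delay_ns in corners.items():
    slack = clock_period_ns - delay_ns
    verdict = "meets setup" if slack >= 0 else "VIOLATES setup"
    print(f"{corner:>10}: slack {slack:+.2f} ns ({verdict})")
# A single-corner run at "typ_25C" looks closed, yet the slow corner shows
# why that closure point may not be realizable downstream.
```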

Renesas' Inoue considers that an ideal tool would be one that works in a top-down fashion with a capacity of greater than 50 million gates. "It would be very helpful if the tool can handle multiple constraints and optimizations simultaneously. In addition, more effort at physical optimization, like DC Topographical at synthesis, would improve the overall design flow tremendously."

This article was sponsored by Oasys Design Systems