Lead Article

The Digital Divide, Net Neutrality and Badu Networks

Badu Networks is developing a technology, slated for distribution within the next two years, that will dramatically boost the capacity of public Wi-Fi areas, making it possible for many people to access the internet at little or no cost.  That is important not only for bridging the Digital Divide but also for solving part of the net neutrality problem.  This is the first of a two-part video interview.

Cognitive Computing: Creepy and Constructive Technology

By Joe Basques, Managing Editor

Today we start our coverage of Big Data issues as a regular part of New Tech Press special reports.

Cognitive computing has been around for a while, but primarily in the realm of science fiction.  It is becoming reality very fast, and it is as creepy and as productive as advertised.  Yes, it means machines that can think as we do, but faster and with more efficient solutions.  It also means eliminating a lot of danger in the development of new technology (like the infamous software glitch in the Toyota Prius).

New Tech Press sat down with Saffron Technology in Austin, Texas recently to talk about the potential of thinking machines.

#EELive Keynote sees bright side of death of Moore's Law


The death of Moore's Law, predicted annually for the past decade, may not be as bad for the electronics industry as some hand-wringers have expected.  Unless you work for Intel, that is. In fact, it might unleash a new wave of innovation driven by user expectation rather than an engineer's linear imagination.

That was the core of the keynote delivered today at EELive in San Jose by Bunnie Huang, MIT EE PhD and renowned hardware hacker.

Huang pointed out that Moore's Law has been very good for Big Silicon over the past several decades, with the x86 architecture scuttling every attempt to supplant its supremacy (anyone remember the Alpha chip?).  And while that has served Intel and its customers well, it has also buried real innovation based on end-user needs.

But as the physical limitations of silicon have put up very real barriers to the continuation of the Moore paradigm, the opportunity for that innovation has opened wide, Huang said.

Component recycling in China

At the base of this "revolution" is the reality that a six- to eight-year development cycle for new products is no longer that unrealistic, now that hardware performance upgrades won't be happening every 18 months per Moore's cadence.  The performance upgrades are now coming from hardware and software optimization on competing platforms such as ARM and Imagination Technologies cores.

"It wasn't too long ago that an ARM-based processor was the basis for toasters and DVD players.  Today those cores are viable options for servers and 64-bit cores are not that far away," Huang predicted.

But even more important is the trend toward recycling components, supported by easily obtainable how-to manuals from China, where, Huang said, a growing industry makes it possible to build your own devices running on open-source software.  There is a glut of electronic devices and systems in the world containing perfectly sound components that can be harvested and sold for pennies on the dollar against new components.  The same industry produces manuals that guide component buyers through the build, and those components can also be used to repair existing products.

Yes, that may violate Western patent laws, but the reality is that the cows are out of the barn and it is extremely difficult to put them back in now.  Component harvesting is beginning in the West among engineers who want to build their own prototypes but can't get the attention of major component manufacturers because they are too small to be taken seriously as customers.  Since these innovators are no longer handcuffed to Moore's Law advances, they can take their time, actually consider user needs, and produce beta products before ever making the "big buy" from major suppliers.

The combination of inexpensive components, reliable documentation and open-source software, according to Huang, means that low-cost, open-source hardware could be available for general consumer purchase within a decade.

Economics and Vengeance wreak havoc with wafer market


Deliveries are up, revenues are down for silicon wafers

By Lou Covey, Editorial Director

As Semicon West approaches in two months, there will be a rising chorus of predictions for 450mm manufacturing and test equipment. But silicon wafer industry revenues are heading in the wrong direction, pushed down by dramatic technology advances in test and reclamation and by an overall urge for revenge among customers over gouging during the solar boom.

In February, SEMI issued its annual report on the state of the silicon wafer industry. It showed a small increase in the delivery of wafers to customers in spite of a well-known glut of supply, but a sharp decline in revenues that inventory alone could not explain. What is more, this appears to have been developing into a trend over the past three years.

When questioned about the discrepancy, SEMI replied simply, but cryptically, that it was due to "market pressures and a weak yen.” Yet placing the blame on the decline of the Japanese currency seemed knee-jerk at best. While it is true the yen has plummeted over the past 20 years, the decline has largely flattened out over the past five. And Japan, while still the leading supplier, has seen a significant increase in competition worldwide, especially from China, Malaysia and even the US, diluting the yen's influence even further. The extremely vague "market pressures" explanation is even more suspect. Let’s take a look at test, for example.

A joint study by Hewlett Packard, the University of Oregon and the University of California, San Diego showed that applying data mining to optimize IC test resulted in significantly higher yields on virgin wafers. At ITC in 2013, Craig Nishizaki, Senior Director of ATE Development at NVIDIA, extolled the benefits of test data management methodologies. Higher yield through improved test reduces demand. You wouldn’t know it, though, talking to people in the test management industry.

“I haven’t seen any data that would back up any claim that effective test management would reduce demand for wafers, but it sounds right,” said Jim Reedholm, an independent representative of Yieldwerx based in Austin, Texas. Executives at other semiconductor test management technology companies were either unresponsive or, strangely enough, unlocatable.

OK, so maybe no, maybe yes. How about reclaimed wafers?

According to Semico analyst Joanne Ito, improvements in reclamation processes for silicon wafers have dramatically reduced the demand for virgin test wafers in semiconductor manufacturing, and, according to SEMI, reclaimed-wafer revenue and material shipments are both up 14 percent from last year. Ito believes that trend will continue, but SEMI predicts the increase will flatten out by 2015. We might be getting closer to a reason for the discrepancy between revenue and shipments.

Reclaimed silicon wafer market is spiking

According to Ito, during the solar boom leading up to 2008 there was a significant shortage of raw material for wafers, which drove up prices for the high-quality virgin wafers the semiconductor industry needs. That same solar demand ate up the capacity for 200mm and 300mm manufacturing. When the boom blew out, demand for the raw material dropped and a huge overcapacity developed in 200-300mm facilities.

OK, so oversupply tends to push down prices and revenues. Econ 101. There is only one problem. Demand has not decreased in the semi industry. It is increasing as demand for smart devices worldwide drives semi design starts. Hence, the steady increase in shipments to semi customers. Why the dramatic drop in revenue?

Ito thinks the answer might be vengeful purchasing executives in semiconductor companies.

“During the solar boom, the semi industry (which has higher requirements for the wafers than solar does) saw an annual increase in prices of 10 to 12 percent from the wafer manufacturers because of the cost of silicon raw materials. So the semi industry said, ‘We paid you more when the cost went up; we want this cost reduction reflected in our prices.’”

Now we have a better idea what “market pressures” are actually in play. The opportunity to return to the wafer manufacturers what they forced upon their customers a decade ago is too good to resist.

This quid pro quo may seem good for the industry in the short run. However, long-term it is causing significant problems. We will take a look at that next week…

Do you agree there is an element of vengeance going on, or do you see other factors at play? Leave a comment and tell us what you think.


CES and 4KTV: The industry has it backward again

COMMENTARY by Lou Covey, Editorial Director

Many tech media outlets (as well as several general media pubs like the WSJ) are predicting that 4K television will be a very big deal this year at the annual tech echo chamber known as CES.

The question I keep asking, however, is “Why?” So far, I’m not getting an answer.

In truth, most of the media is focusing on technology that is driven by sponsors and advertisers at CES, and 4K TV is a big deal for the companies making the TVs as well as the component manufacturers and embedded software/hardware design companies. All of them have big money riding on the success of 4K TV... just like they did for 3D TV for the past three years. The problems with those hopes and dreams for 4K are the same as for 3D, however.

HDTV took a long time to get into general distribution, basically because HD content took a long time to develop. It was only last year that Netflix streamed HD content for the first time... if you had an internet connection fast enough to handle it, and most people don’t. No streaming media service can support 4K content now or for the foreseeable future, nor can most ISPs. And while Blu-ray disks can hold more content than DVDs, they lack the capacity to hold much more than 30 minutes of 4K content. So we are going to need significant upgrades to content delivery systems before any current 4K TVs are going to be able to show what they can do.
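To put that capacity claim in perspective, here is a rough, illustrative calculation. The 50 GB dual-layer figure is standard for Blu-ray; the 4K bitrate is purely an assumption chosen for the sketch, since real bitrates vary widely with codec and content.

```python
# Rough illustration of why 4K strains Blu-ray capacity. The 50 GB dual-layer
# figure is standard; the 4K bitrate below is an assumption for this sketch
# (real bitrates vary widely with codec and content).

BLU_RAY_CAPACITY_GB = 50        # dual-layer Blu-ray disc
ASSUMED_4K_BITRATE_MBPS = 220   # hypothetical bitrate for early 4K video

def minutes_of_video(capacity_gb: float, bitrate_mbps: float) -> float:
    """Minutes of video at a given bitrate that fit in a given capacity."""
    total_megabits = capacity_gb * 8 * 1000    # GB -> megabits
    return total_megabits / bitrate_mbps / 60  # seconds -> minutes

print(f"{minutes_of_video(BLU_RAY_CAPACITY_GB, ASSUMED_4K_BITRATE_MBPS):.0f} minutes")
# ~30 minutes at the assumed bitrate, in line with the article's figure
```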

But that’s just the content problem.

When HDTVs came out, 40 inches was a big deal, and most people went with smaller sets simply because they took up less space in the house and cost so much less. The problem was that content could only be viewed in full 1080p on a TV at least 42 inches across, viewed from a minimum of 6 feet away, so consumers weren’t getting the full HD experience. That changed when a federal stimulus rebate went out to nearly everyone, regardless of whether they paid taxes. That rebate, in most cases, was enough to buy that 40-plus-inch HDTV... right in time for the federally mandated switch to digital broadcasting.

Like 3DTV, 4KTV lacks the federal subsidy and regulatory support that HDTV had, so unless some serious back-room lobbying is going on, 4KTV is headed down a tough financial road. But let’s say, just for a second, that there is some dealing going on. What does the 4KTV experience look like for the consumer?

There are rumors that CES will feature an announcement of a below-$1,000, 60-inch 4KTV for purchase sometime in 2015. According to CNET, to get the full benefit of an HDTV screen at a 20-degree viewing angle, you need to sit about 2.5 times the screen’s diagonal away, or 12.5 feet for a 60-inch display. The good news for 4KTV is that the ratio is about a quarter less. So if you buy the smallest available 4KTV (60 inches), you’ll be fine seated 9 feet from the screen, which is the largest average distance most people sit from their TVs. Here’s the problem with that.
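Here is a quick sketch of that viewing-distance arithmetic. The 2.5x-diagonal rule for HD comes from the CNET guideline cited above; treating the 4K multiplier as exactly a quarter less is our assumption.

```python
# Viewing-distance arithmetic from the CNET rule cited above. The HD multiplier
# (2.5x the diagonal) comes from the article; the 4K multiplier ("about a
# quarter less", i.e. ~1.875x) is our assumption based on the article's figure.

HD_MULTIPLIER = 2.5          # recommended distance = 2.5 x diagonal for HDTV
UHD_MULTIPLIER = 2.5 * 0.75  # "about 1/4 less" for 4K

def viewing_distance_feet(diagonal_inches: float, multiplier: float) -> float:
    """Recommended viewing distance in feet for a given screen diagonal."""
    return diagonal_inches * multiplier / 12  # 12 inches per foot

for diag in (42, 60, 84):
    hd = viewing_distance_feet(diag, HD_MULTIPLIER)
    uhd = viewing_distance_feet(diag, UHD_MULTIPLIER)
    print(f'{diag}-inch screen: HD ~{hd:.1f} ft, 4K ~{uhd:.1f} ft')

# 60-inch screen: HD ~12.5 ft, 4K ~9.4 ft -- matching the figures above
```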

Most housing developers are lobbying heavily to reduce the average square footage of their developments, with common living areas shrinking to 10 by 10 feet. Not only will a 60-inch screen consume much of that space, but it may be impossible to sit far enough away to make viewing optimal.

Economics, technology, content and geometry are all working against the success of 4KTV. But that is not stopping the industry from telling an increasingly tech-savvy consuming public that this is the next big thing. That focus is what will delay the acceptance and success of the platform.

The industry should be investing in development and promotion of delivery mechanisms for the platform, as well as creation of appropriate content. The latter, in particular, will drive demand for the platform. It’s not a chicken and egg issue. It’s a cart and horse issue.

Accellera swallows OCP-IP

By Lou Covey, Editorial Director, New Tech Press

Accellera Systems Initiative, the non-profit organization for electronic design automation (EDA) and intellectual property (IP) standards, has taken over the moribund Open Core Protocol International Partnership (OCP-IP).

Accellera takes over OCP-IP

Accellera has been taking over multiple standards organizations in the industry for several years, and this is only the latest.  The acquisition includes the current OCP 3.0 standard and supporting infrastructure for reuse of IP blocks used in semiconductor design. OCP-IP and Accellera have worked closely together for many years, but OCP-IP steadily lost corporate and member financial support over the past five years and membership virtually flatlined. Combining the organizations may be the best way to continue to address interoperability of IP design reuse and to jumpstart adoption.

"Our acquisition of OCP assets benefits the worldwide electronic design community by leveraging our technical strengths in developing and delivering standards," said Shishpal Rawat, Accellera Chair. "With its broad and diverse member base, OCP-IP will complement Accellera's current portfolio and uniquely position us to further develop standards for the system-level design needs of the electronics industry."

OCP-IP was originally started by Sonics, Inc. in December 2001 as a means to proliferate its network-on-chip approach.  Sonics CTO Drew Wingard has been a primary driver of the organization, which has long been perceived as the company's primary marketing tool.  It will be interesting to see how the company (which has been on and off the IPO trail several times since its founding) fares without being the "big dog" in the discussion.

A comprehensive list of FAQs about the asset acquisition is available.

Yotta Data Sciences may have the answer to lowering chip design cost

Yesterday we ran a report about whether the cost of chip design could be reduced to the point of profitability for innovative designs.  Conventional wisdom said that possibility will not come around for at least 10 years.  But in our year-long investigation we stumbled across a very quiet company, Yotta Data Sciences, and its founder and CEO, Tom Grebinski, who might have a solution that would speed the process up within a couple of years.

Grebinski has put his thumbprint on the semiconductor industry over a couple of decades.  He dealt with how ICs are physically composed by pioneering atomic layer deposition technology in the 1980s, then moved to handling yottabytes of data as the author of SEMI's OASIS integrated circuit layout format standard.  Now he’s taking on how that data can be managed, distributed and protected efficiently and effectively.

 See the full story on ChipDesignMag.com

Quest for the 10K Chip: Final Report

Part 1: Is this trip necessary?

By Lou Covey, New Tech Press

First appeared in the print version of Chip Design Magazine (June 2013)

At the Design Automation Conference in June 2012, Verilab’s JL Gray posed a perfectly unreasonable question: “Can you build a chip to verified prototype for less than $10,000 that an investor would want to back as a product?”  Over the past few months a variety of experts on the subject came to a consensus and, as it turns out, the answer is yes and no. Yes, you can build one to prototype, but no, an investor would not find the product interesting. How about building a chip interesting to an investor for under $25 million, which Gary Smith, chief analyst for GarySmithEDA, calls the tipping point for investors?  Again, the consensus: yes, but not for at least 10 years... maybe 20.

DAC had a significant contingent of companies pushing FPGA prototyping tools and several pundits claiming that using FPGAs immediately reduces the cost of development.  In an interview a few months later, Smith concurred that people are making $10K chips with FPGAs and free tools “all the time,” but those tools do the most good for chips at 90nm and above and do nothing to reduce the cost of bringing a chip through manufacturing to market, which is the bulk of the cost, especially at the more advanced nodes. (link to Smith videos) That’s the rub, but progress is being made.

Reducing the overall cost of chips, from development through manufacturing, has been the goal of everyone from the smallest EDA company to the largest foundry.  But every advance in process node means investment in even more expensive and potentially unproven technology.  Almost every EDA product release promises, theoretically, to reduce costs by $10 million, depending on the application.

As early as 2009 the total overall cost of bringing an advanced chip to market exceeded $50 million.  Smith states that the cost has since been reduced to as little as $28 million.  He identified much of that reduction as coming from three directions: effective and low-cost tool packages, the advancement of ESL tools raising the level of abstraction, and the return of independent design services reducing NRE costs.

Ten years ago it cost an average of $10 million to acquire enough EDA tools to do a decent job on a new chip. Now, Smith said, you can achieve success with packages below $20,000 for chips at 90nm with up to 2 million gates.  Below that node and above that gate count, it gets dicey.

Raising the level of abstraction to ESL has been touted for years as the key to reducing overall cost, and the latest available tools, according to Smith, are just coming onto the market.  According to Karl Kaiser, vice president of engineering at Esencia, “ESL is the key to reducing engineering costs, which comprise the major financial barrier in the development of any truly innovative chip design.”

Kaiser pointed out that small companies like Adapteva are using innovative tools, as well as innovative ways of raising funds to develop new products (Maxfield article), to keep costs low.  Adapteva’s first chip to market cost less than $2 million.  But those cases are the exception rather than the rule.

Design services are rising to the forefront again, to the point that Smith calls the niche the “wild west” of the EDA industry, with new companies emerging not only in Asia but in the US as well.

Josh Lee, CEO of Uniquify, a design services company, echoed Smith, saying a design services company can not only simplify but standardize the design process, objectively evaluate which tool is right for the job, and compensate faster for specific tool weaknesses.  This eliminates a significant amount of NRE cost, especially for an OEM that doesn’t specialize in chip design.  Several companies are divesting themselves of chip design departments and outsourcing the work to design firms.  Earlier this year TSMC added Uniquify as a design partner, and LG Electronics selected the company for memory IP design.

A common thread through all the comments and predictions on lowering the cost of semiconductors, however, is NRE.  That boils down to the time it takes for an engineer or a team of engineers to coordinate and produce a design.  Tools can only go so far. Outsourcing can only go so far. Methodology, IP and design platforms can only go so far.  None of them have come close to dealing with one major problem: the amount of data engineers have to handle.

There has been some discussion regarding the use of the cloud to allow EDA customers to use tools “as needed” rather than buying seats; and to use the cloud to coordinate disparate design groups worldwide.  Almost unanimously there is skepticism. “The cloud is an important aspect of IT,” said Smith, “but it can’t handle the high end computing problems.”  Lee concurred, “The cloud can be useful in EDA, but alone it will not help solve the problems of design.”

Or not... Stay tuned.

(Part 2: Enter Yotta Data Sciences, will appear later this week.  This article is unsubsidized and was produced in partnership with Extension Media.)


EXCLUSIVE: SLAC installs hyper-efficient server

By Lou Covey, Editorial Director

A hyper-efficient, high-performance data center was recently installed at a SLAC facility in Menlo Park, California, that operates at an energy efficiency nearly two orders of magnitude beyond the most efficient data centers in the world, and it fits on the back of a pickup truck.

While most server racks operate at a ratio of 1W of cooling power for every 3-4W of processing power, the system installed at SLAC operates at a ratio of 1W to 200W.  The system is a collaborative project between Intel, Emerson Network Power, Panduit, One Stop Systems, Inc., Smart Modular, Inc. and Clustered Systems.

The high-density, high-performance system, located in the LCLS (Linac Coherent Light Source) facility, comprises 128 servers in an 800mm-wide 48U rack containing four chassis with built-in cooling and a 105kW n+2 redundant power supply. Each chassis cools the servers directly via a pumped refrigerant, handling up to 20kW per 8U chassis, and up to five chassis fit into a standard IT rack. Here, the space for the fifth chassis was allocated to a high-performance switch.
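As a rough illustration of what those cooling ratios imply at rack scale (the 20kW-per-chassis figure and the two ratios come from the article; assuming all four chassis run fully loaded is our simplification):

```python
# Rough arithmetic behind the cooling ratios quoted above. The per-chassis
# load and both ratios come from the article; assuming all four chassis
# run fully loaded is our simplification.

def cooling_watts(compute_w: float, cooling_w_per_compute_w: float) -> float:
    """Cooling power (W) required for a given compute load."""
    return compute_w * cooling_w_per_compute_w

RACK_COMPUTE_W = 4 * 20_000  # four chassis at 20 kW each

conventional = cooling_watts(RACK_COMPUTE_W, 1 / 3)    # 1 W cooling per ~3 W compute
slac_system = cooling_watts(RACK_COMPUTE_W, 1 / 200)   # 1 W cooling per 200 W compute

print(f"Conventional rack: ~{conventional / 1000:.1f} kW of cooling")  # ~26.7 kW
print(f"SLAC system:       ~{slac_system / 1000:.1f} kW of cooling")   # ~0.4 kW
```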

“This is HPC, Cloud or even a Data Center in a box," said Phil Hughes, CEO of Clustered Systems. “A user can put a system anywhere there is power. No special facilities are required. We have calculated that capital expense can be reduced by up to 50% and total energy consumption by 30%. All investment can go into compute and not have to be shared with bricks and mortar.”

"To be able to pack such computing power into such a small space is unprecedented," said Amedeo Perazzo, Department Head Controls and Data Systems, Research Engineering Division at SLAC.

New Tech Press was given an exclusive look at the system. Watch the first part of the video here:


Quest for the $10K Chip: The wild west of design services

In the fourth installment of our Quest for the $10K Chip series, we return to the last part of the interview with EDA analyst Gary Smith to discuss the reemergence of design services and their role in reducing the cost of chip design. Smith believes the industry is headed toward a new generation of ASIC houses, and says the advantage of ESL is that it allows designs to be handed off at any of a number of sign-off points. He expects the design services segment is in for big growth in the coming years.