Trend

Outsourcing no longer a zero-sum game, with benefits for many

This is the latest in our ongoing series of articles on outsourcing, its benefits and downfalls. By Lou Covey, Editorial Director

Outsourcing product design and manufacturing has become an international way of life, despite the concern that it takes jobs away from one country in favor of another. As the practice has matured, it has become less of a zero-sum game, as long as the participants realize it is best treated as a cooperative exercise.

The decision to outsource any part of a product lifecycle is no longer a matter of which country a company will choose, but which countries. High-precision work is still the realm of the United States, with Western Europe a close second. Mass production of mid-quality products remains an acceptable choice, even though costs are starting to rise. And Central Europe is rising as the choice for high-quality, low-cost software design.

In the end, companies have a much greater choice in how and where they put together their products and services, and that tends to create jobs all around the world.

We spent some time talking to George Slawek, the managing partner of the software outsourcing company Eurocal Group, which keeps management, customer relations and sales in the United States, combined with software developers in Poland. He does not see the business as either/or: Poland offers options not available elsewhere, he says, but it is not the be-all and end-all of options. You can listen to the 10-minute discussion here.

http://www.spreaker.com/user/footwashermedia.com/outsourcing-has-benefits-for-all

(Full disclosure: Footwasher Media provides consultation to Eurocal Group on content and marketing strategy)

The Digital Divide, Net Neutrality and Badu Networks

Badu Networks is developing a technology, for distribution within the next two years, that will dramatically boost the capacity of public Wi-Fi areas, making it possible for many to access the internet at little or no cost. That is important not only for bridging the Digital Divide but also for solving part of the net neutrality problem. This is the first of a two-part video interview.

me!box Media changes game for social video

Product Review by Lou Covey, New Tech Press Editorial Director

Video has become an increasingly significant part of social media because it has a lot going for it.

First, it is passively engaging, just like television. The developed world is used to plopping down in front of a screen and allowing two-dimensional images to entertain, inform and cajole us into specific action. We are more likely to consume advertising content through video than through print, which makes it the most valuable advertising platform. Second, it is ubiquitous. You don’t see many people walking down the street reading a newspaper or magazine. That usually requires a cup of coffee, a sidewalk cafe and about 30 quiet minutes. It is not unusual, however, to see someone walking down the street watching a podcast on a mobile device. About five years ago I was taking the train to San Francisco, riding in the upper deck, and looked down to see a dozen people on the train watching video on phones and iPods. I knew video had crossed the chasm at that point.

But the passivity of the viewer is also video's one real negative. There is no easy call to action, as there is with online text. You can embed links into text, making it easy for the audience to go deeper into your content, and you can collect much more data from a text document than you can from a video.

Me!box Media has changed that.

me!box is a video platform that allows producers to embed links, documents, other videos... any kind of link you would put into a text document... into a video (See how we did it here.)

The platform adds a relatively small bar to the right of the video. Links appear in sequence, timed to when they are mentioned in the video. The platform allows the producer to pause the video at specific points, if necessary, so the audience can view the additional content, or to let the video continue while the audience looks at the additional visual material.
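
To make the idea concrete, here is a purely hypothetical sketch, in Python, of the kind of timed-link track such a platform might maintain. me!box has not published its internal format, so every name and field below is invented for illustration.

    # Hypothetical timed-link track for an interactive video.
    # me!box has not published its format; these names are invented.
    from dataclasses import dataclass

    @dataclass
    class TimedLink:
        at_seconds: float   # moment in the video when the link appears
        label: str          # text shown in the sidebar
        url: str            # link, document or embedded-video target
        pause_video: bool   # whether playback stops at this point

    links = [
        TimedLink(12.0, "Product datasheet", "https://example.com/datasheet.pdf", False),
        TimedLink(48.5, "Related interview", "https://example.com/interview", True),
    ]

    def links_due(elapsed: float) -> list:
        """Return links whose cue time has passed, in the order mentioned."""
        return [link for link in links if link.at_seconds <= elapsed]

    for link in links_due(50.0):
        print(link.label, "->", link.url)

The point of the sketch is simply that each link is tied to a timestamp and a pause flag, which is all a player needs to show links in sync with the narration.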

There are some video platforms that offer this type of feature on a very limited level. For example, YouTube allows you to embed links to other YouTube videos, or to let viewers subscribe to specific channels. But no platform I’ve been able to find creates the level of engagement that me!box provides.

This is not an experimental platform. HP and Intel have used me!box for their training programs and me!box Media is talking to B2B publishing companies about incorporating the platform into their online publications. This latter use is where I see the real value.

Footwasher Media did several NTP video interviews that were shared with multiple publications last year. The online editors loved them because they produced high traffic; in fact, they produced high traffic on the sponsor sites that ran them as well. But the love wasn’t there among the sponsors, for the very reason that the engagement was only passive. The clients preferred the text content because it drove active engagement by the audience.

Video is high value to online publications because it keeps people on the page, but it is low value to many advertisers because it doesn’t drive people to their sites or material. You can insert ads into the video but, again, there is no active engagement. Using me!box, publications can have the high, passive engagement of video, which creates more impressions, while advertisers can get traffic directly from the video, get the audience to download additional content, and get viewers to actively share.

One downside to the publishers is that they may have to do a page redesign to accommodate the expanded width of the me!box platform.

“We don’t call it a redesign,” Me!box Media CEO Mark Jacobs told me. “We say we are opening a larger value-add space.”

Jacobs has the data to back up the definition. Me!box videos get twice the play time, three times the engagement and sharing, and up to five times the clicks-to-action and impressions of YouTube or Brightcove videos (see chart), according to Jacobs.

“Typically, the me!box space becomes the highest return per square inch on a page. It’s like throwing out a free newspaper rack and replacing it with a video vending machine.”

Other than the rather annoying spelling of the company name (really? An exclamation point in the middle? My spell checker actually swore at me), this platform opens up the potential of social video for many applications.

You can find out more about me!box at www.meboxmedia.com. Tell ‘em the guys from Footwasher Media sent you.

Note: me!box is still a relatively new technology and is currently optimized only for the desktop/laptop world. The company is working on the third release of the product, this time starting from the mobile direction. Viewing video on the platform on handheld devices is still fairly sketchy.

Rising custom IC costs could eat into Apple's nest egg

By Lou Covey, Editorial Director

It's time to stop wondering what Apple is going to do with its cash reserve after it pays out dividends to stockholders. If what Cadence's Tom Beckley says about the next generation of chips holds true, Apple is going to need every dime to create the next generation of processors for the iPad and iPhone.

Beckley, senior vice president of R&D in the Cadence Custom IC group, was the keynote speaker at the 2012 International Symposium on Quality Electronic Design (ISQED) in Santa Clara, addressing "Taming the Challenges in Advanced Node Design."  Beckley pointed out that Apple has been the poster child for cost-efficient development and production, but even if every chip developer followed the "Apple Way," it would not put much of a dent in the total cost of developing the next generation of SoCs.

The A5 system on chip in the current Apple products, designed at 45nm, could come in under $1 billion to design and bring to market with effective control of the supply chain. Cost projections for a chip at 28nm (the next step) run as high as $3 billion. At 20nm, the cost could exceed $12 billion (if you build your own fab, which Apple could well afford). The Cadence exec stated that the cost of EDA tools (both purchased and developed) alone could run as high as $1.2 billion.

The evidence of these increasing development costs can be seen in the profit margins of the iPad. According to iSuppli, the $23 cost of the A5 chip in the new iPad is double the cost of the original A4. Why is the cost going so high? Because the way chips are manufactured is changing dramatically.

Beckley explained that the physics of making a semiconductor mask reaches a breaking point at the current most popular nodes, as the resolution of a photoresist pattern begins to blur around 45nm. Double patterning was created to address that problem at 32nm. "But everyone wanted to avoid doing it at 32nm because of the mask costs. They wanted to maximize their investment in lithography equipment."

The process splits the parts of the design where structures are too close together into two separate masks. It's an expensive process (especially when each mask costs around $5 million) and requires entirely new ways of creating the masks to avoid rule violations. But where the foundries were willing to let it slide at 32nm, they are requiring double patterning at everything below, Beckley stated.
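
Conceptually, the decomposition is a graph two-coloring problem: features that sit too close together "conflict" and must land on different masks. The toy Python sketch below shows that idea only; it is not how any commercial EDA tool actually works, and the feature names are invented.

    # Toy double-patterning decomposition as 2-coloring of a conflict graph.
    from collections import deque

    def decompose(features, conflicts):
        """features: list of feature IDs; conflicts: (a, b) pairs too close
        to print on one mask. Returns {feature: mask 0 or 1}, or None if
        the layout cannot be split across two masks."""
        graph = {f: [] for f in features}
        for a, b in conflicts:
            graph[a].append(b)
            graph[b].append(a)

        mask = {}
        for start in features:
            if start in mask:
                continue
            mask[start] = 0
            queue = deque([start])
            while queue:
                node = queue.popleft()
                for neighbor in graph[node]:
                    if neighbor not in mask:
                        mask[neighbor] = 1 - mask[node]  # opposite mask
                        queue.append(neighbor)
                    elif mask[neighbor] == mask[node]:
                        return None  # odd conflict cycle: layout must change
        return mask

    # Three features in a row, each too close to its neighbor.
    print(decompose(["m1", "m2", "m3"], [("m1", "m2"), ("m2", "m3")]))
    # {'m1': 0, 'm2': 1, 'm3': 0}

An odd cycle in the conflict graph is exactly the case that forces designers to rework the layout, which is part of why the rule checking Beckley describes is so expensive.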

These new techniques are driving development costs straight up the design chain. Beckley said he has close to 400 engineers in his unit working on tools just for 20nm design -- half of his entire staff.

The benefits of moving to the new node are just as tremendous, he said. Instead of millions of transistors, each chip will have billions, allowing for greater functionality in devices. "We expect improvements of 25-30 percent in power consumption and up to 18 percent overall performance improvement," he predicted.

"If what I'm saying scares you, it should.  There are many questions and issues to be ironed out," Beckley concluded.  "But at Cadence we are already working with a dozen customers on active test chips, which will increase to 20 very soon, and we are already working with customers for products at 10nm."

What are you doing to overcome the rising cost of custom ICs?  Join the discussion at element14.com

Raspberry Pi to narrow Digital Divide?

By Lou Covey, Editorial Director

The effort to close the Digital Divide -- the separation between those who can and can't afford access to the Internet -- has been a point of frustration for government and social activists for more than a decade. However, the rousing success of the Raspberry Pi computer launch on Leap Day could, with the right price point and distribution strategy, significantly close the Divide and punch a hole in commercial efforts to derail low-cost computing.

The United Nations established World Information Society Day (May 17) in 2001, and since then there has been a steady stream of programs and products aimed at closing the divide, from the One Laptop per Child (OLPC) non-profit organization to Intel's Classmate PC. Even the popularity of netbooks and tablets demonstrated the demand for low-cost "ports" to the internet. None, however, has made a significant dent in the problem. In the US, where the gap is the smallest, 22 percent of the population still lacks internet connectivity, a figure that has barely improved since 2000 (Internet World Stats).

Several issues continue to dog efforts to close the divide: usability, price and supply. OLPC ran into competitive issues with suppliers early on and is still struggling to bring its devices below $100 without significant charitable and government subsidy. Intel, in particular, cut ties with the organization over the price per unit and launched the Classmate PC with greater functionality, making it difficult for the OLPC offerings to gain significant market presence.

The long-anticipated Raspberry Pi, however, smashed the $100 barrier with a $35, fully internet-enabled, credit-card-sized device, manufactured and distributed by several sources, including Premier Farnell. The current version comes in a single, uncased configuration, powered by an ARM-based Broadcom system on chip, with two USB ports, 256MB of RAM, an HDMI port, an SD memory card slot and an Ethernet port, running the Fedora Linux OS.

The primary target for the device is education, especially below the college level, but according to Jeff Jussel of element14.com, Premier Farnell's engineering community, the foundation wants to build an open user community of experienced engineers first, to provide essentially free resources for students learning how to use the technology. "The Foundation really designed this for education - to give schools and students an exciting platform for rekindling interest in programming.  I think this is the most exciting computing platform for education since I had my first Apple IIe as a kid." (hear full interview with Jussel)

Hence the partnership with electronics distributors rather than chip and system manufacturers. Enter the first problem: While both the foundation and distributors anticipated a significant demand for the product, they had no idea how big it would be.

"We made a limited run," said Jussel, "just to see how fast they would go and we knew we would run out of inventory early on. We thought initially demand would be in the thousands."  That was an understatement.  Worldwide demand exhausted the inventory in two hours and caused the servers for both the distributors and the foundation to crash briefly.

"Demand was actually in the 10s of thousands," said foundation co-founder and executive director Eben Upton (hear interview with Upton).  "We knew we had something special.  We just didn't know how special."

Orders came in primarily from the developer community, as anticipated, leaving very little for education at the outset. Upton admitted that marketing efforts to education have focused almost exclusively on the United Kingdom, where the government has provided significant support. In the US, however, not only is Raspberry Pi seen as a misspelled dessert, alternatives like Beagleboard and Cotton Candy are also unknown outside of colleges. New Tech Press contacted several secondary education technology leaders, none of whom knew of any of the options.

Woodside High School in Redwood City, California, has been nationally recognized for its innovative approaches to using and teaching technology, including competing in national robotics competitions, but it has yet to use any of these options, and the faculty had not yet heard of Raspberry Pi. David Reilly, principal at WHS, said options like Cotton Candy, at more than $100, would be outside the budgetary constraints of even the most well-off schools, but the $35 Raspberry Pi might actually be doable.

Jussel said Premier Farnell, through its element14 online community, will soon launch a program in the US not only to raise awareness of the technology, but to get samples into the hands of educators by the start of the new school year.

Once the education market is penetrated, Upton hopes the next step is attacking the Divide. Upton said the foundation's program began six years ago to deal with the ever-declining programming skills of students entering the Cambridge University computer science program. A study showed that many of the students had no regular access to computers prior to enrolling, a problem that seems to be increasing in families below the poverty level in developed countries. The availability of a fully functioning, low-cost computing system could rapidly close the gap, as long as students have the ability to learn how to use it.

In the US, according to the AC Nielsen company, low-income minority families are more likely than middle-income white families to own smartphones and high-definition televisions, but less likely to own a personal computer. These families use their phones as their internet connection because phone and data service are more cost-effective than a high-speed cable connection. Upton said the Raspberry Pi was specifically designed to plug into a phone, keyboard and HDTV to keep the overall cost of the system below $100.

How can the engineering community and electronics industry use Raspberry Pi to help achieve the ultimate goal of closing the Digital Divide?  Join the conversation with your ideas at www.element14.com.

Raspberry Pi founder wants to close digital divide

The Raspberry Pi personal computer project began six years ago at the University of Cambridge in the UK, out of a study conducted by the university to discover why incoming students had ever-declining programming skills. The study showed that many of the students had no regular access to computers prior to enrolling, a problem that seems to be increasing in families below the poverty level in developed countries. The availability of a fully functioning, low-cost computing system could rapidly close the gap, as long as students have the ability to learn how to use it. This interview with foundation executive director Eben Upton, PhD, Computer Sciences at Cambridge, outlines the beginning of the program and where the foundation hopes to go with the technology.

Distributors facilitating spread of Raspberry Pi technology

The Raspberry Pi project, according to Jeff Jussel of element14, Premier Farnell's engineering community, will give schools and students an exciting platform for rekindling interest in programming, similar to what the Apple IIe did for launching a generation of computer scientists in the 1980s. In this interview, Jussel discusses how commercial distributors around the world, including element14, will provide low-cost kits and build a worldwide community of developers to enhance the experience for new programmers.

New iPad takes dictation

Apple's third generation of the iPad (imaginatively titled "iPad") was revealed today in San Francisco, featuring a remarkably sharper screen and faster processing thanks to a quad-core GPU and a dual-core processor. A battery 70 percent larger than the iPad 2's allows the device to maintain the same 10 hours of use.

 Apple CEO Tim Cook claimed the new display is sharper than the average high-definition television set, but the higher resolution means little for common low-resolution web images.

Cook pointed out at the outset of his presentation that the iPad in the fourth quarter outsold all PCs in the world, bolstering his claim that we now live in a post-PC world.

The new device also includes a high-res camera on the back, similar to the one used on the iPhone 4S, and there will be separate versions for the Verizon and AT&T LTE networks.

Software-wise there are several upgrades, but the popular Siri app won't be available immediately. Instead, Apple has included something new in the meantime: the ability to dictate and turn your voice into text. The company also said it would start letting users store movies in its iCloud remote storage service, so they can be accessed through the Internet by PCs and Apple devices. It already lets users store photos, music and documents in the service.

The new iPad will go on sale March 16 in the U.S., Canada and 10 other countries. A week later, it will go on sale in 25 more countries, making it the fastest product rollout in Apple history.

Is the iPad 3 the end of 3G as we know it? How will users fully leverage the capabilities of the new HD iPad without unlimited data plans, and with the threat of throttling? Tell us what you think. Leave your comments at element14.com

Poland a bright spot in EU fiscal woes

Recently, bad economic news has been an almost daily occurrence out of the European Union, but there are occasional bright spots that miss the regular news cycle. Poland seems to be one of them. Poland is obliged, under the terms of the Treaty of Accession 2003, to eventually replace its current currency, the zloty, with the euro; however, the country may adopt the euro no earlier than 2019. That's probably good news for Polish startups, which seem to be able to find plenty of government support and venture capital for a raft of innovative technologies.

Footwasher Media's Lou Covey sat down with three Polish startup companies touring Silicon Valley recently, on the hunt for partners and investors to help them expand into the US. The three companies were Ekoenergetyka, with electric vehicle charging technology; virtual environment maker i3d; and a chemical synthesis innovator called Apeiron.

This interview is the first in a series of reports and interviews on the state of European innovation and efforts of the European Commission's Digital Agenda.

Apple TV won't be AppleTV.

The growing relationship between Sharp and Apple revealed last week put to bed conjecture about whether Apple's next leap might be into television. It is. And that leaves an open question: has Apple gone mad?

The profit margin on TVs is razor thin at best. In 2007 the average screen sold for $982; this year it's $545, and, in many cases, TVs are a loss leader for electronics retailers (you make money on the cables, you know). Apple has always been about margin, and its phones, computers and tablets have carried much higher margins than just about anyone else's.

While people will buy a new computer or car or phone every couple of years, they tend to stick with one TV for a long time.  The Consumer Electronics Association has HDTV penetration at 87 percent, which means anyone who wants one probably already has one.  Apple will have to find a way to convince buyers that they really need a new TV, and a technology bell or whistle isn't going to cut it. Sales of 3D TVs are in the toilet and that was supposed to be the next big thing.

Steve Jobs dropped a hint to his biographer when he said he had finally figured out how to change the TV market. Like all of Apple's breakthroughs, it had to be in the arena of the user interface, as demonstrated with the release of the iPhone 4S: the voice interface Siri. The final Apple TV product may not be hardware at all, but voice recognition software. And after all the years that Apple has remained steadfastly against licensing its technology, Siri could become a standard in television and a steady stream of revenue for Apple.

In the past two years, TVs have become connected to the internet, cable systems and telephones through multiple input ports. But that has made them even more complex for the average user. A huge after-market industry for universal remotes relies on this complexity for its sales. In fact, the complexity of modern electronics is the final barrier to adoption for many.

But Siri could make controlling the various functions as simple as vocalizing a request. "Adjust sound for music." "Record CSI:Miami." "Show me email."  Combining the technology with Facetime would make it possible for the user to say, "Call Mom" and start a video chat on the main screen.  The vision of a communications hub in the home could be realized, not with new hardware and a bunch of cables, but with one app.
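
As a purely speculative illustration (Apple has published nothing about such an API), routing those requests is an intent-dispatch problem. A toy Python version might look like this, with every name invented:

    # Hypothetical voice-request dispatcher for a TV; all names invented.
    def handle_request(utterance: str) -> str:
        """Map a spoken request to a TV action via simple keyword intents."""
        text = utterance.lower()
        if "record" in text:
            show = utterance.split(" ", 1)[1]
            return f"Scheduling recording: {show}"
        if "call" in text:
            person = utterance.split(" ", 1)[1]
            return f"Starting video chat with {person}"
        if "email" in text:
            return "Showing email on the main screen"
        if "sound" in text:
            return "Adjusting audio profile"
        return "Sorry, I didn't catch that."

    print(handle_request("Record CSI: Miami"))  # Scheduling recording: CSI: Miami
    print(handle_request("Call Mom"))           # Starting video chat with Mom

The hard part, of course, is the speech recognition and natural-language layer above this, which is exactly what Siri would bring.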

That's pretty big.

What do you think the next big evolution in TV will be?   Join the discussion at www.element14.com

Fire may change the game... but not for Apple.

By Lou Covey, Editorial Director for Footwasher Media

The web is awash with reviews of the Kindle Fire, many positive (some scathingly negative), and the comparisons to the iPad are just as plentiful. The question that keeps coming up, however, is whether the Fire is a game changer in the tablet war. Probably not for Apple, but probably in the Android world, and definitely in the remains of RIM's empire.

In the iPad comparison, the Fire is the inexpensive, entry-level tablet for noobs. At $199, it is less than half the price of the iPad, which means that for people who want the media experience of a tablet at a bargain price, it's a good choice. Apple has, however, released the latest version of the iPod Touch at the same price, so users who don't care about screen size can still get a more flexible, more powerful product from Apple. The Fire performs more slowly, and using keypad apps will be difficult on the much smaller screen, barring significant improvements in touch technology.

The iPad, especially when paired with an after-market Bluetooth keyboard, makes an effective laptop replacement. There are even productivity apps that make it possible to use the iPad for word processing, spreadsheets and presentations. All of that is lacking in the Fire. As far as content goes, the Fire serves well as a distribution method for Amazon but, like most Android devices, it lacks the depth of apps in the iOS universe. So Apple execs won't be losing any sleep over sales of the Fire.

Google, on the other hand... The introduction of the Fire further fragments a developer community that is already divided between iOS, Android, Blackberry and even Windows Phone 7. Developers can bypass the Google Market and deal directly with Amazon, which is great for Amazon but not so much for Google. IDC just released a quarterly survey showing that developers are abandoning all other tablets in North America to create apps for the Fire. The trend seems to be going that way in Asia and Europe as well. So while Google was looking at Apple as its main competitor, Amazon has been snaking the market out from under it. Yoink!

The future for RIM's Blackberry is even grimmer. The same IDC report said Windows Phone 7 has now surpassed RIM as the third-place tablet OS developers prefer to work in. Along with the continuing decline in the overall device market, RIM seems to be hanging on by its fingernails. So the Fire IS a game changer for RIM. Its technology has just not kept up with the market's development. The Playbook was a joke, a little less funny than HP's tablet.

RIM is going nowhere... except into someone else's division. RIM still has a lot of value. It has a pretty loyal customer base, albeit a shrinking one. It has that bag of Nortel patents in wireless technology, the best security platform, and the best integration of MS Exchange and Lotus Notes. Microsoft could become a serious competitor to Android and iOS if it bought RIM, and that would change the game for everyone.

Sponsored by element14.com

RTOS Market in Turmoil

By Ann Steffora Mutschler, Senior Correspondent, New Tech Press

With engineers clamoring for all things Android and open-source, the RTOS market is experiencing some major changes, although that depends on whom you ask.

A new entrant to the market, FreeRTOS, garnered the top spot in UBM’s 2011 Embedded Market Study. However, Dr. Jerry Krasner of Embedded Market Forecasters has taken issue with those results.

In his blog, Krasner pointed out, “In EMF’s 2011 Annual Survey of Embedded Developers…developers reported using an in-house RTOS (20.1%), Android (19.3%), XPE (16.5%) and Windows CE (15.9%). FreeRTOS was used by 0.9% of respondents. From our perspective, the suggestion that FreeRTOS use would exceed that of in-house, Android, XPE, CE, or VxWorks use is beyond any reasonable reality check.”

This, of course, has set the stage for confusion among all parties as to which RTOS is really leading the pack.

There is no doubt, however, that internally developed RTOSs come out ahead of commercial ones. David Blaza, VP at UBM, said there is a “stubborn percentage of developers who stick with their home-grown OS and the reason for that is that they invest a lot of time and money in it and they know how it works – it does the job. Engineers are very, very conservative: they don’t really want to change. Just the sheer investment in code is monumental for them.” But, as Chris Rommel, VP at VDC Research, pointed out, engineers are slowly shifting away from internally developed RTOSs because “not every type of embedded device needs a robust RTOS.”

Users have stuck with in-house RTOSs mainly due to legacy assets and organizational issues. The scale of the organization or project also comes into play: licensing a commercial RTOS can be cost-prohibitive for some companies, he said. CE devices don’t have a real-time requirement; there is, however, a big difference between a simple office printer and the cockpit controls of an airplane. Rommel noted that OS choice is not always clear-cut, since the value of legacy work must be a consideration in the decision-making process.

Krasner’s data also shows that in-house RTOSs are still the biggest chunk of the market. “Year over year over year, as far as writing new stuff, it’s not worth their money, but there are an awful lot of people who have legacy stuff that they invested in 10 or 15 years ago, and it’s much cheaper to hang onto that. The in-house stuff is not people saying they are going to spend six months writing their own RTOS – it’s that they have it, it’s legacy, it’s proprietary, it’s got features that they want. In their mind, they are economizing what they already have instead of having to go out and pay.”

In terms of weighing various market research report results, UBM’s Blaza believes, “it is all about who is paying the piper, frankly. We just report what we see. We have the largest embedded audience in the world and we just report what we see and we had to put it in,” he said referring to the FreeRTOS results that some have questioned.

At the end of the day, the most critical data for engineering and marketing teams to get a handle on is what they want out of the market research they purchase or commission. As for vendor rankings…that may be best sorted out in a boxing ring.

Sponsored by element14.com

New EU movement looks to change how startups are done

The 12 Entrepreneurs, an unusual movement made up of start-up leaders, government representatives and service providers from Europe and the US, launched officially at the PlugandPlay Center in Sunnyvale recently, with the goal of developing a new model for funding and supporting startups. Founders of the group, Roman Tolic of Austria-based Hercules Film Network and Emmanuel Carraud of UK-based MagicSolver, formed the organization to build bridges between the centers of innovation around the world, finance visionary projects and create jobs in the US and Europe. While the initial group is made up of the “founding 12,” Carraud said, “Membership is open to anyone who is interested in the potential of building a bridge between Europe and Silicon Valley.”

“The 12 Entrepreneurs do not represent any single organization, but rather an ideal of inter-supportive entrepreneurship for the coming decade,” Tolic explained. “The 12 want to make the world a better place for entrepreneurs everywhere.”

Speakers at the event included Saeed Amidi, founder of PlugandPlay, and Ida Rose Sylvester, managing partner at Silicon Valley Link, both of whom highlighted the innovation potential in Europe and the struggle to bring a successful and cohesive approach to supporting startups on a pan-European basis. Sylvester pointed out that while there are literally hundreds of organizations in Silicon Valley representing separate regional development agencies, until now there has never been a concentrated effort to support all of Europe.

After the speakers concluded, Tolic announced that the Belgian government and the Vienna IT Enterprises had made formal financial commitments to the movement.

A highlight of the event was the signing of a manifesto outlining the group’s goals and purposes. Signers included entrepreneurs and government representatives from Austria, France, Spain, Germany, the United Kingdom, Romania, Poland, Portugal, Norway, Italy, the Czech Republic, the Centrope Region (encompassing Austria, Slovakia, the Czech Republic and Hungary), Sweden and the US.

Following the presentations, group members and the audience of more than 40 interested parties began a brainstorming session on what the next steps for the organization should be, including:

  • Expanded and financed access to resource partners
  • Open networking opportunities
  • Creative funding approaches
  • Crowdsourcing to resolve manpower issues
  • Co-innovation to roll up potential competitors into stronger companies
  • Encouraging investors to get in for the longer term
  • Finding better customers and making those customers better
  • Open university workshops in entrepreneurialism

Tolic and Carraud have left for Europe to attend to their businesses, but also to meet formally with the European Commission, government leaders and business organizations that have expressed interest in supporting the organization. In the US, the movement will be led by Prasad K. R., an angel investor in mobile software companies; Carles Cabret, a business development associate for the Spanish incubator Inspirit; and Lou Covey, a Silicon Valley communications strategist.

12 Entrepreneurs is on Facebook and LinkedIn.

Accellera announces new standards submissions

At the Design Automation Conference last month, Dennis Brophy, vice chairman of the 10-year-old standards organization, let slip that several standards resulting from last year's merger with SPIRIT would be submitted to the IEEE. (Yes, we know this is a month old, but it hasn't been announced yet, and we just fixed a major tech glitch in this site's database following a major upgrade.) Follow this link for the interview.

Verigy takes test a step forward

Finding something new that might actually help the semiconductor industry become profitable is like looking for three wise men and a virgin in Las Vegas, especially when you are going through Semicon. But last week, on the second day of the conference, I had three people tell me I should go look at what Verigy was showing. I’ve always been used to seeing testers that took up entire rooms and were hot enough to cook soup on (which I have done, but that’s another story). What I found was both fascinating and yet left me wanting more. That’s not a bad thing. It was a step in the right direction, that’s for sure. Here's the link. THIS IS AN UNSPONSORED PODCAST from New Tech Press

Verification IP: Solace for the Common Integration Nightmare?

Language barriers have been problematic since the dawn of civilization. Entire countries have split along spoken-language lines, and wars have been fought largely over the different cultures that build up around languages with entirely different concepts.

The culture within semiconductor design and development is no different, except the battles are being fought in the market rather than with physical weapons. Just as in political wars, certain languages will dominate and efficiencies will be achieved through standards, whether created or de facto.

This is particularly true in the verification world. With verification taking roughly 70 percent of chip development time, chip designers and developers must use every tool available to cut costs, reduce complexity, and deliver chips to market fast. Making sure all of these tools can communicate is critical.

Much has been written about the compatibility of intellectual property (IP) blocks, and the occasional nightmares of getting them to work in a system on chip or embedded design. Far less is known about the interoperability of verification IP (VIP), which is used to verify everything from specific IP blocks to entire systems. If VIP doesn't work according to plan, it is often because of language incompatibilities. The definition of language here is not just the verification language in which the IP was written; it often includes an understanding of the chip developer's methodology, which together create a language environment.

Done right, VIP has tangible results. It can help verify some IP or a portion of a design, and it can help facilitate system-level verification, which is becoming increasingly important as the complexity of systems on chip increases. Many developers buy VIP to verify a portion of the design, then continue to use it for system-level verification. In those cases, it is critical for the VIP to be flexible enough for reuse in multiple instances, especially at the bleeding edge of chip development, where creators can't anticipate all of the permutations or uses of their IP.

Problems are compounded when using VIP from multiple sources, the same as with IP from multiple sources. If the different sources use incompatible methodologies or languages, chaos erupts, further slowing the chip verification process and adding big cost overruns into the equation. Incompatibilities may stem not only from language differences, but also from how the language is applied, as well as the overall architecture or approach.

Verification Goals

There are two basic challenges in verification. The first is checking what the chip designers created against what the architects intended. The second is making sure that overall design actually works. Both challenges have become much more difficult as the complexity of chips goes up. At advanced process nodes, power and timing are intertwined like a Gordian knot, where multiple power domains intersect with multiple cores being powered on and off at rapid and sometimes unpredictable intervals.

Many of the languages created to solve these problems are far from complete, which drives the mixing of tools, IP and VIP based on multiple languages and methodologies, as well as the creation of new languages. For example, real-world progress of the SystemVerilog standard continues to lag behind the marketing.

"SystemVerilog is on a strong path to acceptance, even as further extensions and enhancements are being developed, specifically in the methodology and VIP interoperability areas," says Mark Gogolewski, chief technology officer and chief financial officer at Denali Software, which makes memory and protocol VIP. That is particularly true in the United States, but he says the trend is less clear-cut in other parts of the world which is split across SystemVerilog, e, SystemC, Perl, Verilog and VHDL. SystemVerilog was developed under Accellera as an extension to Verilog with donations from member companies and user-driven enhancements.  It was then transferred to IEEE and quickly ratified under its Corporate Standards Program.

The Big Picture

The real challenge is getting VIP blocks to interoperate at the system level, which requires a significant amount of integration. Testing the IP in systems to ensure it performs as intended, as well as working with other components, can be extremely difficult because of the huge amount of data involved in creating chips, and vague parameters provided by many IP vendors. VIP can be provided with IP blocks or developed by independent vendors or the chip developer. In all cases, however, the biggest challenges involve integration at the system level.

Verification languages are used to create the test benches for these chips and set up the test cases so that a chip’s functionality is predictable. That job has become so complicated, however, that shortcuts are necessary.  This is where VIP comes into play. VIP can be used for everything from qualifying IP for standard protocols such as PCI Express or USB to creating an entire test environment for much larger portions of a chip.

But VIP brings its own headaches. Unexpected problems can arise if those VIP blocks do not fit into the same methodology, making it more difficult to determine whether the IP they test actually behaves the way it was intended to work when it is put into a complex system on chip.

Pinpointing Problems

“The main problem with verification IP is not only to make sure it’s compliant with a protocol, but also that the functionality of the block is what you expected,” says Cyril Spasevski, chief technology officer at Magillem Design Services, based in Paris, France.

“The rules from the IP provider are often vague,” Spasevski says. “That means you have to build complex test benches.”

That also means the IP and the VIP not only have to be flexible enough in design to work together, but they must work as planned across a whole system. The more functions that are added into a system on chip, the harder this becomes. And when IP is mixed from multiple companies, levels of compliance with standards or protocols are reduced to relative terms.

“If you have a subsystem that’s ARM-based, don’t even try to mix it with something else,” says Spasevski. “It takes too much time to validate.”

"I’m not sure that it’s really a problem with VIP as a category, but it’s a problem that the right VIP can help solve," said Gogolewski.   "The idea here is that if an engineer did not design a block to be 100% compliant to a protocol, the VIP needs to be flexible enough to accommodate those situations, otherwise the engineer would have to write a VIP from scratch to deal with his deliberately non-compliant block.  So, in effect, the VIP should attempt to enforce the protocol strictly, but allow end-user to dictate exceptions.  Of course, this makes the creation of such VIP much more complicated."

Spasevski contends it is easier to work with IP and VIP from a single vendor, but there are mixed opinions on that front. At the very least, VIP must be built on a platform that is flexible enough to be mapped onto different system methodologies. That sounds simple enough on paper. But many times the creators of IP blocks do not give much detail about the way those blocks can be used, and the more pieces of IP that are purchased, the less well understood the interactions. The real value of VIP is its adaptability in these situations.

VIP can be flawed, however. JL Gray, a verification consultant based in Texas, said if an engineer has to verify the VIP, he probably should have written it from the outset because he’s going to have to debug it, anyway.  But he said the reality is that it’s still simpler to do that than have to create everything from scratch.

“A lot more people need to buy VIP than realize it,” Gray said, noting that’s what starts all the integration and protocol issues. “The question everyone has to ask is, ‘Which environment is the base environment?’”

New Definitions for Pain

While tools and standards are being developed to reduce the pain of doing verification — everything from functional verification platforms to verification IP — there will always be a high pain threshold. Functionality is added to chips at each new process node, because there is a lot of space to do that and because the development cost of new chips is so high that it’s more economical to put everything on one chip, even if it isn’t all enabled for every device. Adding new functionality increases chip complexity and greatly increases the volume of data that has to be verified.

The best that can be done is to try to add some order into this gargantuan task, and that’s exactly what standards groups such as Accellera and OCP-IP are trying to do. In particular, they are trying to slash the amount of work that needs to be done by allowing previously verified blocks to be re-used in new chips—both IP and verification IP (VIP).

Accellera, which recently completed a standard for a property specification language for the formal specification of hardware, is currently working on a standard to make VIP interoperable, a project started in May with more than 80 companies participating.

“What we’re seeing is that designs are collaborative projects,” said Shrenik Mehta, chairman of Accellera. “Sometimes you want to mix and match components.”

The first challenge in that arena is defining the problem and then figuring out a solution that is flexible enough to work across many different environments and methodologies. “What we’re trying to determine is how to take a test bench and make it interoperable,” Mehta said. “But if you give a choice to every engineer you’re going to get different answers from each of them. It’s like one story being written by ten different reporters. None of them is exactly the same.”

But at least part of the difference in the verification world is rethinking of the entire verification process. “This isn’t about the process node,” he said. “It has to do with building a design differently. You need a different discipline.”

OCP-IP Chairman Ian Mackintosh believes that the discipline should focus on interfaces that can ensure re-usability across a wider range of architectures and methodologies.

“The number of things that can be standardized is huge,” said Mackintosh. “But the number of things that are standardized is tiny. Getting broad-based collaboration in this space is not possible at this time, and I don’t see that changing anytime in the near future.”

As a result, OCP-IP is focused on the interfaces rather than what’s behind them. Mackintosh said that if there is fundamental compatibility on language, content and functionality, then what is really needed from standards groups is broad-based verification technology that will support any language or interface they define.

“Essentially, everything you’re doing is verification, whether it’s hardware or software,” Mackintosh said. “If the ideal in verification is reducing the time, then maybe we’re thinking about the problem the wrong way. Verification should always be a large portion of the design cycle. It’s the tools to implement it that have failed. All you’re doing with verification is verifying that what has been done is correct.”

Verification engineers take a different slant on that idea. They contend that the real problem isn’t in the verification itself. The problem arises because they have to wade through masses of data to find what needs to be debugged. The debugging is well understood once the problem is located. That’s one of the reasons the large EDA vendors are busy creating functional verification platforms, which raise the level of abstraction up a notch, and it’s why verification IP is so attractive for debugging specific IP blocks.

Both Accellera and OCP-IP agree that a different approach is needed for building chips in the first place. OCP-IP believes the future work is in software, and that’s why interfaces are so important. “Hardware development has to be more sophisticated, so you need more to re-use,” Mackintosh said. “The bulk of the work will be in software. If you’re product driven, that’s going to be a hard problem to solve. If you’re market driven, it’s easier because you can figure out three years from now that you’re going to need this function at this cost.”  Accellera, meanwhile, is focused on creating the foundation for IP and VIP.

“Today, a lot of IP is standards-based,” said Mehta. “That’s true for the processor, I/O and controllers. A lot of IP is based on industry standards, and you embed that with proprietary IP in a language like C or SystemVerilog. We see the emergence of both of those, which should provide enough interoperability between different VIP blocks. But if what you want to do is add in IP and VIP, you should know whether it is fully verified and tested. That has nothing to do with standards.”

He said the standards will simply make it easier to connect standardized blocks of IP and VIP together. “Time will tell if this is successful,” he said. “But at least it should ease the pain.”

Getting Over the Verification Hurdle

Verification is likely to remain the single most time-consuming part of developing chips for the foreseeable future, for three reasons.  First, it is getting more expensive to create complicated chips, because more functionality is being added.

For example, the immensely popular Apple iPhone™ utilizes this kind of built-in future programmability. Inside Apple retail stores are signs warning customers that iPhones may be permanently damaged by software upgrades if they are unlocked using non-Apple software downloaded from the Internet. Hackers often ignore that warning, but doing so means they miss "gee-whiz" functions that Apple introduces later. Apple and other vendors build in features ahead of time because it’s cheaper to do it once, verify the functionality and debug it, rather than trying to build and verify successive iterations of chips with new functions added with each new release of the product. Under the right circumstances, adding those dormant functions is very feasible from a real-estate perspective at advanced process nodes.

But verifying more functionality can also greatly increase the time it takes to develop a new chip. Coverage models have to be developed to make sure the chip works and that the intent of the design is carried through the development cycle.

That leads to the second reason why it takes so long to verify a chip. More capabilities mean a more complex design. Prioritization of buses must be set up, for example, to ensure that the phone function on a multifunction cell phone takes precedence over an MP3 player. In addition, when the phone is not in use, it needs to power down to conserve battery power.

Getting these seemingly simple tasks to work in sync, and still have enough battery power left over at the end of the day, is no simple task. Add to that such concerns as signal integrity and maintaining signal strength and it gets even more complicated. Verifying each one of these functions has to be done independently, and as part of a system-wide verification process.

All of this creates data that has to be sifted through to find problems, which leads to reason number three. There simply is too much data to easily identify bugs and debug only that portion of the data that is causing problems. While most of the big EDA companies are working on higher-level languages for functional verification, the tools are still in their infancy.

What's in the Toolbox?

Verification engineers use everything at their disposal to speed up the process. That means simulation of hardware and software, formal verification, and VIP. The VIP is a relative newcomer to the process, where tools are outgrowths of technology developed over the past 10 to 15 years.

What’s unusual about VIP is that it can be used to check a specific piece of IP, or to help with overall system-level verification. It’s also incredibly hard to create, because writing VIP requires the VIP creator to understand the IP at least as well as, if not better than, the people creating the IP and the overall process in the first place. It’s almost like adding a complementary version of the IP, where every parameter that can be imagined is configurable and can be tested.

“For the PCI Express specification, there are 1,000 pages of documentation and it’s all nastily complex,” said Gogolewski. “We have a defined space that includes 500 rules to make sure it’s a valid configuration, then thousands of configurable assertions. For our customer base, building your own VIP is only viable if they’re the very first to market. And, you can’t just be a good programmer or a verification engineer anymore. Now you have to be an expert in both protocols and verification. The number of interfaces is increasing and the complexity of the interfaces is increasing.”

Setting Coverage Models

One of the biggest concerns among verification engineers is the need to establish complete coverage metrics so they know everything that needs to be tested actually gets tested. Shankar Hemmady, a verification engineer at Synopsys, wrote in his recent book on verification that metrics must address code, functional and assertion coverage.  For most designs, this has the complexity and variability of a matrix.
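
One common way to visualize Hemmady's point is to lay the metrics out as a block-by-metric matrix. The Python sketch below is illustrative only, with invented block names and numbers; it is not drawn from his book.

    # Coverage matrix: design blocks vs. coverage metrics (invented data).
    metrics = ["code", "functional", "assertion"]
    coverage = {
        "pcie_ctrl":  {"code": 92.0, "functional": 78.5, "assertion": 88.0},
        "ddr_phy":    {"code": 97.5, "functional": 91.0, "assertion": 95.0},
        "dma_engine": {"code": 85.0, "functional": 62.0, "assertion": 70.5},
    }
    GOAL = 90.0  # assumed sign-off threshold for this example

    print(f"{'block':<12}" + "".join(f"{m:>12}" for m in metrics))
    for block, row in coverage.items():
        cells = "".join(
            f"{row[m]:>11.1f}" + ("*" if row[m] < GOAL else " ")
            for m in metrics)
        print(f"{block:<12}{cells}")
    # '*' flags cells below the goal, making the gaps easy to spot.

A matrix like this makes the "complexity and variability" concrete: each block can pass one metric while failing another, and every cell has to be driven to its goal before sign-off.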

Joe Sawicki, vice president and general manager of Mentor Graphics’ design-to-silicon division, said there are three business contexts for chip designers: low power, a shorter opportunity window and a shrinking average selling price. At the same time, designs are increasing in complexity, manufacturing variability must be dealt with throughout the design cycle, and the cost of testing a chip is increasing.

In this context, condensing the time it takes to verify a design, the chip, the software and everything associated with the chip is no longer optional. It is a requirement, and an extremely difficult one. That's probably why, even though there is little insight into the actual market size of VIP, a few companies are making money in the business.

KC Rajkumar, EDA and IP analyst for Royal Bank of Canada Capital Markets, said the market for verification IP is one of the few niches in electronics that has reached maturity and consolidated to a point of equilibrium. But while there are only a few players (Denali and Synopsys are all that really remain), it is an important technology, because as designs become more complex, they become increasingly hard to visualize.

VIP, done right, can help verify some IP or a portion of a design, and it can help facilitate system-level verification. So whether you buy it or grow it in-house, it is critical for the VIP to be flexible and robust.