element14

Raspberry Pi to narrow Digital Divide?

By Lou Covey, Editorial Director

The effort to close the Digital Divide -- the separation between those who can and can't afford access to the Internet -- has been a point of frustration for government and social activists for more than a decade. However, the rousing success of the Raspberry Pi computer launch on Leap Day, with the right price point and distribution strategy, could significantly close the Divide and punch a hole in commercial efforts to derail low-cost computing.

The United Nations established World Information Society Day (May 17) in 2001, and since then there has been a steady stream of programs and products aimed at closing the divide, from the One Laptop per Child (OLPC) non-profit organization to Intel's Classmate PC.  Even the popularity of netbooks and tablets demonstrated the demand for low-cost "ports" to the Internet.  None, however, have made a significant dent in the problem.  In the US, where the gap is the smallest, 22 percent of the population still lacks internet connectivity, a figure that has barely improved since 2000 (Internet World Stats).

Several issues continue to dog efforts to close the divide: usability, price and supply. OLPC ran into competitive issues with suppliers early on and is still struggling to bring its devices below $100 without significant charitable and government subsidy.  Intel, in particular, cut ties with the organization over the price per unit and launched the Classmate PC with greater functionality, making it difficult for the OLPC offerings to gain significant market presence.

The long-anticipated Raspberry Pi, however, smashed the $100 barrier with a $35, fully internet-enabled, credit-card-sized device, manufactured and distributed by several sources, including Premier Farnell.  The current version comes in a single, uncased configuration, powered by an ARM-based Broadcom system on chip, with two USB ports, 256MB of RAM, an HDMI port, an SD memory card slot and an Ethernet port, running the Fedora Linux OS.

The primary target for the device is education, especially below the college level, but according to Jeff Jussel of element14.com, Premier Farnell's engineering community, the foundation wants to build an open user community of experienced engineers first, to provide essentially free resources for students learning how to use the technology. "The Foundation really designed this for education - to give schools and students an exciting platform for rekindling interest in programming.  I think this is the most exciting computing platform for education since I had my first Apple IIe as a kid." (hear the full interview with Jussel)

Hence the partnership with electronics distributors rather than chip and system manufacturers. Enter the first problem: While both the foundation and distributors anticipated a significant demand for the product, they had no idea how big it would be.

"We made a limited run," said Jussel, "just to see how fast they would go and we knew we would run out of inventory early on. We thought initially demand would be in the thousands."  That was an understatement.  Worldwide demand exhausted the inventory in two hours and caused the servers for both the distributors and the foundation to crash briefly.

"Demand was actually in the 10s of thousands," said foundation co-founder and executive director Eben Upton (hear interview with Upton).  "We knew we had something special.  We just didn't know how special."

Orders came in primarily from the developer community, as anticipated, leaving very little for education at the outset.  Upton admitted that marketing efforts to education have been focused almost exclusively on the United Kingdom, where the government has provided significant support.  In the US, however, not only is Raspberry Pi seen as a misspelled dessert, but alternatives like BeagleBoard and Cotton Candy are also unknown outside of colleges.  New Tech Press contacted several secondary education technology leaders who did not know of any of the options.

Woodside High School in Redwood City, California, has been nationally recognized for its innovative approaches to using and teaching technology, including competing in national robotics competitions, yet it has not used any of these options, and the faculty had not heard of Raspberry Pi. David Reilly, principal at WHS, said options like Cotton Candy, at more than $100, would be outside the budgetary restraints of even the most well-off schools, but the $35 Raspberry Pi might actually be doable.

Jussel said Premier Farnell, through its element14 online community, will soon be launching a program in the US not only to raise awareness of the technology, but to get samples into the hands of educators by the start of the new school year.

Once the education market is penetrated, Upton hopes the next step is attacking the Divide.  Upton said the foundation's program began six years ago to deal with an ever-increasing lack of programming skills among students entering Cambridge University's computer science programs.  A study showed that many of the students had no regular access to computers prior to enrolling, a problem that seems to be increasing in families below the poverty level in developed countries.  The availability of a fully functioning, low-cost computing system could rapidly close the gap, as long as students have the ability to learn how to use it.

In the US, according to the AC Nielsen company, low-income minority families are more likely to own smartphones and high-definition televisions than middle-income white families, but less likely to own a personal computer.  These families use their phones as their internet connection because the phone and data service are more cost-effective than a high-speed cable connection.  Upton said the Raspberry Pi was specifically designed to be plugged into a phone, keyboard and HDTV to keep the overall cost for the system below $100.

How can the engineering community and electronics industry use Raspberry Pi to help achieve the ultimate goal of closing the Digital Divide?  Join the conversation with your ideas at www.element14.com.

The roads less traveled around multicore walls

By Loring Wirbel and Lou Covey. A New Tech Press Report from Footwasher Media

For the better part of two decades, the processor industry has been running pell-mell down the road of multicore design, packing more and more processor cores on a single chip.  But a funny thing happened on the way to the personal supercomputer.  It didn't work.

In 2007, a DARPA study on the potential of an exascale computer concluded that with the current architecture of processors, in other words x86 and PowerPC, we could not get there from here.  As a result, in January 2012, DARPA announced the Power Efficiency Revolution for Embedded Computing Technologies (PERFECT) program to figure out what to do next.

Dr. David Patterson, a RISC pioneer and a leading voice in the development of multicore processors, suggested that the problem could be solved using FPGAs as an experimentation platform in a program he called Research Accelerator for Multiple Processors (RAMP) at UC Berkeley.  In a series of RAMP presentations, and in a related 2006 Berkeley white paper, 'The Landscape of Parallel Computing Research: The View from Berkeley,' Patterson said that the power consumption of the logic in the CPU, converted into heat, limits performance.  Since any heat that cannot be removed by a heat sink reduces the performance of the transistors, the results are:

  • If you increase the system clock to boost performance, heat rises, transistors slow down
  • If you increase the memory bus width, you increase the number of transistors; heat increases and transistors slow down
  • If you increase instruction-level parallelism (ILP) so more can get done at the same time, you increase the heat and...
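These three effects all trace back to the standard first-order model of dynamic CMOS power, roughly P = C x V^2 x f (switched capacitance times supply voltage squared times clock frequency).  The short Python sketch below is purely illustrative -- its numbers are invented, not taken from the Berkeley paper -- but it shows how raising the clock (which in practice usually means raising the voltage too), widening the bus, or adding ILP machinery all inflate the power the heat sink has to remove.

    # Rough, illustrative first-order model of dynamic CMOS power: P ~ C * V^2 * f.
    # All figures are made up for illustration; they are not from the Berkeley paper.

    def dynamic_power(switched_cap_nf, supply_v, clock_ghz):
        """Dynamic power in watts, given effective switched capacitance (nF),
        supply voltage (V) and clock frequency (GHz)."""
        return (switched_cap_nf * 1e-9) * supply_v**2 * (clock_ghz * 1e9)

    baseline = dynamic_power(switched_cap_nf=30, supply_v=1.0, clock_ghz=2.0)  # 60 W

    faster   = dynamic_power(30, 1.1, 2.5)  # 25% faster clock, higher voltage -> ~91 W
    wider    = dynamic_power(45, 1.0, 2.0)  # wider bus, more switching transistors -> 90 W
    more_ilp = dynamic_power(40, 1.0, 2.0)  # extra ILP logic (issue queues, ROB) -> 80 W

    for name, power in [("baseline", baseline), ("faster clock", faster),
                        ("wider bus", wider), ("more ILP", more_ilp)]:
        print(f"{name:12s} {power:5.1f} W  (+{100 * (power / baseline - 1):.0f}% vs baseline)")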

The result of the RAMP effort?  "The memory wall has gotten a little lower and we seem to be making headway on ILP, but the power wall is getting higher," Patterson said.  One anonymous engineering wag put it more succinctly:

"We're screwed."

Throughout this process, however, there have been voices, crying by the roadside as it were, "Go back! You're going the wrong way!"  And it may be time for those voices to be heard.

Going back to the turn of the century, companies like UK-based Celoxica were pointing out the weaknesses of the multicore approach in contrast to a heterogeneous approach incorporating FPGAs.

"The first problem is the architecture of the standard processor doesn't lend itself to parallelism," Said Jeff Jussel, former VP of marketing for Celoxica and current senior director of technical marketing for Element14.  “No matter how many processors you put on a chip, we are not seeing any one algorithm processing faster because it is too hard to program access to all the processors.  What you end up with is a system that can do 12 things, but no actual system speed increase with an incredible power increase."

Celoxica's approach, according to Jussel, was to break up the algorithm over multiple processors inside the FPGA with dedicated memory.  "You end up with millions of tiny processors optimized for the algorithm.  When the algorithm changes, you just reprogram the FPGA."

At the time, the problem was not immediate and the market was entrenched. Celoxica ultimately spun off its tool business, which eventually landed in the hands of Mentor Graphics, and kept its board development business.  That business was always focused on one-off applications, ranging from mining to financial services.

Patterson said their work in RAMP showed that an FPGA approach, especially for highly focused applications, was "more and more attractive" but there were two specific obstacles: power and design tools.  "We found the CAD tools available were just not that easy to work with.  We were actually surprised at how difficult it was and it formed a major stumbling block.  And while FPGA providers have gotten better with power even as the number of transistors increase, they still need to get better with it before it can be a mainstream answer."

The reprogrammable nature of an FPGA has allowed several board- and system-level companies, ranging from Wall Street FPGA in financial analysis markets to Convey Computing in scientific analysis, to assign ARM cores, or small hard-configured RISC blocks like MicroBlaze, to a variety of tasks, with subroutines handed off to coprocessors on the same FPGA.  But the dream of a fully retargetable FPGA system, touted in the mid-2000s by companies like QuickSilver, has been largely deferred because of the problem of developing parallel multithread software for such changing architectures.

Think "many" not "multi"

ARM walks into the fray almost in a position of neutrality.  While it still endorses the validity of Intel's homogeneous approach to multicore, as early as last fall it began discussing a "many-core" as opposed to a multicore approach.  According to John Goodacre, program manager in the ARM Processor Division, the traditional approach of using full-performance cores still has a long road ahead of it, especially for dual- and quad-core designs, but it may not be necessary, especially in some consumer applications, to use the large cores.

"Mobile applications are full of little processes." Goodacre explained."If you put all those processes into four or eight big cores, you don’t actually see a big performance improvement, but you see quite a big negative power impact. A Many-/multi-processing approach duplicates the capability of a big homogeneous multicore design that is inherently more power efficient."

Goodacre points to ARM's big.LITTLE concept, which marries an A15 core, capable, he claims, of running most of today's dual-core-type software, with four small A7 cores in a power-efficient formation.

"This approach is mostly targeting toward power, but it’s also giving the next generation programmers the concept that there’s also a lot more power efficient processes available for that next generation of software.  The first next generation software I anticipate will be in gaming, but as time progresses and more and more availability of more cores, there will be more software available."

From the software side

Architecture experts developing RISC instruction sets for a mix of server and embedded applications – dominated by ARM, but also including MIPS, Tensilica, and other companies – have offered their cores to standard IT developers, to FPGA and ASIC vendors, and to embedded specialists.  Xilinx and Altera, among other FPGA vendors, say they see a mix of SMP and asynchronous RISC implementations.  Some ARM licensees, including Freescale Semiconductor, Texas Instruments Inc., Qualcomm Inc., and Broadcom Corp., utilize ARM as part of non-SMP designs that use a central control-plane processing environment, in conjunction with on-chip coprocessors for functions such as encryption, deep packet inspection, and fast list searches for tasks such as routing.

See the full story at element14.com and additional coverage at EDN.com.

Apple TV won't be AppleTV.

The growing relationship between Sharp and Apple that was revealed last week put to bed conjecture about whether Apple's next leap might be into television.  It is.  And that leaves an open question: Has Apple gone mad?

The profit margin on TVs is razor thin at best. In 2007 the average screen sold for $982; this year it's $545 and, in many cases, TVs are a loss leader for electronics retailers (you make money on the cables, you know).  Apple has always been about margin, and its phones, computers and tablets have carried much higher profits than just about anyone else's.

While people will buy a new computer or car or phone every couple of years, they tend to stick with one TV for a long time.  The Consumer Electronics Association has HDTV penetration at 87 percent, which means anyone who wants one probably already has one.  Apple will have to find a way to convince buyers that they really need a new TV, and a technology bell or whistle isn't going to cut it. Sales of 3D TVs are in the toilet and that was supposed to be the next big thing.

Steve Jobs dropped a hint to his biographer when he said he had finally figured out how to change the TV market.  Like all of Apple's breakthroughs, it had to be in the arena of the user interface, demonstrated with the release of the iPhone 4s: the voice interface Siri. The final Apple TV product may not be hardware at all, but voice recognition software. And after all the years that Apple has remained steadfastly against licensing its technology, Siri could become a standard in television and a steady stream of revenue for Apple.

In the past two years, TVs have become connected to the internet, cable systems, and telephones through multiple input ports. But that has made their use even more complex for the average user.  A huge after-market industry for universal remotes relies on this complexity for its sales.  In fact, the complexity of modern electronics is the final barrier to adoption for many.

But Siri could make controlling the various functions as simple as vocalizing a request. "Adjust sound for music." "Record CSI: Miami." "Show me email."  Combining the technology with FaceTime would make it possible for the user to say, "Call Mom" and start a video chat on the main screen.  The vision of a communications hub in the home could be realized, not with new hardware and a bunch of cables, but with one app.

That's pretty big.

What do you think the next big evolution in TV will be?   Join the discussion at www.element14.com

 

Can solar survive the Solyndra aftermath?

By Lou Covey, Editorial Director, Footwasher Media

The recent collapse of a few high-profile solar energy companies, like Solyndra and Beacon Power, has caused even the most ardent fans of alternative energy to ask, "Can this industry survive?"  The answer is a resounding yes and no.  It all depends on what government at all levels does.

Current public impressions of the health of any industry are colored by recent history.  The financial failings of companies and industries considered "too big to fail" are what most people think of when hearing news about solar.  But unlike the auto industry, with a population of three major players, the solar industry is filled with hundreds of start-ups struggling to establish themselves.  Even if one, two or two dozen go down, it is still well populated.

"Although panel manufacturing is in trouble, the solar industry is doing relatively okay." said Chirag Rathi, a senior consultant on the energy industry for Frost and Sullivan. "This is largely due to the advent of solar leasing companies in the U.S. One such company, SolarCity, was even give a contract to install solar power on up to 160,000 military homes. The program was supposed to be supported by the Department of Energy (DoE), which had extended a conditional commitment for a partial guarantee of a $344 million loan to support the project."

Government subsidy and purchase are the key to whether the industry thrives. The DoE recently announced a new initiative to fund solar collection technology development and the Department of Defense (DoD) is under congressional mandate to reduce fossil-fuel consumption by 50 percent.

The reality is that all forms of energy production are heavily subsidized by governments throughout the world.  China has invested hundreds of billions of dollars in its solar panel industry.  Spain's financial difficulties are directly tied to the 100 percent subsidy it gave to the industry there, a subsidy it can no longer support.  Even Germany, relatively healthy in the world economy, is struggling to maintain its levels of support to the industry.  In the US, most of the government support (federal, state and local) is actually tied to the installation industry.

"The purpose of government subsidies for renewables is to reduce costs and make them economically viable alternatives to fossil fueled electricity generation." said Jay Holman, research manager for solar energy strategies at IDC. "As the cost of electricity from renewables drops, it is natural that the subsidies drop as well: this is an indication of progress. The trick with subsidies is to encourage industry growth without placing too heavy a burden on electricity ratepayers or taxpayers. A flat, constant subsidy won't do the trick: it needs to drop in line with falling costs."

Holman said Germany and Italy automatically reduce subsidies based on the amount of solar installed in the previous year, which provides transparency and predictability for the market.

"In the US, however, we send the issue back to congress every few years and let them duke it out. That is an incredibly inefficient approach that makes the subsidy situation extremely difficult to predict."

Holman concluded that what the US industry needs is a long term subsidy plan that makes automatic subsidy adjustments based on the rate of installations and/or the cost of electricity from renewables.

Solyndra collapses.  Why are the generals smiling?

imPARTing Knowledge 3: Understanding Component Engineering - Microcontrollers

By Steve Terry, SK Communications, Advisor to ComponentsEngineering.com

Many small board designs benefit nicely from the use of a microcontroller.  But selecting an appropriate one for a particular design often brings on the feeling of "Where do I begin?"

This discussion limits its focus to low-end microcontrollers.  For this purpose, we'll stick with 8-bit devices.  8-bit simply means that internal processing only operates on 8 bits at a time.  As one would expect, 16- and 32-bit micros operate much faster, as they process more bits of data with each instruction.  To be sure, much of the same thinking applied to 8-bit microcontrollers can be applied to the 16- and 32-bit devices;  however, cost, size, capabilities, performance, feature integration, and a host of other upscaled attributes quickly make it increasingly difficult to generalize on approach and applicability.

That said, even in the 8-bit microcontroller world, there are many highly specialized devices.  So, to avoid confusion, we'll leave that subject for a future discussion and stay with the garden variety parts for now.  Quite often, if your design truly calls for one of these specialized micros, there's not going to be much choice, and you'll likely be familiar with those choices already, so you should be okay.

What is a microcontroller, anyway?

The key trait that distinguishes a microcontroller from a microprocessor is that it's a microprocessor with a smorgasbord of built-in peripherals.  For relatively simple board designs, such as controller boards, those embedded peripherals can save a lot on design effort and BOM (Bill of Materials) cost.  Microcontrollers are commonly referred to as MCUs (for "microcontroller unit");  it's nice and short and kinda rolls off your tongue, so we'll use it here, too.

Base MCU feature sets typically include three types of memory (flash, RAM, and EEPROM), general purpose I/O (GPIO), support for various communications ports (UART, I2C, CAN, etc.), timers/PWMs, ADCs, DACs, internal oscillators, temperature sensors, and low power management.  From there, the feature sets branch out widely.  And this is really where the details come in to play for component selection.

Establishing requirements

With so many vendors and varieties of low-end micros, you may find it surprising that a good percentage of them will likely satisfy your design requirements.  But even though so many will usually do the job, tailoring the selection tightly to your particular needs and preferences can make for a much smoother ride in the long run.

Generally, the first step is to define what functionality you must have.  For example:  How many GPIO pins? (Always try to include a few spares for those late design changes.)  How many ADC or DAC channels, and with what resolution?  Do you need timers or PWM control?  How many?  8- or 16-bit?

How do you need to communicate to other devices on this board or another board, like I2C or SPI?  Keep in mind that it's always useful to bring a UART off the board for an RS232 debug port that you can connect to a terminal emulator on your PC.  And any components added to the board to support  it can generally be left off in volume production.

How much code space do you think you'll need?  And how much RAM?  (Here we assume you won't need so much extra of either that you'd have to add external memory devices.)  If you're not really sure on memory requirements, err on the high side since:

1. running out of memory can seriously impose on the ability to implement those last few features the marketing guys said they really want included, and

2. you can generally downsize the part later if it turns out you have more memory than you need – maybe do this as part of a cost reduction board spin.  Or, quite often (and if you plan it carefully), it will be a pin-compatible part, so it's simply a BOM change.

And, well, there's one more good reason that consistently proves prophetic:

3. Murphy's Law Corollary:  Code size grows to the size of available memory + 1 byte.
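
To make that requirements checklist concrete, here is a minimal sketch, in Python, of filtering a list of candidate parts against the kinds of must-haves discussed above.  The part numbers and figures are invented purely for illustration; they are not taken from any vendor's catalog.

    # Illustrative only: hypothetical 8-bit MCU candidates and a requirements filter.
    # Part numbers and specs below are made up, not real catalog data.

    candidates = [
        {"part": "MCU-A8", "gpio": 18, "adc_ch": 6, "adc_bits": 10, "uart": 1,
         "i2c": 1, "spi": 1, "flash_kb": 16, "ram_kb": 1.0},
        {"part": "MCU-B8", "gpio": 24, "adc_ch": 8, "adc_bits": 10, "uart": 2,
         "i2c": 1, "spi": 2, "flash_kb": 32, "ram_kb": 2.0},
        {"part": "MCU-C8", "gpio": 12, "adc_ch": 4, "adc_bits": 8,  "uart": 1,
         "i2c": 1, "spi": 0, "flash_kb": 8,  "ram_kb": 0.5},
    ]

    # Must-haves, including a couple of spare GPIOs and a UART for the RS232 debug port.
    requirements = {"gpio": 16, "adc_ch": 4, "adc_bits": 10, "uart": 1,
                    "i2c": 1, "spi": 1, "flash_kb": 16, "ram_kb": 1.0}

    def meets(mcu, reqs):
        """True if the part meets or exceeds every minimum requirement."""
        return all(mcu[key] >= minimum for key, minimum in reqs.items())

    shortlist = [mcu["part"] for mcu in candidates if meets(mcu, requirements)]
    print("Shortlist:", shortlist)  # ['MCU-A8', 'MCU-B8'] with the numbers above

From a shortlist like that, softer preferences such as package, price and toolchain familiarity can break the tie.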

Feel like you're ready to pick one?  Read the rest at: element14.

 

OctoBox eases testing of MIMO devices

Dropped calls on cell phones due to faulty antenna placement have been selectively publicized, as in the case of the Apple iPhone 4, but have been a common occurrence in all phones released in the past two years.  Mobile carriers are putting heavy pressure on manufacturers to avoid, if not eliminate, the problem as soon as possible.  No, actually they want it done now. That puts the problem squarely in the laps of the test and measurement industry, which is meeting the demand with some alacrity as demand for the products increases and new technologies boosting speeds and transmission rates come online.

Of keen interest to product developers are compact solutions that test engineers can keep in their offices or at least within spitting distance.  Companies like Agilent, Aeroflex and Anritsu are providing several desktop solutions.  A small company in Boston, OctoScope, has pulled the wraps off a refrigerator-sized anechoic chamber, the OctoBox, that can test mobile devices without having to solder coax directly to the device antennas, delivering more real-world results.

"Lab testing with the devices’ actual antennas, even when the radios are not MIMO, is better than soldering coax to the antenna connections," said Charles Gervasi, an engineer with Four Lakes Technology in Madison, Wisconsin.  "For functional test in production, an over-the-air test is the only option.  Automated test equipment can be configured to test multiple devices at once in the chamber." (Read Gervasi's full review of the OctoBox at element14.) http://www.youtube.com/watch?v=J60uC-7xKsY

Can we survive the loss of Steve Jobs?

By Lou Covey, Editorial Director, Footwasher Media

Within the outpouring of grief over the death of Apple founder Steve Jobs has been an underlying meme of concern regarding not just the future of Apple, but the potential for disaster in the semiconductor industry.  On one side are those whose fortunes ride on the continued success of Apple, while on the other are those who would prefer that Apple's current leadership in consumer electronics and applications be blunted in favor of their own.  That makes it difficult to have an objective opinion one way or another.

One of the issues to consider is that Apple is now the largest buyer of chips in the world.  If Apple falters significantly in the near term, there is concern that the current growth of the chip industry could falter as well.  But is that true?

In June 2011, Apple surpassed HP as the largest buyer of chips, without a significant reduction in the amount purchased by HP.  What wasn't widely noted was that in July, Amazon also surpassed HP, driving the latter to third place.  The phenomenon of Apple's iPhone and iPad has launched a massive buying season by other companies working to take a bite of Apple.  Should Apple's sales falter in the near term, the market demand for competing products will probably take up the slack.

Another meme is disappointment over the announcement of the iPhone 4s and the continued "delay" of the iPhone 5, disappointment that turned out to be rooted in nothing but the speculation of uninformed bloggers.  Apparently, the buying public wasn't as disappointed: pre-orders of the 4s have topped those of the 4, announced last year.  However, pundits seem to be missing critical pieces of information that could explain why Apple made an incremental rather than a radical advancement.

First is the issue of Samsung.

Samsung, the largest tech company in the world by sales, is competing directly against Apple in both the tablet and mobile phone markets and is probably the leading competitor, depending on who you talk to... but it is a distant competitor.  And Samsung's profit forecasts are tied directly to that competition in two ways: as a competitor and as a partner.  Samsung also manufactures the A4 chip for both of Apple's product lines.  Samsung downgraded its profit forecast for phones and tablets at the beginning of the summer, anticipating a chill in sales from the iPhone 5.  When it was revealed that the 5 was yet to come to market, Samsung's profit forecasts and actual profits rose.

Second, there's the investment Apple has made in the current design.  A source close to Apple said the company invested $1 billion in manufacturing for the iPhone 4 and 4s. So walking away from a manufacturing investment and then announcing a new product that would hurt an important supplier doesn't make a lot of sense -- especially when the current product, with minor tweaks, is blowing the doors off everywhere with the help of three distribution channels (AT&T, Verizon and Sprint).

So the "failure" of Apple to deliver the next generation of its killer product line does not portend the ultimate failure and beginning of the end of its dominance.  It's merely a smart business decision.

Finally, the biggest question of all has been: "Can Apple actually survive, much less thrive, without Steve Jobs calling the shots?"  The reality is that Jobs had not been calling the shots on his own for quite a while.  A team of people, hand-picked by Jobs prior to his first medical leave, has provided the overall leadership.  During that process, Tim Cook emerged as the successor, just weeks before Jobs succumbed to his illness.  Many are blaming Cook for the less-than-stellar reception of the product announcement, but if truth be told, there were moments as early as 2005 when even Jobs' decisions were questioned and identified as the beginning of the end for Apple's success.

A closer analogy to Apple's situation is when Bill Gates stepped down and installed Steve Ballmer as the new head of the company.  Many questioned that move, as well, but were comforted that Gates continued as Chairman of the Board.  With Gates keeping his finger in the pie while Ballmer led, Microsoft has lost half its value.  The difference between the two situations is Apple now has a clean break from Jobs' leadership allowing Cook, et al to create a new future for the company.

There is not enough data to determine if any one event, even one as earth-shattering as the death of a charismatic and visionary leader, will mark the finale of a remarkable business run, but this is what we do know:  Apple has products ready to launch for the next five years; it has $76 billion in cash reserves; and it has the largest valuation of any US company.  With that kind of foundation, the odds are that any speculation that focuses on one event or issue is as sure as a throw of the dice in a back-alley craps game.

Can we learn from Jobs' life?  And can we do something positive now?

S2C bridges HW prototyping and SW development

As FPGAs have become larger, their use as a prototyping tool has become more diverse, including the use of multiple processors in a single design and system. And the business of FPGA prototyping has grown with that ability. What began as a means of prototyping other silicon devices has become a way to validate the FPGA itself, an indication of how the FPGA verification market can be used in bootstrapping a next-generation FPGA based on known designs. S2C is one of the companies making a profitable business in this niche, as this New Tech Press Report demonstrates.

 

ImPARTing Knowledge: Live and Learn Product Assembly

By Douglas Alexander, Component Engineer, Special to New Tech Press

A few years back, an employee of a capacitor manufacturer left the company and stole the formula for a low equivalent series resistance electrolytic capacitor. He brought the formula to a black market operation and began to produce the capacitors using the same markings as the original company.

As it turns out, his bogus operation did not get the formula right and produced millions of bad capacitors that were sent all over the world. My company was one of the unfortunate recipients of the bad caps and we had to spend thousands of dollars and hundreds of hours reworking boards, removing the bad counterfeit capacitors, and replacing them with the good parts. Had we performed an incoming inspection based upon what is known as an Acceptable Quality Level screening, we would have caught the bad parts and saved ourselves a lot of money and grief.
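
For readers who haven't run one, the sketch below shows, in Python, how a simple single-sampling plan of the kind used in AQL screening separates good lots from bad ones: sample n parts and reject the lot if more than c of them fail.  The sample size and defect rates are made-up examples, not values from the ANSI/ASQ Z1.4 tables.

    # Illustrative single-sampling plan: draw n parts from a lot and reject the lot
    # if more than c defectives are found. Numbers are examples, not Z1.4 table values.
    from math import comb

    def prob_accept(n, c, defect_rate):
        """Probability of accepting a lot with the given true defect rate
        (binomial model: each sampled part is defective with probability p)."""
        return sum(comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
                   for k in range(c + 1))

    n, c = 80, 1  # sample 80 parts, accept the lot only if 0 or 1 fail

    for p in (0.001, 0.01, 0.05, 0.20):
        print(f"true defect rate {p:5.1%}: lot accepted with probability "
              f"{prob_accept(n, c, p):.3f}")

    # A good lot (0.1% defective) is accepted almost every time, while a lot from a
    # botched counterfeit run (20% defective) is virtually certain to be rejected.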

Over the years companies have developed a systematic approach to the business basics of components and product assembly, often from the hard lessons of costly errors. And now, there are new technologies being introduced to detect counterfeit integrated circuits, and companies are being formed for the sole purpose of screening for counterfeits.

Processes

Component selection: The task of identifying a “correct” component for the circuit may involve an understanding of how the circuit works and extrapolating the correct parametrics for a device, or it may involve identifying the device from a given “list” of parameters. The latter case may be presented as: “I need a low drop-out regulator that can handle 500 milliamps with a 5V input and 3.3V output.” The individual responsible for identifying the final component must also know what questions to ask the Design Engineer in order to expedite the selection of the right part. Is there a package preference, a preferred mounting configuration, an operating temperature consideration, a size constraint, or any number of other factors that may affect the final selection?
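
As an example of the kind of quick check that goes with those follow-up questions, the sketch below (Python, purely illustrative) works the thermal numbers for the hypothetical regulator above: 500 mA of load from a 5V input to a 3.3V output.  The package thermal resistances are typical placeholder values, not figures from any specific datasheet.

    # Quick thermal sanity check for a linear LDO: the voltage dropped across the
    # regulator times the load current is dissipated as heat in the package.
    v_in, v_out, i_load = 5.0, 3.3, 0.5        # volts, volts, amps (from the example)

    p_dissipated = (v_in - v_out) * i_load      # 0.85 W dissipated in the regulator

    # theta_JA: junction-to-ambient thermal resistance in deg C per watt.
    # Illustrative, package-typical figures only -- check the actual datasheet.
    packages = {"SOT-23": 190.0, "SOT-223": 60.0, "TO-252 (DPAK)": 50.0}

    t_ambient_max = 70.0                        # assumed worst-case ambient in the enclosure
    for pkg, theta_ja in packages.items():
        t_junction = t_ambient_max + p_dissipated * theta_ja
        status = "OK" if t_junction < 125 else "too hot"
        print(f"{pkg:14s} junction ~ {t_junction:5.1f} C  ({status} against a 125 C limit)")

A check like this is often what turns "package preference" and "operating temperature consideration" from abstract questions into concrete selection constraints.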

Testing: Screening is often required to verify that a device meets the manufacturer’s specifications and functions as expected in the design process or existing circuit under test. This can be as simple as verifying a resistor's value and tolerance on an LCR meter (Inductance/Capacitance/Resistance), or it can be as involved as qualifying a higher-level, purchased assembly that has hundreds of critical parameters.

Analysis: This may involve what is known as Failure Mode Effect Analysis where a component is found to be the cause of a failure in a circuit. Every failure must be examined for “Root Cause” in order to understand the fundamental reason for the failure. Until this is understood, there can be no assurance that the failure will not occur again. To say a component failed because of excessive electrostatic discharge (ESD) does not delineate the full causation of the failure. How much of a charge is needed to destroy the device? What was the source of the ESD? How did the charge reach the component? Is the circuit protected against ESD? These questions and many others must be asked in order to determine the ultimate “fix.”

See the rest of the article at element14.com

 

Douglas Alexander has been working in the electronics R&D and manufacturing sector for over 25 years, with experience in all aspects of component selection, qualification, verification, specification control, reliability prediction, and assurance. His goal at ComponentsEngineering.com is to offer the reader a comprehensive understanding of the various types of electronic components used by designers and manufacturers who are associated with electronic engineering and manufacturing.

DIY solar power can drive industry as subsidies decrease

Editor's note: This interactive article is the first installment of the New Tech Press Collaborative Journalism Program, produced for element14 by Footwasher Media.  It contains strategically placed links to videos, podcasts, discussions, articles and product lists throughout the narrative to give engineers a "starting point" for research or designing projects on the subject matter.  We encourage your participation in making this a living document with your input and additional links to relevant material.

By IdaRose Sylvester, Senior Correspondent, Footwasher Media

By 2020, California plans to generate 20,000 MW from renewable resources, one-third of its current usage and triple its current renewable power, with 60% coming from "localized" sources generated at or near the point of consumption, such as roof-mounted solar panels or panels over covered parking lots.  Half the U.S. states are legislating renewable requirements (and supporting incentives) for homes and businesses. However, elsewhere in the world, government solar incentives are decreasing as capacity comes online, reducing the incentive to add supply.

As subsidies decrease, small generators (homes and businesses) will shoulder the burden in the coming years.  And while investments are focused on materials and processes that bring down solar panel costs, the cost of labor is unchanged and becoming a higher percentage of installation cost. Smarter investments might be made in technologies that drive installation costs down and open a market for do-it-yourself (DIY) installation. The current solar installation industry is not necessarily inclined to give away its business to its customers, making guides to creating your own solar power system rare.

With some thought, though, a DIY installation can be cost-effective and accessible to almost anyone. If the system is sized to supplement your power needs, the return on investment should come within five to seven years, against a typical 20-year panel warranty, once tax rebates and other incentives are figured in.  Local utilities require a certified electrician to make the grid connection, so contact your utility early for the requirements and approval process.
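
As a rough illustration of that payback math, the sketch below (Python) runs a simple cost-versus-savings estimate.  Every number in it -- system cost, incentives, annual output, electricity rate -- is an assumed example rather than data from any utility or installer, so substitute your own local figures.

    # Back-of-the-envelope payback estimate for a small DIY grid-tied system.
    # All figures are illustrative assumptions; plug in your own local numbers.

    system_cost = 10000.0                      # panels, inverters, mounts, wiring, electrician (USD)
    incentives  = 0.30 * system_cost + 1000.0  # e.g. a 30% tax credit plus a local rebate
    net_cost    = system_cost - incentives

    annual_kwh     = 5500.0                    # assumed annual output for the array and site
    rate_per_kwh   = 0.20                      # assumed utility rate (USD/kWh)
    annual_savings = annual_kwh * rate_per_kwh

    payback_years    = net_cost / annual_savings
    lifetime_savings = 20 * annual_savings - net_cost   # over a 20-year panel warranty

    print(f"Net cost after incentives: ${net_cost:,.0f}")
    print(f"Annual savings:            ${annual_savings:,.0f}")
    print(f"Simple payback:            {payback_years:.1f} years")
    print(f"20-year net savings:       ${lifetime_savings:,.0f}")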

Adding solar to a site isn't just putting up panels and plugging them in.  Solar power is generated as DC while most systems operate on AC power, which requires an inverter. Depending on the size of your installation, you might want to consider microinverters that can be "daisy chained" between or on each panel, or mini inverters that can be mounted to the side of the building.  Larger installations could require a single grid-tied ground-mount system or a series of them.  Cost, maintenance and monitoring are all factors to consider. Panels need to be three to six inches above the roof to allow airflow to cool them, since they lose efficiency as they heat up.

Security is another concern.  Napa Valley vintners have experienced continued theft of panels from their ground-mount installations, requiring significant investment in surveillance and locks on the mounting systems.  That cost can be lessened with some imagination in what you use to construct the mounts.

Finally, panels don't keep themselves clean, so two to three times a year, depending on the dust and pollen levels in your area, you may need to get up there and hose the panels off to maintain efficiency.  There are several options, from hiring a guy with a garden hose and a scrub brush to more high tech choices.

While you can't yet go to "Solar Depot" or "Sol-Mart" and buy everything you need, there is enough information and technology available to help the ambitious DIYer pull it all together.

IdaRose Sylvester is a former IDC semiconductor industry analyst and is currently founder of Silicon Valley Link.