Submitted by: Bryon Moyer, Editor of EE Journal
Sensor fusion has been all the rage over the last year. We’ve all watched as numerous companies – both makers of sensors and the “sensor-agnostic” folks – have sported dueling algorithms. Sensor fusion has broadened into “data fusion,” where other non-sensor data like maps can play a part. This drama increasingly unfolds on microcontrollers serving as “sensor hubs.”
But there’s something new stirring. While everyone has been focusing on the algorithms and which microcontrollers are fastest or consume the lowest power, the suggestion is being put forward that the best way to execute sensor fusion software may not be in software: it may be in hardware.
Software and hardware couldn’t be more different. Software is highly flexible, runs anywhere (assuming compilers and such), and executes serially. (So far, no one that I’m aware of has proposed going to multicore sensor fusion for better performance.) Hardware is inflexible, may or may not depend on the underlying platform, and can run blazingly fast because of massive inherent parallelism.
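To make the contrast concrete, here is a minimal sketch, in C, of the kind of fusion step a sensor-hub microcontroller runs serially today: one sample in, one estimate out, every tick. (The complementary filter, the function name, and the 0.98 blend factor are my own illustrative choices, not anyone's shipping code.)

/* Hypothetical illustration only: a single-axis complementary filter,
 * the kind of small, serial fusion step a sensor-hub MCU might run.
 * The blend factor and names are assumptions, not any vendor's code. */
static float fuse_pitch(float pitch_prev,   /* previous estimate, radians   */
                        float gyro_rate,    /* gyro rate, rad/s             */
                        float accel_pitch,  /* pitch from accelerometer     */
                        float dt)           /* sample period, seconds       */
{
    const float alpha = 0.98f;  /* trust gyro short-term, accel long-term   */
    float gyro_estimate = pitch_prev + gyro_rate * dt;  /* integrate gyro   */
    return alpha * gyro_estimate + (1.0f - alpha) * accel_pitch;  /* blend  */
}

Every multiply and add in a step like this happens one after another on the microcontroller; the same arithmetic mapped into hardware can happen side by side in a single clock.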
Of course, then there’s the programmable version of hardware, the FPGA. These are traditionally large and power-hungry – not fit for phones. A couple companies – QuickLogic and Lattice – have, however, been targeting phones with small, ultra-low-power devices and now have their eyes on sensor hubs. Lattice markets their solution as a straight-up FPGA; QuickLogic’s device is based on FPGA technology, but they bury that fact so that it looks like a custom part.
Which solution is best is by no means a simple question. Hardware can provide much lower power – unless sensor hub power is swamped by something else, in which case it theoretically doesn’t matter. (Although I’ve heard few folks utter “power” and “doesn’t matter” in the same breath.) Non-programmable hardware is great for standard things that are well-known; software is good for algorithms in flux. Much of sensor fusion is in flux, although it does involve some elements that are well-understood.
Which suggests that this might not just be a hardware-vs-software question: perhaps some portions remain in software while others get hardened. But do you end up with too many chips then? A sensor hub is supposed to keep calculations away from the AP. If done as hardware, that hub can be an FPGA (I can’t imagine an all-fixed-hardware hub in this stage of the game); if done in software, the hub can be a microcontroller. But if it’s a little of both hardware and software, do you need both the FPGA and the microcontroller?
Then there’s the issue of language. High-level algorithms start out abstract and get refined into runnable software in languages like C. Hardware, on the other hand, relies on languages like VHDL and Verilog – very different from software languages. Design methodologies are completely different as well. Converting software to optimal hardware automatically has long been a holy grail and remains out of reach. Making that conversion is easier than it used to be, and tools to help do exist, but it still requires a hardware guy to do the work. The dream of software guys creating hardware remains a dream.
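For a sense of what that gulf looks like in practice, here is a hedged example: the sort of small, fixed-point C kernel that typically gets handed to a hardware engineer for re-expression in Verilog or VHDL. (The tap count and Q15 scaling are placeholders of mine, not any particular product's.)

#include <stdint.h>

/* Hypothetical example of the kind of refined C that precedes hardening:
 * a 4-tap fixed-point FIR step. Coefficients are assumed to be Q15. */
#define TAPS 4

int32_t fir_step(const int16_t coeff[TAPS], const int16_t sample[TAPS])
{
    int32_t acc = 0;
    for (int i = 0; i < TAPS; i++) {
        acc += (int32_t)coeff[i] * (int32_t)sample[i];  /* serial MACs in C */
    }
    return acc >> 15;  /* scale back after the Q15 multiplies */
}

The loop reads naturally to a software engineer; turning it into good RTL means unrolling the multiplies into parallel hardware, pipelining the adds, and choosing bit widths, which is exactly the translation that still needs a hardware person, tools or no tools.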
There’s one even more insidious challenge implicit in this discussion: the fact that hardware and software guys all too often never connect. They live in different silos. They do their work during different portions of the overall system design phase. And hardware is expected to be rock solid; we’re more tolerant (unfortunately) of flaws in our software – simply because they’re “easy” to fix. So last-minute changes in hardware involve far whiter knuckles than do such out-the-door fixes in software.
This drama is all just starting to play out, and the outcome is far from clear. Will hardware show up and get voted right off the island? Or will it be incorporated into standard implementations? Will it depend on the application or who’s in charge? Who will the winners and losers be?
Gather the family around and bring some popcorn. I think it’s going to be a show worth watching.
Wednesday, November 6, 2013
Design Enablement and the Emergence of the Near Platform - Guest Blog by Peter Himes of Silex Microsystems
I am pleased to bring you this blog by Silex Microsystems' Peter Himes, vice president of marketing & strategic alliances. Peter reflects on MEMS, and while others might lament the conundrum that every MEMS process is unique (you can hum it to the tune coined by Jean-Christophe Eloy: "one product, one process"), Peter instead sees in this challenge an opportunity for innovation and collaboration. And what pleases me most about his musings on MEMS is that his basic thesis is my mantra: to succeed in MEMS, you can't go at it alone – you must partner. In this example he describes Silex's partnership with A.M. Fitzgerald & Associates and their RocketMEMS program. Read on, plug in, and share your thoughts on how you've creatively sparked innovation in your own company, especially if you come up with the same reflection: in MEMS, it takes a village; you can't go at it alone.
Design Enablement and the Emergence of the Near Platform
What does it mean to enable a MEMS design? Is it enough to have silicon wafers, a clean room and some tools? What bridges the idea to product?
Traditionally it has meant a series of trials, based on past experience, to conceive a process flow that results in the final desired structure. What steps are possible? What materials can be used? How will the structure react to the process, and how will it perform after all processing is done? All of these questions need to be understood simultaneously. Being able to do this consistently over many different projects is how Silex helps the most innovative MEMS companies get their ideas to high-volume manufacturing.
But in markets where MEMS is becoming mainstream, where acceptance of MEMS technologies is encouraging traditional and non-traditional customers alike to consider their own MEMS programs, is this enough to enable the rapid growth of MEMS going forward? Is every MEMS device trapped in a paradigm of custom process development and new materials development? Does everything require MEMS PhD expertise to engineer a perfect solution? In a market where customers are looking for customized MEMS devices AND rapid time to market, can they have both?
The core of MEMS still lies in custom process integration, and the universe of MEMS devices is still expanding, pushed by the dark energy of innovation. Our SmartBlock™ approach to process integration is why we can execute on these challenges in a consistent, high-quality way. But it still takes the time and effort of customized processes to achieve full production qualification, so we also believe that another model is possible, and we are beginning to see it emerge.
Process integration into a foundry environment is something we also call Design Enablement, because a successful MEMS process enables designs to be turned into an actual product. But the power of design enablement is somewhat muted if the echo only rings once. The true power of Design Enablement is when the process can resonate over many products or many redesigns of the same product. This would break the "one product, one process" paradigm and is what we believe is the next phase in the MEMS industry.
Alissa Fitzgerald of AMFitzgerald & Associates had a dilemma and an idea. To her, the normal route for MEMS development was difficult from the start: begin with an idea and use a university or research lab to get a prototype out. Once it is successful, contact a production MEMS foundry to manufacture it, only to find out there are still months or years of process qualification ahead. What if she could collaborate with a foundry from the start and define a product design platform and a process flow simultaneously? Using the known process capabilities of an existing foundry, build and characterize the product to that process, so that both the processing window and the product spec window are defined at the same time. Then you have a process platform that is solid, "de-risked," and ready to take customers to market quickly.
This is the idea behind the AMFitzgerald RocketMEMS program and Silex's support and partnership in the initiative. And it results in something which is not fully customized for each new product, yet is not completely and rigidly fixed either. Rather, it is a "Near Product Platform" made possible by the design enablement of the Silex process integration approach and AMFitzgerald's product design framework and methodology. It allows for product-specific variability without breaking the mold out of which the process was cast.
And it works.