New microprocessor claims 10x energy improvement


As power consumption has become one of the most important metrics of CPU design, we’ve seen a variety of methods proposed for lowering CPU TDP. Intel makes extensive use of dynamic voltage and frequency scaling, ARM has big.LITTLE, and multiple companies are researching topics like near-threshold voltage (NTV) scaling as well as variable precision for CPU and GPU operations. Now, one small embedded company, Ambiq Micro, is claiming to have made a breakthrough in CPU design by building a chip designed for subthreshold voltage operation — with dramatic results.

Ambiq’s new design strategy could be critical to the long-term evolution of the wearables market, the Internet of Things, and embedded computing designs in general — if the company’s technology approach can scale to address a wide range of products.

Subthreshold and near-threshold voltage operation

The threshold voltage of a transistor is the voltage point required to create a conducting path between the source and drain terminals. In simplest terms, this is the point at which the transistor turns “on.” The voltage threshold is not an absolute, however — operation is possible in both the near-threshold and subthreshold regions.


The problem with NTV and subthreshold designs is that they tend to suffer from high amounts of leakage current and are capable of only very low operating frequencies within these voltage ranges. This can actually lead to higher energy consumption overall — by constantly operating in the subthreshold region, the total amount of energy a chip leaks can result in higher power consumption than if the SoC simply ran at conventional voltages and then power gated cleanly or shut itself off.
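A back-of-the-envelope model makes the tradeoff concrete. All of the numbers below are invented for illustration (they are not Ambiq's measurements), but they show how constant leakage can erase a large dynamic-power saving:

```python
# Toy energy model for completing one fixed task:
#   total energy = dynamic switching energy + leakage power * active time
def total_energy(dynamic_j, leakage_w, active_s):
    return dynamic_j + leakage_w * active_s

# Subthreshold: say 10x less dynamic energy, but the task takes 100x longer,
# so leakage current accumulates the whole time the chip crawls along.
sub = total_energy(0.1, 0.02, 100.0)   # 0.1 J dynamic + 2.0 J leaked = 2.1 J

# Nominal voltage: finish in 1 s, then power-gate so leakage (mostly) stops.
nom = total_energy(1.0, 0.02, 1.0)     # 1.0 J dynamic + 0.02 J leaked = 1.02 J

# Despite the 10x dynamic saving, the "always slow" strategy loses here.
```

This is exactly the trap SPOT-style designs have to engineer around: the dynamic savings only pay off if leakage is kept under control at the same time.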

To understand the problem with subthreshold circuits and performance, imagine you had to distinguish between an alternating field of white and black squares. The human eye can perform this feat relatively easily — even when the white-black swap occurs at high speeds, we can tell the difference between the two.

Ask people to identify the difference between two slightly different shades of gray, however, and they can only do so when the frames are presented for a much longer period of time. The eye will tend to combine the two shades into a single perceived hue — this fact is widely used in Twisted Nematic (TN) monitors to produce simulated 8-bit color using fast 6-bit panels. Instead of displaying a given shade — say, Red 250 — the monitor will alternate between Red 246 and Red 254. Flip between these two shades quickly enough, and the eye naturally “averages” them out to Red 250.
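The averaging trick is easy to verify numerically. This short sketch models a panel alternating two displayable levels every frame:

```python
# Temporal dithering, as a 6-bit TN panel fakes 8-bit color: alternate two
# levels the panel can actually show, and the eye averages them into the
# target shade over time.
def dithered_shade(level_a, level_b, frames=60):
    sequence = [level_a if i % 2 == 0 else level_b for i in range(frames)]
    return sum(sequence) / len(sequence)

# Flipping between Red 246 and Red 254 averages out to the intended Red 250.
perceived = dithered_shade(246, 254)
```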

This difficulty in distinguishing the “on” state from the “off” state is a major limiting factor on subthreshold operation and requires designing circuits to extremely tight tolerances. What Ambiq claims to have developed is a new method of designing circuits, dubbed Sub-threshold Power Optimized Technology (SPOT). The company’s full whitepaper is available.

Ambiq is claiming that its Apollo microcontroller, which is based on the ARM Cortex-M4 design with FPU, can deliver power consumption equivalent to a Cortex-M0+ part without compromising its M4-with-FPU performance. That’s actually more significant than it sounds — the graph to the right shows the results of a performance comparison and power analysis between the Cortex-M0 and Cortex-M4, as published by EDA360 Insider.


The green line is the M4, while the yellow line is the Cortex-M0. According to that report: “The ARM Cortex-M4 with its SIMD and floating-point capabilities ran the tests 12 to 174 times faster than the ARM Cortex-M0 core and consumed 2x to 9x more power.”
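Since energy per task is power multiplied by time, those quoted ranges can be turned into an energy comparison directly: dividing the power ratio by the speedup gives the M4's energy per task relative to the M0.

```python
# Energy per task = power x time, so:
#   (M4 energy) / (M0 energy) = power_ratio / speedup
# Values below come from the EDA360 Insider ranges quoted above
# (12-174x faster, 2-9x more power).
def m4_energy_vs_m0(speedup, power_ratio):
    return power_ratio / speedup  # < 1.0 means the M4 uses LESS energy per task

worst_case = m4_energy_vs_m0(12, 9)    # 0.75: even here the M4 wins on energy
best_case = m4_energy_vs_m0(174, 2)    # ~0.011: roughly 87x less energy per task
```

In other words, the M4 already finishes each job on less total energy; the catch is its higher instantaneous power draw, which is precisely what a subthreshold implementation would attack.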

In other words, a subthreshold version of the Cortex-M4 with Cortex-M0 power consumption would be an embedded chip that meshed the best of both worlds — incredible power efficiency and far more embedded performance than is currently available.

Why subthreshold embedded performance matters

In previous years, an accomplishment like this in the embedded market would have been of limited interest to anyone else. The combined pushes for better wearables and the growing Internet of Things, however, make innovations like subthreshold voltage operation critically necessary. While there’s still a vast gulf between even a high-powered embedded chip like the Cortex-M4 and a Cortex-A7 smartphone-class CPU, the only way to close that gap is to continue to push embedded performance per watt into new frontiers.

Ambiq is arguing that its new design and implementation approaches can double to quadruple power efficiency. Whether this is solely an embedded shift or if it can boost higher-end hardware is still unknown, but approaches like this could revolutionize embedded hardware — and make all-day smartwatch battery life a reality in the long run.

Courtesy – Extreme Tech

AMD’s next-gen CPU leak: 14nm, simultaneous multithreading, and DDR4 support


Ever since it became clear that AMD’s Carrizo would be a mobile update with a focus on energy efficiency as opposed to raw performance, enthusiasts and investors have been hungry for details about the company’s upcoming CPUs in 2016. AMD has been tight-lipped on these projects, though we heard rumors of a combined x86-ARM initiative that was up and running as of early last year — but now, a handful of early rumors have begun to leak about the eventual capabilities of these new cores.

As with all rumors, take these with a substantial grain of salt — but here’s what has been reported to date. We’ll rate the rumors as they’re given: According to the post, the new AMD Zen is:

Built on 14nm: For a chip launching in 2016, this seems highly likely. Jumping straight to 14nm won’t close the gap between AMD and Intel, but the company is currently building its FX chips on legacy 32nm SOI while its Kaveri and Carrizo are both 28nm bulk silicon. The double-node jump from 28nm to 14nm should give AMD the same benefits that a single-node process transition used to grant. Given the advantages of FinFET technology, we’d be surprised if the company went with anything else. The chips are also expected to be built at GlobalFoundries, which makes sense given AMD’s historic relationship with that company.

Utilize DDR4: Another highly likely rumor. By 2016, DDR4 should be starting to supplant DDR3 as the mainstream memory of choice for desktop systems. AMD might do a hybrid DDR3/DDR4 solution as it did in the past with the DDR2/DDR3 transition, or it might stick solely with the new interface.

Up to 95W: Moderately likely, moderately interesting. This suggests, if nothing else, that AMD wants to continue to compete in the enthusiast segment and possibly retake ground in the server and enterprise space. Nothing has been said about the graphics architecture baked on to the die, but opting for an up-to 95W TDP suggests that the company is giving itself headroom to fight it out with Intel once again.

Opt for simultaneous multithreading as opposed to cluster multithreading: With Bulldozer, AMD opted for an arrangement called cluster multithreading, or CMT, in which a unified front end issues instructions to two separate integer pipelines. The idea behind the Bulldozer design was that AMD would gain the benefits of having two full integer pipelines but save die space and power consumption compared to building a conventional multi-core design.


Intel, in contrast, has long used simultaneous multithreading (SMT), which it calls Hyper-Threading, in which instructions from two different threads can be executed in the same clock cycle. In theory, AMD’s design could have given it an advantage, since each core contains a full set of execution units, as opposed to SMT, where those resources are shared, but in practice Bulldozer’s low efficiency crippled its scaling.

The rumor now is that AMD will include an SMT-style design with Zen. It’s entirely possible that the company will do this — Hyper-Threading is one example of SMT, but it’s not the only implementation; IBM, for example, uses SMT extensively in its POWER architectures. The reason I’m not willing to completely sign off on this rumor is that it has dogged AMD ever since Intel introduced Hyper-Threading 15 years ago.

The benefits of using SMT are always dependent on the underlying CPU architecture, but Intel has demonstrated that the technology is often good for a 15-20% performance increase in exchange for a minimal die penalty. If AMD can achieve similar results, the net effect will be quite positive.

The final rumor floating around is that the chip won’t actually make an appearance until the latter half of 2016. That, too, is entirely possible. GlobalFoundries’ decision to shift from its own 14nm-XM process to Samsung’s 14nm designs could have impacted both ramp and available capacity, and AMD has pointedly stated that it will transition to new architectures only when it makes financial sense to do so. The company may have opted for a more leisurely transition to 14nm in 2016, with the new architecture debuting only when GF has worked the kinks out of its roadmap.


No information on performance or other chip capabilities is currently available, and the company has said nothing about the integrated GPU or possible use of technologies like HBM. The back half of 2016 would fit AMD’s timeline for possible APU integration of HBM — which means these new chips could be quite formidable if they fire on all thrusters out of the gate. During its conference call last week, AMD mostly dodged rumors about delays to its ARM products, noting that it had continued sampling them in house and was pleased with the response. Presumably the company’s partners remain under NDA — there are no published independent evaluations of these products to date.

Courtesy – Extreme Tech

Intel Expands Custom Chip Work for Big Cloud Providers


Intel is stepping up its efforts to build custom chips for online giants like eBay and Facebook, giving them a boost in performance and, Intel hopes, another reason to keep buying its microprocessors.

The chipmaker is working with the big online firms to embed programmable chips, called FPGAs, in the same chip package alongside its Xeon server processors. Those FPGAs can be programmed to run unique algorithms that each online firm depends on to run its services.

“For the right application, we believe FPGAs can provide an order-of-magnitude improvement in performance” over standard Xeon chips alone, said Diane Bryant, senior vice president and general manager of Intel’s data center division.

It’s a shift from Intel’s strategy in the past. Until a few years ago, all its customers got basically the same general purpose processors. They could select different clock speeds, core counts and other features, but everyone got the same basic chip design.

The rise of online giants like Google, Facebook, Amazon and eBay has changed that. Those companies run a relatively small set of applications, but they do so on a massive scale — a single workload might run across tens of thousands of servers. They also have a lot of buying power.

That alters the economics of the chip business. If a customer is buying thousands of servers for a particular workload, it becomes viable for Intel to optimize a processor for that task. And customers will pay a bit more if it lets them squeeze out a bit more performance or consume less power.

Intel has built custom chips for customers before. Last year it delivered about 15 unique designs, including processors for Facebook and eBay. But they involved relatively minor changes, such as disabling cores and adding extra I/O ports.

Integrating an FPGA (field-programmable gate array) with its Xeon chips is a step further. And in some cases, Intel will hardwire the instructions for the algorithm directly onto the Xeon itself, Bryant said.

It’s a new way for Intel to deliver custom chips, and this year it expects to deliver more than 30 unique designs, Bryant said. She was due to make the announcement at Gigaom’s Structure conference in San Francisco Wednesday.

It’s a smart move by Intel, said analyst Nathan Brookwood of Insight64. It gives its largest customers less incentive to license a competing chip design, such as the ARM architecture, and optimize that to run their algorithms instead, he said. IBM has also opened its Power8 design, which Google has been testing.

There are two ways customers can use the FPGAs, Bryant said. In one case, the online service provider deploys a Xeon package with the FPGA and tests a workload to ensure it delivers the desired benefits. If it does, Intel will burn the instructions onto the Xeon itself and manufacture the part without the FPGA.

The other use case takes advantage of the fact that FPGAs can be reprogrammed in the field. The service provider buys servers with the dual-chip package inside, and programs the FPGA depending on the workload they need to optimize. If their needs change later, they can reprogram the chip again.

Using FPGAs to accelerate workloads isn’t new, but they’re usually discrete components on the motherboard linked to the processor via PCIe. Integrating them into the chip package with Intel’s QPI interconnect reduces latency and allows the FPGA to access the Xeon’s on-chip cache and its main memory, Bryant said.

That doubles the performance gain that can normally be derived from the FPGA, compared to using it as a discrete component, she said.

Bryant said a handful of cloud providers are testing the FPGAs, though she wouldn’t name them. She also wouldn’t say whose FPGAs Intel will use, though it has a manufacturing partnership with Altera, making it a likely candidate.

It plans to begin production of the Xeon-FPGA chip packages soon, she said. They’ll be socket-compatible with standard Xeons, meaning customers can use them in standard servers.

She pointed to two trends that are driving the need for custom chips: the rise of large-scale cloud applications running across huge pools of servers, and the move to a more flexible, software-defined infrastructure.

Applications are changing faster than new chips can be designed and brought to market, Bryant said. “This is a great way for the silicon to keep up with the pace of software innovation,” she said.

Originally Published on
Author: James Niccolai

Sharks Cove mini PC from Microsoft, competitor for Raspberry Pi


Though it takes Microsoft a while to jump in on an emerging trend, the company does tend to throw its hat into the ring eventually. Despite releasing satisfactory hardware, the company was late to both the modern-day smartphone and tablet scene, hindering its success. This time around — likely thanks to the success of the Raspberry Pi — Microsoft is targeting the mini PC and development board market with the release of Sharks Cove.

Read More..

3D printer from the Arduino firm


Arduino is best known for its microcontroller of the same name and we’ve covered countless projects featuring the little electronic wonder over the years. With the 3D printing industry making massive inroads in the do-it-yourself market, it’s not that surprising that the company behind one of the ultimate bits of hobbyist gear is releasing its own 3D printer.

Read More..

Arduino (for electronics enthusiasts and hobbyists) – Just What You Want to Play With!

For electronics enthusiasts and hobbyists one of the most interesting microcontrollers to fool around with is the Arduino. This open source hardware movement began in 2005 and has turned the heads of many DIYers. Even so, many would-be electronics enthusiasts are frightened by the idea of starting their own project, especially if they don’t have enough tech know-how. That’s what makes Arduino the perfect microcontroller to work with; it’s inexpensive and there is a lot of documentation available. You can create something great from an existing electronic device.


You don’t have to start from scratch; with Arduino there are a lot of existing ideas, training courses, classes and more. You can make things like electric meters, guitar amplifiers, gadgets that tell you when you need to give your plants water, and more. Currently there are at least 100,000 users of the Arduino platform.

What is Arduino?

This is an 8-bit microcontroller that gives you a lot of potential to use your technical and creative skills. This small microcontroller helps you bring your concept to life.

Where Do I Use It?

There are all types of starter projects where Arduino is appropriate. As with many other open source projects, it is much easier to edit and rewrite than it is to start a project from scratch. Arduino allows you to mod an idea, instead of having to start from a blank slate.

Arduino offers hundreds of projects and ideas that are shared by users.  You can take on a project such as making your espresso machine remote controlled. You can turn your bike jacket into a flashing turn signal, or you could even create a wireless electricity monitor that shows you your power usage.


What most hobbyists love about Arduino is the price. At just $30 apiece, it is inexpensive enough to play with, unlike other similar boards. The reason it is so inexpensive is that it is completely open-source and relatively easy to clone.

Durable Too!

Add to the cost the fact that it is virtually indestructible, and, well, you’ve got a match made in heaven. Arduino is very resilient. You can drop it, hit it, and it still remains alive. Besides its incredible durability, the Arduino is also low power. You can run it on a 9V battery.

Bottom Line

It’s easy to use and affordable; what more could you want? Users say it’s simple to connect, you don’t have to be too techy to complete some neat projects, and there are lots of tutorials online. Arduino is a simple system made for creative souls who don’t have a lot of prior electronics knowledge. The boards are cheap, and there are plenty of different training courses available.

Raspberry Pi – The Pie for all Electronic Tinkering Buffs

Ever thought you would hold a computer as small as a credit card in your hand? Now you can: the Raspberry Pi offers computing capability in a circuit board the size of your palm. Primarily, this small computer was built to help electronics and computer systems students learn to program, but the Raspberry Pi is now being put to a wide variety of uses, and people from all over the world are really excited about owning at least one of these little guys.

Raspberry Pi

Raspberry Pi was developed in the UK and has a small processor, a very decent graphics accelerator and 256MB of RAM. While this may not make it the fastest computer in the world, it certainly has enough power to run many of the more common functions of a computer. Many find it very useful as something small to check email and surf the Internet.

Digital Buffs are Finding Many Ways to Use This Handy Device

Besides being useful for basic computing and for learning to program, a number of creative programmers are finding more unique uses for the Raspberry Pi. Some electronics enthusiasts have successfully turned it into a media player for the TV, especially because among its basic functions is the ability to play media files.

Other digital enthusiasts find it makes the perfect alarm clock, especially because it can be programmed with multiple sounds. However, if you are really creative with electronics you could turn the Raspberry Pi into an automatic coffee pot initiator, or you could program it to automatically turn your lights on and off.

The Cost Makes It Even More Interesting

The Raspberry Pi computer is very interesting to hobbyists because of its size, low power consumption and capabilities, but especially because of its cost. The latest model sells for a fraction of the price of a typical desktop; at approximately $25 per unit, even without a case and peripherals, it makes an excellent tinkering computer.

Just keep in mind that these computers don’t come with a case, a keyboard or a power supply, although it can be powered by a cell phone power supply and you can connect a USB keyboard to it.

There aren’t very many places you can get the Raspberry Pi from just yet. After all, it is made by a non-profit organization. Still, you can order it online and get it in just a few days.

Five Reasons Why Arduino Simulator for iPhone is Fun

The Arduino Simulator for iPhone from Schogini Systems could be just the app you would wish to own if you are keen on using an open source single-board microcontroller for a plethora of programming requisites. The Arduino kits that are available these days, and that many people end up buying, are hardly explored as much as they should be. Most aspiring programmers, and those with a penchant for electronics, are likely to use the sample codes and tweak those very codes for different purposes. When the objective is to work on existing codes for a multitude of objectives, an Arduino Simulator for iPhone can be the best option.



Here are five reasons why you should choose to own the Arduino Simulator for iPhone by Schogini Systems.

1. The Arduino Simulator is compatible with iPhone, iPad and iPod. The Full Pack version is compatible with iOS 3.1.3 and later, whereas the upgraded version, Arduino Simulator Full Pack 2X, works on iOS 3.2 and higher. If you have an iPad then you can very well opt for the upgraded version, which brings in more features.

2. Whichever version of the Arduino Simulator for iPhone and iPad you choose to buy, it is a one-off purchase, and all future add-ons or upgrades will be available to you for free. It is a one-time small fee to explore the fascinating world of Arduino.

3. The prices are friendly, especially for students and kids. At $7.99 and $9.99 for the two versions, both one-off charges, the Arduino Simulator from Schogini Systems is definitely affordably priced.

4. The Arduino Simulator offers the exact feel you would have while working on a kit. The app brings together all the popular and even some rare functions and features. Without requiring any hardware or software, the Arduino Simulator allows you to change a program’s delay, pin number and state. Users and developers get a full breadboard with 14 LED pins and the luxury of working on various types of programs. There are several touch-and-drag features for working on the wiring, and you can also change input and output settings as a program demands.
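To give a flavor of the kind of code the simulator lets you experiment with, here is a mocked-up blink routine in Python (a real sketch would be Arduino C; pin 13 and the 500 ms delays are just the conventional defaults, and the hardware call is stubbed out for illustration):

```python
# Python stand-in for the classic blink sketch. The pin number, state, and
# delay below are exactly the things the simulator lets you tweak.
LED_PIN = 13      # pin 13 is the conventional on-board LED
events = []       # recorded (pin, state, hold_ms) calls, since there is no hardware

def digital_write(pin, state, hold_ms):
    # A real sketch would drive the pin and delay; here we just log the call.
    events.append((pin, state, hold_ms))

def blink(times, on_ms=500, off_ms=500):
    for _ in range(times):
        digital_write(LED_PIN, "HIGH", on_ms)
        digital_write(LED_PIN, "LOW", off_ms)

blink(3)  # three on/off cycles
```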

5. With a horde of sensors including ultrasonic distance sensor, PIR sensor, knock sensor and touch sensor among others, the entire virtual experience can get as real as it can be.


Megapixel Camera? Try Gigapixel

 By synchronizing 98 tiny cameras in a single device, electrical engineers from Duke University and the University of Arizona have developed a prototype camera that can create images with unprecedented detail.

The camera’s resolution is five times better than 20/20 human vision over a 120 degree horizontal field.

The new camera has the potential to capture up to 50 gigapixels of data, which is 50,000 megapixels. By comparison, most consumer cameras are capable of taking photographs with sizes ranging from 8 to 40 megapixels. Pixels are individual “dots” of data — the higher the number of pixels, the better resolution of the image.
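Spelled out, the unit arithmetic in that paragraph looks like this:

```python
# 1 gigapixel = 1,000 megapixels
gigapixels = 50
megapixels = gigapixels * 1000          # 50,000 megapixels, as stated above

# Compared against the 8-40 megapixel range of consumer cameras:
vs_consumer_high_end = megapixels / 40  # 1,250x a 40 MP camera
vs_consumer_low_end = megapixels / 8    # 6,250x an 8 MP camera
```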

The researchers believe that within five years, as the electronic components of the cameras become miniaturized and more efficient, the next generation of gigapixel cameras should be available to the general public.

The camera was developed by a team led by David Brady, Michael J. Fitzpatrick Professor of Electrical Engineering at Duke’s Pratt School of Engineering, along with scientists from the University of Arizona, the University of California, San Diego, and Distant Focus Corp.

“Each one of the microcameras captures information from a specific area of the field of view,” Brady said. “A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later.”

“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras,” Brady said. “While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics.”

The software that combines the input from the microcameras was developed by an Arizona team led by Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona.

“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” Gehm said. “This isn’t a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive.”

“Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements,” Gehm said. “A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations. Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
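The tiling Gehm describes can be sketched in a few lines. The geometry below is purely illustrative (the real system's optics are far more involved), but it shows the divide-with-overlap idea:

```python
# Split a 120-degree field among 98 microcameras, widening each tile slightly
# so that neighbors overlap and no seam goes unobserved.
def tile_field(total_deg=120.0, n_cameras=98, overlap_deg=0.2):
    step = total_deg / n_cameras
    return [(i * step, i * step + step + overlap_deg) for i in range(n_cameras)]

tiles = tile_field()
# Each camera works on its own slice of the problem; adjacent slices share
# a 0.2-degree strip, mirroring "we arrange for some overlap."
```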

The prototype camera itself is two-and-half feet square and 20 inches deep. Interestingly, only about three percent of the camera is made of the optical elements, while the rest is made of the electronics and processors needed to assemble all the information gathered. Obviously, the researchers said, this is the area where additional work to miniaturize the electronics and increase their processing ability will make the camera more practical for everyday photographers.

“The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating,” Brady said, “As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow.”

Details of the new camera were published online in the journal Nature. Co-authors of the Nature report with Brady and Gehm include Steve Feller, Daniel Marks, and David Kittle from Duke; Dathon Golish and Estabon Vera from Arizona; and Ron Stack from Distant Focus. The team’s research was supported by the Defense Advanced Research Projects Agency (DARPA).

GPS Shield

Adding GPS to your Arduino has never been easier. Multiple GPS receivers attach easily to the shield, and with the example sketch, you will be able to locate your exact position within a few meters. GPS also gives you amazingly accurate time!

A connector for the popular EM-406 GPS receiver is populated on the board, and footprints for EM-408 and EB-85A connectors are also made available (connectors are not included and can be found below in the related items). There is also a spot for the UP501 GPS module. The regular GPS pins (RX, TX, PPS, etc.) are also broken out to a 10-pin 0.1″ pitch header, and a small prototyping area is also provided.

The DLINE/UART switch switches the GPS module’s input/output between Arduino’s standard TX/RX pins or any digital pins on the Arduino (default setting uses pins 3 and 2 connected to TX and RX, respectively). The DLINE/UART switch must be set to DLINE in order to upload code through the Arduino IDE.
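Once the serial link is up, a typical sketch validates each NMEA sentence from the module before parsing it. The checksum rule is standard NMEA 0183: the two hex digits after the '*' are the XOR of every character between '$' and '*'. Here is a minimal model of that check, written in Python rather than Arduino C for brevity:

```python
# Verify an NMEA 0183 sentence checksum: XOR all payload characters between
# '$' and '*', and compare against the two hex digits after '*'.
def nmea_checksum_ok(sentence):
    body, _, checksum = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)  # running XOR of the payload bytes
    return f"{calc:02X}" == checksum.upper()

# A standard GGA (position fix data) example sentence with a valid checksum:
ok = nmea_checksum_ok(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```

A sketch would apply this line by line to the module's output and only hand sentences that pass on to the position parser.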

The shield also includes the footprint for a 12mm coin cell battery holder to provide battery backup to the optional EB-85A GPS module. An ON/OFF switch is included which controls power to the GPS module. Additionally, the Arduino reset switch is also brought out.

Note: GPS modules are not included with the GPS Shield, and only the EM-406 connector is populated. Headers are also not installed or included; we recommend the 6- and 8-pin stackable headers.




• EM-406 connector populated

• EM-408 and EB-85A connector footprints provided and connected for optional use

• UP501 connector and footprint

• Coin cell battery socket footprint provided and connected for optional battery backup of EB-85A GPS module

• Standard Arduino sized shield

• Prototyping area

• GPS serial and PPS signals broken out to a 0.1″ header for additional device connections

• Arduino reset button

• DLINE/UART switch controls serial communications

• ON/OFF switch controls power to GPS module