New microprocessor claims 10x energy improvement


As power consumption has become one of the most important metrics of CPU design, we’ve seen a variety of methods proposed for lowering CPU TDP. Intel makes extensive use of dynamic voltage and frequency scaling, ARM has big.LITTLE, and multiple companies are researching topics like near-threshold voltage (NTV) scaling as well as variable precision for CPU and GPU operations. Now, one small embedded company, Ambiq Micro, is claiming to have made a breakthrough in CPU design by building a chip designed for subthreshold voltage operation — with dramatic results.

Ambiq’s new design strategy could be critical to the long-term evolution of the wearables market, the Internet of Things, and embedded computing designs in general — if the company’s technology approach can scale to address a wide range of products.

Subthreshold and near-threshold voltage operation

The threshold voltage of a transistor is the voltage point required to create a conducting path between the source and drain terminals. In simplest terms, this is the point at which the transistor turns “on.” The voltage threshold is not an absolute, however — operation is possible in both the near-threshold and subthreshold regions.
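Below threshold, the transistor doesn’t switch off abruptly; the drain current falls off exponentially with gate voltage. As a rough guide, the textbook first-order model of subthreshold conduction (standard MOSFET theory, not Ambiq’s own figures) is:

$$ I_D \approx I_0 \, e^{(V_{GS} - V_{th})/(n V_T)}, \qquad V_T = \frac{kT}{q} \approx 26\ \mathrm{mV} \text{ at room temperature} $$

Because the current depends exponentially on the gate voltage, small shifts in threshold voltage or temperature swing the available “on” current by orders of magnitude.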

[Figure: leakage current]

The problem with NTV and subthreshold designs is that they tend to suffer from high amounts of leakage current, as shown above, and are capable of only very low operating frequencies within these voltage ranges. This can actually lead to higher energy consumption overall — by constantly operating in the subthreshold region, the total amount of energy a chip leaks can result in higher power consumption than would result if the SoC just ran at conventional voltages and then power gated cleanly or shut itself off.
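To make the race-to-idle argument concrete, here’s a toy comparison. Every number below is invented purely for illustration; none of them describe Ambiq’s chip or any real SoC:

```python
# Toy "race to idle" vs. subthreshold comparison. All figures are
# hypothetical, chosen only to illustrate the energy tradeoff.
WORK_CYCLES = 1_000_000  # cycles required to finish the task

# Conventional chip: fast clock, higher active power, near-zero sleep power.
fast_hz, fast_active_w, fast_sleep_w = 100e6, 10e-3, 1e-6

# Subthreshold chip: far slower clock; active power includes leakage,
# which is paid for the entire (much longer) runtime.
slow_hz, slow_active_w = 1e6, 0.5e-3

window_s = WORK_CYCLES / slow_hz        # compare over the slow chip's runtime
fast_runtime_s = WORK_CYCLES / fast_hz
fast_energy = (fast_active_w * fast_runtime_s
               + fast_sleep_w * (window_s - fast_runtime_s))
slow_energy = slow_active_w * window_s

print(f"race to idle: {fast_energy * 1e6:.0f} uJ")   # ~101 uJ
print(f"subthreshold: {slow_energy * 1e6:.0f} uJ")   # ~500 uJ
```

With these made-up numbers, sprinting through the work and then sleeping wins by roughly 5x, which is exactly the failure mode a practical subthreshold design has to avoid.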

To understand the problem with subthreshold circuits and performance, imagine you had to distinguish between an alternating field of white and black squares. The human eye can perform this feat relatively easily — even when the white-black swap occurs at high speeds, we can tell the difference between the two.

Ask people to identify the difference between two slightly different shades of gray, however, and they can only do so when the frames are presented for a much longer period of time. The eye will tend to combine the two shades into a single perceived hue — this fact is widely used in Twisted Nematic (TN) monitors to produce simulated 8-bit color using fast 6-bit panels. Instead of displaying a given shade — say, Red 250 — the monitor will alternate between Red 246 and Red 254. Flip between these two shades quickly enough, and the eye naturally “averages” them out to Red 250.
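The arithmetic behind that panel trick is simple enough to show directly (a minimal sketch of temporal dithering, using the shades from the example above):

```python
# Temporal dithering: alternate two shades a 6-bit panel can display so the
# eye averages them into one it can't. Shades from the Red 250 example above.
frames = [246, 254] * 30          # 60 frames alternating Red 246 / Red 254

perceived = sum(frames) / len(frames)
print(perceived)                  # 250.0: the eye "sees" Red 250
```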

This difficulty in distinguishing the “on” state from the “off” state is a major limiting factor on subthreshold operation, and it requires designing circuits to extremely tight tolerances. What Ambiq claims to have developed is a new method of designing circuits, dubbed Sub-threshold Power Optimized Technology (SPOT). The company’s full whitepaper is available.

Ambiq is claiming that its Apollo microcontroller, which is based on the ARM Cortex-M4 design with FPU, can deliver power consumption equivalent to a Cortex-M0+ part without compromising its M4-with-FPU performance. That’s actually more significant than it sounds — the graph below shows the results of a performance comparison and power analysis between the Cortex-M0 and Cortex-M4, as published by EDA360 Insider.

[Figure: Cortex-M0 vs. Cortex-M4 benchmark results]

The green line is the M4, while the yellow line is the Cortex-M0. According to that report: “The ARM Cortex-M4 with its SIMD and floating-point capabilities ran the tests 12 to 174 times faster than the ARM Cortex-M0 core and consumed 2x to 9x more power.”
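Those two ranges tell an energy story, not just a speed story: energy per task is power multiplied by runtime. A quick sanity check on the quoted figures (pairing the extremes is my own simplification; the report doesn’t say which speedups coincide with which power ratios):

```python
# Energy per task = power x runtime, so relative energy = power ratio / speedup.
# Ranges quoted from the EDA360 Insider comparison above.
for speedup, power_ratio in [(12, 9), (174, 2)]:  # pessimistic / optimistic pairing
    energy_ratio = power_ratio / speedup           # M4 energy relative to M0
    print(f"{speedup}x faster at {power_ratio}x power "
          f"-> {energy_ratio:.2f}x the energy per task")
# Output: 0.75x and 0.01x; even the worst-case pairing favors the M4.
```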

In other words, a subthreshold version of the Cortex-M4 with Cortex-M0 power consumption would be an embedded chip that combined the best of both worlds — incredible power efficiency and far more embedded performance than is currently available.

Why subthreshold embedded performance matters

In previous years, an accomplishment like this in the embedded market would have been of limited interest to anyone else. The combined pushes for better wearables and the growing Internet of Things, however, make innovations like subthreshold voltage operation critically necessary. While there’s still a vast gulf between even a high-powered embedded chip like the Cortex-M4 and a Cortex-A7 smartphone-class CPU, the only way to close that gap is to continue to push embedded performance per watt into new frontiers.

Ambiq is arguing that its new design and implementation approaches can double to quadruple power efficiency. Whether this is solely an embedded shift or if it can boost higher-end hardware is still unknown, but approaches like this could revolutionize embedded hardware — and make all-day smartwatch battery life a reality in the long run.

Courtesy – Extreme Tech

AMD’s next-gen CPU leak: 14nm, simultaneous multithreading, and DDR4 support


Ever since it became clear that AMD’s Carrizo would be a mobile update with a focus on energy efficiency as opposed to raw performance, enthusiasts and investors have been hungry for details about the company’s upcoming CPUs in 2016. AMD has been tight-lipped on these projects, though we heard rumors of a combined x86-ARM initiative that was up and running as of early last year — but now, a handful of early rumors have begun to leak about the eventual capabilities of these new cores.

As with all rumors, take these with a substantial grain of salt — but here’s what Sweclockers.com is reporting to date. We’ll rate each rumor as it’s given on the site. According to the post, the new AMD Zen is:

Built on 14nm: For a chip launching in 2016, this seems highly likely. Jumping straight to 14nm won’t eliminate the gap between AMD and Intel, but the company is currently building its FX chips on legacy 32nm SOI, while Kaveri and Carrizo are both 28nm bulk silicon. The double-node jump from 28nm to 14nm should give AMD the same benefits a single-node process transition used to grant. Given the advantages of FinFET technology, we’d be surprised if the company went with anything else. The chips are also expected to be built at GlobalFoundries, which makes sense given AMD’s historic relationship with that company.

Utilize DDR4: Another highly likely rumor. By 2016, DDR4 should be starting to supplant DDR3 as the mainstream memory of choice for desktop systems. AMD might do a hybrid DDR3/DDR4 solution as it did in the past with the DDR2/DDR3 transition, or it might stick solely with the new interface.

Up to 95W: Moderately likely, moderately interesting. This suggests, if nothing else, that AMD wants to continue to compete in the enthusiast segment and possibly retake ground in the server and enterprise space. Nothing has been said about the graphics architecture baked onto the die, but opting for an up-to-95W TDP suggests that the company is giving itself headroom to fight it out with Intel once again.

Opt for simultaneous multithreading as opposed to cluster multithreading: With Bulldozer, AMD opted for an arrangement called cluster multithreading, or CMT, in which a unified front end issues instructions to two separate integer pipelines. The idea behind the Bulldozer design was that AMD would gain the benefits of having two full integer pipelines while saving die space and power consumption compared to a conventional multi-core design.

[Diagram: Hyper-Threading]

Intel, in contrast, has long used simultaneous multithreading (SMT), which it calls Hyper-Threading, in which two instructions from different threads can be executed in the same clock cycle. In theory, AMD’s design could have given it an advantage, since each core contains a full set of execution units, as opposed to SMT, where those resources are shared — but in practice, Bulldozer’s low efficiency crippled its scaling.

The rumor now is that AMD will include an SMT-style design with Zen. It’s entirely possible that the company will do this — Hyper-Threading is one example of SMT, but it’s not the only implementation; IBM, for example, uses SMT extensively in its POWER architectures. The reason I’m not willing to completely sign off on this one is that the same rumor has dogged AMD ever since Intel introduced Hyper-Threading 15 years ago.

The benefits of using SMT are always dependent on the underlying CPU architecture, but Intel has demonstrated that the technology is often good for a 15-20% performance increase in exchange for a minimal die penalty. If AMD can achieve similar results, the net effect will be quite positive.
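To see where gains of that size can come from, consider a toy issue-slot model: a single thread leaves a slot empty whenever it stalls, and a second hardware thread can fill some of those empty slots. This is a deliberately crude sketch with an invented stall rate, not a model of any real core:

```python
import random

# Toy SMT model: one issue slot per cycle. A thread stalls (cache miss,
# branch mispredict) with probability STALL_RATE; with SMT, the slot goes
# unused only when *both* threads stall. The 20% figure is hypothetical.
random.seed(0)
STALL_RATE = 0.20
CYCLES = 100_000

single = sum(random.random() > STALL_RATE for _ in range(CYCLES))
smt = sum((random.random() > STALL_RATE or random.random() > STALL_RATE)
          for _ in range(CYCLES))

print(f"throughput gain from SMT: {smt / single - 1:.1%}")  # ~20%
```

With a hypothetical 20% stall rate, the toy model lands right around Intel’s quoted range, and it also shows why SMT gains depend so heavily on the underlying architecture: the benefit scales with how often a single thread would otherwise leave the hardware idle.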

The final rumor floating around is that the chip won’t actually make an appearance until the latter half of 2016. That, too, is entirely possible. GlobalFoundries’ decision to shift from its own 14nm-XM process to Samsung’s 14nm designs could have impacted both ramp and available capacity, and AMD has pointedly stated that it will transition to new architectures only when it makes financial sense to do so. The company may have opted for a more leisurely transition to 14nm in 2016, with the new architecture debuting only when GF has worked the kinks out of its roadmap.


No information on performance or other chip capabilities is currently available, and the company has said nothing about the integrated GPU or possible use of technologies like HBM. The back half of 2016 would fit AMD’s timeline for possible APU integration of HBM — which means these new chips could be quite formidable if they fire on all cylinders out of the gate. During its conference call last week, AMD mostly dodged rumors about delays to its ARM products, noting that it had continued sampling them in-house and was pleased with the response. Presumably the company’s partners remain under NDA — there are no published independent evaluations of these products to date.

Courtesy – Extreme Tech

Robotic glove teaches your hand the basics of drawing


Whether it was after getting hooked on your first comic, taking a college art class, or even idly doodling in your math book instead of paying attention to your teacher, we’ve all experimented with drawing. Unless you’re one of the people who can actually do it well, you likely gave up and moved on, wondering how other humans can mix lines together to create something both recognizable and aesthetically pleasing. If you’re illustrationally challenged, your salvation may lie not with humanity, but with robotics. A new robotic glove teaches you how to draw by training your muscle memory.

Copenhagen Institute of Interaction Design student Saurabh Datta developed the glove as part of his thesis, initially as a way to learn to play the piano. If his human hands couldn’t learn, maybe some robot hands could teach them — and no, the robot hand doesn’t come from the Robot Devil, despite the startlingly similar way the idea was conceived. Called Teacher, the glove-like robot straps onto your hand and fingers, and guides you through specific gestures over and over. If you do it enough, your hand will learn how to do it through sheer muscle memory.
Obviously, this won’t teach you instinct or how to transfer something from your imagination to paper, but at the very least, the theory is that it’ll teach you basics — how to make aesthetically pleasing lines.

Now, it only took Datta a week to build the rig. It’s not exactly the teacher after which it’s named, but instead represents the way humans and robots can and do interact when working to achieve the same goal. Even with the potential to learn how to draw on offer, Datta found that most participants didn’t like it when the glove controlled the majority of the movement — they’d fight against the haptic feedback and constantly readjust their hands within the contraption to find a more comfortable position. To fix the comfort issue, Datta recorded the fidgets made by the testers, and then adjusted the machine’s force feedback to account for them. In turn, this also helped the machine learn about the way humans naturally move.

Datta’s machines won’t suddenly help you create the best DeviantArt page the internet has ever known, but it’s essentially a proof-of-concept for machines doing our learning for us. You can check out the full project over here, including development diagrams and (long) demonstration videos.

How to install Windows 10 in a virtual machine

After last week’s Windows 10 briefing, a brand new build of the Windows 10 Technical Preview was released publicly. Anyone can sign up for the Windows Insider program and get a taste of Windows 10. Of course, pre-release builds should never be used as a primary OS, so today I’ll walk you through how to run the Windows 10 Technical Preview in a virtual machine.

Under normal circumstances I would do this walkthrough with Oracle’s VirtualBox. It’s free, open source, and works on just about any operating system. Sadly, the drivers appear to be broken for the time being. I couldn’t get sound or networking to work at all, and the screen resolution is severely limited. A quick peek at the community forums shows that other people are having the exact same problems, so hold off using VirtualBox for Windows 10 until these major kinks get worked out.

Instead, I’ll be using the free VMware Player application. It works like a charm, but it’s only available for Windows and Linux. VMware does offer premium virtualization solutions for OS X, but that’s a large investment just to test a preview build of Windows. I can’t recommend dropping $70 if this is all you’ll be using it for. With all that in mind, let’s jump in.


1. Download the Windows 10 ISO

First off, head over to the Windows Insider site, and sign up. Once you’ve agreed to the terms of service, proceed to the download page, and pick which disc image you want to download. For the purposes of this walkthrough, I’m using the 32-bit English ISO, but go with whatever works for your set-up.


2. Create a new virtual machine

Now, you need to install VMware Player. Head to the download page, pick which platform you want, and complete the installation.

Once the application is installed, launch it, and navigate to Player > File > New Virtual Machine to get this party started.

3. Find your Windows 10 ISO

Next, you need to tell VMware Player where to find the Windows 10 ISO. Select the second option labeled “Installer disc image file (ISO),” and then navigate to the Windows 10 ISO you downloaded earlier.

4. Choose your save location

Pick out a name for this virtual machine, and then select where you’d like it to be saved.


5. Configure your virtual hard disk

On this screen, you need to choose how big you want your virtual disk to be. 60GB is the default, but you can increase it as needed. Just make sure you have enough free space on your actual hard disk.

By default, VMware Player will split your virtual disk over multiple files, and I recommend leaving it that way unless you have a specific reason to change it.

6. Customize your hardware configuration

Next, click the “Customize Hardware” button before we finish the initial set-up.


7. Allocate RAM

The default here is 1GB, but more would be better. I have 16GB of RAM in my machine, so I decided 4GB was an appropriate allocation for this virtual machine. Follow the guide on the right of the screen, and don’t go above the maximum recommended memory. If you outstrip what’s available, the host will end up paging to the hard disk, slowing everything to a crawl.


8. Configure the CPU

Switch over to the CPU tab, and choose how many cores you want to dedicate to this machine. One is the default, and that’s probably a safe starting point. My machine has four cores, so I usually end up bumping it to two cores for virtual machines, but your mileage may vary.

Now, take a look at the checkbox labeled “Virtualize Intel VT-x/EPT or AMD-V/RVI.” If you’re using the 64-bit version of Windows 10, this is mandatory. Of course, your CPU needs to support this functionality, so use this tool from Microsoft to verify that it will work with your processor.
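These hardware choices also live as plain key/value pairs in the .vmx file VMware Player created alongside your virtual disk, so you can make the same changes in a text editor if you prefer. A sketch of the relevant entries (the key names are VMware’s; the values simply mirror the choices above):

```
memsize = "4096"
numvcpus = "2"
vhv.enable = "TRUE"
```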


9. Begin the installation

Close out of the hardware configuration, and “Finish” the initial set-up. Now, boot up your virtual machine, and install Windows 10 just like you would normally.

10. Install the VMware tools

Once Windows 10 has finally booted up, navigate to Player > Manage > Install VMware Tools. It will mount a virtual DVD, and pop up a notification in the bottom right. Navigate to the disc in Windows Explorer, launch the appropriate executable, and follow the on-screen instructions.

Note: If you don’t already have the VMware tools on your PC, follow this process to download them.

11. Reboot your virtual machine

When it’s finished installing, reboot your virtual machine.


And you’ve virtualized Windows 10!

Finally, your Windows 10 installation is ready to use — even in fullscreen mode. Poke around, download the OS updates, and enjoy the cutting edge of Windows. And when something inevitably breaks, it won’t matter. This is just a virtual machine, so toss it, and start over.

Courtesy – Extreme Tech

New aluminum-air battery could blow past lithium-ion, runs on water


As battery technologies go, the world has a love-hate relationship with lithium-ion. On the one hand, breakthroughs in Li-ion design and construction are responsible for the Tesla Model S, new grid-storage installations, green energy research, and the modern smartphone. On the other hand, lithium-ion’s limitations are the reason most EVs have a range of 40-60 miles, the Model S costs upwards of $80,000, and your smartphone can’t last all day on a single charge. For all its promise and capability, lithium-ion has limited long-term utility — which is why a new announcement from Fuji Pigment is so interesting. The company is claiming that its new aluminum-air batteries can run for up to two weeks and be refilled with normal water.

How an aluminum-air battery works

First, some basics. The problem with battery technology isn’t whether or not we can build better batteries — as the chart below shows, we can build batteries that blow traditional lithium-ion out of the water. Keep in mind that the chart below uses a logarithmic scale, meaning that fuel cell technology has 10 times the energy density of a typical cobalt Li-ion battery.

[Chart: energy densities of various battery technologies]

The various metal-air batteries, including zinc-air, aluminum-air, and lithium-air, have some of the highest energy densities it’s possible to build. The difficulty with aluminum-air construction, in particular, has been rapid degradation of the anode and, in early models of Al-air, the release of hydrogen gas.

Fuji Pigment’s new announcement makes repeated reference to the work of Ryohei Mori, and while the referenced papers aren’t available for free, the abstracts are online. The studies in question are all aimed at enhancing the performance of Al-air batteries while extending their useful lifetimes — typically, Al-air solutions begin to degrade immediately after the first charge cycle. According to Mori’s work, creating a secondary aluminum-air battery adjacent to the primary buffered the accumulation of byproducts that normally prevent the battery from working properly over the long term.

The “rechargeability” of Al-air batteries requires some explanation. Al-air batteries are primary cells, which means they can’t be recharged via conventional means. As the aluminum anode is consumed by contact with oxygen, hydrated aluminum oxide forms as a byproduct. That material can be recycled and used to create a new aluminum anode, which is why the batteries are referred to as rechargeable. Periodically, the aluminum anode will have to be replaced — it’s not clear how often the Fuji Pigment battery would need servicing of this sort.
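The standard overall cell reaction makes the tradeoff clear (this is textbook aluminum-air chemistry, not anything specific to Fuji Pigment’s design):

$$ 4\,\mathrm{Al} + 3\,\mathrm{O_2} + 6\,\mathrm{H_2O} \;\rightarrow\; 4\,\mathrm{Al(OH)_3} $$

The aluminum anode is effectively the fuel: delivering charge consumes the metal and produces the aluminum hydroxide byproduct described above, with water and atmospheric oxygen as the other inputs. That is why topping the battery up with water works, but the anode itself must eventually be swapped out or recycled.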

Could Al-air be the next big thing?

New battery technologies and announcements are a dime a dozen, but there’s reason to think that a workable Al-air technology could deploy within the next 2-5 years. Multiple manufacturers are working on commercializing designs (Alcoa partnered with Phinergy in 2013 with plans for a 2017 debut), and aluminum is abundant and relatively cheap. Al-air batteries have actually been used in specialized military applications for years, which is important — it means there’s some pre-existing expertise and known characteristics that can be leveraged to create additional capacity.

That said, there are questions, too. The hydrated aluminum oxide solution produced during the battery’s normal operation would need to be recycled in some fashion, and it’s not clear that fresh water is as effective an aqueous solution as saltwater (meaning there might be a specific need for one particular kind of solution). The final price is also unknown, though previous estimates had put the cost of an Al-air system at roughly $1.1 per kg of aluminum anode. This was not given in precise terms relative to the cost of gasoline (and the weight of the aluminum anode in these batteries is unknown), but the team that performed that analysis noted that proper recycling would put Al-air in the same cost range as conventional internal combustion engines.

Fuji Pigment has stated that it intends to commercialize this technology as early as this year, which means we could see test demonstrations and proof of concepts by 2016. Whether auto manufacturers will jump for the technology remains to be seen — car companies tend to be conservative and Tesla has already thrown its weight behind the further use of lithium-ion technology.

Courtesy – Extreme Tech

Scientists use 20 billion fps camera to film a laser in flight


We’ve all been spoiled by the flashy lasers in science fiction to the point that the real thing can seem a little mundane. A laser, by definition, is tightly focused and all but invisible to the human eye. However, a team of physicists at Heriot-Watt University in Edinburgh, UK, has managed to film a laser bouncing off mirrors with a new type of high-speed camera. It looks like something out of Star Wars, but it actually happened in real life.

Filming a laser pulse moving through air is no easy feat, and certainly no conventional camera was up to the task. So the team built their own, as really clever physicists sometimes do. Visualizing a laser as it moves through the air relies on exploiting the scattering of photons. You can get a feel for what’s going on in the video by shining a regular laser through smoke or water vapor. The light from the laser is scattered by the particulates in the air, which is why you can see it. You’re actually just seeing the photons split off from the focused beam.

The system devised by PhD student Genevieve Gariepy and colleagues at Heriot-Watt University is doing the same basic thing, but in air and much, much faster. Air scatters less light than fog or smoke would, so the camera used to capture the video below needed to be extremely sensitive. It’s composed of a 32×32 pixel grid of single-photon avalanche diode (SPAD) detectors. This type of CMOS sensor was chosen because it has high temporal resolution, which means the image data it acquires can be accurately correlated in time with the laser pulses being filmed.
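That temporal resolution is the whole trick: each pixel time-stamps individual photon arrivals over many repeated laser pulses, and the pulse’s position emerges statistically from the resulting histogram. A toy sketch of that accumulation follows; the numbers are invented for illustration, and this is not the Heriot-Watt team’s actual pipeline:

```python
import random

# Toy model of one SPAD pixel histogramming photon arrival times over many
# repeated laser pulses. Scattered photons cluster around the true
# time-of-flight; dark counts land uniformly at random. All values invented.
random.seed(0)
TRUE_TOF_PS, JITTER_PS = 3200, 100     # "true" arrival time and spread
WINDOW_PS, BIN_PS = 10_000, 50         # measurement window and bin width

hist = [0] * (WINDOW_PS // BIN_PS)
for _ in range(50_000):                # one shot per laser pulse
    r = random.random()
    if r < 0.02:                       # scattered photon from the pulse
        t = random.gauss(TRUE_TOF_PS, JITTER_PS)
    elif r < 0.04:                     # spurious dark count
        t = random.uniform(0, WINDOW_PS)
    else:
        continue                       # most pulses: nothing detected here
    if 0 <= t < WINDOW_PS:
        hist[int(t // BIN_PS)] += 1

peak = hist.index(max(hist))
print(f"estimated arrival time: ~{peak * BIN_PS + BIN_PS // 2} ps")  # ~3200
```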

Courtesy & Original Post – Extreme Tech

[Video: laser pulse filmed in flight]

Intel Expands Custom Chip Work for Big Cloud Providers


Intel is stepping up its efforts to build custom chips for online giants like eBay and Facebook, giving them a boost in performance and, Intel hopes, another reason to keep buying its microprocessors.

The chipmaker is working with the big online firms to embed programmable chips, called FPGAs, in the same chip package alongside its Xeon server processors. Those FPGAs can be programmed to run unique algorithms that each online firm depends on to run its services.

“For the right application, we believe FPGAs can provide an order-of-magnitude improvement in performance” over standard Xeon chips alone, said Diane Bryant, senior vice president and general manager of Intel’s data center division.

It’s a shift from Intel’s strategy in the past. Until a few years ago, all its customers got basically the same general purpose processors. They could select different clock speeds, core counts and other features, but everyone got the same basic chip design.

The rise of online giants like Google, Facebook, Amazon and eBay has changed that. Those companies run a relatively small set of applications, but they do so on a massive scale — a single workload might run across tens of thousands of servers. They also have a lot of buying power.

That alters the economics of the chip business. If a customer is buying thousands of servers for a particular workload, it becomes viable for Intel to optimize a processor for that task. And customers will pay a bit more if it lets them squeeze out a bit more performance or consume less power.

Intel has built custom chips for customers before. Last year it delivered about 15 unique designs, including processors for Facebook and eBay. But they involved relatively minor changes, such as disabling cores and adding extra I/O ports.

Integrating an FPGA (field-programmable gate array) with its Xeon chips is a step further. And in some cases, Intel will hardwire the instructions for the algorithm directly onto the Xeon itself, Bryant said.

It’s a new way for Intel to deliver custom chips, and this year it expects to deliver more than 30 unique designs, Bryant said. She was due to make the announcement at Gigaom’s Structure conference in San Francisco Wednesday.

It’s a smart move by Intel, said analyst Nathan Brookwood of Insight64. It gives its largest customers less incentive to license a competing chip design, such as the ARM architecture, and optimize that to run their algorithms instead, he said. IBM has also opened its Power8 design, which Google has been testing.

There are two ways customers can use the FPGAs, Bryant said. In one case, the online service provider deploys a Xeon package with the FPGA and tests a workload to ensure it delivers the desired benefits. If it does, Intel will burn the instructions onto the Xeon itself and manufacture the part without the FPGA.

The other use case takes advantage of the fact that FPGAs can be reprogrammed in the field. The service provider buys servers with the dual-chip package inside, and programs the FPGA depending on the workload they need to optimize. If their needs change later, they can reprogram the chip again.

Using FPGAs to accelerate workloads isn’t new, but they’re usually discrete components on the motherboard linked to the processor via PCIe. Integrating them into the chip package with Intel’s QPI interconnect reduces latency and allows the FPGA to access the Xeon’s on-chip cache and its main memory, Bryant said.

That doubles the performance gain that can normally be derived from the FPGA, compared to using it as a discrete component, she said.

Bryant said a handful of cloud providers are testing the FPGAs, though she wouldn’t name them. She also wouldn’t say whose FPGAs Intel will use, though it has a manufacturing partnership with Altera, making it a likely candidate.

It plans to begin production of the Xeon-FPGA chip packages soon, she said. They’ll be socket-compatible with standard Xeons, meaning customers can use them in standard servers.

She pointed to two trends that are driving the need for custom chips: the rise of large-scale cloud applications running across huge pools of servers, and the move to a more flexible, software-defined infrastructure.

Applications are changing faster than new chips can be designed and brought to market, Bryant said. “This is a great way for the silicon to keep up with the pace of software innovation,” she said.

Originally Published on infoworld.com
Author: James Niccolai

Flexible nanogenerator harvests muscle movement to power mobile devices


The consumer world is becoming powered by mobile devices, but those devices are still powered by being tethered to a wall or a reserve power pack. What if you could generate power for your mobile devices simply by moving your body, and the power source was almost unnoticeable? A new device developed at the National University of Singapore aims to fulfill both of those requirements.

The flexible nanogenerator resembles a small, stamp-sized patch that attaches to your skin. It uses your skin as a source of static electricity, and converts it to electrical energy — reportedly enough to power a small electronic device, like a wearable. The device, presented at the MEMS 2015 conference last week, can generate 90 volts of open-circuit voltage when tapped by a finger. The researchers presented the patch as a self-powered device that can track the wearer’s motion.


The power comes from the triboelectric effect, in which certain materials become electrically charged through contact and friction with another material — in this case, the patch gains its charge through friction with human skin. When the two materials are pulled apart, they generate a current that can be harvested. An electrode is needed in order to harvest the current, so the research team installed a 50nm-thick gold film to get the job done. The gold film sits below a silicone rubber layer composed of thousands of tiny pillars that help create more surface area for skin contact, which in turn creates more friction.
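A common way to reason about contact-separation harvesters like this is as a variable capacitor: the triboelectric contact traps a fixed charge, and pulling the layers apart lowers the capacitance, which drives the voltage up. This is the generic textbook model, not the Singapore team’s published analysis:

$$ V = \frac{Q}{C(t)}, \qquad E = \frac{Q^2}{2\,C(t)}, \qquad C(t) = \frac{\varepsilon A}{d(t)} $$

As the separation d(t) grows, C(t) falls, and the same trapped charge Q appears at a progressively higher voltage. That is how a finger tap can briefly push the open-circuit output to something like the 90 volts reported here from a modest static charge.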

Thanks to the triboelectric effect, creating the device is easier as well — the skin is one of the triboelectric layers that helps produce the effect, so that layer doesn’t need to be built into the device itself, saving time, money, and materials. It also removes something that can go wrong with the device — having one less layer built in means that’s one less part that can break.

In the researchers’ test, a finger-tap on the device was able to generate enough current to power 12 commercial LEDs.

Aside from the obvious benefit of being able to, in theory, indefinitely power a device so long as you keep moving, this type of generator could remove the need for batteries in certain mobile devices — your smartwatch or fitness tracker could be made even thinner and lighter. Who knows — one day this type of generator could even generate enough energy to power your smartphone, perhaps even removing the battery entirely, which is one of the biggest constraints to smartphone development and design.

Courtesy: Extreme Tech

Researchers make graphene magnetic, clearing the way for faster everything

Graphene has many fantastic properties that could change the course of human civilization. It’s chemically stable, highly conductive, and incredibly strong. One thing it is not, however, is magnetic. This is one of the issues cited by the likes of IBM, which has tried to dampen expectations for a future of super-efficient microprocessors built on graphene. That might not be a problem much longer, though. Scientists from the University of California, Riverside have successfully created graphene that has magnetic properties.

To make this happen, the team started with a sheet of regular (but still awesome) non-magnetic graphene. The graphene was placed on a layer of magnetic yttrium iron garnet, which actually transferred its magnetic properties to the graphene without disrupting its structure or other properties. Most magnetic substances interfere with graphene’s ability to conduct electricity, but yttrium iron garnet is also an electric insulator. That meant it was unlikely to negatively affect the graphene’s electron transport properties.

When removed and exposed to a magnetic field, the team found their treated graphene’s Hall voltage depended on the magnetic linearity of the yttrium iron garnet. This told the scientists that their graphene was magnetic all on its own and that magnetism had come from exposure to the yttrium iron garnet layer. This property should last indefinitely as it is not the result of depositing material on the graphene, but comes from the graphene itself.


It has been possible in the past to create magnetized graphene, but doing so always relied on adding magnetic compounds or coatings — often lead or iron — to the raw graphene. This rather defeats the purpose: it’s already hard enough to produce large quantities of sufficiently pure graphene without adding these complications, and the extra atoms bolted onto graphene’s single-atom structure also screw up its electrical properties in the same way exposure to non-insulating magnetic materials can. What’s different this time is that the graphene is still just pure graphene.

The potential applications for magnetic graphene could be anything from information processing to advanced medicine. Researchers see a big future for magnetic graphene in the burgeoning field of spintronics — that’s shorthand for “spin transport electronics.” Spintronics involves processing a signal using magnetic spin rather than electric charge.

Eventual consumer and business applications for spintronics technology could be faster processors and memory with vastly higher capacities. Silicon transistors have been shrinking in size with every new generation of processors from Intel, ARM, and other firms. We’re down to 14nm process technology these days, which is nearing fundamental atomic limits. Intel also has 10nm technology on its roadmap, and 5nm is expected to happen around 2020. This might be the functional end of Moore’s Law, though.

Magnetic graphene utilizing spintronics could be there to take over when traditional silicon transistors simply can’t get any smaller. A graphene transistor could work on the atomic scale, but we’ve still got a few years to get things figured out before magnetic graphene becomes a necessity.

OnePlus will reveal its ‘OxygenOS’ February 12th

As Cyanogen moves along a new path (apparently with help from Microsoft), phone maker OnePlus is working on its own flavor of Android, and now it has a name: OxygenOS. Unfortunately, it doesn’t have much else beyond promises of going back to the drawing board with software that is “open, customizable, and free of bloat and unnecessary features.” That’s good to hear though, and matches up with the alpha build seen earlier this month. In classic OnePlus style, the tease just sets up another tease, and we’re told more information is coming on February 12th, so mark your calendars.
