Scientists use 20 billion fps camera to film a laser in flight


We’ve all been spoiled by the flashy lasers in science fiction to the point that the real thing can seem a little mundane. A laser, by definition, is tightly focused and all but invisible to the human eye. However, a team of physicists at Heriot-Watt University in Edinburgh, UK has managed to film a laser bouncing off mirrors with a new type of high-speed camera. It looks like something out of Star Wars, but it actually happened in real life.

Filming a laser pulse moving through air is no easy feat, and certainly no conventional camera was up to the task. So the team built their own, as really clever physicists sometimes do. Visualizing a laser as it moves through the air relies on exploiting the scattering of photons. You can get a feel for what’s going on in the video by shining a regular laser through smoke or water vapor. The light from the laser is scattered by the particulates in the air, which is why you can see it. You’re actually seeing photons scattered out of the focused beam.

The system devised by PhD student Genevieve Gariepy and colleagues at Heriot-Watt University is doing the same basic thing, but in air and much, much faster. Air scatters less light than fog or smoke would, so the camera used to capture the video below needed to be extremely sensitive. It’s composed of a 32×32 pixel grid of single-photon avalanche diode (SPAD) detectors. This type of CMOS sensor was chosen because it has very high temporal resolution: each detected photon can be time-stamped precisely enough for the image data to be correlated with the pulse’s actual position in flight.

Courtesy & Original Post – Extreme Tech


Intel Expands Custom Chip Work for Big Cloud Providers


Intel is stepping up its efforts to build custom chips for online giants like eBay and Facebook, giving them a boost in performance and, Intel hopes, another reason to keep buying its microprocessors.

The chipmaker is working with the big online firms to embed programmable chips, called FPGAs, in the same chip package alongside its Xeon server processors. Those FPGAs can be programmed to run unique algorithms that each online firm depends on to run its services.

“For the right application, we believe FPGAs can provide an order-of-magnitude improvement in performance” over standard Xeon chips alone, said Diane Bryant, senior vice president and general manager of Intel’s data center division.

It’s a shift from Intel’s past strategy. Until a few years ago, all its customers got basically the same general-purpose processors. They could select different clock speeds, core counts and other features, but everyone got the same basic chip design.

The rise of online giants like Google, Facebook, Amazon and eBay has changed that. Those companies run a relatively small set of applications, but they do so on a massive scale — a single workload might run across tens of thousands of servers. They also have a lot of buying power.

That alters the economics of the chip business. If a customer is buying thousands of servers for a particular workload, it becomes viable for Intel to optimize a processor for that task. And customers will pay a bit more if it lets them squeeze out a bit more performance or consume less power.

Intel has built custom chips for customers before. Last year it delivered about 15 unique designs, including processors for Facebook and eBay. But they involved relatively minor changes, such as disabling cores and adding extra I/O ports.

Integrating an FPGA (field-programmable gate array) with its Xeon chips is a step further. And in some cases, Intel will hardwire the instructions for the algorithm directly onto the Xeon itself, Bryant said.

It’s a new way for Intel to deliver custom chips, and this year it expects to deliver more than 30 unique designs, Bryant said. She was due to make the announcement at Gigaom’s Structure conference in San Francisco Wednesday.

It’s a smart move by Intel, said analyst Nathan Brookwood of Insight64. It gives its largest customers less incentive to license a competing chip design, such as the ARM architecture, and optimize that to run their algorithms instead, he said. IBM has also opened its Power8 design, which Google has been testing.

There are two ways customers can use the FPGAs, Bryant said. In one case, the online service provider deploys a Xeon package with the FPGA and tests a workload to ensure it delivers the desired benefits. If it does, Intel will burn the instructions onto the Xeon itself and manufacture the part without the FPGA.

The other use case takes advantage of the fact that FPGAs can be reprogrammed in the field. The service provider buys servers with the dual-chip package inside, and programs the FPGA depending on the workload they need to optimize. If their needs change later, they can reprogram the chip again.

Using FPGAs to accelerate workloads isn’t new, but they’re usually discrete components on the motherboard linked to the processor via PCIe. Integrating them into the chip package with Intel’s QPI interconnect reduces latency and allows the FPGA to access the Xeon’s on-chip cache and its main memory, Bryant said.

That doubles the performance gain that can normally be derived from the FPGA, compared to using it as a discrete component, she said.

Bryant said a handful of cloud providers are testing the FPGAs, though she wouldn’t name them. She also wouldn’t say whose FPGAs Intel will use, though it has a manufacturing partnership with Altera, making it a likely candidate.

It plans to begin production of the Xeon-FPGA chip packages soon, she said. They’ll be socket-compatible with standard Xeons, meaning customers can use them in standard servers.

She pointed to two trends that are driving the need for custom chips: the rise of large-scale cloud applications running across huge pools of servers, and the move to a more flexible, software-defined infrastructure.

Applications are changing faster than new chips can be designed and brought to market, Bryant said. “This is a great way for the silicon to keep up with the pace of software innovation,” she said.

Originally Published on
Author: James Niccolai

Flexible nanogenerator harvests muscle movement to power mobile devices


The consumer world increasingly runs on mobile devices, but those devices are still powered by tethering them to a wall outlet or a reserve power pack. What if you could generate power for your mobile devices simply by moving your body, and the power source was almost unnoticeable? A new device developed at the National University of Singapore aims to fulfill both of those requirements.

The flexible nanogenerator resembles a small, stamp-sized patch that attaches to your skin. It uses your skin as a source of static electricity, and converts it to electrical energy — reportedly enough to power a small electronic device, like a wearable. The device, presented at the MEMS 2015 conference last week, can generate 90 volts of open-circuit voltage when tapped by a finger. The researchers presented the patch as a self-powered device that can track the wearer’s motion.


The power is generated by the triboelectric effect, in which certain types of materials become electrically charged through contact and friction with another material; in this case, the patch gains its charge through friction with human skin. When the two materials are pulled apart, they generate a current that can be harvested. An electrode is needed in order to harvest the current, so the research team installed a 50nm-thick gold film to get the job done. The gold film sits below a silicone rubber layer composed of thousands of tiny pillars that help create more surface area for skin contact, which in turn creates more friction.

The triboelectric effect also makes the device easier to build: the skin itself serves as one of the triboelectric layers, so that layer doesn’t need to be built into the device, saving time, money, and materials. It also improves reliability, since one less built-in layer means one less part that can break.

In the researchers’ test, a finger-tap on the device was able to generate enough current to power 12 commercial LEDs.

Aside from the obvious benefit of being able to, in theory, indefinitely power a device so long as you keep moving, this type of generator could remove the need for batteries in certain mobile devices — your smartwatch or fitness tracker could be made even thinner and lighter. Who knows — one day this type of generator could even generate enough energy to power your smartphone, perhaps even removing the battery entirely, which is one of the biggest constraints to smartphone development and design.

Courtesy: Extreme Tech

Researchers make graphene magnetic, clearing the way for faster everything

Graphene has many fantastic properties that could change the course of human civilization. It’s chemically stable, highly conductive, and incredibly strong. One thing it is not, however, is magnetic. This is one of the issues cited by the likes of IBM, which has tried to dampen expectations for a future of super-efficient microprocessors built on graphene. That might not be a problem much longer, though. Scientists from the University of California, Riverside have successfully created graphene that has magnetic properties.

To make this happen, the team started with a sheet of regular (but still awesome) non-magnetic graphene. The graphene was placed on a layer of magnetic yttrium iron garnet, which actually transferred its magnetic properties to the graphene without disrupting its structure or other properties. Most magnetic substances interfere with graphene’s ability to conduct electricity, but yttrium iron garnet is also an electric insulator. That meant it was unlikely to negatively affect the graphene’s electron transport properties.

When the treated graphene was removed and exposed to a magnetic field, the team found that its Hall voltage depended on the magnetic linearity of the yttrium iron garnet. This told the scientists that their graphene had become magnetic in its own right, and that the magnetism had come from exposure to the yttrium iron garnet layer. This property should last indefinitely, as it is not the result of depositing material on the graphene but comes from the graphene itself.


It has been possible in the past to create magnetized graphene, but this always relied on adding magnetic compounds or coatings to the raw graphene, often lead or iron. That rather defeats the purpose: it’s already hard enough to produce large quantities of sufficiently pure graphene without extra complications. Adding extra atoms to graphene’s single-atom structure also disrupts its electrical properties, in the same way exposure to non-insulating magnetic materials can. What’s different this time is that the graphene is still just pure graphene.

The potential applications for magnetic graphene could be anything from information processing to advanced medicine. Researchers see a big future for magnetic graphene in the burgeoning field of spintronics — that’s shorthand for “spin transport electronics.” Spintronics involves processing a signal using magnetic spin rather than electric charge.

Eventual consumer and business applications for spintronics technology could be faster processors and memory with vastly higher capacities. Silicon transistors have been shrinking in size with every new generation of processors from Intel, ARM, and other firms. We’re down to 14nm process technology these days, which is nearing fundamental atomic limits. Intel also has 10nm technology on its roadmap, and 5nm is expected to happen around 2020. This might be the functional end of Moore’s Law, though.

Magnetic graphene utilizing spintronics could be there to take over when traditional silicon transistors simply can’t get any smaller. A graphene transistor could work on the atomic scale, but we’ve still got a few years to get things figured out before magnetic graphene becomes a necessity.

OnePlus will reveal its ‘OxygenOS’ February 12th

As Cyanogen moves along a new path (apparently with help from Microsoft) phone maker OnePlus is working on its own flavor of Android, and now it has a name: OxygenOS. Unfortunately, it doesn’t have much else beyond promises of going back to the drawing board with software that is “open, customizable, and free of bloat and unnecessary features.” That’s good to hear though, and matches up with the alpha build seen earlier this month. In classic OnePlus style, the tease just sets up another tease, and we’re told more information is coming on February 12th, so mark your calendars.


Google X Labs made synthetic skin to test a cancer-detecting bracelet

If someone asks you to name a Google X Labs moonshot, you’d probably say Glass, the self-driving car or maybe even Project Loon. But as you might remember from the WSJD Live conference last year, it also has a flourishing Life Sciences division that employs 100 doctors and scientists, and one of their main projects is a Fitbit-like bracelet that can detect cancer cells. Now, The Atlantic has taken a peek into the division’s headquarters in Mountain View. The publication’s video (below the fold) also explains why the team has to create synthetic human skin mixed with the real thing to cover disembodied arms.

See, those arms serve as testers for the Labs’ cancer-detecting bracelet project. For that to make sense, though, you need to understand how the system works: first you’ll need to take pills packed with nanoparticles that circulate throughout the body looking for cancer cells. If they find any, they’ll bind to those cells, which then literally light up. After that, the cell-particle combos make their way underneath the bracelet, since it has a magnet that attracts the nanoparticles.

Since the team wants to make sure that an illuminated clump of cancer cells is visible through human skin, they created those practice arms. They’re covered in skin exhibiting different properties (thickness, etc.), as well as skin mimicking those of different ethnicities and skin tones. The bracelet probably won’t be sniffing out cancer in the real world anytime soon, though, so watch the interview for now to see what it’s like inside Google X Labs’ Life Sciences department.

Courtesy – Engadget


How to Set the Name of Your Maven Build File as That of Your Project Name

When you use the Maven build tool, by default the package file created will be named based on the artifactId and the version that you give. For example, let’s say you added the following in your project’s pom.xml file (the coordinates below are the example values used throughout this post):

<groupId>com.mycompany</groupId>
<artifactId>myproject</artifactId>
<version>1.0</version>
<packaging>war</packaging>

Now the resultant package file created will be named something like this:

myproject-1.0.war

Now, if you want to prevent this and have the package file named after just the artifactId (without the version), add a finalName element inside your “build” tag:

<build>
  <finalName>${project.artifactId}</finalName>
</build>

Now if you build the project, the resultant package file generated will be as follows:

myproject.war

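As a quick sanity check, Maven’s default naming scheme can be sketched in a few lines of shell. The coordinates here are assumed example values (myproject, 1.0, war), not anything Maven produces itself:

```shell
# Assumed example coordinates from a pom.xml.
ARTIFACT_ID="myproject"
VERSION="1.0"
PACKAGING="war"

# Default name: <artifactId>-<version>.<packaging>
echo "target/${ARTIFACT_ID}-${VERSION}.${PACKAGING}"

# With <finalName>${project.artifactId}</finalName> in the build section,
# the version suffix is dropped: <artifactId>.<packaging>
echo "target/${ARTIFACT_ID}.${PACKAGING}"
```

Running this prints target/myproject-1.0.war for the default scheme and target/myproject.war once finalName is set.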
Different Phases Involved in the Maven Site

Maven is a dependency management and build tool for Java projects. The site command generates the documentation for the current project. The site lifecycle consists of the following phases:

– pre-site : This can be executed separately, or it runs automatically before the site phase.

– site : In this phase, the current project’s documentation is generated.

– post-site : This is executed after the site phase, to finalize the generated documentation.

– site-deploy : This phase deploys the generated documentation.
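One detail worth remembering is that invoking any phase also runs every earlier phase of the same lifecycle first. A tiny shell sketch of that ordering (it only echoes phase names; it is not an actual Maven invocation):

```shell
# Phases of the site lifecycle, in order.
SITE_PHASES="pre-site site post-site site-deploy"

# Running "mvn site" executes every phase up to and including "site":
for p in $SITE_PHASES; do
  echo "$p"
  [ "$p" = "site" ] && break
done
```

So mvn site-deploy would run all four phases in order, while mvn site stops after generating the documentation.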

Different Phases Involved in the Maven Clean

Maven is a dependency management and build tool for Java projects. The clean command cleans up the artifacts and packages created by prior builds. The clean lifecycle consists of the following phases:

– pre-clean : This can be executed separately, or it runs automatically before the clean phase.

– clean : In this phase, the project’s target folder is cleaned by removing all the artifacts and packages created by prior builds.

– post-clean : This is executed after the clean phase.
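In effect, the clean phase removes the project’s target folder. A small shell simulation of that behavior (not an actual mvn clean run; the artifact name is an assumed example):

```shell
# Simulate the output of a prior build.
mkdir -p target/classes
touch target/myproject-1.0.war

# "mvn clean" amounts to removing the target folder entirely:
rm -rf target

[ -d target ] && echo "target still exists" || echo "target removed"
```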

How to Locate the Builds Created in Maven Local Repository

Maven is a dependency management and build tool for Java projects. When you build and install a package using Maven, a copy of the build is placed in the local Maven repository on your system as well. Here is how you can locate it in your local Maven repository.

The location of the build in your local Maven repository depends, by default, on the groupId, artifactId, and version that you gave for that build. For example, let’s say you gave the groupId, artifactId, and version for your project like this:

<groupId>com.mycompany</groupId>
<artifactId>myproject</artifactId>
<version>1.0</version>

Now, if you want to locate this build in your local Maven repository, follow these steps:

1. Go to your local Maven repository folder. By default, it is the .m2 folder in your user home folder.

2. Inside this folder, you will see another folder called repository. Go to that folder.

3. If you have already built your project, there will be a folder named “com”. Go to that folder.

4. Inside, you will see another folder named “mycompany”. Together, “com/mycompany” corresponds to the groupId “com.mycompany”. Go to that folder.

5. Inside, you will see another folder named “myproject”, which is the value you gave as your artifactId. Go to that folder.

6. Inside, you will see another folder named “1.0”, which is the value you gave as your build version. Go to that folder.

7. Inside this folder, you will see the project build file you were looking for. The name of the file will be “myproject-1.0.war” if your packaging is “war” and it will be “myproject-1.0.jar” if your packaging is “jar”.

Please note that this file will be created only if your Maven build for the project is successful.
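The steps above boil down to a simple path convention: dots in the groupId become folder separators, followed by the artifactId, the version, and the file name. A short shell sketch, using the same example coordinates:

```shell
# Example coordinates from the pom.xml above.
GROUP_ID="com.mycompany"
ARTIFACT_ID="myproject"
VERSION="1.0"
PACKAGING="war"

# Dots in the groupId become folder separators in the repository layout.
GROUP_PATH=$(echo "$GROUP_ID" | tr . /)

# Full path to the artifact in the local repository:
echo "$HOME/.m2/repository/$GROUP_PATH/$ARTIFACT_ID/$VERSION/$ARTIFACT_ID-$VERSION.$PACKAGING"
```

For these values, the sketch prints the path ending in repository/com/mycompany/myproject/1.0/myproject-1.0.war, matching steps 1 through 7.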