Upgrading WooCommerce

Automatic Upgrade
After you back up your site, go to Plugins > Installed Plugins within WordPress to view a list of plugins you’ve previously installed. If an upgrade to WooCommerce is available, you will see a notice beneath the plugin listing that states the point release now available, along with links to view the details of that version and to upgrade automatically.
To upgrade, simply click the ‘Upgrade Automatically’ link. You may be asked for your web server’s FTP details; if so, fill them in to complete the upgrade.
Manual Upgrade
To manually upgrade WooCommerce: first, back up your site, then download the latest version of the plugin from our WordPress plugin page and upload it to the wp-content/plugins directory on your web server, overwriting the old files.

Installing and Uninstalling WooCommerce in WordPress

Automatic installation
Automatic installation is the easiest option, as WordPress handles the file transfers itself and you don’t even need to leave your web browser. To do an automatic install of WooCommerce, log in to your WordPress admin panel and go to: Plugins > Add New.
In the search field, type “WooCommerce” and click Search Plugins. Once you’ve found the plugin, you can view details about it such as the point release, rating, and description. Most importantly, of course, you can install it by simply clicking Install Now. After clicking that link you will be asked if you’re sure you want to install the plugin. Click “yes” and WordPress will automatically complete the installation.
Manual installation
1. Download the plugin to your computer.
2. Unzip the file.
3. Using an FTP program, or your hosting control panel, upload the unzipped plugin folder to your WordPress installation’s wp-content/plugins directory.
4. Activate the plugin from the Plugins menu within the WordPress admin.
After you’ve installed and activated the plugin, be sure to select ‘Install WooCommerce Pages’ to get started.

Uninstall WooCommerce
If you want to uninstall WooCommerce, there are a couple of things to understand.
If you deactivate and delete the plugin from the WordPress admin, you will delete the WooCommerce settings and database tables, and trash the pages created when the plugin was first installed.
If you need to remove ALL WooCommerce data, including products, order data, and so on, head into WooCommerce > System Status > Tools and enable the Remove post types on uninstall option. Then, when you deactivate and delete the plugin from the WordPress plugin admin, all WooCommerce data will be deleted.

How to create dynamic table rows using AngularJS

attribute.html

[attribute.html table: columns Attribute, Value, Language, Position, and Add Option; each row ends with a Remove link]
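Only the table’s column headers survived formatting, so here is a minimal sketch of what attribute.html likely contained, assuming a table bound to the controller below. The input fields and bindings are illustrative guesses, not the original markup:

<div ng-app="app" ng-controller="attributeCtrl">
	<table>
		<tr>
			<th>Attribute</th>
			<th>Value</th>
			<th>Language</th>
			<th>Position</th>
			<th><a href="" ng-click="addValueRow()">Add Option</a></th>
		</tr>
		<!-- One row per entry in valueCount; $index identifies the row. -->
		<tr ng-repeat="row in valueCount track by $index">
			<td><input ng-model="aValue.name"></td>
			<td><input ng-model="aValue.values[$index]"></td>
			<td><input></td>
			<td>{{ $index + 1 }}</td>
			<td><a href="" ng-click="removeFromList($index)">Remove</a></td>
		</tr>
	</table>
</div>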

controller.js

// Register the module (the module name is assumed; the original
// declaration was not shown in the post).
var app = angular.module('app', []);

app.controller('attributeCtrl', function($scope) {
	$scope.title = "New Attribute";
	$scope.detTitle = "Add attribute details";
	$scope.addValueSelected = true;
	$scope.attributeList = [];
	$scope.attributeValueList = [];
	$scope.valueEntered = true;
	$scope.attributeGroup = ['Bangle', 'Chain', 'Ring'];
	// Each entry in valueCount backs one table row; start with one row.
	$scope.valueCount = ['juhi'];
	$scope.counter = 1;
	$scope.aValue = { name: "", group: "", values: [] };

	// Append a new row to the table.
	$scope.addValueRow = function() {
		$scope.valueCount.push('juhi ' + $scope.counter);
		$scope.counter++;
	};

	// Remove the row at the given index. The original code called
	// pop() with an argument, but pop() ignores arguments and always
	// removes the last element; splice() removes the selected row.
	$scope.removeFromList = function(index) {
		$scope.valueCount.splice(index, 1);
		$scope.aValue.values.splice(index, 1);
	};
});

New microprocessor claims 10x energy improvement


As power consumption has become one of the most important metrics of CPU design, we’ve seen a variety of methods proposed for lowering CPU TDP. Intel makes extensive use of dynamic voltage and frequency scaling, ARM has big.Little, and multiple companies are researching topics like near threshold voltage (NTV) scaling as well as variable precision for CPU and GPU operations. Now, one small embedded company, Ambiq Micro, is claiming to have made a breakthrough in CPU design by building a chip designed for subthreshold voltage operation — with dramatic results.

Ambiq’s new design strategy could be critical to the long-term evolution of the wearables market, the Internet of Things, and embedded computing designs in general — if the company’s technology approach can scale to address a wide range of products.

Subthreshold and near-threshold voltage operation

The threshold voltage of a transistor is the voltage point required to create a conducting path between the source and drain terminals. In simplest terms, this is the point at which the transistor turns “on.” The voltage threshold is not an absolute, however — operation is possible in both the near-threshold and subthreshold regions.
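For readers who want the math: in the subthreshold region, drain current no longer scales quadratically with gate voltage but falls off exponentially. A standard first-order model (a textbook approximation, not Ambiq’s equation) is

$$ I_D \approx I_0 \, e^{\frac{V_{GS} - V_{th}}{n V_T}} \left( 1 - e^{-\frac{V_{DS}}{V_T}} \right) $$

where n is the subthreshold slope factor and V_T = kT/q is the thermal voltage (about 26 mV at room temperature). Because the dependence on V_GS is exponential, tiny variations in threshold voltage or temperature produce large swings in current, which is part of why subthreshold circuits are so hard to design.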

[Figure: leakage current]

The problem with NTV and subthreshold designs is that they tend to suffer from high amounts of leakage current, as shown above, and are capable of only very low operating frequencies within these voltage ranges. This can actually lead to higher energy consumption overall — by constantly operating in the subthreshold region, the total amount of energy a chip leaks can exceed what the SoC would consume if it simply ran at conventional voltages and then power gated cleanly or shut itself off.
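A rough energy model makes that trade-off concrete. In a standard CMOS approximation (generic textbook terms, not Ambiq’s figures), the energy to finish a task of N operations is

$$ E_{\text{task}} \approx \underbrace{\alpha C V_{dd}^{2} N}_{\text{switching}} + \underbrace{P_{\text{leak}} \, t}_{\text{leakage}} $$

Lowering the supply voltage V_dd shrinks the switching term quadratically, but it also forces a lower clock speed, so the runtime t grows and the leakage term grows with it. Below some voltage, the growing leakage energy overwhelms the shrinking switching energy, and "slower but lower-voltage" stops being a win.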

To understand the problem with subthreshold circuits and performance, imagine you had to distinguish between an alternating field of white and black squares. The human eye can perform this feat relatively easily — even when the white-black swap occurs at high speeds, we can tell the difference between the two.

Ask people to identify the difference between two slightly different shades of gray, however, and they can only do so when the frames are presented for a much longer period of time. The eye will tend to combine the two shades into a single perceived hue — this fact is widely used in Twisted Nematic (TN) monitors to produce simulated 8-bit color using fast 6-bit panels. Instead of displaying a given shade — say, Red 250 — the monitor will alternate between Red 246 and Red 254. Flip between these two shades quickly enough, and the eye naturally “averages” them out to Red 250.
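The averaging trick is easy to demonstrate in code. The sketch below (the function name and frame count are invented for illustration) simulates temporal dithering: alternate two displayable shades, and the long-run average converges on the target shade the panel cannot show directly.

// Simulate temporal dithering: alternate two shades a 6-bit panel can
// display to approximate one it cannot (e.g. Red 250).
function ditheredAverage(shadeA, shadeB, frames) {
	var sum = 0;
	for (var i = 0; i < frames; i++) {
		// Even frames show shadeA, odd frames show shadeB.
		sum += (i % 2 === 0) ? shadeA : shadeB;
	}
	return sum / frames; // the shade the eye effectively perceives
}

console.log(ditheredAverage(246, 254, 1000)); // prints 250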

This difficulty in distinguishing the “on” state from the “off” state is a major limiting factor on subthreshold operation and requires designing circuits to extremely tight tolerances. What Ambiq claims to have developed is a new method of designing circuits, dubbed Sub-threshold Power Optimized Technology (SPOT). The company’s full whitepaper is available.

Ambiq is claiming that its Apollo microcontroller, which is based on the ARM Cortex-M4 design with FPU, can deliver power consumption equivalent to a Cortex-M0+ part without compromising its M4-with-FPU performance. That’s actually more significant than it sounds — the graph below shows the results of a performance comparison and power analysis between the Cortex-M0 and Cortex-M4, as published by EDA360 Insider.

[Figure: Cortex-M0 vs. Cortex-M4 performance and power comparison]

The green line is the M4, while the yellow line is the Cortex-M0. According to that report: “The ARM Cortex-M4 with its SIMD and floating-point capabilities ran the tests 12 to 174 times faster than the ARM Cortex-M0 core and consumed 2x to 9x more power.”

In other words, a subthreshold version of the Cortex-M4 with Cortex-M0 power consumption would be an embedded chip that combined the best of both worlds — incredible power efficiency and far more embedded performance than is currently available.

Why subthreshold embedded performance matters

In previous years, accomplishments like this in the embedded market would have been of limited interest to anyone else. The combined pushes for better wearables and the growing Internet of Things, however, make innovations like subthreshold voltage operation critically necessary. While there’s still a vast gulf between even a high-powered embedded chip like the Cortex-M4 and a smartphone-class CPU like the Cortex-A7, the only way to close that gap is to continue to push embedded performance per watt into new frontiers.

Ambiq is arguing that its new design and implementation approaches can double to quadruple power efficiency. Whether this is solely an embedded shift or if it can boost higher-end hardware is still unknown, but approaches like this could revolutionize embedded hardware — and make all-day smartwatch battery life a reality in the long run.

Courtesy – Extreme Tech

AMD’s next-gen CPU leak: 14nm, simultaneous multithreading, and DDR4 support


Ever since it became clear that AMD’s Carrizo would be a mobile update with a focus on energy efficiency as opposed to raw performance, enthusiasts and investors have been hungry for details about the company’s upcoming CPUs in 2016. AMD has been tight-lipped on these projects, though we heard rumors of a combined x86-ARM initiative that was up and running as of early last year — but now, a handful of early rumors have begun to leak about the eventual capabilities of these new cores.

As with all rumors, take these with a substantial grain of salt — but here’s what Sweclockers.com is reporting to date. We’ll rate each rumor as it’s given on the site. According to the post, here’s what the new AMD Zen looks like:

Built on 14nm: For a chip launching in 2016, this seems highly likely. Jumping straight to 14nm won’t eliminate the gap between AMD and Intel, but the company is currently building its FX chips on legacy 32nm SOI, while its Kaveri and Carrizo are both 28nm bulk silicon. The double-node jump from 28nm to 14nm should give AMD the same benefits a single-node process transition used to grant. Given the advantage of FinFET technology, we’d be surprised if the company went with anything else. The chips are also expected to be built at GlobalFoundries, which makes sense given AMD’s historic relationship with that company.

Uses DDR4: Another highly likely rumor. By 2016, DDR4 should be starting to supplant DDR3 as the mainstream memory of choice for desktop systems. AMD might do a hybrid DDR3/DDR4 solution, as it did in the past with the DDR2/DDR3 transition, or it might stick solely with the new interface.

Up to 95W: Moderately likely, moderately interesting. This suggests, if nothing else, that AMD wants to continue to compete in the enthusiast segment and possibly retake ground in the server and enterprise space. Nothing has been said about the graphics architecture baked onto the die, but opting for an up-to-95W TDP suggests that the company is giving itself headroom to fight it out with Intel once again.

Opts for simultaneous multithreading rather than clustered multithreading: With Bulldozer, AMD used an arrangement called clustered multithreading (CMT), in which a unified front end issues instructions to two separate integer pipelines. The idea behind the Bulldozer design was that AMD would gain the benefits of having two full integer pipelines while saving die space and power consumption compared to building a conventional multi-core design.


Intel, in contrast, has long used simultaneous multithreading (SMT), which it calls Hyper-Threading, in which two instructions from different threads can be executed in the same clock cycle. In theory, AMD’s design could have given it an advantage, since each core contains a full set of execution units, as opposed to SMT, where those resources are shared — but in practice, Bulldozer’s low efficiency crippled its scaling.

The rumor now is that AMD will include an SMT-style design with Zen. It’s entirely possible that the company will do this — Hyper-Threading is one example of SMT, but it’s not the only implementation; IBM, for example, uses SMT extensively in its POWER architectures. The reason I’m not willing to completely sign off on this rumor is that it has dogged AMD literally since Intel introduced Hyper-Threading 15 years ago.

The benefits of using SMT are always dependent on the underlying CPU architecture, but Intel has demonstrated that the technology is often good for a 15-20% performance increase in exchange for a minimal die penalty. If AMD can achieve similar results, the net effect will be quite positive.

The final rumor floating around is that the chip won’t actually make an appearance until the latter half of 2016. That, too, is entirely possible. GlobalFoundries’ decision to shift from its own 14nm-XM process to Samsung’s 14nm designs could have impacted both ramp and available capacity, and AMD has pointedly stated that it will transition to new architectures only when it makes financial sense to do so. The company may have opted for a more leisurely transition to 14nm in 2016, with the new architecture debuting only when GF has worked the kinks out of its roadmap.


No information on performance or other chip capabilities is currently available, and the company has said nothing about the integrated GPU or possible use of technologies like HBM. The back half of 2016 would fit AMD’s timeline for possible APU integration of HBM — which means these new chips could be quite formidable if they fire on all thrusters out of the gate. During its conference call last week, AMD mostly dodged rumors about delays to its ARM products, noting that it had continued sampling them in house and was pleased with the response. Presumably the company’s partners remain under NDA — there are no published independent evaluations of these products to date.

Courtesy – Extreme Tech

Robotic glove teaches your hand the basics of drawing


Whether it was after getting hooked on your first comic, taking a college art class, or even idly doodling on your math book instead of paying attention to your teacher, we’ve all experimented with drawing. Unless you’re one of the people who can actually do it well, you likely gave up and moved on, wondering how other humans can mix lines together to create something both recognizable and aesthetically pleasing. If you’re illustrationally challenged, your salvation may lie not with humanity, but with robotics. A new robotic glove teaches you how to draw by training your muscle memory.

Copenhagen Institute of Interaction Design student Saurabh Datta developed the glove as part of his thesis, initially as a way to learn to play the piano. If his human hands couldn’t learn, maybe some robot hands could teach them — and no, the robot hand doesn’t come from the Robot Devil, despite the startlingly similar way the idea was conceived. Called Teacher, the glove-like robot straps onto your hand and fingers, and guides you through specific gestures over and over. If you do it enough, your hand will learn how to do it through sheer muscle memory.
Obviously, this won’t teach you instinct or how to transfer something from your imagination to paper, but at the very least, the theory is that it’ll teach you basics — how to make aesthetically pleasing lines.

Now, it only took Datta a week to build the rig. It’s not exactly the teacher after which it’s named, but instead represents the way humans and robots can and do interact when working to achieve the same goal. Despite being presented with the potential to learn how to draw, Datta found that most participants didn’t like it when the glove controlled the majority of the movement — they’d fight against the haptic feedback and constantly readjust their hands within the contraption to find a more comfortable position. To fix the comfort issue, Datta recorded the fidgets made by the testers, and then adjusted the machine’s force feedback to account for them. In turn, this also helped the machine learn about the way humans naturally move.

Datta’s machine won’t suddenly help you create the best DeviantArt page the internet has ever known, but it’s essentially a proof of concept for machines doing our learning for us. You can check out the full project over here, including development diagrams and (long) demonstration videos.

How to install Windows 10 in a virtual machine?

After last week’s Windows 10 briefing, a brand new build of the Windows 10 Technical Preview was released publicly. Anyone can sign up for the Windows Insider program and get a taste of Windows 10. Of course, pre-release builds should never be used as a primary OS, so today I’ll walk you through how to run the Windows 10 Technical Preview in a virtual machine.

Under normal circumstances I would do this walkthrough with Oracle’s VirtualBox. It’s free, open source, and works on just about any operating system. Sadly, the drivers appear to be broken for the time being. I couldn’t get sound or networking to work at all, and the screen resolution is severely limited. A quick peek at the community forums shows that other people are having the exact same problems, so hold off on using VirtualBox for Windows 10 until these major kinks get worked out.

Instead, I’ll be using the free VMware Player application. It works like a charm, but it’s only available for Windows and Linux. VMware does offer premium virtualization solutions for OS X, but that’s a large investment just to test a preview build of Windows. I can’t recommend dropping $70 if this is all you’ll be using it for. With all that in mind, let’s jump in.


1. Download the Windows 10 ISO

First off, head over to the Windows Insider site, and sign up. Once you’ve agreed to the terms of service, proceed to the download page, and pick which disc image you want to download. For the purposes of this walkthrough, I’m using the 32-bit English ISO, but go with whatever works for your set-up.


2. Create a new virtual machine

Now, you need to install VMware Player. Head to the download page, pick which platform you want, and complete the installation.

Once the application is installed, launch it, and navigate to Player > File > New Virtual Machine to get this party started.

3. Find your Windows 10 ISO

Next, you need to tell VMware Player where to find the Windows 10 ISO. Select the second option labeled “Installer disc image file (ISO),” and then navigate to the Windows 10 ISO you downloaded earlier.

4. Choose your save location

Pick out a name for this virtual machine, and then select where you’d like it to be saved.


5. Configure your virtual hard disk

On this screen, you need to choose how big you want your virtual disk to be. 60GB is the default, but you can increase it as needed. Just make sure you have enough free space on your actual hard disk.

By default, VMware Player will split your virtual disk over multiple files, and I recommend leaving it that way. Unless you have a specific reason to change it, keep it as is.

6. Customize your hardware configuration

Next, click the “Customize Hardware” button before we finish the initial set-up.


7. Allocate RAM

The default here is 1GB, but more would be better. I have 16GB of RAM in my machine, so I decided 4GB was an appropriate allocation for this virtual machine. Follow the guide on the right of the screen, and don’t go above the maximum recommended memory. If you outstrip what’s available, you’ll end up paging to the hard disk, and everything will slow to a crawl.


8. Configure the CPU

Switch over to the CPU tab, and choose how many cores you want to dedicate to this machine. One is the default, and that’s probably a safe starting point. My machine has four cores, so I usually end up bumping it to two cores for virtual machines, but your mileage may vary.

Now, take a look at the option labeled “Virtualize Intel VT-x/EPT or AMD-V/RVI.” If you’re using the 64-bit version of Windows 10, this is mandatory. Of course, your CPU needs to support this functionality, so use this tool from Microsoft to verify that it will work with your processor.


9. Begin the installation

Close out of the hardware configuration, and “Finish” the initial set-up. Now, boot up your virtual machine, and install Windows 10 just like you would normally.

10. Install the VMware tools

Once Windows 10 has finally booted up, navigate to Player > Manage > Install VMware Tools. It will mount a virtual DVD, and pop up a notification in the bottom right. Navigate to the disc in Windows Explorer, launch the appropriate executable, and follow the on-screen instructions.

Note: If you don’t already have the VMware tools on your PC, follow this process to download them.

11. Reboot your virtual machine

When it’s finished installing, reboot your virtual machine.


And you’ve virtualized Windows 10!

Finally, your Windows 10 installation is ready to use — even in fullscreen mode. Poke around, download the OS updates, and enjoy the cutting edge of Windows. And when something inevitably breaks, it won’t matter. This is just a virtual machine, so toss it, and start over.

Courtesy – Extreme Tech

New aluminum-air battery could blow past lithium-ion, runs on water


As battery technologies go, the world has a love-hate relationship with lithium-ion. On the one hand, breakthroughs in Li-ion designs and construction are responsible for the Tesla Model S, new installations, green energy research, and the modern smartphone. On the other hand, lithium-ion limitations are the reason why most EVs have a range of 40-60 miles, the Model S costs upwards of $80,000, and why your smartphone can’t last all day on a single charge. For all its promise and capability, lithium-ion has limited long-term utility — which is why a new announcement from Fuji Pigment is so interesting. The company is claiming that its new aluminum-air batteries can run for up to two weeks and be refilled with normal water.

How an aluminum-air battery works

First, some basics. The problem with battery technology isn’t whether or not we can build better batteries — as the chart below shows, we can build batteries that blow traditional lithium-ion out of the water. Keep in mind that the chart below uses a logarithmic scale, meaning that fuel cell technology has 10 times the energy density of a typical cobalt Li-ion battery.

[Chart: energy densities of common battery technologies]

The various “metal-air” batteries, including zinc-air, aluminum-air, and lithium-air, have some of the highest energy densities it’s possible to build. The difficulty with aluminum-air construction, in particular, has been rapid degradation of the anode and, in early models of Al-air, the release of hydrogen gas.

Fuji Pigment’s new announcement makes repeated reference to the work of Ryohei Mori, and while the referenced papers aren’t available for free, the abstracts are online. The studies in question are all aimed at enhancing the performance of Al-air batteries while extending their useful lifetimes — typically, Al-air solutions begin to degrade immediately after the first charge cycle. According to Mori’s work, creating a secondary aluminum-air battery adjacent to the primary buffered the accumulation of byproducts that normally prevent the battery from working properly over the long term.

The “rechargeability” of Al-air batteries requires some explanation. Al-air batteries are primary cells, which means they can’t be recharged via conventional means. As the aluminum anode is consumed by contact with oxygen, hydrated aluminum forms as a byproduct. That material can be recycled and used to create a new aluminum anode, which is why the batteries are referred to as rechargeable. Periodically, the aluminum anode will have to be replaced — it’s not clear how often the Fuji Pigment battery would need servicing of this sort.
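For reference, the commonly cited overall reaction for an aluminum-air cell shows where that hydrated byproduct comes from (this is the textbook chemistry, not Fuji Pigment’s specific formulation):

$$ 4\,\mathrm{Al} + 3\,\mathrm{O_2} + 6\,\mathrm{H_2O} \rightarrow 4\,\mathrm{Al(OH)_3} $$

The aluminum anode is steadily converted into aluminum hydroxide, which is why the anode must eventually be swapped out and the hydroxide recycled back into fresh aluminum.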

Could Al-air be the next big thing?

New battery technologies and announcements are a dime a dozen, but there’s reason to think that a workable Al-air technology could deploy within the next 2-5 years. Multiple manufacturers are working on commercializing designs (Alcoa partnered with Phinergy in 2013 with plans for a 2017 debut), and aluminum is abundant and relatively cheap. Al-air batteries have actually been used in specialized military applications for years, which is important — it means there’s some pre-existing expertise and known characteristics that can be leveraged to create additional capacity.

That said, there are questions, too. The hydrated aluminum oxide solution produced during the battery’s normal operation would need to be recycled in some fashion, and it’s not clear that fresh water is as effective an aqueous solution as saltwater (meaning there might be a specific need for one particular kind of solution). The final price is also unknown, though previous estimates had put the cost of an Al-air system at roughly $1.1 per kg of aluminum anode. This was not given in precise terms relative to the cost of gasoline (and the weight of the aluminum anode in these batteries is unknown), but the team that performed that analysis noted that proper recycling would put Al-air in the same cost range as conventional internal combustion engines.

Fuji Pigment has stated that it intends to commercialize this technology as early as this year, which means we could see test demonstrations and proof of concepts by 2016. Whether auto manufacturers will jump for the technology remains to be seen — car companies tend to be conservative and Tesla has already thrown its weight behind the further use of lithium-ion technology.

Courtesy – Extreme Tech