Leap Motion technology

With Leap Motion technology and the Leap Motion device, we can operate a computer in a whole new way: just by waving and lifting your hands, you can control your PC without touching anything.

Features of Leap Motion technology

  • You can browse the web by waving your hand.
  • You can flip through the photos.
  • You can play music by lifting your fingers.
  • You can draw, paint, design with your finger tip.
  • You can even paint with a real pencil and paint brush.
  • You can steer a car.
  • You can slice a fruit and shoot the bad guys also.

How does it work?

The Leap Motion device is tiny, but it senses the movements and gestures of your hands and fingers and uses them to control the computer.

It is easy to connect the device:

  • Just plug it into a USB port of the computer.
  • Place it near the keyboard.
  • Install the Leap Motion software.
  • Start waving your hands.

Company

Leap Motion is a company that develops advanced motion-sensing technology for human-computer interaction.

Leap Motion apps are also available in the Airspace store, and companies like HP are going to implement Leap Motion technology.

The Leap Motion Controller gives you the ability to navigate with real gestures and natural motion, giving you glimpses of a time when PCs will be able to see us and understand what we want without our having to learn a specialized interface or commands. The combination of hand and finger tracking is excellent, and there are plenty of applications where this sort of technology will have a big impact.
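
To make the idea concrete, here is a minimal sketch of how an application could turn tracked hand positions into commands. It assumes a hypothetical get_frame() helper that returns palm coordinates; it is not the actual Leap Motion SDK, whose real API differs.

```python
# Minimal sketch of gesture-driven control, assuming a hypothetical
# hand-tracking API that returns per-frame palm positions in millimetres.
# get_frame() is a stand-in, NOT a real Leap Motion SDK call.
import time

def get_frame():
    """Hypothetical stand-in: return a dict with the palm position, or None."""
    return {"palm_x": 40.0, "palm_y": 180.0}   # fixed demo values

def gesture_to_action(frame, threshold_mm=30.0):
    """Map a sideways palm swipe to a simple 'next/previous page' action."""
    if frame is None:
        return "idle"
    if frame["palm_x"] > threshold_mm:
        return "next_page"        # hand moved to the right of centre
    if frame["palm_x"] < -threshold_mm:
        return "previous_page"    # hand moved to the left of centre
    return "idle"

if __name__ == "__main__":
    for _ in range(3):            # poll a few frames as a demonstration
        print(gesture_to_action(get_frame()))
        time.sleep(0.1)
```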

Space Mouse

In our day-to-day life, we reach for the computer mouse whenever we want to move the cursor or activate something. A normal mouse detects motion in the X-Y plane and moves the cursor accordingly. The Space Mouse is an advanced version of the computer mouse: a 3D object-manipulation device developed by the DLR Institute of Robotics and Mechatronics.

  • This mouse has six degrees of freedom (6 DOF): three linear motions along the X, Y and Z axes, and three rotational motions about those axes (a small sketch of how such input can drive a 3D object follows this list).
  • This can act as both a 2D controller as well as a 3D controller.
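
As a rough illustration of the idea, the sketch below (in Python, using NumPy) applies a 6-DOF input to an object's position and orientation; the values and scaling are illustrative assumptions, not Space Mouse specifications.

```python
# Apply a 6-DOF input (three translations, three rotations) to a 3D object pose.
import numpy as np

def rotation_from_euler(rx, ry, rz):
    """Build a rotation matrix from Euler angles (radians), X-Y-Z order."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_6dof(position, orientation, dx, dy, dz, rx, ry, rz):
    """Update an object's position vector and 3x3 orientation matrix."""
    new_position = position + np.array([dx, dy, dz])
    new_orientation = rotation_from_euler(rx, ry, rz) @ orientation
    return new_position, new_orientation

# Example: nudge an object 1 unit along X and rotate 5 degrees about Z.
pos, rot = apply_6dof(np.zeros(3), np.eye(3), 1.0, 0.0, 0.0, 0.0, 0.0, np.radians(5))
print(pos)
print(rot)
```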


FEATURES:

  • Ease of manipulating 3D objects
  • Fingertip operation
  • Minimum desk space is utilized
  • Natural hand position eliminates fatigue
  • Programmable buttons to customize the user's preferences for motion control

ADVANTAGES:

  • Drawing time is reduced by 20%-30%.
  • Works without an additional power supply.
  • Adapted to a wide range of tasks, including mechanical design, real-time video animation and visual simulation.
  • Allows the user to move a robot system in the most natural way.

HDD – Hard Disk Drive

– It is the conventional disk drive.

– HDD is Hard Disk Drive.

– It has a rotating platter whose movement handles all the operations and functionality of the drive.

– It uses magnetism for data storage.

– It also uses a set of read or write heads to perform read and write operations.

– The speed of the drive depends on the platter spin speed (see the worked example after this list).

– An example for HDD is Seagate Barracuda.
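
As a rough worked example of why the platter spin speed matters, the sketch below models average random-access time as seek time plus half a revolution of rotational latency; the seek time used is a typical illustrative value, not a figure for any specific drive.

```python
# Rough model: average HDD access time = average seek + half a platter revolution.
def avg_access_time_ms(rpm, avg_seek_ms):
    ms_per_revolution = 60_000.0 / rpm          # one full revolution in milliseconds
    rotational_latency = ms_per_revolution / 2  # on average we wait half a turn
    return avg_seek_ms + rotational_latency

for rpm in (5400, 7200, 10_000):
    print(f"{rpm} RPM -> about {avg_access_time_ms(rpm, avg_seek_ms=9.0):.1f} ms per random access")
# An SSD, with no moving parts, answers the same request in well under a millisecond.
```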

SSD – Solid State Drive

– SSD is Solid State Drive.

– It has no moving parts.

– Here information is stored in microchips.

– It is much faster.

– It has NAND based flash memory.

– It has non-volatile type of memory, which means that the data can outlive us!

– It has no mechanical arm to read/write data.

– It uses an embedded processor called a controller (the “brain”).

– This controller decides what the drive should do: store, retrieve, cache or clean up data (a toy sketch of the controller's role follows this list).

– The controller also helps in some additional operations such as error correction, read/write caching, encryption, garbage collection etc.

– An example of an SSD is a SandForce-based SATA 3.0 (6 Gb/s) drive.
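
To illustrate the controller's role, here is a toy sketch of the logical-to-physical mapping idea (a greatly simplified flash translation layer); the class and its behaviour are illustrative assumptions, not real SSD firmware.

```python
# Toy sketch: the controller maps logical block addresses to physical flash
# pages and writes updates to a fresh page instead of overwriting in place.
class TinySSDController:
    def __init__(self, total_pages=8):
        self.mapping = {}                      # logical address -> physical page
        self.free_pages = list(range(total_pages))
        self.flash = {}                        # physical page -> data

    def write(self, logical_addr, data):
        page = self.free_pages.pop(0)          # always write to a fresh page
        old = self.mapping.get(logical_addr)
        if old is not None:
            self.free_pages.append(old)        # old page can be reused later
        self.mapping[logical_addr] = page
        self.flash[page] = data

    def read(self, logical_addr):
        return self.flash[self.mapping[logical_addr]]

ssd = TinySSDController()
ssd.write(5, "hello")
ssd.write(5, "hello again")                    # remapped, old page reclaimed
print(ssd.read(5))
```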

 

Hawkeye Technology

The HAWKEYE is one of the most commonly used technologies in the game of cricket today. It has been put to a variety of uses, such as providing a way to collect interesting statistics, generate very suggestive visual representations of the game play and even helping viewers to better understand the umpiring decisions, especially in the case of LBWs. While the system provides for things which we see every day on television, there is very impressive technology going into it, which many of us are oblivious to.

INTRODUCTION

The game of cricket has attained great commercial importance and popularity over the past few years. As a result, there has been a need to make the game more interesting for the spectators and also to make it as fair as possible. The component of human error in crucial decisions often turns out to be decisive. It is not uncommon to see matches turning from being interesting to being one-sided due to a couple of bad umpiring decisions. There is thus a need to bring in technology to try and minimize the chances of human error in such decision making. Teams across the world are becoming more and more professional in the way they play the game. Teams now have official strategists and technical support staff who help players study their past games and improve. Devising strategies against opponent teams or specific players is also very common in modern-day cricket. All this has become possible due to the advent of technology. Technological developments have been harnessed to collect various data very precisely and use it for various purposes.

The HAWKEYE is one such technology which is considered to be really top notch in cricket. The basic idea is to monitor the trajectory of the cricket ball during the entire duration of play. This data is then processed to produce lifelike visualizations showing the paths which the ball took. Such data has been used for various purposes, popular uses including the LBW decision-making software and colorful wagon wheels showing various statistics. This paper attempts to explain the intricate details of the technology which goes behind the HAWKEYE. We first start off with a general overview of the system and an outline of the challenges that we might face, then move on to the details of the technology and end with various applications where one sees this technology being put to use.

GENERAL OVERVIEW

Cricket is a ball game played within a predetermined area. A system comprising video cameras mounted at specific angles can be used to take pictures. These pictures are then used to locate the position of the ball. The images are then put together and superimposed on a predetermined model to form a complete visualization of the trajectory of the ball. The model includes, in this case, the pitch, the field, the batsmen, fielders etc. For this to be possible, we need to sample images at a very high rate and thus need efficient algorithms which can process data in real time. Such technologies are widely used today in various sports such as tennis and billiards, which also fall in the category of ball games played within a restricted area. Our discussion will mostly cover applications which are specific to the game of cricket; however, in some cases, we will mention how similar techniques are applied in other games. There are various issues which crop up when one tries to design and implement such a system. In the game of cricket, the general issues are:

  • The distance at which the cameras see the pitch and the ball depends on the dimensions of each ground and can vary greatly.
  • Individual images alone do not help much; for the system to be of practical use, it must track the 3D trajectory of the ball with high precision. To get this accuracy, the field of view of each camera should be restricted to a small region, which means more cameras are needed to cover the entire field.
  • Fielders and spectators might obstruct a camera's view of the ball, and the ball might get 'lost' in its flight in one or more of the cameras. The system should be robust enough to handle this, possibly by providing some redundancy.
  • The ball might get confused with other similar objects, for instance with flying birds or the shadow of the ball itself. The image processing techniques used need to take care of these issues. Luckily, there are techniques which are easy to implement and well known to the image processing community that take care of these.
  • To help in judging LBW calls, the system needs to be made aware of the style of the batsman, i.e. whether he is right- or left-handed. This is because the rules of LBW depend on the position of the stumps and are not symmetrical about the middle stump. Thus, the system needs to detect whether a particular ball has pitched outside the leg stump of a batsman or not.
  • Determining the points at which the ball makes contact with the pitch, the batsmen or other objects is very hard. This is because we do not really know these spots beforehand, and the model and the real pictures taken by the cameras need to be merged to give such a view. We will see how the HAWKEYE technology successfully treats each of these issues and provides a robust system to be used in practice. The top-level schematic picture of the system and its various parts is as shown below (each color represents a block of related steps):

The figure above shows precisely the steps that are involved in the computation. The process starts with some calibration of the cameras. This is required to deal with the problem raised in issue 1 above, the non-uniform distance of the cameras from the playing area. After this basic calibration is done and the system is up and running, we can start processing the video input from the cameras. In each of the images obtained, the first aim is to find the ball. Once this is done, a geometric algorithm looks at the multiple 2D images and combines them cleverly to get the coordinates of the ball in 3D space. This process is repeated many times every second (typically at a rate of 100 times per second). Thus, we have the position of the ball in 3D space at many moments in every second. The final step is to process these multiple positions and find a suitable fitting curve which best describes the flight of the ball. As we have sampled the positions of the ball at very short time intervals, the flight of the ball can be determined very accurately.
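
As an illustration of the geometric step that combines 2D images into a 3D position, the sketch below uses the standard linear triangulation (DLT) method with two calibrated cameras; it is a textbook approach shown for intuition, not Hawk-Eye's proprietary algorithm, and the camera matrices in the demo are synthetic.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from its 2D projections in two calibrated cameras.

    P1, P2: 3x4 camera projection matrices (from calibration).
    pt1, pt2: (u, v) pixel coordinates of the ball in each image.
    Builds the homogeneous DLT system and takes its null-space direction via SVD.
    """
    u1, v1 = pt1
    u2, v2 = pt2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]            # convert from homogeneous coordinates

# Demonstration with two synthetic cameras observing the point (1, 2, 10).
f = 500.0
K = np.array([[f, 0, 0], [0, f, 0], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])             # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # camera shifted along X
X_true = np.array([1.0, 2.0, 10.0, 1.0])
proj = lambda P, X: (P @ X)[:2] / (P @ X)[2]
print(triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true)))  # ~ [1. 2. 10.]
```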

THEORETICAL DESCRIPTION

A description of the exact algorithms involved in the entire process will be skipped here. We instead try to give an intuitive description of each step in great detail, so as to give the reader a feel of what goes into the system, without plunging into the gory details.

The cameras

Typically, for a cricket field, 6 cameras are used. As one can see, the 6 cameras in use are positioned at roughly 60 degrees from each other. They are placed high in the stands, so that there is less chance of their view being blocked by the fielders. There are two cameras, one at each end, looking at the wickets directly side-on. These 6 cameras are calibrated according to their distance from the pitch. In order to get good accuracy, one needs to restrict the view of each camera to a smaller region. This means each camera image would show a more prominent picture of the ball, and hence the ball will be located more accurately. However, we also need to keep in mind that the whole field of play has to be covered by just the 6 cameras which are available. This puts some limitation on how restricted the view of a camera can be. Nevertheless, the accuracy obtained by using 6 cameras is acceptable by the standards prevalent today.

Some further setting up is essential for the system to work correctly. The cameras need to be fixed to some frame of reference, which is defined very conveniently in terms of the wickets on the pitch, and the line joining them. This is useful when we want to use an automated program to merge images from different cameras to form one 3D image.

Also, to avoid unnecessary computation and make the system more efficient, the cameras can be operated in active or passive mode. In the passive mode, no imaging is done and hence the system is more or less completely inactive. The cameras can be triggered into active mode either by detecting some motion in the vicinity of the pitch, or manually by some external trigger. In either case, all the cameras are synchronized and go into active mode simultaneously. The cameras are then designed to stay in the active mode for a fixed time before going off into passive mode. This action of going into passive mode can be manually overridden in exceptional cases. The different modes for the cameras are especially effective for a game like cricket as the game involves significant pauses between phases of actual play.

As described in issue 5 in the list of issues, the system needs to know whether the batsman is right- or left-handed. The front-view cameras are used to do this. This information, as previously said, is useful in making LBW decisions and formulating other statistics. For instance, we commonly see the analysis of a bowler's pitching areas done separately for a left- and a right-handed batsman. While this is not a very difficult task to do manually every time the batsman on strike changes, the system does provide a way of automating it.

Once this setting is done, the cameras are ready to take pictures in their field of view and have them sent to a computer which processes them.

Preparation before starting to process:

Additional features might be loaded into the system to enable it to process the data in a more reliable and useful manner. These might include a statistics generator, which is used to produce statistics based on the data collected. These are the statistics which we see on television during and after the match for analysis. Such statistics can also be used by teams and players to study their game and devise strategies against their opponents. Indeed, the raw data about the paths of the ball might be too much for any human to digest, and such statistics turn out to be easier to handle and understand. The statistics generator might also aid in storing data such as the average velocity of the ball. This data is crucial as it can help the ball-detection algorithm predict the rough location of the ball in an image given the position in the previous image. Such considerations are useful to reduce the computations involved in processing the data collected from the video cameras. Once such additional machinery is set up correctly, we are all set to start collecting data and processing it to churn out tangible statistics and visualizations. It might be noted at this stage that there is some more information which might be required to process the data correctly. We will point out such things at later points in the paper, where they fit in more appropriately.

Core Image Processing Job:

This part of the system can be further divided into 3 major parts:

  • Identifying the pixels representing the ball in each image (a rough sketch of this step follows the list).
  • Applying a geometric algorithm on the set of images at each instant.
  • Coming up with the 3D position of the ball in space.
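
A rough sketch of the first step, finding the ball pixels, is shown below; it uses a simple colour threshold on a synthetic frame, whereas a real system would combine calibrated colour models with shape and motion cues.

```python
import numpy as np

def find_ball_pixels(image, target=(200, 30, 30), tol=60):
    """Return the centroid (row, col) of pixels whose colour is close to the
    target ball colour, or None if no such pixels are found.

    image: HxWx3 uint8 RGB array. The target colour and tolerance are
    illustrative values only.
    """
    diff = image.astype(np.int32) - np.array(target)
    mask = (np.abs(diff) < tol).all(axis=2)          # pixels close to the target colour
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Tiny synthetic test: a dark frame with a bright red "ball" blob around (10, 20).
frame = np.zeros((50, 80, 3), dtype=np.uint8)
frame[8:13, 18:23] = (200, 30, 30)
print(find_ball_pixels(frame))   # approximately (10.0, 20.0)
```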

Putting frames at various times together:

Now we have the exact position of the ball in 3D space at a given instant of time. Next, this data, collected at various time instants, needs to be put together into a single picture which shows us the trajectory of the ball. We can split this part of the process into two parts. Again, the reader should understand that these parts are very much related, and we split them here just to make the explanation easier to follow. The two parts to this computation are:

  • Tracking the ball at various instants.
  • Predicting the flight or trajectory of the ball (a small curve-fitting sketch follows below).
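
The sketch below illustrates the prediction step in the simplest possible way: fit a low-order polynomial in time to each coordinate of sampled positions and evaluate it slightly beyond the observed samples. The sample data are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic samples: times in seconds, positions in metres, ~100 samples per second.
t = np.linspace(0.0, 0.3, 31)
x = 1.2 * t                                    # lateral drift
y = 1.5 - 4.905 * t**2                         # falling under gravity
z = 17.0 * t                                   # travel down the pitch

# Fit a quadratic in time to each coordinate (least squares).
coeffs = {name: np.polyfit(t, vals, 2) for name, vals in {"x": x, "y": y, "z": z}.items()}

def predict(t_future):
    """Evaluate the fitted curves at a later time to extend the trajectory."""
    return {name: np.polyval(c, t_future) for name, c in coeffs.items()}

print(predict(0.45))   # predicted position a little beyond the observed samples
```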

APPLICATION OF HAWKEYE

HAWKEYE has had far-reaching consequences in many sports. Primarily in cricket, HAWKEYE is a process that makes the currently judgmental LBW call very predictive. While no technology is flawless and HAWKEYE has its own share of flaws, it is up to 99.9% accurate. This has made the LBW decision a predictive one. More importantly, such technology can be used to evaluate the skills of the umpire as well. The England Cricket Board (ECB) has already set up the HAWKEYE system not only at about 10 cricket venues around the country but also in the training academy to aid umpires as well.

Gathering statistics

While the Hawk-Eye has made its mark and derives its appeal from the ability to predict the flight of the delivery, it is also a very useful tool for collecting statistics. The information associated with each delivery bowled is routinely processed, even when the outcome of the delivery is not in doubt. As a result, the strategy used by a bowler as a function of bowling spell, delivery number in the over, batsman facing the delivery and so on can be gauged. Similarly, wagon-wheels showing the scoring patterns of a batsman around the ground are routine in match-day telecasts. These are so cleverly generated that they have a real-life feel to them. Commentators are also able to move them about to make a finer point about a batsman. However appealing it may seem, a keen cricketing eye will notice that the wagon wheel is less accurate than the other data. This is because the wagon-wheel is generated from data collected outside the predetermined pitch area. The location, depth and trajectory of the ball in flight at an arbitrary point on the ground are more difficult to determine than when it is on the pitch. As a result, some errors manifest. These difficulties are not faced in tennis, where HAWKEYE is used to decide whether the ball was within the court limits or not. In the case of tennis, line calls made by HAWKEYE are completely accurate. Tennis has been quick to adopt this technology, and HAWKEYE arbitrations have been legal since the NASDAQ-100 tennis tournament. Players can challenge line calls, following which HAWKEYE determines whether the ball pitched in or out. A recently concluded US Open quarter-final involving Federer even had a match point decided after a line-call challenge. We now briefly look at the various applications of HAWKEYE which cricket broadcasters regularly use these days.

LBW decisions:

As mentioned previously, the HAWKEYE can accurately capture the trajectory of the ball and also predict the future direction of the ball using mathematical calculations. This is put to use in deciding whether a batsman was OUT LBW on a particular ball. The system determines the exact point at which the ball struck the batsman. Using the trajectory of the ball up to that point, the system predicts the path the ball would have taken had the batsman not been in the way. Thus one can know the lateral position of the ball with respect to the stumps as well as the height of the ball at the point when it reaches the line of the stumps. The figure below gives an example of the trajectory of the ball being predicted. Note that in this picture, the system has removed the batsman from the picture so as to give us a complete view of the path of the ball since it left the bowler's hand. This is exactly what one needs to decide if the ball would have hit the stumps, and if that is the case, the batsman has a chance of being given OUT LBW.

The system is well equipped to handle the various complex clauses which the LBW rule has. For instance, it can check if the ball pitched outside the leg stump of the batsman. If this is the case, the batsman is NOT OUT even if the ball is going on to hit the stumps. Recall that the front-view cameras are used to determine whether a batsman is right- or left-handed; that information is useful here. Another clause states that the batsman should not be given OUT if he is hit outside the line of off stump while attempting to play a shot. Whether the batsman is playing a shot has to be decided manually, and the system is not capable of doing it. However, the point of impact is accurately known, and one can see exactly where the batsman was hit.

The kind of accuracy which HAWKEYE offers is difficult for any human umpire to match. The system also includes a way to do probabilistic analysis and hence bring in the factor of “benefit of doubt” which currently goes to the batsman. The main idea is to define a region in which the human umpire would believe the ball to have been; this region is taken to be a circle centered at the computed position of the ball, with a certain radius. The value of this radius is calculated taking into consideration the distance between the point of impact of the ball with the batsman and the stumps, that is, the distance which the ball is yet to cover. This models quite accurately the uncertainty which umpires feel while making the decision manually. Thus, if the batsman is playing forward, the radius will have a higher value than when the batsman is struck playing back. To keep the “benefit of doubt” with the batsman, the decision goes in favor of the bowler only if a significant portion of the probable region (the circle of radius described above) lies in line with the stumps. The system is thus very robust and seems to be better than human umpires as it stands, and it can only improve. Hence there is heated discussion these days on whether one should completely rely on the HAWKEYE for LBW decisions. We choose not to go into those discussions here.
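
The following sketch illustrates the “benefit of doubt” idea numerically: an uncertainty circle whose radius grows with the distance still to travel, and the fraction of that circle that lies in line with the stumps. The radius scaling and threshold are illustrative assumptions, not Hawk-Eye's actual parameters; the stump dimensions are only approximate standard values.

```python
import numpy as np

rng = np.random.default_rng(0)

def hitting_fraction(centre_x, centre_y, distance_to_stumps_m,
                     stump_half_width=0.114, stump_height=0.71,
                     radius_per_metre=0.01, samples=100_000):
    """Monte Carlo estimate of the fraction of the uncertainty circle that is
    in line with (and below the top of) the stumps. Units are metres."""
    radius = radius_per_metre * distance_to_stumps_m        # more doubt from further away
    angle = rng.uniform(0, 2 * np.pi, samples)
    r = radius * np.sqrt(rng.uniform(0, 1, samples))        # uniform over the disc
    x = centre_x + r * np.cos(angle)
    y = centre_y + r * np.sin(angle)
    hitting = (np.abs(x) <= stump_half_width) & (y >= 0) & (y <= stump_height)
    return hitting.mean()

# Ball predicted to pass near the outside edge of off stump at a height of
# 65 cm, having struck the batsman 3 m from the stumps.
frac = hitting_fraction(centre_x=0.10, centre_y=0.65, distance_to_stumps_m=3.0)
print(f"Fraction of the uncertainty circle in line with the stumps: {frac:.2f}")
```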

Wagon Wheels:

The trajectories which the ball has taken after being hit by the batsman are recorded in the system. This is used to generate a graphic showing 1s, 2s, 3s, 4s and 6s, all in different colors, for a batsman. These details allow the commentators, spectators and players to analyze the scoring areas of the batsman and also judge whether he has played more shots along the turf or in the air. Such information is vital for the fielding captain, who might alter his field placement in subsequent matches to adapt to the hitting pattern of a particular batsman.

Pitch Maps:

As shown above, the Pitch Map graphic uses information about the position where the ball bounced on the pitch. The image clearly shows the pitch being divided into various “zones” which the experts consider in their analysis. It can very easily be seen where the bowler has primarily been pitching the ball. Based on such pitch maps, one can easily see general characteristics of bowlers. For instance, on a particular day a bowler might be taken for a lot of runs; HAWKEYE can show the areas in which the bowler landed the balls, and he might be able to find out that he was too short on most occasions and hence was being taken for runs. Batsmen also use such graphics to study the general tendencies of a bowler and can plan how to play him in subsequent games.

De Spin:

The De Spin graphic helps us understand how the ball has deviated after pitching. The graphic produced shows the predicted path of the ball had it held its line even after pitching. This is particularly interesting to look at in the case of spinners, where one can see both the flight given by the bowler and the spin that he manages to extract from the pitch. Looking at the action and the De Spin graphics for a particular bowler is useful for a batsman to notice any changes in action when the spinner is bowling a “trick” ball, which might be a googly or flipper in the case of a leg spinner, or a “doosra” in the case of an off spinner.

Rail Cam:

The Rail Cam graphic shows a sideways view of the ball as it left the bowler's hand. This is useful to compare the speeds of various deliveries bowled and the bounce the bowler was able to extract from the pitch. As a simulation against time, the slower balls can clearly be seen to reach the line of the stumps much later than the faster balls.

Beehives:

This graphic shows the position of various balls in the plane of the batsman. So, irrespective of whether the batsman played a shot or not, the system places a mark on the plane showing the point at which the ball passed, or would have passed, the batsman. Sometimes this is part of the actual trajectory, while in other cases it is an extrapolated path. To add to the usefulness, the system can also show the balls on which the batsman scored in one colour and the ones which he defended in another. This helps to give a very good idea of the strengths, weaknesses and scoring zones of a batsman. The bowler can easily make out whether he needs to bowl away from the body or into the body of the batsman, whether he should bounce it hard into the deck or pitch it up and invite the drive, and so on.

We have looked at various aspects of the HAWKEYE technology. Initially, we outlined the main problems which one could encounter while trying to implement such a system for a sport like cricket. Then, we looked into the details of each step of the process which finally gives us the wonderful-looking graphics that we see on TV during cricket analysis shows. We got a fair understanding of the algorithms and mathematics which go into the system. With the help of examples, we looked at the applications which the technology finds in modern-day sport, with cricket being our main focus. We got an understanding of how the graphics can be produced using the setup, which was also described in detail. We have thus seen that the HAWKEYE is a great innovation which puts technology to good use in the field of sports. The technology is used widely these days, in sports such as tennis and cricket. The accuracy which can be achieved with the use of the system is making the authorities think seriously about reducing the human error component involved in important decisions. As the system runs in real time, there is no extra time required to see the visualizations and graphics. The system is also a great tool which can be used by players, statisticians, tacticians and coaches to analyze previous games and come up with strategies for subsequent ones.

Sixth Sense Technology

Sixth Sense is a wearable gestural interface that enhances the physical world around us with digital information and lets us use natural hand gestures to interact with that information. It is based on the concepts of augmented reality and implements them well. Sixth Sense technology integrates real-world objects with the digital world. This fabulous technology is a blend of many exquisite technologies; what makes it magnificent is the marvelous integration of all of them into a single portable and economical product. It combines technologies like hand gesture recognition, image capturing, processing and manipulation, and it superimposes the digital world on the real world.

Sixth Sense technology is an application of the augmented reality concept. Just as our senses enable us to perceive information about the environment in different ways, it also aims at perceiving information. Sixth Sense is, in fact, about comprehending information beyond what our available senses provide. Today there is not just the physical world from which we get information, but also the digital world, which has become a part of our life. This digital world is now as important to us as the physical world, and with the internet the digital world can be expanded to many times the physical one. We are not born with senses to interact with the digital world, so we have created devices such as smartphones, tablets, computers, laptops, netbooks, PDAs, music players and other gadgets. These gadgets enable us to communicate with the digital world around us.

The Hardware Components are

  • Camera
  • Projector
  • Mirror
  • Mobile Component
  • Colored Markers

Working of Sixth Sense

The camera and projector are contained in a wearable form such as a pendant, and are connected to the mobile computing device, which may be in the user's pocket.

The camera recognizes changes in the surroundings and tracks hand gestures along with gathering ‘meta information’, to then articulate it with the digital domain.

The projector displays images containing the information on any surface, turning it into a touch screen and eliminating the need for paper or a dedicated screen.

The mobile-computing device is connected to the Cloud carrying all information through the web. The colorful marker caps worn on the fingers help the camera in detecting hand gestures. The fingers can also be painted colorfully to add a stylish touch to it.

The software program present in the mobile computing device processes the video stream data captured by the camera and tracks the tip of the user’s fingers with the help of the colored markers.
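
As a minimal sketch of that marker-tracking step, the code below finds the centroid of each coloured fingertip marker in an RGB frame and flags a simple "pinch" when two markers come close together; the colours, tolerances and pinch threshold are illustrative assumptions, not the actual SixthSense implementation.

```python
import numpy as np

MARKER_COLOURS = {"index": (255, 0, 0), "thumb": (0, 255, 0)}   # red and green caps

def marker_positions(frame, tol=60):
    """Return {marker_name: (row, col)} for markers visible in the RGB frame."""
    positions = {}
    for name, colour in MARKER_COLOURS.items():
        mask = (np.abs(frame.astype(int) - np.array(colour)) < tol).all(axis=2)
        if mask.any():
            rows, cols = np.nonzero(mask)
            positions[name] = (rows.mean(), cols.mean())
    return positions

def is_pinch(positions, max_distance_px=25):
    """A pinch gesture: index and thumb markers nearly touching."""
    if "index" not in positions or "thumb" not in positions:
        return False
    (r1, c1), (r2, c2) = positions["index"], positions["thumb"]
    return np.hypot(r1 - r2, c1 - c2) < max_distance_px

# Synthetic frame with two small marker blobs close to each other.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:45, 50:55] = (255, 0, 0)     # index marker
frame[42:47, 60:65] = (0, 255, 0)     # thumb marker
pos = marker_positions(frame)
print(pos, "pinch:", is_pinch(pos))
```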

Applications of Sixth Sense

  • Eliminating the use of a mouse, drawing is made easier and more fun with the use of fingers.
  • A square frame made out of the hands in the open air can replace a bulky camera by framing the object that has to be captured into a photo. Also, a few finger movements can perform modern editing functions such as resizing the captured photo, along with e-mailing it to people.
  • A phone call can be made, without using any mobile hardware. This technology enables the projection of a phone keypad on the user’s hand, which can then be typed and the phone call made.
  • It reads aloud a book for its user, and also helps him find out the ratings and other aspects of a book that the user may wish to find out, with the help of information on the web.
  • It provides a video enabled newspaper experience to its users, through searching the web for the most appropriate video, related to the desired newspaper report or headline.
  • It helps in checking the status and other details of the flight from the web, while off-board, through the simple action of placing the ticket in front of the projector.
  • By drawing a circle on the wrist, the user can have the device project a digital clock face.
  • It can help a consumer in buying better, by providing him with immediate information about various products and services, leading to easy price comparisons and other details.

 

Blue Eyes Technology

Animal survival depends on highly developed sensory abilities. Likewise, human cognition depends on highly developed abilities to perceive, integrate, and interpret visual, auditory, and touch information. Without a doubt, computers would be much more powerful if they had even a small fraction of the perceptual ability of animals or humans. Adding such perceptual abilities to computers would enable computers and humans to work together more as partners. Toward this end, the Blue Eyes project aims at creating computational devices with the sort of perceptual abilities that people take for granted. Blue Eyes is being developed by a team at Poznan University of Technology and Microsoft. It makes use of the Bluetooth technology developed by Ericsson.

PARTS OF A BLUE EYES SYSTEM

The major parts of the Blue Eyes system are the Data Acquisition Unit and the Central System Unit. The tasks of the mobile Data Acquisition Unit are to maintain Bluetooth connections, to get information from the sensor and send it over the wireless connection, to deliver the alarm messages sent from the Central System Unit to the operator, and to handle personalized ID cards. The Central System Unit maintains the other side of the Bluetooth connection, buffers incoming sensor data, performs on-line data analysis, records the conclusions for further exploration, and provides a visualization interface.

THE HARDWARE:

Data Acquisition Unit
The Data Acquisition Unit is the mobile part of the Blue Eyes system. Its main task is to fetch the physiological data from the sensor and to send it to the central system to be processed. To accomplish this task the device must manage wireless Bluetooth connections (connection establishment, authentication and termination). Personal ID cards and PIN codes provide the operator's authorization.
Figure Showing Jazz-multi Sensor

Communication with the operator is carried on using a simple 5-key keyboard, a small LCD display and a beeper. When an exceptional situation is detected the device uses them to notify the operator. Voice data is transferred using a small headset, interfaced to the DAU with standard mini-jack plugs.

The Data Acquisition Unit
The Data Acquisition Unit comprises several hardware modules:
· Atmel 89C52 microcontroller – system core
· Bluetooth module (based on ROK101008)
· HD44780 – small LCD display
· 24C16 – I2C EEPROM (on a removable ID card)
· MC145483 – 13-bit PCM codec
· Jazz Multisensor interface
· beeper and LED indicators, 6 AA batteries and voltage level monitor

CENTRAL SYSTEM UNIT :
Central System Unit hardware is the second peer of the wireless connection. The box contains a Bluetooth module (based on ROK101008) and a PCM codec for voice data transmission. The module is interfaced to a PC using a parallel, serial and USB cable.
The audio data is accessible through standard mini-jack sockets. To program the operators' personal ID cards we developed a simple programming device. The programmer is interfaced to a PC using serial and PS/2 (power source) ports. Inside, there is an Atmel 89C2051 microcontroller, which handles UART transmission and I2C EEPROM (ID card) programming.

THE SOFTWARE:

The Blue Eyes software's main task is to monitor the working operators' physiological condition. To ensure an instant reaction to changes in an operator's condition, the software performs real-time buffering of the incoming data, real-time physiological data analysis, and alarm triggering.

The Blue Eyes software comprises several functional modules. The System Core facilitates the data flow between the other system modules (e.g. it transfers raw data from the Connection Manager to the data analyzers, and processed data from the data analyzers to GUI controls, other data analyzers, the data logger, etc.).

The System Core's fundamental data structures are single-producer, multi-consumer thread-safe queues. Any number of consumers can register to receive the data supplied by a producer, and every consumer can register with any number of producers, thereby receiving different types of data. Naturally, every consumer may also be a producer for other consumers. This approach gives the system high scalability: new data processing modules (i.e. filters, data analyzers and loggers) can be added simply by registering as a consumer. A small sketch of this queueing pattern follows.
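
Here is a minimal sketch of that single-producer, multi-consumer pattern in Python; it only illustrates the registration and fan-out idea and is not the actual Blue Eyes source code.

```python
import queue
import threading

class Producer:
    """A producer pushes each item into the private queue of every registered consumer."""
    def __init__(self):
        self._consumer_queues = []

    def register(self, consumer_queue):
        """A consumer registers its own queue to start receiving data."""
        self._consumer_queues.append(consumer_queue)

    def publish(self, item):
        for q in self._consumer_queues:
            q.put(item)

def consumer(name, q):
    while True:
        item = q.get()
        if item is None:          # sentinel to stop
            break
        print(f"{name} received {item}")

sensor_data = Producer()
q1, q2 = queue.Queue(), queue.Queue()
sensor_data.register(q1)          # e.g. a data analyzer
sensor_data.register(q2)          # e.g. the data logger
threads = [threading.Thread(target=consumer, args=(n, q))
           for n, q in [("analyzer", q1), ("logger", q2)]]
for t in threads:
    t.start()
sensor_data.publish({"pulse": 72})
for q in (q1, q2):
    q.put(None)
for t in threads:
    t.join()
```
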
The Connection Manager is responsible for managing the wireless communication between the mobile Data Acquisition Units and the central system. The Connection Manager handles:
· communication with the CSU hardware
· searching for new devices in the covered range
· establishing Bluetooth connections
· connection authentication
· incoming data buffering
· sending alerts
The Data Analysis module performs the analysis of the raw sensor data in order to obtain information about the operator's physiological condition. A separately running instance of the Data Analysis module supervises each of the working operators.
The module consists of a number of smaller analyzers extracting different types of information. Each of the analyzers registers at the appropriate Operator Manager or another analyzer as a data consumer and, acting as a producer, provides the results of the analysis. The most important analyzers are:
· saccade detector – monitors eye movements in order to determine the level of operator’s visual attention
· pulse rate analyzer – uses the blood oxygenation signal to compute the operator's pulse rate (a rough sketch of this idea follows the list)
· custom analyzers – recognize behaviors other than those built into the system; the new modules are created using the C4.5 decision tree induction algorithm
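
As a rough sketch of what the pulse rate analyzer might do, the code below counts the peaks of a synthetic blood-oxygenation waveform over a time window and converts the count to beats per minute; the signal and the simple peak test are illustrative, not the actual Blue Eyes algorithm.

```python
import numpy as np

def pulse_rate_bpm(signal, sample_rate_hz):
    """Estimate beats per minute by counting local maxima above the mean."""
    above = signal > signal.mean()
    peaks = 0
    for i in range(1, len(signal) - 1):
        if above[i] and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks += 1
    duration_s = len(signal) / sample_rate_hz
    return 60.0 * peaks / duration_s

# Synthetic 10-second waveform at 50 Hz with a 1.2 Hz heartbeat (72 bpm).
fs = 50
t = np.arange(0, 10, 1 / fs)
waveform = np.sin(2 * np.pi * 1.2 * t)
print(f"Estimated pulse rate: {pulse_rate_bpm(waveform, fs):.0f} bpm")
```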

Visualization module provides a user interface for the supervisors. It enables them to watch each of the working operator’s physiological condition along with a preview of selected video source and related sound stream. All the incoming alarm messages are instantly signaled to the supervisor.
The Visualization module can be set in an off-line mode, where all the data is fetched from the database.
By watching all the recorded physiological parameters, alarms, video and audio data, the supervisor is able to reconstruct the course of the selected operator's duty.
The physiological data is presented using a set of custom-built GUI controls:
· a pie chart used to present the percentage of time the operator was actively acquiring visual information
· a VU-meter showing the present value of a parameter
· time series displaying the history of selected parameters' values

BLUE EYES BENEFITS:

The benefits include the prevention of dangerous incidents and the minimization of ecological consequences, financial loss and threats to human life. The Blue Eyes system provides technical means for monitoring and recording the human operator's physiological condition. The key features of the system are:

· visual attention monitoring (eye motility analysis)

· physiological condition monitoring (pulse rate, blood oxygenation)
· operator’s position detection (standing, lying)

· wireless data acquisition using Blue tooth technology
· real-time user-defined alarm triggering
· physiological data, operator’s voice and overall view of the control room recording
· recorded data playback
Blue Eyes system can be applied in every working environment requiring permanent operator’s attention:
· at power plant control rooms
· on captains' bridges
· at flight control centers

In the future it may be possible to create a computer which can interact with us the way we interact with each other, with the use of Blue Eyes technology. It seems like fiction, but it will be the life led by "BLUE EYES" in the very near future. Ordinary household devices, such as televisions, refrigerators, and ovens, may be able to do their jobs when we look at them and speak to them.

 

 

5 Pen PC Technology

 

History

The “Pen-style Personal Networking Gadget” was created in 2003 by the Japanese technology company NEC.

Its designer is Toru Ichihashi.

P-ISM was first featured at the 2003 ITU Telecom World held in Geneva, Switzerland.


  • P-ISM (“Pen-style Personal Networking Gadget Package”).
  • The pens are connected with a wireless technology.
  • The whole set is connected to the Internet through a cellular phone function.

This ‘pen sort of instrument’ produces both the monitor and the keyboard on any flat surface, from which you can carry out the functions you would normally do on your desktop computer.

P-ISM is a gadget package including five functions:

  • CPU Pen
  • CAMERA
  • VIRTUAL KEYBOARD
  • VISUAL OUTPUT
  • A PHONE

CPU Pen

  • The functionality of the CPU is handled by one of the pens.
  • It is also known as the computing engine.
  • A dual-core processor is used.
  • It works with the Windows operating system.

Communication Pen

  • Uses wireless Bluetooth technology.
  • Connected to the Internet through a cellular phone function.
  • Uses Wi-Fi technology.
  • Exchanges information over the wireless connection.

LED Projector

  • The monitor is an LED projector.
  • The projected size is A4.
  • The resolution is approximately 1024×768.
  • It gives good clarity and picture quality.

Virtual Keyboard

  • It emits a laser onto the desk.
  • The laser beam is used to project a full-size keyboard.

Digital Camera

  • It is useful for video recording and video conferencing, or simply as a webcam.
  • It can be connected with other devices.
  • It is portable.
  • It is a 360-degree visual communication device.

Battery

  • The battery is the most important part of any portable computer.
  • Usually the batteries are small in size and work for a long time.
  • It comes with a battery life of 6+ days.
  • With normal use it can last for 2 weeks.

Merits

  • Portable
  • Feasible
  • Enables ubiquitous computing
  • Uses Wi-Fi technology

Demerits

  • High Cost
  • Battery Life
  • Difficulty of Positioning

Communication devices are becoming smaller and more compact. This is only an example of the start of this new technology, and we can expect more such developments in the future.

Li-Fi Technology

Li-Fi (Light Fidelity): The Future Technology in Wireless Communication

Whether you're using wireless internet in a coffee shop, stealing it from the guy next door, or competing for bandwidth at a conference, you have probably gotten frustrated at the slow speeds you face when more than one device is tapped into the network. As more and more people and their many devices access wireless internet, clogged airwaves are going to make it increasingly difficult to get a reliable signal. One German physicist, Harald Haas, has come up with a solution he calls “data through illumination”: taking the fiber out of fiber optics by sending data through an LED light bulb that varies in intensity faster than the human eye can follow.
It's the same idea as behind infrared remote controls, but far more powerful. Haas says his invention, which he calls D-Light, can produce data rates faster than 10 megabits per second, which is speedier than your average broadband connection. He envisions a future where data for laptops, smart phones, and tablets is transmitted through the light in a room. And security would be a snap: if you can't see the light, you can't access the data.

Li-Fi is the transmission of data through illumination, taking the fiber out of fiber optics by sending data through an LED light bulb that varies in intensity faster than the human eye can follow. Li-Fi is the term some have used to label this fast and cheap wireless-communication system, which is the optical version of Wi-Fi. The term was first used in this context by Harald Haas in his TED Global talk on Visible Light Communication. “At the heart of this technology is a new generation of high-brightness light-emitting diodes,” says Harald Haas from the University of Edinburgh, UK. “Very simply, if the LED is on, you transmit a digital 1; if it's off, you transmit a 0,” Haas says. “They can be switched on and off very quickly, which gives nice opportunities for transmitting data.” It is possible to encode data in the light by varying the rate at which the LEDs flicker on and off to give different strings of 1s and 0s. The LED intensity is modulated so rapidly that the human eye cannot notice, so the output appears constant. (A toy sketch of this on-off encoding follows below.)

More sophisticated techniques could dramatically increase the VLC data rate. Teams at the University of Oxford and the University of Edinburgh are focusing on parallel data transmission using arrays of LEDs, where each LED transmits a different data stream. Other groups are using mixtures of red, green and blue LEDs to alter the light frequency, with each frequency encoding a different data channel. Li-Fi, as it has been dubbed, has already achieved blisteringly high speeds in the lab. Researchers at the Heinrich Hertz Institute in Berlin, Germany, have reached data rates of over 500 megabits per second using a standard white-light LED. The technology was demonstrated at the 2012 Consumer Electronics Show in Las Vegas using a pair of Casio smart phones exchanging data with light of varying intensity given off from their screens, detectable at a distance of up to ten metres.
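
As a toy illustration of that on-off keying principle, the sketch below encodes bytes as LED on/off intensity samples and decodes them by thresholding; real visible-light links use far more sophisticated modulation and error correction.

```python
# Toy on-off keying: LED on = 1, LED off = 0. Each bit is held for a fixed
# number of samples; the receiver recovers bits by thresholding the intensity.
def encode(message: bytes, samples_per_bit: int = 4) -> list[float]:
    levels = []
    for byte in message:
        for i in range(7, -1, -1):              # most significant bit first
            bit = (byte >> i) & 1
            levels.extend([1.0 if bit else 0.0] * samples_per_bit)
    return levels

def decode(levels: list[float], samples_per_bit: int = 4) -> bytes:
    bits = [1 if sum(levels[i:i + samples_per_bit]) / samples_per_bit > 0.5 else 0
            for i in range(0, len(levels), samples_per_bit)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

signal = encode(b"Li-Fi")
print(decode(signal))    # b'Li-Fi'
```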

In October 2011 a number of companies and industry groups formed the Li-Fi Consortium to promote high-speed optical wireless systems and to overcome the limited amount of radio-based wireless spectrum available by exploiting a completely different part of the electromagnetic spectrum. The consortium believes it is possible to achieve more than 10 Gbps, which would theoretically allow a high-definition film to be downloaded in 30 seconds.