ACCELEROMETER-BASED HAND GESTURE CONTROLLED ROBOT

Abstract

A gesture controlled robot is a robot that can be controlled by hand gestures rather than conventional buttons. The operator wears a small transmitting device on the hand that includes an accelerometer. Today, most industrial robots are still programmed using the typical teaching process through the robot teach pendant. In this paper we use the two most natural human interfaces (gestures and speech), a force control system, and several code generation techniques. Special attention is given to the recognition of gestures, where the data are extracted from a motion sensor (a 3-axis accelerometer). Gesture-based interaction, as a natural way of human-computer interaction, has a wide range of applications in ubiquitous environments. We also describe an acceleration-based gesture recognition approach, called FDSVM (Frame-based Descriptor and multi-class SVM), which needs only a wearable 3-axis accelerometer, and a hand gesture based control interface for navigating a car-robot.

Introduction:-
In many applications of controlling robotic gadgets, it becomes quite hard and complicated when it comes to controlling them with a remote or many different switches, most notably in military applications, industrial robotics, construction vehicles on the civil side, and medical applications for surgery. In these fields it is quite complicated to control the robot or a particular machine with a remote or switches; the operator may even get confused among the switches and buttons themselves. A new concept is therefore introduced to control the machine with the movement of the hand, which simultaneously controls the movement of the robot. An accelerometer is a sensor that gives analog data while moving in the X, Y, and Z directions (or only the X and Y directions, depending on the type of sensor).

A small image of an accelerometer is shown here. The arrows in the image indicate that if we tilt the sensor in a given direction, the data at the corresponding pin changes in analog form. The accelerometer has 6 pins:
1. VDD - supplied with +5 volts.
2. GND - connected to ground for biasing.
3. X - outputs the analog data for movement in the x direction.
4. Y - outputs the analog data for movement in the y direction.
5. Z - outputs the analog data for movement in the z direction.
6. ST - sets the sensitivity of the accelerometer (1.5g/2g/3g/4g).
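The tilt-to-analog behavior described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the midpoint value, threshold, and direction labels are assumptions that would need calibration against the actual sensor and ADC.

```python
# Hypothetical mapping from raw accelerometer ADC readings to a tilt
# direction. ADC_MID and THRESHOLD are assumed values for a 10-bit ADC
# and must be calibrated for the real sensor.
ADC_MID = 512      # ADC reading when the axis is level (assumed)
THRESHOLD = 100    # deviation that counts as a deliberate tilt (assumed)

def tilt_direction(x_raw, y_raw):
    """Classify a tilt from raw X/Y ADC readings."""
    dx = x_raw - ADC_MID
    dy = y_raw - ADC_MID
    if abs(dx) < THRESHOLD and abs(dy) < THRESHOLD:
        return "stop"          # hand held level
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "forward" if dy > 0 else "backward"
```

In a complete system these direction labels would be transmitted to the robot's motor driver as movement commands.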
The methods commonly used for humans to direct the actions of robots are cumbersome and insufficient for complex tasks. There is growing interest in using hand and arm gestures to supervise robots in environments shared with humans, with the aim of natural, intuitive, and flexible control interfaces. This paper presents our methods for gesture based robot control, including integrating with various robot autonomy capabilities.
The increase in human-machine interactions in our daily lives has made user interface technology progressively more important. Physical gestures as intuitive expressions greatly ease the interaction process and enable humans to command computers or machines more naturally. For example, in tele-robotics, slave robots have been demonstrated to follow the master's hand motions remotely.
Many kinds of existing devices can capture gestures, such as a "Wiimote", joystick, trackball, and touch tablet. Some of them can also be employed to provide input to a gesture recognizer. Sometimes, however, the technology employed for capturing gestures is relatively expensive, such as a vision system or a data glove. To strike a balance between accuracy of collected data and cost of devices, a Micro Inertial Measurement Unit is utilized in this project to detect the accelerations of hand motions in three dimensions.

Gesture Motion Analysis:-
Gesture motions are in the vertical plane, or the projection of the motions is mainly in the vertical plane, so the accelerations on the x- and z-axes are adequate to distinguish each gesture. Therefore, the acceleration on the y-axis is neglected to reduce the computational requirement.
We propose that the exact shape of the acceleration curves is not critical; only the alternating sign changes of acceleration on the two axes are required to uniquely differentiate any one of the 7 gestures: up, down, left, right, tick, circle, and cross. This is the basis of the recognition algorithms discussed in this paper. For instance, the gesture up has the acceleration on the z-axis in the order negative-positive-negative (the positive z direction points downward) and nearly no acceleration on the x-axis; for a circle gesture, the order is positive-negative-positive on the x-axis and negative-positive-negative-positive on the z-axis. Experiments showed that each of these gestures has a distinctive order of sign changes, and a kinematics analysis also confirms this.
The kinematic motion a hand goes through in performing a gesture can be non-intuitive at times.
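The sign-change idea above can be sketched in code: each axis's acceleration trace is collapsed to its sequence of alternating signs, and the pair of sequences is matched against a small gesture table. The noise threshold and the table entries beyond the up and circle patterns quoted in the text are illustrative assumptions, not the paper's exact values.

```python
# Sketch of sign-change gesture classification. NOISE is an assumed
# dead-band; samples below it are treated as zero.
NOISE = 0.5  # m/s^2 (assumed)

def sign_sequence(samples, noise=NOISE):
    """Collapse an acceleration trace to its alternating '+'/'-' signs."""
    seq = []
    for a in samples:
        if abs(a) < noise:
            continue                      # ignore near-zero samples
        s = '+' if a > 0 else '-'
        if not seq or seq[-1] != s:       # keep only sign *changes*
            seq.append(s)
    return ''.join(seq)

# Gesture table: (x-axis pattern, z-axis pattern) -> gesture name.
# 'up' and 'circle' follow the orders given in the text; a full system
# would list all 7 gestures.
GESTURES = {
    ('', '-+-'): 'up',
    ('+-+', '-+-+'): 'circle',
}

def classify(x_trace, z_trace):
    key = (sign_sequence(x_trace), sign_sequence(z_trace))
    return GESTURES.get(key, 'unknown')
```

Because only the order of sign changes is compared, the classifier is insensitive to the amplitude and exact timing of the motion, which is the robustness property the text argues for.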

Sensing System Overview:-
The sensing system utilized in our experiments for hand motion data collection is essentially a 3-axis acceleration sensing chip integrated with data management and Bluetooth wireless data chips. The algorithms described in this paper were implemented and run on a PC.
When the sensing system is switched on, the accelerations in three perpendicular directions are detected by the sensors and transmitted to a PC via the Bluetooth protocol. The gesture motion data go through a segmentation program that automatically identifies the start and end of each gesture, so that only the data between these terminal points is processed to extract features.
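The automatic segmentation step can be sketched as follows: a gesture starts when the acceleration magnitude (with gravity removed) rises above a rest threshold, and ends after the signal stays quiet for a few samples. The threshold and quiet-window length are assumed values, not those of the actual system.

```python
# Sketch of gesture segmentation by acceleration magnitude.
# REST_THRESHOLD and MIN_STILL are illustrative assumptions.
import math

REST_THRESHOLD = 1.0   # m/s^2 above rest that counts as motion (assumed)
MIN_STILL = 3          # consecutive quiet samples that end a gesture (assumed)

def segment(samples):
    """Return (start, end) index pairs of detected gesture segments.

    samples: list of (ax, ay, az) tuples with gravity already subtracted.
    """
    segments = []
    start = None
    still = 0
    for i, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > REST_THRESHOLD:
            if start is None:
                start = i          # motion begins: open a segment
            still = 0
        elif start is not None:
            still += 1
            if still >= MIN_STILL: # enough quiet samples: close the segment
                segments.append((start, i - still + 1))
                start = None
                still = 0
    if start is not None:          # stream ended mid-gesture
        segments.append((start, len(samples)))
    return segments
```

Only the samples inside each returned segment would then be passed on to the feature-extraction and recognition stages.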
Hand gesture recognition provides an intelligent, natural, and convenient way of human-computer interaction (HCI). Sign language recognition (SLR) and gesture-based control are two major applications for hand gesture recognition technologies. SLR aims to interpret sign languages automatically by computer in order to help the deaf communicate with the hearing society conveniently.
Programming an industrial robot by the typical teaching method, through the use of the robot teach pendant, is a tedious and time-consuming task that requires some technical expertise. In industry, this type of robot programming can be justified economically only for production of large lot sizes. Hence, new approaches to robot programming are required. Contrary to the highly intelligent robots described in science fiction, most current industrial robots are "non-intelligent" machines that work in a controlled and well-known environment. Generally, robots are designed, equipped, and programmed to perform specific tasks, and thus an unskilled worker will not be able to re-program the robot to perform a different task. The goal is to create a methodology that helps users to control and program a robot with a high level of abstraction from the robot language. By making a demonstration in terms of high-level behaviors (using gestures, speech, etc.), the user should be able to show the robot what it should do in an intuitive way. This type of learning is often known as programming by demonstration (PbD). Several approaches for PbD have been investigated, using different input devices, manipulators, and learning strategies.
Humans naturally use gesture to communicate. It has been demonstrated that young children can readily learn to communicate with gesture before they learn to talk. A gesture is non-verbal communication made with a part of the body, used instead of or in combination with verbal communication. Through gesture, humans can interface with machines without any mechanical devices. Human movements are typically analyzed by segmenting them into shorter, understandable units, and the movements vary from person to person. Gestures can serve as commands to control devices for daily activities, mobility, and more. Thus our natural or intuitive body movements can be used as commands or interfaces to operate machines and to communicate with intelligent environments to control home appliances, smart homes, telecare systems, etc. In this paper we also review the different types of technologies for gesture controlled systems.

Future work:-
Due to the growing demand for natural human-machine interfaces and intuitive robot programming platforms, a robotic system that allows users to control an industrial robot using hand gestures and postures was proposed. Two 3-axis accelerometers were selected as the input devices of this system, capturing the human hand behaviors.
We proposed a fast and simple algorithm for hand gesture recognition for controlling a robot, and demonstrated the effectiveness of this computationally efficient algorithm on real data we acquired. In our system of gesture controlled robots, we considered only a limited number of gestures; the algorithm can be extended in a number of ways to recognize a broader set. The gesture recognition portion of our algorithm is quite simple and would need to be improved if this technique were to be used in challenging operating conditions. Reliable performance of hand gesture recognition in a general setting requires dealing with occlusions, temporal tracking for recognizing dynamic gestures, and 3D modeling of the hand, which are still mostly beyond the current state of the art.
The advantage of using neural networks is that conclusions can be drawn from the network output: if a vector is not classified correctly, we can check its output and work out a solution. Even with limited processing power, it is possible to design very efficient algorithms; an advanced DSP processor can reduce the size of the module, and the system could be extended to understand static gestures and to control other biometric applications. Our software has been designed to be reusable, so more complex behaviors may be added to our work. Because we limited ourselves to low processing power, our work could easily be made more performant by adding a state-of-the-art processor, and the use of a real embedded OS could improve the system in terms of speed and stability. In addition, implementing more sensor modalities would improve robustness even in very complex scenes. Our system has shown that interaction with machines through gestures is feasible, and the set of detected gestures could be extended to more commands by implementing a more complex model of an advanced vehicle, usable not only in limited spaces but also in broader areas such as roads.
In the future, service robots could execute many different tasks, from assisting private movement to serving as fully-fledged advanced vehicles that enable disabled users in every sense.