INTERNATIONAL JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY (IJEET)
ISSN 0976-6545 (Print), ISSN 0976-6553 (Online)
Volume 5, Issue 8, August (2014), pp. 13-20
IAEME: www.iaeme.com/ijeet.asp
Journal Impact Factor (2014): 6.8310 (Calculated by GISI)

TWO WHEELED SELF BALANCING ROBOT FOR AUTONOMOUS NAVIGATION

Jisha Kuruvilla 1, Jithin Abraham 2, Midhun S 2, Ranjini Kunnath 2, Rohin Reji Paul 2
1 Asst. Prof., Dept. of EEE, Mar Athanasius College of Engineering, Kothamangalam, India
2 UG Student, Dept. of EEE, Mar Athanasius College of Engineering, Kothamangalam, India

ABSTRACT

Self balancing robots are becoming increasingly popular because of their unique ability to move around on two wheels. They are characterized by high maneuverability and excellent agility. This paper describes the design and testing of a self balancing robot that not only balances on two wheels but also navigates autonomously with the help of an on-board image processing system. The robot as a whole can be considered a combination of two units: the balancing unit and the image processing unit. The balancing unit performs all functions that keep the robot upright, whereas the image processing unit assists in autonomous navigation. The balancing unit runs a PID control loop which improves the stability of the system. A real-time data plot in MATLAB is used to analyze the stability of the system and to improve it by tuning the PID controller constants. This system can be used as a base model to accomplish complicated tasks that would otherwise be performed by humans, such as footprint analysis in wildlife reserves and autonomous indoor navigation.

Keywords: Accelerometer, Complementary Filter, Gyroscope, Image Processing, Inverted Pendulum, PID.

1. INTRODUCTION

Unlike an ordinary robot, a two wheel self balancing robot requires just two points of contact with the floor surface. The unique stability control required to keep the robot upright differentiates it from ordinary robots. Such robots are characterized by the ability to balance on two wheels and spin on the spot. This additional maneuverability allows them to navigate various terrains, turn sharp corners and traverse small steps or curbs with ease. These capabilities have the potential to solve a number of challenges in industry and society. Small carts built using this technology
allow humans to travel short distances in a small area or factory, as opposed to using cars, which are more polluting.

The basic idea of a self-balancing robot is simple: drive the wheels in the direction in which the robot tilts. If the wheels can be driven so as to stay under the robot's center of gravity, the robot remains balanced. This is similar to the inverted pendulum model in control theory. The pendulum is usually mounted on a cart through a hinge, and the cart moves forward or backward to ensure that the pendulum remains vertical. To drive the cart in either direction, knowledge of the angle and rate of tilt of the inverted pendulum is required. This can be measured using an inertial sensing unit.

The general design of the robot is a rectangular body on two wheels. The wheels are placed parallel to each other. The robot body comprises four layers placed one over the other by means of plastic extenders. The layers are made of 3mm thick glass epoxy PCB. The bottommost layer consists of the wheels, motors and battery. The second layer from the bottom comprises the electronic circuitry, which includes the microcontroller, angle sensor, voltage regulator and motor driver. The third layer from the bottom contains a microcomputer for image processing. The topmost layer incorporates a digital camera compatible with the microcomputer and a distance sensor.

Fig. 1: A 3-D model of the framework of the proposed robot

The balancing unit and the image processing unit each have a separate CPU. The balancing unit contains a microcontroller, the ATmega168, which runs at 16MHz. The image processing unit contains a single board computer, the Raspberry Pi, which runs at 700MHz [1]. An inertial measurement unit (IMU), a microcontroller, a motor driver and the motors form the balancing unit; together they perform all actions that are necessary for the stable operation of the robot.
The microcontroller continuously reads the data from the IMU and calculates the angle of tilt of the robot with respect to the vertical. Based on this data, the microcontroller then sends appropriate control signals to the motor driver to drive the motors.
Fig. 2: Block diagram of the proposed robot

2. ANGLE ESTIMATION AND BALANCING

The balancing robot is a highly unstable two wheeled robot which functions like an inverted pendulum. The robot naturally tends to tip over, and the further it tips, the stronger the force causing it to tip. Therefore, to keep the robot stable, we have to continuously monitor its tilt angle and drive the wheels accordingly.

2.1. Angle Estimation

To find the direction and angle of tilt, an accelerometer and a gyroscope are used. Although either sensor can be used on its own to calculate the tilt angle, in practice the two are used in combination. The accelerometer gives accurate readings over a sufficiently long interval of time, but it is highly susceptible to noise resulting from sudden jerking movements of the robot; since the accelerometer measures linear acceleration, such movements throw off the sensor accuracy. The gyroscope actually measures angular velocity, which is then integrated to find the angle of tilt. Over a small interval of time the gyroscope value is very accurate, but since the gyroscope experiences drift and integration compounds the error, after some time the gyroscope reading becomes unreliable [2]. Thus we require some way to combine these two values. This is done with a complementary filter.

The complementary filter is a simple filter that is easy to implement and experimentally tune, and it demands very little processing power. It is essentially a high pass filter and a low pass filter combined, where the high pass acts on the gyroscope and the low pass acts on the accelerometer. It makes use of the gyroscope for short-term estimation and the accelerometer for an absolute reference.
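In code, the complementary filter reduces to a single weighted update per loop iteration. The following minimal sketch is illustrative only: the paper does not state its filter weight or loop period, so ALPHA and DT here are assumed placeholder values.

```python
# Complementary filter sketch: fuses the integrated gyroscope rate
# (short-term estimate) with the accelerometer angle (absolute reference).
# ALPHA and DT are assumed illustrative values, not taken from the paper.

ALPHA = 0.98   # weight on the gyro (high-pass) path
DT = 0.01      # loop period in seconds (100 Hz, assumed)

def complementary_filter(prev_angle, gyro_rate, accel_angle,
                         alpha=ALPHA, dt=DT):
    """Return the new tilt-angle estimate in degrees.

    prev_angle  : previous filtered estimate
    gyro_rate   : angular velocity from the gyroscope (deg/s)
    accel_angle : tilt angle computed from the accelerometer (deg)
    """
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

With alpha close to 1 the gyroscope dominates short-term changes, while the small accelerometer weight slowly pulls the estimate back toward the absolute reference, which is exactly the high-pass/low-pass split described above.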
Fig. 3: Complementary Filter Equation

Fig. 4: Complementary Filter Block Diagram [3]

2.2. Balancing Control

The feedback control used for improving the balancing action is the PID controller. PID stands for Proportional, Integral, and Derivative; these three terms describe the basic elements of a PID controller. Each element performs a different task and has a different effect on the functioning of the system. Proportional control is the easiest feedback control to implement, and simple proportional control is probably the most common kind of control loop: a proportional controller is just the error signal multiplied by a constant and fed out to the drive. Integral control is used to add long-term precision to a control loop; it eliminates the steady-state error that would otherwise accumulate. In a way, proportional control deals with the present behaviour and integral control deals with the past, so the derivative term is used to predict the future behaviour of the robot: it measures the rate of change of the controlled parameter (here, the tilt angle) [4].

The PID controller constants are tuned by real-time data visualization in MATLAB. A wireless transmitter module attached to the robot sends data to a receiver module connected to a desktop computer. The receiver captures the data, and the values are then plotted in MATLAB. The input to the controller is the filtered tilt angle; the parameter controlled by the PID is the power supplied to the motors, along with the direction. By examining the plot of the filtered angle versus time, it is possible to analyze the stability of the system and to improve it by tuning the controller constants.
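As a concrete illustration of the three terms described above, a minimal discrete PID controller can be sketched as follows. The structure is the standard textbook form; the gains passed to it are placeholders, since the paper determines its constants empirically from the MATLAB plots rather than stating them.

```python
# Minimal discrete PID controller of the kind described in section 2.2.
# Gains supplied by the caller are illustrative, not the paper's tuned values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """error = filtered tilt angle (setpoint 0, i.e. upright).

        Returns the signed motor command: sign gives direction,
        magnitude gives power.
        """
        self.integral += error * self.dt                   # the past
        derivative = (error - self.prev_error) / self.dt   # predicted trend
        self.prev_error = error
        return (self.kp * error                            # the present
                + self.ki * self.integral
                + self.kd * derivative)
```

On each loop iteration the filtered tilt angle from the complementary filter is passed in as the error, and the signed output sets motor power and direction.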
3. AUTONOMOUS NAVIGATION

Autonomous navigation is achieved by combining an ultrasonic distance sensor and an image processing system. With this added feature, the robot is able to decide on its own which path to take at a junction and what to do when it confronts an obstacle. For this purpose, a single-board computer interfaced with a distance sensor and a digital camera is used.

The ultrasonic distance sensor uses sonar to detect obstacles and to measure the distance to them. On application of an appropriate trigger signal, the sensor transmits a high frequency sound wave which is usually inaudible to humans. The obstacle reflects this wave back, and it is captured by the receiver. To determine the distance between the sensor and the object, the sensor measures the elapsed time between sending and receiving the wave. The speed of sound in air is about 343 m/s, with minor dependence on temperature and humidity, so the distance in meters can be obtained as:

    Distance from object = 343 * elapsed time / 2

The division by 2 accounts for the round trip of the wave; for example, an echo delay of about 1.46 ms corresponds to 343 * 0.00146 / 2, which is roughly 0.25 m, the trigger threshold used here.

The single-board computer continuously measures the distance from the robot to the nearest obstacle in front of it. If the distance is less than a predefined value, 25cm in this case, the microcomputer triggers the camera and captures a frame. This image is then processed and checked for any useful information. Based on the information obtained from the image, the microcomputer provides the necessary instructions to the microcontroller to navigate the robot.

The colour image captured by the camera is first binarized, i.e. converted to black and white. Binarization is done at this initial stage so that the load on the microcomputer in the later stages is reduced. The pixels in a binarized image are either black or white. The image is then inverted: this converts each black pixel to white and each white pixel to black. The image is then cropped to remove any unwanted part of the frame [5].
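The binarize, invert and crop stages above, together with the one-third downscaling applied in the next stage, can be mimicked on a toy grayscale image held as a list of pixel rows. The paper performs these steps with SimpleCV on the Raspberry Pi [5]; the functions below are an illustrative pure-Python sketch, and the threshold, crop box and scale factor are assumed values.

```python
# Toy version of the processing chain: binarize -> invert -> crop -> downscale.
# Pixels are 0 (black) or 1 (white) after binarization. The threshold of 128
# and the crop/scale parameters are illustrative assumptions.

def binarize(img, threshold=128):
    """Map each grayscale pixel to 0 (black) or 1 (white)."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]

def invert(img):
    """Swap black and white pixels."""
    return [[1 - p for p in row] for row in img]

def crop(img, top, left, height, width):
    """Keep only the rectangle of interest."""
    return [row[left:left + width] for row in img[top:top + height]]

def downscale(img, factor=3):
    """Keep every `factor`-th pixel; factor=3 gives roughly one-third size."""
    return [row[::factor] for row in img[::factor]]
```

A real pipeline would average pixel neighbourhoods when scaling rather than discarding pixels, but the effect on image size, and hence on processing load, is the same.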
The captured image is of reasonably high resolution, and its size has to be reduced before further processing can be done. As the next step, the image is scaled down to approximately one-third of its captured size. This image is then scanned for any recognizable text using optical character recognition (OCR) software installed on the microcomputer.

Fig. 5 was obtained by holding a white paper in front of the robot, at a distance of less than 25cm, with the text RIGHT printed on it in black. Fig. 6 to Fig. 8 show the changes in the captured image during the various stages of image processing. Fig. 8 was scanned using the OCR technique, and the microcomputer successfully identified the text RIGHT. The microcomputer then provides an appropriate control signal to the microcontroller to turn the robot rightward. This example shows how the robot reads data from its environment and how it effectively uses this data for its navigation.

Fig. 5: Captured image

Fig. 6: Binarized image
Fig. 7: Image obtained after inversion

Fig. 8: Image obtained after cropping and scaling

4. EXPERIMENTAL RESULTS

During this project we experimented with PD and PID controllers. The PD controller was easier to implement because it has fewer parameters to manipulate. It was found that with the PD controller, the robot's stability decreased with time: the error accumulated, and the robot would eventually tip over. This was due to the lack of an integral term. MATLAB was used to plot the filtered angle versus time (Fig. 9).

Fig. 9: MATLAB plot of filtered angle versus time while using PD controller

Because of the PD controller's inherent lack of stability over long periods of time, we implemented a PID controller. The constants were determined with the help of real-time data plots in MATLAB; using these plots, the three constants for the proportional, integral and derivative components were found. This led to improved stability, which can be observed from the graph: the tilt angle of the robot remains well within safe limits and never even crosses 10 degrees, as shown in Fig. 10.
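The observed difference between the two controllers can also be reproduced numerically. The toy simulation below uses a first-order plant with a constant disturbance standing in for a persistent bias (for example a slightly off-centre mass); it is not a model of the actual robot, and the plant, gains and time step are all assumed for illustration. The PD loop settles at a nonzero error, while adding the integral term drives the error to zero, matching the behaviour seen in Fig. 9 and Fig. 10.

```python
# Illustrative Euler simulation of x' = u + d under PD vs PID control.
# All numbers are assumed for demonstration, not measured from the robot.

def simulate(use_integral, steps=5000, dt=0.001):
    kp, ki, kd = 5.0, 20.0, 0.1   # illustrative gains
    disturbance = 1.0             # constant bias acting on the plant
    x = 0.0                       # tracked error (e.g. tilt offset)
    integral = 0.0
    prev = x
    for _ in range(steps):
        derivative = (x - prev) / dt
        prev = x
        if use_integral:
            integral += x * dt
        u = -(kp * x + ki * integral + kd * derivative)
        x += (u + disturbance) * dt   # plant: x' = u + d
    return x

pd_error = simulate(use_integral=False)   # settles near disturbance/kp = 0.2
pid_error = simulate(use_integral=True)   # driven towards zero
```

With proportional and derivative action alone, the loop can only cancel the constant disturbance by holding a proportional offset of disturbance/kp; the integral term accumulates exactly that offset and removes it, which is the long-term precision described in section 2.2.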
Fig. 10: MATLAB plot of filtered angle versus time while using PID controller

5. CONCLUSION

Figure 11 shows the final design of the robot. The robot was tested on different surfaces to find the optimum surface on which it would be best balanced. Testing on various surfaces showed that a surface such as sponge or soft rubber, soft enough to be slightly compressed by the weight of the robot, is the most suitable. On hard surfaces the contact area of the wheel with the ground is reduced, resulting in poor stability, since the robot has a tendency to overbalance.

Fig. 11: Final model of the proposed robot
REFERENCES

[1] Matt Richardson and Shawn Wallace, Getting Started with Raspberry Pi, O'Reilly Media, 2012, pp. 1-31.
[2] Hau-Shiue Juang and Kai-Yew Lum, Design and Control of a Two-Wheel Self-Balancing Robot using the Arduino Microcontroller Board, 10th IEEE International Conference on Control and Automation (ICCA), Hangzhou, China, 2013.
[3] Shane Colton, The Balance Filter, Massachusetts Institute of Technology, Tech. Rep., 2007.
[4] Tim Wescott, PID without a PhD, Embedded Systems Programming, 2000, pp. 86-108.
[5] Kurt Demaagd, Anthony Oliver, Nathan Oostendorp and Katherine Scott, Practical Computer Vision with SimpleCV, O'Reilly Media, pp. 51-74.
[6] Maha M. Lashin, A Different Applications of Arduino, International Journal of Mechanical Engineering & Technology (IJMET), Volume 5, Issue 6, 2014, pp. 36-46, ISSN Print: 0976-6340, ISSN Online: 0976-6359.
[7] Sarthak Pareek, Embedded Based Robotic Vehicle Protection using Stisim Software, International Journal of Electronics and Communication Engineering & Technology (IJECET), Volume 5, Issue 4, 2014, pp. 36-42, ISSN Print: 0976-6464, ISSN Online: 0976-6472.