OPENCV-BASED AUTONOMOUS RC-CAR
B. Sabitha 1, K. Akila 2, S. Krishna Kumar 3, D. Mohan 4, P. Nisanth 5
1,2 Faculty, Department of Mechatronics Engineering, Kumaraguru College of Technology, Coimbatore, India
3,4,5 Students, Department of Mechatronics Engineering, Kumaraguru College of Technology, Coimbatore, India

ABSTRACT
Controlling traffic in metropolitan cities has become difficult because of the growing vehicle population and the corresponding rise in accidents. Many schemes have been implemented to prevent this, but with limited success. To overcome these problems, we propose a traffic-aware autonomous car. Since most cars nowadays are already smart, we build on that idea and construct a prototype that drives according to the traffic signals and prevents accidents using machine-learning prediction and image-processing techniques. The main controlling unit of the car is a Raspberry Pi, which is trained to keep the car on its path, stop at a red traffic signal, maintain its speed, and halt before colliding with nearby vehicles. The distance between cars is monitored with the help of an ultrasonic sensor. The car is also fitted with GSM and GPS modules to locate it and raise alerts in emergency situations. The results obtained depend on the quality of the image frames from the camera and on the collision avoidance driven by the data from the ultrasonic sensor. The machine learning and image processing are done using the OpenCV module in Python: the self-driving is handled by a convolutional neural network, the object prediction by Haar classifiers, and further advancement is planned through deep learning of the objects.
Keywords: OpenCV Haar classifier, monocular vision, region of interest (ROI), hue saturation value (HSV)

INTRODUCTION
The car is built to move according to traffic rules and conditions, with the aim of improving traffic flow.
In this work, the RC car makes its movements by predicting paths on which it has already been trained; the trained model maps the path to movements through a convolutional neural network [1]. The traffic signs are predicted either by colour mapping, which lets the camera differentiate the colours, or by using pre-trained classifiers such as the Haar classifier. The distance from the car's camera to the signal and to the stop sign is measured by monocular vision [1][2]. The live
JARDCS Special Issue On Engineering and Informatics 1009
feed from the camera is broadcast to the driver through socket networking. Collision of one car with another, or with any other obstacle, is prevented by the ultrasonic sensors through distance measurement. The overall process is as follows: the car is first pre-trained on its path and travels along its track; if an obstacle appears in front of it, the car stops, checks for obstacles behind it, and conveys a message to the driver; at a signal, if it detects the red colour it stops and does not move until the colour turns green. If the car meets with an accident, it alerts the nearby hospitals and the contacts associated with the car's owner.

DESCRIPTION:
The diagram shown below is the block diagram of the working and the step-by-step functionality of the proposed work.
Figure 1: Block Diagram

AUTO-PREDICTION:
In the self-driving part, the car starts by collecting images of the sample paths it is going to travel, taking more than 100 samples. After collecting the samples, they go through the training process, and the trained data are stored in the Raspberry Pi's memory. The trained data are put through prediction testing, and the results are stored in .npz format. The tested data are then taught to the Raspberry Pi with the cv2.ml.ANN_MLP_create() command; this builds the prediction algorithm with labels, and the associated trained model is stored in .xml format, through which the convolutional neural network [1] mapping is made. The live feed is then relayed to the driver through the wireless socket library in Python.

TRAFFIC & STOP SIGN DETECTION:
The traffic light is detected by the process of rendering and masking colours in Python. The image is first converted to grayscale (used later for sign detection) and into the HSV (hue, saturation, value) colour space for colour detection. The colour to be identified is separated into two bounds, an upper and a lower one, refined over a series of repetitions. The separated parts are then masked and rendered according to the chosen HSV
range. The obtained masked images are eroded and dilated over repeated iterations (six iterations) to get smooth, noise-free regions of the required colours (pixel values). A boundary is then drawn around the detected colour by the contour process, using the position and height of the detected colour range. Finally, the cv2.putText() command is used to write onto the frame the detected colour and its distance from the camera. To recognise the different states of the STOP sign, some image processing is needed beyond detection. The flowchart below summarises the traffic-light recognition process.
Figure 2: Flow chart for colour mapping
Firstly, a trained Haar cascade classifier [4] is used to detect the stop sign, and its bounding box is taken as the region of interest (ROI) [1]. Secondly, a Gaussian blur is applied inside the ROI to reduce noise. Thirdly, the brightest point in the ROI is found. Finally, the state of the stop sign is determined simply from the position of the brightest spot within the ROI.
Figure 3: Stop Sign Prediction
The .xml file is created using these classifiers, which takes about four days to reach good detection of images. This .xml file is then imported into the program so that the sign is detected and its contour drawn.

MONOCULAR VISION:
The proposed work adopts a geometric model for detecting the distance from the object to the car using the monocular vision method [2].
Figure 4: MV calculation
P is a point on the target object, and d is the distance from the optical centre to the point P. Based on the geometric relationship above, formula (1) shows how to calculate the distance d:

d = h / tan(α + arctan((y − y0) / f))        (1)

In formula (1), f is the focal length of the camera; α is the camera tilt angle; h is the optical-centre height; (x0, y0) [1] refers to the intersection point of the image plane and the optical axis; and (x, y) refers to the projection of point P on the image plane. This is the mathematical calculation used to detect the distance of the stop sign and signal from the camera of the vehicle [5].

Another major part of the proposed work is preventing the cars from colliding with one another while they respond to the signals. The front of the car detects the signal and makes the stop. An ultrasonic sensor fitted in front of the car prevents collision with the other vehicles by stopping the wheels once a particular distance is reached: when the gap between the ultrasonic sensor and the vehicle ahead falls to 1 m, the car is made to stop. If ultrasonic sensors are fitted on every side of the vehicle, as an array, the car can also avoid colliding with nearby objects to its sides, and sensors fitted at the back guard against the case where the other car's front detector fails.

RESULTS
The results obtained during the test drive are shown below. The final RC car assembly is shown below.
Figure 5: Prototype
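Formula (1) and the 1 m ultrasonic stop rule described above can be expressed as a small helper. The symbol names follow the text; the numerical values used in the usage check below are assumptions, not the prototype's calibration.

```python
import math

def monocular_distance(y, y0, f, tilt_angle, h):
    """Distance d from the optical centre to point P, per formula (1):
    d = h / tan(alpha + arctan((y - y0) / f)).
    y, y0, f in pixels; tilt_angle in radians; h in metres."""
    return h / math.tan(tilt_angle + math.atan((y - y0) / f))

def should_stop(ultrasonic_distance_m, threshold_m=1.0):
    """Stop the wheels when the vehicle ahead is within the 1 m gap."""
    return ultrasonic_distance_m <= threshold_m
```

For example, with zero tilt and (y − y0) equal to f, the ray leaves at 45 degrees and the computed distance equals the optical-centre height h, which matches the geometry of Figure 4.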
The obtained results are good: the control of the car through self-learning, the detection of traffic signals, and the stop-sign detection are shown in the figures.
Figure 6: Red colour Detection
Figure 7: Yellow colour Detection
Figure 8: Green colour Detection
Figure 9: Stop Sign Detection

CONCLUSION:
The car, once clearly trained on its path, moves along it correctly, detects the signals and the stop sign, and makes the stop. The obstacle detector (ultrasonic sensor) causes the car to
stop if an obstacle or another car is in front of it. The alert and location mapping also worked, with the alert sent to the concerned persons and the car located accurately, within the wireless network (Wi-Fi) range. One challenge faced was obtaining the camera feed wirelessly on the laptop: there were a lot of latency problems and the pixels were not obtained clearly. This was rectified by a threading operation in the program, where multiple client-server loops are made to run on multiple ports. Red was also detected in various areas of the frame because of the many colour gradients in the surroundings; the path the car traces must therefore be kept clear of the traffic colours so that the car can plan its further motion clearly, or it may move in an uncontrolled manner. To obtain clearer detection of the traffic signal, an advanced neural-network method is used in which the contour of the traffic signal is also taught, so that detection is restricted to the contour and its interior, which makes the detection much clearer and better.

FUTURE SCOPE:
The prediction is currently based on OpenCV and is to be changed to the scikit-learn and TensorFlow libraries to get better accuracy for auto-prediction. The ultrasonic sensors are to be replaced by radar for better mapping and obstacle avoidance. The traffic signal would carry a camera taught to detect number plates by an algorithm such as an OpenCV Haar classifier. Vehicles crossing the signal can then be found by scanning the particular vehicle's number plate, sending the data to the overall traffic head office, and checking it against the existing database, so that the person is found easily, within a fraction of a second. This approach is workable because only a few vehicles violate traffic signals, so it is easy to find the one vehicle that crossed the signal.
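The threading fix described in the conclusion, running one client-server loop per port so that the video feed does not share a single blocking socket, can be sketched as follows. This is a minimal sketch: the port numbers and the length-prefixed byte payload standing in for camera frames are assumptions.

```python
import socket
import threading

def serve_frames(port, get_payload, stop_event):
    """Serve length-prefixed payloads to a single client on one port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            try:
                while not stop_event.is_set():
                    payload = get_payload()
                    conn.sendall(len(payload).to_bytes(4, "big") + payload)
            except OSError:
                pass  # client disconnected; end this stream thread

def start_stream_threads(ports, get_payload, stop_event):
    """Run one server loop per port, so video, sensor, and control traffic
    each get their own socket instead of sharing one blocking connection."""
    threads = [threading.Thread(target=serve_frames,
                                args=(p, get_payload, stop_event), daemon=True)
               for p in ports]
    for t in threads:
        t.start()
    return threads
```

In the prototype, get_payload would return a JPEG-encoded camera frame; here any bytes object works, which keeps the latency-hiding structure visible without the camera dependency.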
The code is uploaded at the GitHub page below:
https://github.com/krishkribo/semi_autonomous_rc_car

REFERENCES
1. Li, X., Wang, X., & Ouyang, Y. (2012). Prediction and field validation of traffic oscillation propagation under nonlinear car-following laws. Transportation Research Part B: Methodological, 46(3), 409-423.
2. Lienhart, R., & Maydt, J. (2002). An extended set of Haar-like features for rapid object detection. In Proceedings of the 2002 International Conference on Image Processing (Vol. 1, pp. I-I). IEEE.
3. Janét, J. A., Schudel, D. S., White, M. W., England, A. G., Luo, R. C., & Snyder, W. E. (1996, December). Global self-localization for actual mobile robots: Generating and sharing topographical knowledge using the region-feature neural network. In Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems (pp. 619-626). IEEE.
4. Lienhart, R., & Maydt, J. (2002). An extended set of Haar-like features for rapid object detection. In Proceedings of the 2002 International Conference on Image Processing (Vol. 1, pp. I-I). IEEE.
5. Sheng, W., Ou, Y., Tran, D., Tadesse, E., Liu, M., & Yan, G. (2013, November). An integrated manual and autonomous driving framework based on driver drowsiness detection. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 4376-4381). IEEE.