Autonomous UAV support for rescue forces using Onboard Pattern Recognition


Chen-Ko Sung a,*, Florian Segor b
a Fraunhofer IOSB, Fraunhoferstr. 1, Karlsruhe, Germany. E-mail address: chen-ko.sung@iosb.fraunhofer.de
* E-mail address: chen-ko.sung@sung.de
b Fraunhofer IOSB, Fraunhoferstr. 1, Karlsruhe, Germany. E-mail address: florian.segor@iosb.fraunhofer.de

Abstract

During search and rescue operations after man-made or natural disasters, the rescue forces need exact information about the situation as soon as possible. At the Fraunhofer IOSB a security and surveillance system is being developed which uses a variety of modern sensor systems for this purpose. Besides land robots, maritime vessels, sensor networks and fixed cameras, miniature flight drones are also used to transport a wide range of payloads and sensors. To gain a higher level of autonomy for these UAVs, different onboard process chains of image exploitation for tracking landmarks and of control technologies for UAV navigation were implemented and examined to achieve a redundant and reliable UAV precision landing. First experiments allowed us to validate the process chains and to develop a demonstration system for the tracking of landmarks in order to prevent and minimize any confusion on landings.

Keywords: AMFIS, mobile sensor carrier systems, adaptive, guiding point, information fusion, landing system, rescue team, automatic landmark tracking

1. Introduction

The civil security and surveillance system AMFIS was developed at the Fraunhofer IOSB as a mobile support system for rescue forces in accidents or disasters. The system is designed as an open integration hub for a large number of heterogeneous sensors, sensor networks and sensor carriers to support rescue forces optimally. Besides cameras, sensor nodes, ground robots, and underwater robots, unmanned aerial vehicles (UAVs) [1] are also used.
These are in most cases vertical take-off and landing (VTOL) systems, which navigate on the basis of the global positioning system (GPS). To gain a higher level of autonomy for these systems, different onboard process chains of image exploitation for tracking landmarks and of control technologies for UAV navigation were implemented and examined to achieve a redundant and reliable UAV precision landing.

The benefits of onboard process chains are multiple: the data transmission from the UAV to the ground station is reduced and the level of autonomy for UAV operations is increased. The methods used for the automatic landmark tracking are invariant to rotation and scaling. They are efficient, robust, and adaptive regardless of the flight level of the UAV. In this paper, the selected onboard process chains for the automatic landmark tracking, for UAV navigation and for the conversion of landmark positions from image coordinates to world coordinates in video sequences are presented. First experiments allowed us to validate the process chains and to develop a demonstration system for the tracking of landmarks in order to prevent and minimize any confusion on landing.

2. System Overview

The security and surveillance system AMFIS, developed as a technology demonstrator at the Fraunhofer IOSB, is designed as a support system for rescue and emergency units. Assuming that stationary but also highly mobile sensor systems will become more and more relevant in the near future, the system provides a homogeneous and intuitive workspace to simplify the use of heterogeneous systems. To make this possible, different approaches are realized and tested. Besides the reduction of the operating complexity and the fusion of data from different sources into a comprehensive situation picture, the design of the user interfaces plays a central role. In addition, an essential factor is the modularity and hence the adaptability of the system. The sensors and sensor carriers used in the technology demonstrator can be seen as placeholders and can therefore be exchanged or complemented very simply to provide a suitable system for different purposes. To test the AMFIS system in different situations and application scenarios, it was equipped exemplarily with a rather wide range of diverging sensor systems.
Primarily EO and IR cameras are used as sensors for surveillance purposes. These sensors are complemented by motion detectors, gas or vibration sensors and a large number of secondary sensors, like GPS, acceleration sensors or attitude sensors. These sensors are either installed at fixed positions or are carried by mobile systems to their destination. For this purpose different flight platforms, ground robots, as well as surface and underwater vessels are used. The ground station allows the operation of these different systems by always providing the operator with the same or an only slightly modified interface, regardless of the type of the asset in use. What is valid for the hardware assets is also considered in the development of the support systems for the operator. Thus, new backend systems can complement or substitute the available ones when required. To guarantee an easy, quick and efficient application of the different subsystems, the operator is supported by different backend systems. Work routines which are not necessarily relevant for the mission or which only stress the operator needlessly are simplified as far as possible or completely automated. The aim is to use the integrated mobile systems as autonomously as possible and to process the data stream in such a way that an overflow can be precluded. Especially in the case of the miniature flight drones this principle can be used efficiently. A great number of functions which keep the UAV alive and deal with the fulfillment of its job can be automated. Collision control, positioning, and reactions to certain events, up to heading to a desired position or flying over a defined area, are handled by the ground station when needed, which reduces the working load on the operator. A parallel operation of different systems becomes possible. Additional intelligent analysis systems support the work with the incoming data. Video analysis systems such as ABUL [2] can be integrated easily and therefore simplify the processing of the data.

3. Application Scenarios

The security and surveillance system AMFIS [3] has been developed as an adaptive system of systems to be capable of dealing with a large number of different demands. Changing surroundings, advances in the field of sensor technology and future demands had to be considered when creating the system. Hence, the essential application scenario can be formulated very basically: if in situ sensors and sensor systems are not sufficient, or do not even exist, to provide enough relevant information about the current situation, the advantages of small sensor carriers in combination with a wide variety of sensors adapted to the situation can take effect. Because UAVs in particular can operate independently of infrastructure like paths or streets and regardless of the state of the ground, their application is examined with a special focus.
Sophisticated multi-rotor systems are equipped with a large number of sensors which support automatic flight. This allows a short-term but extremely mobile aerial reconnaissance, in particular by providing a view into threatened or dangerous areas without endangering human life, or by creating images of areas which are very hard to access. In addition, the bird's-eye view can also be used to gain a comprehensive overview of complex situations [4]. Beside the function as a sensor carrier for cameras, the aerial systems are also used for the transport of other sensors, for example chemical measuring systems to analyze poisonous materials in the air. Thus, invisible menaces can be determined more exactly, allowing a better protection of people and rescue forces. As great as the advantages of aerial reconnaissance are, the deployment of flying drones is equally complicated and time-consuming. To make it generally feasible, the operating complexity has been strongly reduced by the AMFIS ground control station (GCS) and the autonomy of the drones. This is primarily possible because the drones are equipped with GPS systems. The positioning system GPS allows quick automatic take-off, position regulation and homing, so that no pilot is required in these cases to manually take over control. Though landing using GPS is possible, a lot of space is required to provide a secure area on account of the inaccuracy of the GPS. To reduce the positioning error and to provide a UAV capable of a precise automatic landing without any manual intervention, a system was designed which allows real-time onboard pattern detection. The detected guiding point is used to improve the positioning of the UAV in world coordinates and to reduce the inaccuracies of the inertial measurement unit (IMU). The hardware and software components used in this landing system are fully integrated into the AMFIS GCS to further reduce the workload on the operators. The following chapters describe the mark-based recognition procedure for positioning.

4. Landmark Detection using an Adaptive Operation

The investigation concentrated on the visual detection of a man-made landmark with a fish-eye lens. For this purpose we assume that the mark must always be clearly visible to the image sensor, independent of the flight level of the UAV during the in-flight detection process. Numbers and characters are good landmarks because they are systematic and can encode a high information content. For the validation of the process chains and for the development of a demonstration system, the character H is used as the mark of the landing site. To test the generality of our landmark detection and pattern recognition algorithm, other characters or patterns will be used in the future.
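The paper does not spell out how the detected guiding point is combined with the GPS/IMU position estimate. A minimal complementary-filter sketch follows; the function name, the 0.8 weighting and the pad-relative coordinate convention are all assumptions, not the authors' method:

```python
# Hypothetical sketch: blending a vision-derived guiding-point fix with
# the GPS/IMU position estimate near the landing site.

def fuse_position(gps_imu_xy, vision_xy, alpha=0.8):
    """Blend the noisy GPS/IMU estimate with the vision fix.

    alpha -- assumed weight of the vision fix; near the pad the detected
             landmark position is far more precise than GPS.
    """
    return tuple(alpha * v + (1.0 - alpha) * g
                 for g, v in zip(gps_imu_xy, vision_xy))

# Example: GPS/IMU says the UAV is at (2.0 m, -1.5 m) relative to the
# pad, while the detected landmark implies (0.4 m, -0.2 m).
corrected = fuse_position((2.0, -1.5), (0.4, -0.2))
```

In practice such a correction would feed back into the flight controller each frame; a Kalman filter would be the more principled choice, but the simple blend illustrates the idea.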
Many segmentation methods have been developed and implemented in the past in order to reduce the search area for the detection of target objects [5, 6, 7]. One of these methods is the so-called binarization. An image is first binarized using the method of foreground-background separation. The binarization is performed using one or more threshold values and creates a black-and-white image (blob image) as a result. The white areas are intended to represent target images and the black areas the background. Crucial for a good binarization with respect to the respective task is an appropriate threshold setting. Such an adapted threshold determination is generally not trivial if the gray or color value ranges of the relevant image regions are not known in advance. If an image is binarized with too low a threshold, the foreground gets too many pixels. If the image is binarized with too high a threshold, the target loses its signature or pixels. In our setup we assume that the landmarks have man-made special forms with selected colors. A color-to-gray conversion algorithm converts the color images to gray value images. The colors of a landmark are assigned to higher gray values of a gray value image. The adaptive and selective multi-target method [8] is used to separate the landmarks from the background. Without reducing the original image resolution, for example by a Gaussian pyramid, the images are segmented after binarization and noise reduction. The search areas for the detection of landmarks are thereby drastically reduced to a few blobs or regions of interest (ROIs). The blobs are candidates for the detection of a landmark on a landing site in the image. Figure 1 shows the selected ROI with its vertices and green bounding box as a detected landmark.

5. Recognition and Interpretation of Landmark Images

The vertices of the blobs (see Figure 1c) are calculated for the correction of pattern distortion caused by the camera pose and the fish-eye lens. The image data that lie within the vertices must be transformed back to a standard position with a standard size before the pattern recognition is applied. This step ensures a rotation- and scale-invariant pattern recognition. Many pattern recognition methods can be used for the interpretation of the transformed regions in which the blobs are contained [9]. Knowledge-intensive and learning-intensive methodologies do not fit the system requirements because the computational power on the flying platform is low. For the onboard image evaluation a non-compute-intensive process, the so-called zigzag method, was developed and applied. This process analyzes how many binary values of relevant parts in the transformed region correlate with the expected values. If a back-transformed region has a high correlation, this region is recognized as a landmark and interpreted as the capital character "H". The position and rotation of the landmark in the image are calculated from the coordinates of its vertices.

Figure 1: The search areas for the detection of landmarks are drastically reduced to a few blobs or regions of interest (ROIs). a) An original image. b) Blobs after the binarization and noise reduction. c) A selected ROI with its vertices and green bounding box as a detected landmark.
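The segmentation chain of Section 4 can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold is walked backwards from the bright end of the cumulative histogram (the conclusions only hint at this), the foreground fraction is an assumed heuristic, and the blob extraction is a plain BFS connected-component labeling:

```python
from collections import deque

def adaptive_threshold(gray, foreground_fraction=0.05):
    """Walk the cumulative histogram backwards from the bright end until
    the expected foreground fraction is covered (assumed heuristic).
    gray: 2-D list of values in 0..255, landmark mapped to high values."""
    hist = [0] * 256
    for row in gray:
        for value in row:
            hist[value] += 1
    target = foreground_fraction * sum(hist)
    covered = 0
    for value in range(255, -1, -1):
        covered += hist[value]
        if covered >= target:
            return value
    return 0

def extract_blobs(binary, min_pixels=4):
    """4-connected component labeling via BFS; returns bounding boxes
    (x_min, y_min, x_max, y_max) of blobs large enough to be ROIs."""
    h, w = len(binary), len(binary[0])
    visited = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not visited[y][x]:
                queue, pixels = deque([(y, x)]), []
                visited[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not visited[ny][nx]):
                            visited[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_pixels:
                    ys, xs = zip(*pixels)
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

# Toy example: one bright 3x3 patch (the "landmark") on a dark background.
img = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(3, 6):
        img[y][x] = 200
t = adaptive_threshold(img, foreground_fraction=0.1)
binary = [[1 if v >= t else 0 for v in row] for row in img]
rois = extract_blobs(binary)   # one ROI covering the bright patch
```

Onboard, the same idea would run on the gray-converted camera frames; the min_pixels filter stands in for the noise reduction the paper mentions.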

6. Results

A fish-eye lens must sometimes be used in order to capture more area, because sometimes the activities in the surroundings of the landing site should also be observed. In this case the imaged landmarks may be distorted by the fish-eye lens and by rotational motion (see Figure 2). Nevertheless, the distorted landmark can still be correctly detected and recognized at a low height of about 10 meters. This means that all parameters in the process chains are well matched for that case. No rectification of distorted images is performed because there is no spare processing power onboard the UAV. The resolution of the landmark is drastically impaired if the machine ascends to a higher flight level. Figure 3 shows the X- and Y-positions of the detected and recognized landmark in the image over 1000 images, marked with green x. 164 images with an unrecognized landmark and 281 images with no landmark are registered with red +. About 16.4% (= 164 / 1000) of the input images are unrecognized; thus the detection and recognition rate is about 83.6%. The results of the rotation- and scale-invariant pattern recognition in image sequences are shown in Figure 4. The feature detection and pattern recognition in the process chain work properly even when the UAV rotates around the landing site. The center of the recognized pattern is marked with a red circle. Independent of the sensor pose, the position of the landing site is detected correctly even at the image border. The image coordinates of the center of the landing site are transferred to world coordinates in order to calculate the UAV pose. Figure 7 shows that the UAV approached the landing site and then flew away.

Figure 2: The imaged landmarks may be distorted by the fish-eye lens and rotational motion. The resolution of the landmark is drastically impaired if the machine ascends to a higher flight level.
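The zigzag method that produced these recognition rates is only outlined in the paper. As a simple stand-in, a back-transformed binary ROI can be scored by the fraction of pixels agreeing with a stored template of the mark; the 5x5 "H" template, the 0.9 acceptance threshold and all names below are assumptions for illustration:

```python
# Assumed 5x5 binary template of the capital "H" landmark.
H_TEMPLATE = (
    (1, 0, 0, 0, 1),
    (1, 0, 0, 0, 1),
    (1, 1, 1, 1, 1),
    (1, 0, 0, 0, 1),
    (1, 0, 0, 0, 1),
)

def correlate_binary(region, template=H_TEMPLATE):
    """Fraction of pixels of the normalized region that agree with the
    expected template values (a stand-in for the zigzag scoring)."""
    matches = sum(r == t
                  for r_row, t_row in zip(region, template)
                  for r, t in zip(r_row, t_row))
    return matches / (len(template) * len(template[0]))

def is_landmark(region, accept=0.9):
    """Accept the region as the 'H' mark if the correlation is high."""
    return correlate_binary(region) >= accept

# A perfect observation scores 1.0; one flipped pixel scores 24/25 = 0.96.
noisy = [list(row) for row in H_TEMPLATE]
noisy[0][1] = 1
```

Because the ROI is first transformed back to a standard position and size via its vertices, this pixel-wise comparison inherits the rotation and scale invariance described in Section 5.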

Figure 3: The X- and Y-positions of the detected and recognized landmark over 1000 images are registered with green x. The images with no landmark or an unrecognized landmark are registered with red +.

Figure 4: The pictures above show the flight maneuver over the landing site. The process chain works well with image sequences captured with a fish-eye lens.
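The transfer of the landing-site center from image coordinates to world coordinates mentioned in the results is not detailed in the paper. A minimal sketch under the assumption of a nadir-looking pinhole camera over flat ground (the fish-eye distortion, which the paper leaves uncorrected onboard, is ignored here); the function name, parameters and axis conventions are all hypothetical:

```python
import math

def image_to_world(u, v, cx, cy, focal_px, altitude_m, yaw_rad=0.0):
    """Project the landing-site center from pixel to ground-plane
    coordinates relative to the UAV (hypothetical helper).

    (u, v): pixel position; (cx, cy): principal point; focal_px: focal
    length in pixels; altitude_m: height above ground; yaw_rad: UAV
    heading. Assumes a nadir-looking pinhole camera over flat ground.
    """
    # Metric offset on the ground plane in the camera frame: at height h,
    # one pixel spans h / f meters.
    x_cam = (u - cx) * altitude_m / focal_px
    y_cam = (v - cy) * altitude_m / focal_px
    # Rotate by the UAV heading into world-aligned axes.
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x_cam - s * y_cam, s * x_cam + c * y_cam)

# Example: landmark 100 px right of the principal point at 10 m altitude
# with a 500 px focal length -> 2 m ground offset.
offset = image_to_world(420, 240, 320, 240, 500.0, 10.0)
```

With the fish-eye lens actually used, an undistortion step would have to precede this projection; the paper instead relies on the detection being robust to the distortion at low altitude.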

7. Conclusions and Future Work

In this paper we presented a system to support rescue forces in disasters or accidents. To provide a better data basis for creating situation awareness, different sensor assets are used to generate information. On account of their special qualities, a main focus lies on the application of flying miniature drones. To make their application possible, important functions not relevant to the mission itself must be automated. For this purpose a concept for onboard pattern recognition for autonomous UAV landing has been presented. The cumulative histogram is used to derive, working backwards, the adaptive threshold value for the detection of pattern images in image sequences. The extracted pattern can be recognized using the so-called zigzag method. The results of the investigations motivated us to add more characters or patterns and active components. The goals are to develop a system for precision landing using an active landmark that is recognized by the drone, allowing an additional visual control of the air component, including the necessary procedures for the evaluation of multi-sensor image data.

8. References

[1] Bürkle, A., Collaborating Miniature Drones for Surveillance and Reconnaissance, Proc. of SPIE Vol. 7480, 74800H, Berlin, Germany, 1-2 September (2009).
[2] Heinze, N., Esswein, M., Krüger, W. and Saur, G., Image exploitation algorithms for reconnaissance and surveillance with UAV, Proc. of SPIE Vol. 7668, Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications VII (2010).
[3] Leuchter, S., Partmann, T., Berger, L., Blum, E.J. and Schönbein, R., Karlsruhe generic agile ground station, in: Beyerer, J. (ed.), Future Security, 2nd Security Research Conference, Fraunhofer Defense and Security Alliance, 159-162 (2007).
[4] Segor, F., Bürkle, A., Kollmann, M. and Schönbein, R., Instantaneous Autonomous Aerial Reconnaissance for Civil Applications - A UAV based approach to support security and rescue forces, The 6th International Conference on Systems ICONS 2011, St. Maarten, The Netherlands Antilles, 23-28 January (2011).
[5] Sung, C.-K., Extraktion von typischen und komplexen Vorgängen aus einer Bildfolge einer Verkehrsszene, in: Bunke, H., Kübler, O., Stucki, P. (eds.), Mustererkennung 1988, Informatik-Fachberichte 180, 90-96 (1988).
[6] Navon, E., Miller, O. and Averbuch, A., Color image segmentation based on adaptive local thresholds, Image and Vision Computing, Vol. 23, 69-85 (2005).
[7] Sezgin, M. and Sankur, B., Survey over image thresholding techniques and quantitative performance evaluation, Journal of Electronic Imaging, Vol. 13, Issue 1, 146-165 (2004).
[8] Sung, C.-K., Adaptive and Selective Multi-Target-Tracker, Proc. of SPIE Vol. 8137, 81370T-1, San Diego, California, USA, 23 August (2011).
[9] Wood, J., Invariant pattern recognition: a review, Pattern Recognition, Vol. 29, No. 1, 1-17 (1996).