
USE OF SMALL FORMAT DIGITAL AERIAL IMAGES FOR CLASSIFICATION OF SATELLITE IMAGES

A. Abd-Elrahman 1, L. Pearlstine 1, S. Smith 1 and P. Princz 2
1 Geomatics Program, University of Florida, Gainesville, FL 32611, U.S.A.
2 Living Planet Environmental Research, Ltd., H-2040 Budors, Szivarvany u. 10, Hungary

Abstract: The process of acquiring small format digital aerial images for assessing land cover classification of Landsat Thematic Mapper (TM) satellite images is discussed. The positional accuracy of the images and their suitability for assessing land cover classification are tested. Using an economical configuration of onboard navigation sensors, a positional accuracy of 15 m in the image center point locations can be achieved. In addition, dithered 3-band color infrared images can be used for assessing land cover classification of TM satellite images.

Keywords: satellite imagery, digital imagery, classification

1. INTRODUCTION

Natural resource management for large regions can benefit from the rapid and cost-effective approach of aerial monitoring and sampling of vegetation cover. The National Gap Analysis Program (GAP) of the U.S. Geological Survey is an example of this idea [1]. The mission of GAP is to provide regional assessments of the conservation status of native vertebrate species and natural land cover types, and to facilitate application of this information to land management. One of the primary objectives of GAP is to develop detailed land cover classification using Landsat TM satellite imagery. Perhaps more than any other factor, good knowledge of the ground is critical to successful interpretation of satellite imagery [2]. Over large areas, ground surveys are time-consuming and expensive, and the inaccessibility of many sites due to their remoteness adds to the problem.
Ground surveys remain essential for any classification effort, but depending on ground surveys alone for a classification effort that extends across the entire state would be impractical. Therefore, the Florida Gap Project adopted rapid and inexpensive acquisition of geo-coded color-infrared (CIR) digital photography, using a standard light aircraft fitted with a simple window mount, as a practical approach for collecting thousands of ground samples to assist classification of the state's land cover. This approach came after two other GAP projects, in Arizona and Massachusetts, used aerial videography [3] to identify plant communities and, in many cases, individual species, and after aerial videography was used earlier in the Florida GAP project to assess south Florida land cover classification. A digital camera has a number of advantages over a video system: 1) color infrared for better delineation of vegetation types, 2) increased spatial resolution compared to analog video, 3) digital images do not degrade over time or with copying, 4) ease of digital enhancement, and 5) random instead of sequential access to individual frames. These advantages, in addition to the continuous and rapid advances in digital camera technology, motivated the Florida GAP project to switch from videography to digital camera techniques.

In this research, the hardware and software configuration used to acquire the digital camera images is described. The process of computing the image center points and testing their positional accuracy is introduced. Finally, the suitability of the images for providing ground truth information to assess Landsat TM land cover classification is evaluated.

2. HARDWARE AND SOFTWARE CONFIGURATION

Two cameras were mounted on a single engine Cessna 172 aircraft. One camera was a Kodak 24-bit DCS 420 digital color infrared camera with a 1524 x 1012 focal plane array. Imagery from the camera loads to a PC card in the camera.
The data was later downloaded to a computer. The other camera was a video camera linked to a video recorder inside the plane. A Garmin 12-channel GPS receiver measuring the C/A code was used to provide real-time differentially corrected coordinates of the camera location at each exposure. The real-time differential correction was broadcast from commercial ground control stations and received through the receiver. This system can provide coordinates of the camera location with errors of 1 to 3 meters. The aircraft attitude was determined by a Watson Industries Attitude and Heading Reference System (AHRS-BA303). This device mainly uses three solid-state attitude gyros to provide two attitude angles and a third heading angle. An accuracy of 0.3 degrees in attitude angle determination can be achieved with this instrument [4]. In addition, a Horita time-code generator stamped each frame of the video image with the time. The geographic coordinates of the center of each video frame were then known by relating the time stamp back to the computer file of times and GPS positions.

The three types of sensors, i.e. (1) the digital camera, (2) the GPS receiver, and (3) the attitude measuring device, were linked together through an ACCUPHOTO navigational unit. This unit provides the capability of automatically firing the camera along predefined flight lines and at certain time intervals. In addition, the ACCUPHOTO unit records the GPS and attitude data at the same time the camera is fired. Table 1 shows sample data recorded by the ACCUPHOTO navigation system in tabular format.

Table 1. Part of the tabulated image data file.
Latitude(deg)  Longitude(deg)  Altitude(ft)  GPS heading(deg)  Date      Time      Roll(deg)  Pitch(deg)  Gyro heading(deg)
29.683994      -82.343138      1784          178               12/27/99  19:12:36  3.7        3.4        184.2
29.677334      -82.343430      1837          181               12/27/99  19:12:55  3.1        4.0        182.5
29.670674      -82.343014      1850          176               12/27/99  19:13:14  5.1        3.5        184.8
29.663999      -82.342857      1771          179               12/27/99  19:13:33  5.0        5.3        179.5
29.657312      -82.342917      1856          182               12/27/99  19:13:54  4.5        4.7        189.7
29.650617      -82.343201      1899          181               12/27/99  19:14:15  2.9        3.4        190.3
29.643959      -82.343286      1856          181               12/27/99  19:14:34  0.8        3.3        185.4
29.637275      -82.343435      1830          180               12/27/99  19:14:53  2.2        3.1        184.3
29.630556      -82.343154      1820          176               12/27/99  19:15:11  0.1        2.2        177.9
29.623937      -82.342268      1788          173               12/27/99  19:15:28  4.4        3.5        180.5

3. GEO-REFERENCING DIGITAL IMAGES

In order to use these aerial digital images for assessing the land cover classification and performing accuracy assessment, the ground coordinates of the center of each image had to be calculated. As previously mentioned, this was achieved through the onboard GPS receiver and attitude measuring device. The GPS receiver provided differentially corrected real-time coordinates of the camera position each time the camera was exposed. The GPS coordinates were acquired in geographic coordinates (i.e. latitude and longitude) on the WGS84 datum. At the moment of camera exposure, the attitude and heading measuring device also provided the attitude angles of the aircraft. The GPS data, the attitude and heading data, and the date and GPS time of each exposure were all put together in an output file. This output file was read and converted into a database file in which each record represents a complete data record for one camera exposure.

Three data sets were created during the flight mission. The first data set was a video recording of the flight line. Although this data set was not related to geo-referencing the digital images, it is mentioned here for documentation.
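A record of this form can also be parsed programmatically. The sketch below is a hypothetical parser for one raw record of the tabulated data file, assuming whitespace-separated fields in which latitude and longitude are fused by the longitude's minus sign and the date carries a stray "%" prefix, as in the raw export; the field names are our own.

```python
import re

def parse_accuphoto_record(line):
    """Parse one raw ACCUPHOTO record (hypothetical field layout based on Table 1)."""
    tokens = line.split()
    # Split the fused "lat-lon" token, e.g. "29.683994-82.343138".
    m = re.match(r"^(-?\d+\.\d+)(-\d+\.\d+)$", tokens[0])
    lat, lon = float(m.group(1)), float(m.group(2))
    return {
        "lat": lat,                        # latitude (deg)
        "lon": lon,                        # longitude (deg)
        "alt_ft": float(tokens[1]),        # altitude (ft)
        "gps_heading": float(tokens[2]),   # GPS heading (deg)
        "date": tokens[3].lstrip("%"),     # date, stripping the stray '%' prefix
        "time": tokens[4],                 # GPS time
        "roll": float(tokens[5]),          # roll (deg)
        "pitch": float(tokens[6]),         # pitch (deg)
        "gyro_heading": float(tokens[7]),  # gyro heading (deg)
    }

record = parse_accuphoto_record(
    "29.683994-82.343138 1784 178 %12/27/99 19:12:36 3.7 3.4 184.2"
)
```

Once parsed, each record can be stored as one row of the database file described below.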
The second data set was the digital images captured by the digital camera. Each of these images was in a proprietary "Kodak tiff" format and had a time stamp in the image header representing the time of each image in the camera time frame. The third data set was the database file with a record for each camera exposure; each record included GPS coordinates, aircraft attitude, date, and exposure time.

Although all the information needed to compute the ground coordinates of the image center points was now available, the two data sets were not yet tied to each other; in other words, each image had not yet been linked to a record in the database file. This linking was done using both the time stamp on each image, taken in the camera timing frame, and the GPS time recorded for each exposure. The time shift between the camera time frame and the GPS time frame was eliminated, and the two data sets were linked together by assigning image names to the records in the database file. Some problems were encountered during this process. The most frequent was the existence of records in the database that did not correspond to any image; these records were simply eliminated.

The database file now had all the information necessary to compute the coordinates of the center point of each image. First, the GPS coordinates of the camera exposure locations, listed in geographic
projection (i.e. latitude and longitude) on the WGS84 geodetic datum, were transformed into NAD27 and projected into the UTM projection system. This step was important so that the computed coordinates would match the geodetic datum and map projection used for the GAP project imagery and ancillary data. The coordinates of the ground point representing the center of the image were then calculated using Equations (1) and (2). Figure 1 illustrates the geometry from which these equations were derived.

Xc = XGPS + H' * tan(ω) * sin(α) - H' * tan(ϕ) * cos(α)    (1)
Yc = YGPS + H' * tan(ω) * cos(α) + H' * tan(ϕ) * sin(α)    (2)

where:
XGPS : UTM x coordinate of the camera acquired by the onboard GPS receiver.
YGPS : UTM y coordinate of the camera acquired by the onboard GPS receiver.
Xc : UTM x coordinate of the image center point.
Yc : UTM y coordinate of the image center point.
H' : the altitude of the aircraft above ground.
ω : the pitch angle of the aircraft at exposure, recorded by the attitude measuring device.
ϕ : the roll angle of the aircraft at exposure, recorded by the attitude measuring device.
α : the heading of the aircraft at exposure, recorded by the attitude measuring device.

Figure 1. Coordinates of image center points from GPS and attitude information. [Figure omitted: geometry of the ground offsets H'·tan(ω) and H'·tan(ϕ) rotated by the heading α.]

Note: the horizontal component of the offset vector between the GPS antenna and the camera was about 0.25 m. This distance was small enough to be neglected for our purposes.

4. TESTING POSITIONAL ACCURACY

Effort was directed towards checking the positional accuracy of the computed coordinates of the image center points. The final positional accuracy of the computed coordinates should lie within the pixel size of Landsat TM images; that is, the maximum allowable error in the coordinates of the image centers should not exceed 30 meters. More than 10,000 images were taken with this system.
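Equations (1) and (2) can be sketched directly in code. The example below is a minimal illustration: the UTM coordinates, altitude, and angles are made-up sample values, not flight data, and the WGS84-to-NAD27/UTM transformation is assumed to have been done beforehand.

```python
import math

def image_center(x_gps, y_gps, h_agl, pitch_deg, roll_deg, heading_deg):
    """Ground coordinates of the image center point, per Equations (1) and (2).

    x_gps, y_gps : UTM camera coordinates from the onboard GPS (m)
    h_agl        : aircraft altitude above ground H' (m)
    pitch_deg    : pitch angle omega (deg)
    roll_deg     : roll angle phi (deg)
    heading_deg  : heading alpha (deg)
    """
    w = math.radians(pitch_deg)    # omega
    p = math.radians(roll_deg)     # phi
    a = math.radians(heading_deg)  # alpha
    x_c = x_gps + h_agl * math.tan(w) * math.sin(a) - h_agl * math.tan(p) * math.cos(a)
    y_c = y_gps + h_agl * math.tan(w) * math.cos(a) + h_agl * math.tan(p) * math.sin(a)
    return x_c, y_c

# Illustrative exposure: camera at (370000, 3280000) UTM, 550 m above ground,
# pitch 3.4 deg, roll 3.7 deg, heading 184.2 deg (values invented for the example).
xc, yc = image_center(370000.0, 3280000.0, 550.0, 3.4, 3.7, 184.2)
```

A quick sanity check: with level flight (pitch = roll = 0) the tangent terms vanish and the computed center coincides with the GPS position.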
Although the instruments used are capable of providing coordinates within less than 10 meters of the actual locations, images were randomly selected and the computed coordinates of their center points were checked. The procedure for checking the positional accuracy of the image center points was based on Digital Orthophoto Quads (DOQs), a one-meter resolution geo-rectified product of the US Geological Survey (USGS), against which the computed coordinates were compared. First, images with gross location errors, caused by temporary malfunctioning of the electronic devices or by GPS cycle slips, could be identified easily and eliminated. This was done visually by plotting the coordinates of the image center points and observing image locations falling severely outside the smooth curve representing the aircraft flight line.
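The visual screening described above can be approximated programmatically: fit a straight reference line to the sequence of center points and flag any point whose perpendicular offset is implausibly large. The sketch below is our own illustration under invented data and threshold; the screening actually performed in the project was visual, not automated.

```python
import math

def flag_gross_errors(points, max_offset_m=100.0):
    """Return indices of center points lying far from the best-fit flight line.

    points : list of (x, y) UTM coordinates along one flight line.
    The line is the principal axis of the point cloud (total least squares),
    so the fit is independent of the flight direction.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # principal-axis direction
    ux, uy = math.cos(theta), math.sin(theta)
    flagged = []
    for i, (x, y) in enumerate(points):
        # Perpendicular distance = component of (p - centroid) normal to the axis.
        d = abs(-(x - mx) * uy + (y - my) * ux)
        if d > max_offset_m:
            flagged.append(i)
    return flagged

# A roughly north-south flight line with one gross outlier at index 3.
pts = [(370000.0 + i * 2.0, 3280000.0 - i * 700.0) for i in range(8)]
pts[3] = (370500.0, pts[3][1])  # simulated gross positional error
bad = flag_gross_errors(pts, max_offset_m=150.0)
```

The threshold must be generous here because a single large outlier also tilts the fitted line, inflating the residuals of the good points.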

The second step in checking the images was accomplished by selecting images that have hard boundary spatial signatures (e.g. road or pathway intersections, boundaries between different land cover types, small water bodies, rural and suburban areas, etc.). The center point of each image was identified, and these images were matched with corresponding locations identified on the DOQ. Comparing the two sets of coordinates provides information about probable errors in the system. For example, gyro drift and unsynchronized signals (i.e. a time shift between camera firing and the acquired position and attitude data) are common errors in systems involving two or three types of sensors.

Tests were conducted on eight flight transects. The center point locations of the images within each transect were identified and compared with the corresponding locations from the DOQs. Although the differences between the two locations were generally below 15 meters and rarely reached 25 meters, images in whole segments of some transects were found to have large differences. The main reason for these errors was found to be an electromagnetic field created by a metal bar inside the aircraft door in close proximity to the attitude and heading measuring device. This electromagnetic field caused errors in the recorded aircraft attitude values. One of the problems with this type of error was that its magnitude varied significantly from one flight line to another and within the same flight line. Earlier test flights showed that a lack of synchronization between the instant the picture was taken and the recording of the GPS location and aircraft attitude was another source of positional error. This lack of synchronization was noticed at higher altitude flights and especially in cold weather. This kind of error has the characteristic of being primarily in the flying direction.

5. IMAGE QUALITY AND ADEQUACY

The developed procedure for using the aerial digital images in assessing land cover classification depends mainly on the quality of these images and their adequacy for identifying different types of land cover. So far, the images have been visually interpreted to recognize the dominant land cover through the entire image. Although this method of image interpretation was time consuming and dependent on the interpreter's skill, it gave the best results, especially when dealing with a wide range of land cover classes. The challenging part of this process was identifying the different vegetation signatures: visual vegetation signatures are sometimes similar to each other and may confuse the interpreter. A collection of the digital aerial images was selected to be representative of many of the vegetation land cover classes. Field trips were conducted to visit these locations and prepare a library of the image signatures for the vegetation. This library aids both efficiency and consistency in identifying vegetation and interpreting land cover from the aerial images.

To address the trade-off between resolution and ground coverage, images at different altitudes and with different focal-length lenses were tested. Aerial digital images were acquired at 350 and 750 meter altitudes with 35 and 135 mm focal-length lenses. Table 2 presents the ground resolution and ground coverage obtained with the Kodak DCS420 CIR camera using the different altitude/lens combinations. High altitudes and/or short focal lengths give lower resolution and wider ground coverage. This scale of imagery is helpful for defining spatial patterns and context, but identifying most vegetation to species is difficult. On the other hand, large-scale images with smaller ground coverage provide better ability to identify species, but less information about spatial context. Table 2.
The Kodak DCS420 CIR camera has a 1524 x 1012 pixel array with a 9 x 9 µm pixel size, resulting in the resolutions and ground coverage shown below. Most values are rounded to the nearest whole number.

Flying Height(m)  Focal Length(mm)  Fractional Scale  Scale (1:x)  X footprint(m)  Y footprint(m)  Resolution(cm)
366               35                0.000095691       10450        143             95              9.4
366               135               0.000369094       2709         37              25              2.4
762               35                0.000045932       21771        299             198             19.6
762               135               0.000177165       5644         77              51              5.1

Figure 2 shows three sample images taken with different altitude and focal-length combinations. Tests on these sets of images to identify the altitude/focal-length combination that best facilitates the interpretation process showed that the 135 mm focal-length lens at 750 m altitude yielded the best results with this particular camera. Vegetation species could be efficiently identified in these images, and wider ecological changes could be recognized in nonhomogeneous areas using the videography data set.
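The values in Table 2 follow directly from the photo-scale relation S = f / H' and the sensor geometry. A short sketch (the sensor parameters are taken from the text; the function name and exact rounding are ours, and the flying heights in the table appear to be rounded from feet, so the derived values match only approximately):

```python
def ground_metrics(flying_height_m, focal_length_mm,
                   pixels_x=1524, pixels_y=1012, pixel_size_um=9.0):
    """Scale, ground footprint, and ground resolution for a frame camera.

    Photo scale S = f / H'; ground pixel size = pixel pitch / S;
    footprint = pixel count * ground pixel size.
    """
    scale = (focal_length_mm / 1000.0) / flying_height_m  # fractional scale f/H'
    gsd_m = (pixel_size_um * 1e-6) / scale                # ground sample distance (m)
    return {
        "scale": scale,
        "scale_denominator": 1.0 / scale,                 # the "1:x" figure
        "x_footprint_m": pixels_x * gsd_m,
        "y_footprint_m": pixels_y * gsd_m,
        "resolution_cm": gsd_m * 100.0,
    }

# First row of Table 2: 366 m flying height with the 35 mm lens.
row = ground_metrics(366, 35)
```

This reproduces the Table 2 rows to within rounding, e.g. roughly a 143 m x 95 m footprint at 9.4 cm resolution for the 366 m / 35 mm combination.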

(a) (b) (c)
Figure 2. Sample aerial images taken with different altitude (H) and focal-length (f) combinations: (a) H = 360 m and f = 35 mm; (b) H = 360 m and f = 135 mm; (c) H = 750 m and f = 135 mm.

6. CONCLUSIONS

The use of small format imagery proved to be very efficient and economical in providing ground truth information for land cover classification, especially for large-scale mapping projects. Aerial digital images are superior to more traditional aerial videography in providing instantaneous and archival imagery. These images can be efficiently and automatically enhanced to provide details that are difficult to extract from videographic imagery.

Positional accuracy tests should be conducted on imagery captured using multi-sensor technology. Our positional accuracy tests showed that, in general, the computed image center points are less than 15 meters, and rarely more than 25 meters, from the corresponding locations identified on the DOQ geo-referenced images. These results match the accuracy expected from our hardware and software configuration and lie within the 30 m pixel size of the Landsat satellite images. The procedure developed for checking the positional accuracy of the aerial images was based on manual matching of the images with the DOQs as high-resolution geo-referenced imagery. An algorithm for automatic sampling and checking of the positional accuracy of small format digital imagery against available geo-rectified imagery would be a desirable further refinement.

Different flying heights and lens focal lengths were also tested to find the best resolution/ground-coverage combination. Our tests indicated that, for the Kodak DCS420 CIR camera, the 135 mm focal length and an average 750 m above mean sea level flying height is the optimum combination for efficient recognition of vegetation in northern Florida. Wider ecological changes can be monitored through the continuous and wider field of view of aerial videography.

REFERENCES

[1]. J.M. Scott, F. Davis, B. Csuti, R. Noss, B. Butterfield, C. Groves, H. Anderson, S. Caicco, F. D'Erchia, T. Edwards, J. Ulliman, and G. Wright, "Gap Analysis: A Geographic Approach to Protection of Biological Diversity". Journal of Wildlife Management 57(1) (1993) supplement, Wildlife Monographs No. 123.
[2]. T. Lillesand and R. Kiefer, Remote Sensing and Image Interpretation, 3rd Ed., Wiley and Sons, New York, 1994, pp. 524-634.

[3]. D.M. Slaymaker, K.M.L. Jones, C.R. Griffin and J.T. Finn, "Mapping Deciduous Forests in Southern New England using Aerial Videography and Hyperclustered Multi-temporal Landsat TM Imagery". In Gap Analysis: A Landscape Approach to Biodiversity Planning, J.M. Scott, T.H. Tear and F.W. Davis (eds). American Society of Photogrammetry and Remote Sensing, Bethesda, MD, pp. 87-101.
[4]. Watson Industries Inc., "Attitude and Heading Reference System: AHRS-BA303 Owner's Manual". Watson Industries, Inc., 1995.

AUTHORS: A. Abd-Elrahman, Research Associate; L. Pearlstine, Associate In Wildlife; S. Smith, Associate Professor, Geomatics Program, University of Florida, Gainesville, FL 32611, U.S.A. Telephone: (352) 392 4990, E-Mail: ses@ce.ufl.edu. P. Princz, Head, Living Planet, Ltd., H-2040 Budors, Szivarvany St., Hungary. Telephone: 36 1 161 0236, E-Mail: Lplanet@hungary.net.