Smart, texture-sensitive instrument classification for in situ rock and layer analysis
GEOPHYSICAL RESEARCH LETTERS, VOL. 40, doi: /grl.50817, 2013

Smart, texture-sensitive instrument classification for in situ rock and layer analysis

K. L. Wagstaff,1 D. R. Thompson,1 W. Abbey,2 A. Allwood,2 D. L. Bekker,3 N. A. Cabrol,4 T. Fuchs,5 and K. Ortega6

Received 4 June 2013; revised 31 July 2013; accepted 1 August 2013; published 27 August 2013.

[1] Science missions have limited lifetimes, necessitating an efficient investigation of the field site. The efficiency of onboard cameras, critical for planning, is limited by the need to downlink images to Earth for every decision. Recent advances have enabled rovers to take follow-up actions without waiting hours or days for new instructions. We propose using built-in processing by the instrument itself for adaptive data collection, faster reconnaissance, and increased mission science yield. We have developed a machine learning pixel classifier that is sensitive to texture differences in surface materials, enabling more sophisticated onboard classification than was previously possible. This classifier can be implemented in a Field Programmable Gate Array (FPGA) for maximal efficiency and minimal impact on the rest of the system's functions. In this paper, we report on initial results from applying the texture-sensitive classifier to three example analysis tasks using data from the Mars Exploration Rovers.

Citation: Wagstaff, K. L., D. R. Thompson, W. Abbey, A. Allwood, D. L. Bekker, N. A. Cabrol, T. Fuchs, and K. Ortega (2013), Smart, texture-sensitive instrument classification for in situ rock and layer analysis, Geophys. Res. Lett., 40, doi: /grl.50817.

1. Introduction and Motivation

[2] Landers and rovers equipped with cameras have been pivotal in developing a detailed understanding of surface morphology and geologic processes on solar system bodies including Venus [Garvin et al., 1984], Mars [Adams et al., 1986; Squyres et al., 2006], and Titan [Soderblom et al., 2007].
1 Machine Learning and Instrument Autonomy, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.
2 Planetary Chemistry and Astrobiology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.
3 Instrument Flight Software and Ground Support Equipment, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.
4 Space Science Division, NASA Ames Research Center/SETI Institute, Moffett Field, California, USA.
5 Mobility and Robotic Systems, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.
6 Distributed and Real-Time Systems, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.

Corresponding author: K. L. Wagstaff, Machine Learning and Instrument Autonomy, Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA. (kiri.l.wagstaff@jpl.nasa.gov)

American Geophysical Union. All Rights Reserved.

However, currently available flight cameras are passive data collectors: ground-based teams must analyze the data and make decisions on almost every maneuver, a process which consumes precious mission lifetime and therefore limits the ultimate science return of missions. This "ground in the loop" process is constrained by the speed of light and by operational constraints such as the availability of power for radio transmission, the location of relay orbiters, the availability of Deep Space Network receivers, and the time available from mission personnel and scientists for reviewing the data and formulating the next set of commands.

[3] There is great interest in reliable onboard image processing that can identify scientifically relevant surface features [Gilmore et al., 2000; Gulick et al., 2001; Castano et al., 2007; Smith et al., 2007].
Cameras equipped with smart image analysis capabilities would enable missions to accomplish more science in the available time by reducing the need to send data to ground-based teams for some decisions. The Mars Exploration Rover Opportunity has taken steps in this direction by conducting some onboard, in situ analysis of images as they are collected. The Autonomous Exploration for Gathering Increased Science (AEGIS) system permits the rover to detect scientific features of interest (e.g., rocks of particular size, shape, or angularity) and automatically target such features for follow-up observations at higher resolution [Estlin et al., 2012]. However, because AEGIS uses the rover's main CPU, other activities must be suspended until the onboard image processing is complete. Necessarily, only a subset of the images collected can be analyzed onboard. More importantly, AEGIS can only extract simple albedo and shape features and is therefore unable to detect patterns such as rock layers or textured terrain.

[4] We have formulated a new instrument concept that employs state-of-the-art machine learning methods to accomplish scientific objectives onboard. TextureCam integrates the imager and analyzer, going beyond simply recording pixels to classifying and interpreting the surface and rock textures present. Further, the algorithm can be implemented using a highly efficient Field Programmable Gate Array (FPGA), independent of the rover's main CPU. The results can inform onboard decisions such as which targets in a panoramic scene to analyze in more detail, how to analyze close-up targets effectively, or simply how to prioritize data and images for transmission back to Earth. By eliminating the command-loop delay, remote spacecraft can respond to targets of opportunity or dynamic events in seconds rather than hours or days.
[5] In this paper, we report on three kinds of scientific investigations that are enabled by TextureCam and that are commonly cited as important components of onboard image analysis [Gulick et al., 2001; Castano et al., 2007]. In order of increasing complexity, they are as follows: (1) find all rock targets within a scene (to characterize rock size distribution or inform further sampling decisions), (2) identify good sampling targets (more subtle than just finding rocks, this scenario seeks rocks with flat surfaces), and (3) find layered rocks (in support of astrobiology and sedimentary geology investigations). The results demonstrate the value of smart instruments that can conduct their own onboard analysis, with potential benefits for future rover, lander, and orbiter missions.

Figure 1. TextureCam system architecture. To classify a newly acquired image, four input channels are computed: raw pixel values, high-pass filtered value, range (distance), and height. Each pixel is independently classified by a model trained on previously labeled images. The output is a probability map for each class; an example for the rock class is shown here.

2. Methods: Random Forest Classifier

[6] We aim to detect and characterize different kinds of texture that appear in images collected by the camera. Unlike the geologic concept of rock texture, which refers to an intrinsic property of the rock, here texture refers to discriminative, statistical patterns of pixels in an entire image (which may include some rocks with detectable geologic textures). We posit that these numerical features can capture enough information to characterize geologically relevant aspects of a site such as surface roughness, pavement coatings, unconsolidated regolith, sedimentary fabrics, and differential outcrop weathering.

[7] We use machine learning methods to analyze training data with known properties (and textures) and construct a model that can later be used to classify textures in new images. Classifiers such as neural networks or support vector machines (SVMs) have been used to tackle geophysical analysis problems such as identifying boundaries between geologic facies [Tartakovsky and Wohlberg, 2004].
We instead employ a random forest classifier, a state-of-the-art technique that often outperforms neural networks and SVMs in terms of speed, accuracy, and robustness to noise in the data [Breiman, 2001]. A random forest consists of several decision trees, each of which is trained on a different subsample of the data. The trees vote collectively on each classification decision, which yields a more reliable result than any individual decision. The random forest is also well suited to an efficient FPGA implementation since it is highly parallelizable: each pixel can be classified independently of all others in the image.

[8] Figure 1 shows the TextureCam classification architecture. We represent each pixel in the image with an attribute vector x ∈ R^d, where d is the number of attributes (e.g., intensity, high-pass filtered value, range, height). We train the random forest classifier using pixel vectors from training images in which each pixel was manually labeled with a class of interest (e.g., "rock", "sand", "sky"). Finally, the classifier outputs a class probability map. The example in Figure 1 shows the output probability for "rock", ranging from blue (low) to red (high).

[9] Each tree in the random forest is trained on a different subset of the labeled pixels. The tree begins as a root node and progressively grows branches to distinguish between different subpopulations in the data. Each node in the tree is either pure (contains pixels that are all of the same class) or mixed. For mixed nodes, the algorithm searches for a test that can optimally split the pixels into distinct groups. Each test takes one of the forms p1 > θ, p1 − p2 > θ, |p1 − p2| > θ, p1 + p2 > θ, or p1/p2 > θ, where p1 and p2 are the attribute values for two randomly selected pixels within a local window around the pixel to be classified.
The threshold value θ is selected to maximize the information gain (class discrimination) that can be achieved by splitting the pixels at that node [Shotton et al., 2008]. The algorithm evaluates hundreds of candidate tests and uses the best-scoring test/threshold pair as the final splitting criterion for that node. This generates two new child nodes that contain the pixels that passed or failed the test, respectively. The process continues recursively for each child, terminating at pure nodes.

Figure 2. (a) Example subimage from the Legacy panorama and (b) the rock probability map output from TextureCam. Red (blue) regions indicate high (low) probability.
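The training and classification loop of paragraphs [9] and [10] can be sketched in a few lines. This is a hedged, from-scratch illustration rather than the flight implementation: each "tree" is reduced to a single-split stump, only the single-attribute test form p1 > θ is used, and the pixel data are synthetic.

```python
# Sketch of random-forest training/classification as described in the text:
# each tree trains on a bootstrap subsample, picks the threshold test with
# maximal information gain, and the forest combines per-tree class
# probabilities by product. Data and parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def train_stump(x, y):
    """Choose the threshold theta on attribute x with maximal info gain."""
    best = (None, -1.0)
    for theta in np.unique(x):
        left, right = y[x <= theta], y[x > theta]
        if len(left) == 0 or len(right) == 0:
            continue
        gain = entropy(y) - (len(left) * entropy(left) +
                             len(right) * entropy(right)) / len(y)
        if gain > best[1]:
            best = (theta, gain)
    theta = best[0]
    # Class-1 probability in each child, from the training distribution.
    return theta, y[x <= theta].mean(), y[x > theta].mean()

def forest_prob(stumps, x):
    """Combine per-tree class probabilities by product, then renormalize."""
    p1 = np.prod([(hi if x > t else lo) for t, lo, hi in stumps])
    p0 = np.prod([(1 - hi if x > t else 1 - lo) for t, lo, hi in stumps])
    return p1 / (p0 + p1)

# Synthetic 1-D pixel attribute: class 1 pixels are brighter on average.
y = rng.integers(0, 2, size=500)
x = rng.normal(loc=2.0 * y, scale=0.7)

stumps = []
for _ in range(8):                       # 8 bootstrap-trained "trees"
    idx = rng.integers(0, len(y), len(y))
    stumps.append(train_stump(x[idx], y[idx]))

assert forest_prob(stumps, 2.5) > 0.9    # bright pixel: likely class 1
assert forest_prob(stumps, -0.5) < 0.1   # dark pixel: likely class 0
```

Because every pixel is classified independently with the same small set of threshold tests, the same structure maps naturally onto the parallel FPGA implementation described in the text.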
Figure 3. Rock classification performance (precision) for the images in the Legacy panorama. Boxes show the range from the first to third quartiles of the results, and the horizontal band shows the median.

[10] New pixels are assigned to one of the known classes as follows. For each tree in the random forest, beginning with the root node of the tree, the appropriate test is applied to the new pixel p. The outcome of that test tells the algorithm which child of the node to visit next. This process continues until the algorithm reaches a pure node, where it outputs the probability that p belongs to each class, using the distribution of labeled training pixels that reached the same node during training. The final classification output from the forest is the product of the class probabilities from each tree.

3. Experimental Results

[11] We conducted several experiments using random forests to train texture-based classifiers and then classify previously unobserved images into categories of interest. These demonstrations were conducted on images collected by the Mars Exploration Rovers.

3.1. Finding Rocks

[12] Upon reaching a new location, an important first step is the identification and characterization of exposed rock surfaces. These rocks provide candidates for subsequent targeted instrument deployment and contact sensing. Further, direct analysis of identified rocks enables the characterization of the local environment by its distribution of rock sizes, colors, and compositions. A large amount of research has already been invested in methods for automatically identifying rocks in such images [Gulick et al., 2001; Castano et al., 2007; Thompson et al., 2011; Gong and Liu, 2012].

[13] We trained a random forest using 23 images from the Mission Success panorama collected by the Spirit rover during its first week of operations.
These were manually labeled by analysts to include all rocks of at least 10 pixels in size, yielding thousands of rocks at ranges from 2 to 10 m [Golombek et al., 2005]. We assigned a "terrain" label to pixels within the labeling region that were not assigned to the rock class. We then tested the trained classifier on a separate set of 23 images taken from the Legacy panorama from sol 59. Both panoramas were acquired over a span of several days and exhibit a range of different illumination conditions.

[14] The random forest was trained with information about pixel intensity (raw and high-pass filtered), range (distance), and height (see Figure 1). The height data was the result of convolving the pixel's altitude with a broad median filter to recover the ground plane and then subtracting this from the original altitude. The forest consisted of 32 trees trained on 1,000,000 pixels sampled from the labeled training data. The analysis window size was pixels. The number of trees is specified by the user and controls the diversity of the learned forest. Figure 2 shows classification results for one of the Legacy images. Rocks are clearly identified in red.

[15] The rock detection output can guide subsequent sampling or data acquisition with a contact instrument. Performance can be evaluated in terms of precision (reliability) and/or recall (completeness). Since follow-up targeting can only be applied to a subset of potential targets, we focused on precision. We calculated the probability, over all images, that the classifier was correct in its identification of the pixel most likely to be part of a rock (see Figure 3). Using intensity information alone, the classifier was 91% correct, while incorporating stereo information increased performance to 97%.
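The top-pixel precision metric used here can be made concrete with a small sketch. The probability maps and label masks below are synthetic stand-ins, and `top_pixel_precision` is an illustrative helper, not code from the paper:

```python
# Sketch of target-selection precision: for each image, take the pixel with
# the highest P(rock) and check it against the manual rock-label mask.
import numpy as np

def top_pixel_precision(prob_maps, rock_masks):
    """Fraction of images whose highest-probability pixel is truly rock."""
    hits = 0
    for probs, mask in zip(prob_maps, rock_masks):
        best = np.unravel_index(np.argmax(probs), probs.shape)
        hits += bool(mask[best])
    return hits / len(prob_maps)

# Two toy 4x4 images: in the first the peak lands on rock, in the second not.
probs1 = np.zeros((4, 4)); probs1[1, 2] = 0.9
mask1 = np.zeros((4, 4), bool); mask1[1, 2] = True
probs2 = np.zeros((4, 4)); probs2[0, 0] = 0.8
mask2 = np.zeros((4, 4), bool)          # no rock at the second peak
precision = top_pixel_precision([probs1, probs2], [mask1, mask2])
assert precision == 0.5
```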
A blind selection of targets within the image ("random") selected rock targets only 27% of the time.

3.2. Estimating Surface Condition

[16] We seek to enable rovers to autonomously find new targets and collect data directly from them during long traverses. However, arm-mounted instruments such as Raman spectrometers are sensitive to the condition of the surface, and factors such as dust deposition or fracturing can thwart automatic data collection. These hazards may not be evident in the coarse stereo data used to place instruments. Image texture analysis provides an additional means to assess surface condition for sampling.

[17] Figure 4 shows a typical result. For this task, stereo information is less relevant, so we trained a classifier using only the raw pixel intensities. We used examples from a previous image of a Meridiani outcrop, labeling as "good" several candidate sample sites that are fracture- and dust-free. These generally correspond to bright, contiguous surfaces. We also labeled image regions corresponding to broken rock, dust, or other sediment as "poor". We trained the system on 100,000 pixels sampled from a single panorama and several rotated versions to improve rotation invariance. The trained random forest consisted of 16 trees, with a window size of pixels. Figure 4 shows that the system identified several contiguous clean surfaces as high-probability targets.

[18] We compared the output of the classifier to an independent classification of the same scene done by a human expert. Of the 14 targets identified by the classifier as good sampling surfaces (areas in red in Figure 4), 13 (93%) matched with a manually chosen area.

3.3. Finding Layered Structures

[19] Layered rock structures are of particular interest for astrobiology, as they are a characteristic feature of many water-deposited sedimentary rocks. Water-lain sediments are high-priority targets that may indicate ancient habitable surface environments, and their identification is an important
step in narrowing the search for ancient signs of microbial life. Igneous layered rocks are also important targets that can provide information about planetary interior processes and extrusive lava activity.

Figure 4. Random forest classifier results for detecting sampling surfaces that are dust-free and in good condition. (a) The original image (Planetary Photojournal PIA014132) showing a young crater on Meridiani Planum. (b) Heat map (red = higher probability) showing identified sampling candidates.

[20] We trained a third classifier to identify layered structures using images from the Gibson panorama collected by the Spirit rover (sols ) when it was stationed near Home Plate. We labeled a single image with four terrain types: layered rock, smooth rock, vesicular rock, and soil. We trained a random forest using 10 trees, 50,000 samples, and a window size of pixels. A smaller window size was used due to the smaller scale of the features of interest. For this task, we employed a suite of simple bar filters that are sensitive to linear image features at eight orientations between 0° and 180°. After convolving each input image with each filter, we stored the

Figure 5. Random forest classifier results for layered surface detection on (left) the labeled training image and (right) one of the test images. The top row shows the probability maps (red = high probability of layers), and the bottom row shows the final layer detections (red), filtered using stereo data to those detections within 5 m of the camera.
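The oriented bar-filter features described above can be sketched in pure NumPy. The kernel size, bar width, and normalization below are assumptions for illustration, not the paper's exact filter bank:

```python
# Sketch of oriented bar filtering: convolve the image with a thin bar
# kernel at eight orientations in [0, 180) degrees and keep the maximum
# response at each pixel, as described for the layer-detection features.
import numpy as np

def bar_kernel(size=9, angle_deg=0.0):
    """Bar of ones along a line at angle_deg through the kernel center."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[:size, :size] - c
    t = np.radians(angle_deg)
    # Perpendicular distance from each pixel to the oriented center line.
    d = np.abs(x * np.sin(t) - y * np.cos(t))
    k = (d < 0.5).astype(float)
    return k / k.sum()

def filter2d(image, kernel):
    """Same-size 2-D filtering with edge padding (NumPy only)."""
    size = kernel.shape[0]
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(size):
        for j in range(size):
            out += kernel[i, j] * padded[i:i + image.shape[0],
                                         j:j + image.shape[1]]
    return out

def max_bar_response(image, n_orientations=8):
    """Maximum response over bar filters at orientations in [0, 180)."""
    angles = np.linspace(0.0, 180.0, n_orientations, endpoint=False)
    return np.max([filter2d(image, bar_kernel(angle_deg=a))
                   for a in angles], axis=0)

# A bright horizontal stripe produces a strong response along the stripe.
img = np.zeros((32, 32))
img[16, :] = 1.0
resp = max_bar_response(img)
assert resp[16, 16] > 0.9 and resp[4, 4] < 1e-6
```

Taking the maximum over orientations, as the text describes, makes the feature respond to linear structure at any angle, which is what allows a single classifier to find layers regardless of outcrop orientation.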
maximum filter response across all orientations, along with the raw pixel value, yielding a vector of two values for each pixel.

Figure 6. Layered region classification accuracy on the Gibson panorama images as a function of the analysis window size. Training accuracy is generally higher than test performance on new images. In both cases, we find a strong dependence on the size of the window used to analyze the images; the best performance is achieved when the window size matches the scale of the layers present in the scene.

[21] The top row of Figure 5 shows the resulting layer probability map for (left) the training image and (right) a test image. Each map shows the probability of membership in the class of interest (layered rock). Training the classifier with four classes allowed it to learn finer distinctions and improved performance on all classes. The bottom row of Figure 5 shows the final result, in which we used stereo information to filter detections (layer probability ≥ 50%) to only those realistically reachable in the next few command cycles (within 5 m). Since the base images were collected with the left camera, stereo data was not available for the far left, and no detections were reported for that region. Visually, the classifier did a good job of highlighting layered areas and omitting partially buried layers, which have a weaker texture response. This result could be used to inform further sampling by guiding a high-resolution rover instrument to examine layered regions more closely.

[22] We also quantitatively evaluated the classifier's ability to detect layered regions that were (1) large enough to be useful for subsequent sampling and (2) close enough to be readily accessible. We further filtered the detections within 5 m to include only those that had an area of at least 1000 pixels. We manually reviewed each region identified by the classifier to determine whether it contained layers or not.
Layer detection on the training image was accurate for 32 of 32 regions (100%), and detection across all test images was accurate for 58 of 58 regions (100%). Effectively, the classifier was conservative enough that it did not generate any false positives while still identifying a large number of good targets for further sampling.

[23] Perfect results are always somewhat suspect, so we investigated further to assess the limits of the classifier. We examined the classifier's sensitivity to the choice of analysis window size, which emerged as a critical factor. The preceding results were obtained with a window size of pixels. Figure 6 shows performance on the training and test images as the window size was varied. As expected, training performance was in general equal to or higher than performance on previously unseen (test) images. We found that decreasing the window size led to more false classifications of small, flat areas as layered areas, while increasing the window size resulted in fewer total detections and lower reliability in those detections. We conclude that a window size of pixels is most appropriate for the scale of layers present in these images. This parameter allows the user to specify the scale of layers of interest; some users may be interested in fine layers (small window) while others prioritize coarse layers (large window). If multiple scales are of interest, multiple classifiers can be trained using different window sizes.

4. Conclusions and Benefits for Future Missions

[24] The technology to support onboard image analysis continues to mature. Its implementation was pioneered by the AEGIS system on the Mars Exploration Rover Opportunity [Estlin et al., 2012]. We propose a further innovation by incorporating texture-sensitive analysis into the onboard setting and by embedding analysis capabilities into the instrument that collects the data.
TextureCam is a smart camera that employs an internal FPGA to quickly classify image contents using texture-based features. Our experimental results show that TextureCam's random forest classifier is effective at addressing a variety of scientifically relevant questions one might pose of imagery collected in a new environment: Are there rocks present? Where are good sampling surfaces? Are layers present?

[25] The results of texture-based onboard analysis can inform in situ decisions about the next imaging, measurement, or sampling targets. They can also guide content-based image compression by allocating more bandwidth to image areas of greater scientific value [Dolinar et al., 2003; Wagstaff et al., 2004]. Further, the remote spacecraft can report content-based summaries for all images even if they are not all returned at full resolution (e.g., navigational images). These summaries could include information about the number and size distribution of rocks, the variety and extent of layered structures and their orientations, and so on. The three tasks we evaluated in this paper provide only a first glimpse of the kinds of analyses possible with a smart camera. Mission planners can train different random forests to accomplish different objectives and upload or update them as needed.

[26] Future missions to Mars, the surface of Europa, or other bodies with surface terrain of interest stand to benefit from the inclusion of a smart camera such as TextureCam. The classifiers employed by the camera can be configured for specific mission objectives. We expect that the system could easily be adapted for orbital cameras to capture other texture-sensitive phenomena of interest, such as the tiger-stripe lineations on Enceladus [Porco et al., 2006], faults on Europa [Tufts et al., 1999], or new impact craters on Mars [Byrne et al., 2009]. TextureCam and its descendants have the potential to greatly increase the autonomy and scientific return of future rover and orbiter missions.
[27] Acknowledgments. The TextureCam project is supported by the NASA Astrobiology Science and Technology Instrument Development program (NNH10ZDA001N-ASTID). The Mars Exploration Rover images used in this study were obtained from the Planetary Data System (PDS). We thank Matt Golombek, Rebecca Castaño, Ben Bornstein, and the many students who contributed the manual rock labels used in our rock-finding study. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Government sponsorship acknowledged.

[28] The editor thanks Philip Christensen and an anonymous reviewer for their assistance in evaluating this manuscript.

References

Adams, J. B., M. O. Smith, and P. E. Johnson (1986), Spectral mixture modeling: A new analysis of rock and soil types at the Viking Lander 1 site, J. Geophys. Res., 91(B8).
Breiman, L. (2001), Random forests, Mach. Learn., 45.
Byrne, S., et al. (2009), Distribution of mid-latitude ground ice on Mars from new impact craters, Science, 325(5948), doi: /science.
Castano, R., T. Estlin, R. C. Anderson, D. M. Gaines, A. Castano, B. Bornstein, C. Chouinard, and M. Judd (2007), OASIS: Onboard autonomous science investigation system for opportunistic rover science, J. Field Rob., 24(5), doi: /rob.
Dolinar, S., et al. (2003), Region-of-interest data compression with prioritized buffer management (III), in Proceedings of the NASA Earth Science Technology Conference, NASA Earth Science Technology Office, Greenbelt, MD.
Estlin, T., B. Bornstein, D. Gaines, R. C. Anderson, D. R. Thompson, M. Burl, R. Castano, and M. Judd (2012), AEGIS automated targeting for MER Opportunity rover, ACM Transactions on Intelligent Systems and Technology, 3(3).
Garvin, J. B., J. W. Head, M. T. Zuber, and P. Helfenstein (1984), Venus: The nature of the surface from Venera panoramas, J. Geophys. Res., 89(B5).
Gilmore, M. S., R. Castaño, T. Mann, R. C. Anderson, E. D. Mjolsness, R. Manduchi, and R. S. Saunders (2000), Strategies for autonomous rovers at Mars, J. Geophys. Res., 105(E12), 29,223–29,237, doi: /2000JE.
Golombek, M. P., et al. (2005), Assessment of Mars Exploration Rover landing site predictions, Nature, 436.
Gong, X., and J. Liu (2012), Rock detection via superpixel graph cuts, in Proceedings of the 19th IEEE International Conference on Image Processing (ICIP), doi: /icip.
Gulick, V. C., R. L. Morris, M. A. Ruzon, and T. L. Roush (2001), Autonomous image analyses during the 1999 Marsokhod rover field test, J. Geophys. Res., 106(E4), doi: /1999je.
Porco, C. C., et al. (2006), Cassini observes the active south pole of Enceladus, Science, 311(5766), doi: /science.
Shotton, J., M. Johnson, and R. Cipolla (2008), Semantic texton forests for image categorization and segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–8.
Smith, T., D. R. Thompson, D. S. Wettergreen, N. A. Cabrol, K. A. Warren-Rhodes, and S. J. Weinstein (2007), Life in the Atacama: Science autonomy for improving data quality, J. Geophys. Res., 112, G04S03, doi: /2006jg.
Soderblom, L. A., et al. (2007), Topography and geomorphology of the Huygens landing site on Titan, Planet. Space Sci., 55.
Squyres, S. W., et al. (2006), Two years at Meridiani Planum: Results from the Opportunity rover, Science, 313(5792).
Tartakovsky, D. M., and B. E. Wohlberg (2004), Delineation of geologic facies with statistical learning theory, Geophys. Res. Lett., 31, L18502, doi: /2004gl.
Thompson, D. R., D. S. Wettergreen, and F. J. C. Peralta (2011), Autonomous science during large-scale robotic survey, J. Field Rob., 28(4), doi: /rob.
Tufts, B. R., R. Greenberg, G. Hoppa, and P. Geissler (1999), Astypalaea Linea: A large-scale strike-slip fault on Europa, Icarus, 141(1), 53–64, doi: /icar.
Wagstaff, K. L., R. Castano, S. Dolinar, M. Klimesh, and R. Mukai (2004), Science-based region-of-interest image compression, in Proc. Lunar Planet. Sci. Conf., vol. 35, edited by S. Mackwell and E. Stansbery.
U.S. Space Exploration in the Next 20 ScienceYears: to Inspire, Science to Serve NASA Space Sciences Policy National Aeronautics and Space Administration Waleed Abdalati NASA Chief Scientist Waleed Abdalati
More informationAutomatic Licenses Plate Recognition System
Automatic Licenses Plate Recognition System Garima R. Yadav Dept. of Electronics & Comm. Engineering Marathwada Institute of Technology, Aurangabad (Maharashtra), India yadavgarima08@gmail.com Prof. H.K.
More information2018 Landing Site Selection
E X O M A R S 2018 Landing Site Selection J. L. Vago, L. Lorenzoni, D. Rodionov, and the ExoMars LSSWG (composition in presentation) 1 NASA Planetary Protection Subcommittee 20-21 May 2014, Washington
More informationOnboard Science Data Analysis: Implications for Future Missions
Onboard Science Data Analysis: Implications for Future Missions For review by the Planetary Science Decadal Survey 15 September 2009 Dr. David R. Thompson, JPL 1,2 david.r.thompson@jpl.nasa.gov Dr. Robert
More informationLicense Plate Localisation based on Morphological Operations
License Plate Localisation based on Morphological Operations Xiaojun Zhai, Faycal Benssali and Soodamani Ramalingam School of Engineering & Technology University of Hertfordshire, UH Hatfield, UK Abstract
More informationGE 113 REMOTE SENSING
GE 113 REMOTE SENSING Topic 8. Image Classification and Accuracy Assessment Lecturer: Engr. Jojene R. Santillan jrsantillan@carsu.edu.ph Division of Geodetic Engineering College of Engineering and Information
More informationImproved SIFT Matching for Image Pairs with a Scale Difference
Improved SIFT Matching for Image Pairs with a Scale Difference Y. Bastanlar, A. Temizel and Y. Yardımcı Informatics Institute, Middle East Technical University, Ankara, 06531, Turkey Published in IET Electronics,
More informationAutonomous and Autonomic Systems: With Applications to NASA Intelligent Spacecraft Operations and Exploration Systems
Walt Truszkowski, Harold L. Hallock, Christopher Rouff, Jay Karlin, James Rash, Mike Hinchey, and Roy Sterritt Autonomous and Autonomic Systems: With Applications to NASA Intelligent Spacecraft Operations
More informationMain Subject Detection of Image by Cropping Specific Sharp Area
Main Subject Detection of Image by Cropping Specific Sharp Area FOTIOS C. VAIOULIS 1, MARIOS S. POULOS 1, GEORGE D. BOKOS 1 and NIKOLAOS ALEXANDRIS 2 Department of Archives and Library Science Ionian University
More informationCommittee on Astrobiology & Planetary Science (CAPS) Michael H. New, PhD Astrobiology Discipline Scientist
Committee on Astrobiology & Planetary Science (CAPS) Michael H. New, PhD Astrobiology Discipline Scientist Topics to be addressed Changes to Instrument Development Programs Update on Recent Workshops Origins
More informationFLIGHT SUMMARY REPORT
FLIGHT SUMMARY REPORT Flight Number: 97-011 Calendar/Julian Date: 23 October 1996 297 Sensor Package: Area(s) Covered: Wild-Heerbrugg RC-10 Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) Southern
More informationPlanetary Science Sub-committee Meeting. 9 July
Planetary Science Sub-committee Meeting 9 July 2009 http://www.lpi.usra.edu/vexag/ Completed: Sue Smrekar & Sanjay Limaye appointed as acting co-chairs of VEXAG in June 2009 Developing Decadal Survey inputs:
More informationAutomated Planetary Terrain Mapping of Mars Using Image Pattern Recognition
Automated Planetary Terrain Mapping of Mars Using Image Pattern Recognition Design Document Version 2.0 Team Strata: Sean Baquiro Matthew Enright Jorge Felix Tsosie Schneider 2 Table of Contents 1 Introduction.3
More informationThe Hand Gesture Recognition System Using Depth Camera
The Hand Gesture Recognition System Using Depth Camera Ahn,Yang-Keun VR/AR Research Center Korea Electronics Technology Institute Seoul, Republic of Korea e-mail: ykahn@keti.re.kr Park,Young-Choong VR/AR
More informationSupport Vector Machine Classification of Snow Radar Interface Layers
Support Vector Machine Classification of Snow Radar Interface Layers Michael Johnson December 15, 2011 Abstract Operation IceBridge is a NASA funded survey of polar sea and land ice consisting of multiple
More informationAutomatic Vehicles Detection from High Resolution Satellite Imagery Using Morphological Neural Networks
Automatic Vehicles Detection from High Resolution Satellite Imagery Using Morphological Neural Networks HONG ZHENG Research Center for Intelligent Image Processing and Analysis School of Electronic Information
More informationAn Efficient Color Image Segmentation using Edge Detection and Thresholding Methods
19 An Efficient Color Image Segmentation using Edge Detection and Thresholding Methods T.Arunachalam* Post Graduate Student, P.G. Dept. of Computer Science, Govt Arts College, Melur - 625 106 Email-Arunac682@gmail.com
More informationRandomized Motion Planning for Groups of Nonholonomic Robots
Randomized Motion Planning for Groups of Nonholonomic Robots Christopher M Clark chrisc@sun-valleystanfordedu Stephen Rock rock@sun-valleystanfordedu Department of Aeronautics & Astronautics Stanford University
More informationReliability Impact on Planetary Robotic Missions
The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Reliability Impact on Planetary Robotic Missions David Asikin and John M. Dolan Abstract
More informationAutomatic Locating the Centromere on Human Chromosome Pictures
Automatic Locating the Centromere on Human Chromosome Pictures M. Moradi Electrical and Computer Engineering Department, Faculty of Engineering, University of Tehran, Tehran, Iran moradi@iranbme.net S.
More informationImage Processing Based Vehicle Detection And Tracking System
Image Processing Based Vehicle Detection And Tracking System Poonam A. Kandalkar 1, Gajanan P. Dhok 2 ME, Scholar, Electronics and Telecommunication Engineering, Sipna College of Engineering and Technology,
More informationDemonstrating Robotic Autonomy in NASA s Intelligent Systems Project
In Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2004' ESTEC, Noordwijk, The Netherlands, November 2-4, 2004 Demonstrating Robotic Autonomy in NASA
More informationAutonomous Self-Extending Machines for Accelerating Space Exploration
Autonomous Self-Extending Machines for Accelerating Space Exploration NIAC CP 01-02 Phase I Hod Lipson, Evan Malone Cornell University Computational Motivation Robotic exploration has a long cycle time
More informationLinear Gaussian Method to Detect Blurry Digital Images using SIFT
IJCAES ISSN: 2231-4946 Volume III, Special Issue, November 2013 International Journal of Computer Applications in Engineering Sciences Special Issue on Emerging Research Areas in Computing(ERAC) www.caesjournals.org
More informationThe Lunar Split Mission: Concepts for Robotically Constructed Lunar Bases
2005 International Lunar Conference Renaissance Toronto Hotel Downtown, Toronto, Ontario, Canada The Lunar Split Mission: Concepts for Robotically Constructed Lunar Bases George Davis, Derek Surka Emergent
More informationEFFICIENT ATTENDANCE MANAGEMENT SYSTEM USING FACE DETECTION AND RECOGNITION
EFFICIENT ATTENDANCE MANAGEMENT SYSTEM USING FACE DETECTION AND RECOGNITION 1 Arun.A.V, 2 Bhatath.S, 3 Chethan.N, 4 Manmohan.C.M, 5 Hamsaveni M 1,2,3,4,5 Department of Computer Science and Engineering,
More informationAUTOMATIC DETECTION OF HEDGES AND ORCHARDS USING VERY HIGH SPATIAL RESOLUTION IMAGERY
AUTOMATIC DETECTION OF HEDGES AND ORCHARDS USING VERY HIGH SPATIAL RESOLUTION IMAGERY Selim Aksoy Department of Computer Engineering, Bilkent University, Bilkent, 06800, Ankara, Turkey saksoy@cs.bilkent.edu.tr
More informationThe Global Exploration Roadmap International Space Exploration Coordination Group (ISECG)
The Global Exploration Roadmap International Space Exploration Coordination Group (ISECG) Kathy Laurini NASA/Senior Advisor, Exploration & Space Ops Co-Chair/ISECG Exp. Roadmap Working Group FISO Telecon,
More informationDaring Mighty Things. AFCEA Los Angeles. Larry James (Lt. Gen. USAF, Ret.), Deputy Director. a presentation to. January 14, 2015
Jet Propulsion Laboratory California Institute of Technology Daring Mighty Things a presentation to AFCEA Los Angeles January 14, 2015 Larry James (Lt. Gen. USAF, Ret.), Deputy Director Jet Propulsion
More informationAnalog studies in preparation for human exploration of Mars
Analog studies in preparation for human exploration of Mars Kelly Snook Space Projects Division NASA Ames January 11, 2001 Science and the Human Exploration of Mars Workshop 1/11/01 What are the Questions?
More informationExploration Systems Mission Directorate: New Opportunities in the President s FY2011 Budget
National Aeronautics and Space Administration Exploration Systems Mission Directorate: New Opportunities in the President s FY2011 Budget Dr. Laurie Leshin Deputy Associate Administrator, ESMD Presentation
More informationConstellation Systems Division
Lunar National Aeronautics and Exploration Space Administration www.nasa.gov Constellation Systems Division Introduction The Constellation Program was formed to achieve the objectives of maintaining American
More informationA TECHNOLOGY ROADMAP TOWARDS MINERAL EXPLORATION FOR EXTREME ENVIRONMENTS IN SPACE
Source: Deep Space Industries A TECHNOLOGY ROADMAP TOWARDS MINERAL EXPLORATION FOR EXTREME ENVIRONMENTS IN SPACE DAVID DICKSON GEORGIA INSTITUTE OF TECHNOLOGY 1 Source: 2015 NASA Technology Roadmaps WHAT
More informationSpace Challenges Preparing the next generation of explorers. The Program
Space Challenges Preparing the next generation of explorers Space Challenges is the biggest free educational program in the field of space science and high technologies in the Balkans - http://spaceedu.net
More informationEC-433 Digital Image Processing
EC-433 Digital Image Processing Lecture 2 Digital Image Fundamentals Dr. Arslan Shaukat 1 Fundamental Steps in DIP Image Acquisition An image is captured by a sensor (such as a monochrome or color TV camera)
More informationSPACOMM 2009 PANEL. Challenges and Hopes in Space Navigation and Communication: From Nano- to Macro-satellites
SPACOMM 2009 PANEL Challenges and Hopes in Space Navigation and Communication: From Nano- to Macro-satellites Lunar Reconnaissance Orbiter (LRO): NASA's mission to map the lunar surface Landing on the
More informationChapter 17. Shape-Based Operations
Chapter 17 Shape-Based Operations An shape-based operation identifies or acts on groups of pixels that belong to the same object or image component. We have already seen how components may be identified
More informationImage Extraction using Image Mining Technique
IOSR Journal of Engineering (IOSRJEN) e-issn: 2250-3021, p-issn: 2278-8719 Vol. 3, Issue 9 (September. 2013), V2 PP 36-42 Image Extraction using Image Mining Technique Prof. Samir Kumar Bandyopadhyay,
More informationAPGEN: A Multi-Mission Semi-Automated Planning Tool
APGEN: A Multi-Mission Semi-Automated Planning Tool Pierre F. Maldague Adam;Y.Ko Dennis N. Page Thomas W. Starbird Jet Propulsion Laboratory California Institute of Technology 4800 Oak Grove dr. Pasadena,
More informationAdvanced Design for Robot in Mars Exploration
Proceedings of the 2010 International Conference on Industrial Engineering and Operations Management Dhaka, Bangladesh, January 9 10, 2010 Advanced Design for Robot in Mars Exploration P. Pradeep, M. Prabhakaran,
More information2009 ESMD Space Grant Faculty Project
2009 ESMD Space Grant Faculty Project 1 Objectives Train and develop the highly skilled scientific, engineering and technical workforce of the future needed to implement space exploration missions: In
More informationPLANETARY SURFACE EXPLORATION USING A NETWORK OF REUSABLE PATHS: A PARADIGM FOR PARALLEL SCIENCE INVESTIGATIONS
PLANETARY SURFACE EXPLORATION USING A NETWORK OF REUSABLE PATHS: A PARADIGM FOR PARALLEL SCIENCE INVESTIGATIONS B.E. Stenning 1, G.R. Osinski 2, T.D. Barfoot 1, G. Basic 1, M. Beauchamp 2, M. Daly 3, R.
More informationBasic Hyperspectral Analysis Tutorial
Basic Hyperspectral Analysis Tutorial This tutorial introduces you to visualization and interactive analysis tools for working with hyperspectral data. In this tutorial, you will: Analyze spectral profiles
More informationPanel Session IV - Future Space Exploration
The Space Congress Proceedings 2003 (40th) Linking the Past to the Future - A Celebration of Space May 1st, 8:30 AM - 11:00 AM Panel Session IV - Future Space Exploration Canaveral Council of Technical
More informationSECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS
RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT
More informationSpace Robotic Capabilities David Kortenkamp (NASA Johnson Space Center)
Robotic Capabilities David Kortenkamp (NASA Johnson ) Liam Pedersen (NASA Ames) Trey Smith (Carnegie Mellon University) Illah Nourbakhsh (Carnegie Mellon University) David Wettergreen (Carnegie Mellon
More informationSTARBASE Minnesota Duluth Grade 5 Program Description & Standards Alignment
STARBASE Minnesota Duluth Grade 5 Program Description & Standards Alignment Day 1: Analyze and engineer a rocket for space exploration Students are introduced to engineering and the engineering design
More informationVLSI Implementation of Impulse Noise Suppression in Images
VLSI Implementation of Impulse Noise Suppression in Images T. Satyanarayana 1, A. Ravi Chandra 2 1 PG Student, VRS & YRN College of Engg. & Tech.(affiliated to JNTUK), Chirala 2 Assistant Professor, Department
More informationDigitization of Trail Network Using Remotely-Sensed Data in the CFB Suffield National Wildlife Area
Digitization of Trail Network Using Remotely-Sensed Data in the CFB Suffield National Wildlife Area Brent Smith DLE 5-5 and Mike Tulis G3 GIS Technician Department of National Defence 27 March 2007 Introduction
More informationWHITE PAPER. Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception
Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Abstract
More informationOzobot Bit. Computer Science Engineering Program
3 rd Grade Ozobot Bit Computer Science Engineering Program Post Visit Activity Resources 2018 Winter/Spring 2018 Dear Third Grade Visiting Classroom Teacher, It is hoped that you and your students enjoyed
More informationLibyan Licenses Plate Recognition Using Template Matching Method
Journal of Computer and Communications, 2016, 4, 62-71 Published Online May 2016 in SciRes. http://www.scirp.org/journal/jcc http://dx.doi.org/10.4236/jcc.2016.47009 Libyan Licenses Plate Recognition Using
More informationBackground. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image
Background Computer Vision & Digital Image Processing Introduction to Digital Image Processing Interest comes from two primary backgrounds Improvement of pictorial information for human perception How
More informationA simple embedded stereoscopic vision system for an autonomous rover
In Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2004' ESTEC, Noordwijk, The Netherlands, November 2-4, 2004 A simple embedded stereoscopic vision
More informationNASA Mission Directorates
NASA Mission Directorates 1 NASA s Mission NASA's mission is to pioneer future space exploration, scientific discovery, and aeronautics research. 0 NASA's mission is to pioneer future space exploration,
More informationSpace Challenges Preparing the next generation of explorers. The Program
Space Challenges Preparing the next generation of explorers Space Challenges is one of the biggest educational programs in the field of space science and high technologies in Europe - http://spaceedu.net
More informationThe Human Exploration of Mars: Why Mars? Why Humans?
The Human Exploration of Mars: Why Mars? Why Humans? Dr. Joel S. Levine Research Professor Department of Applied Science College of William and Mary Williamsburg, VA 23187-8795 jslevine@wm.edu MEPAG Human
More informationThe JPL A-Team and Mission Formulation Process
The JPL A-Team and Mission Formulation Process 2017 Low-Cost Planetary Missions Conference Caltech Pasadena, CA Steve Matousek, Advanced Concept Methods Manager JPL s Innovation Foundry jplfoundry.jpl.nasa.gov
More informationEarth Sciences 089G Short Practical Assignment #4 Working in Three Dimensions
Earth Sciences 089G Short Practical Assignment #4 Working in Three Dimensions Introduction Maps are 2-D representations of 3-D features, the developers of topographic maps needed to devise a method for
More informationThe Science Autonomy System of the Nomad Robot
Proceedings of the 2001 IEEE International Conference on Robotics & Automation Seoul, Korea May 21-26, 2001 The Science Autonomy System of the Nomad Robot Michael D. Wagner, Dimitrios Apostolopoulos, Kimberly
More informationTENNESSEE SCIENCE STANDARDS *****
TENNESSEE SCIENCE STANDARDS ***** GRADES K-8 EARTH AND SPACE SCIENCE KINDERGARTEN Kindergarten : Embedded Inquiry Conceptual Strand Understandings about scientific inquiry and the ability to conduct inquiry
More informationEvaluation of Image Segmentation Based on Histograms
Evaluation of Image Segmentation Based on Histograms Andrej FOGELTON Slovak University of Technology in Bratislava Faculty of Informatics and Information Technologies Ilkovičova 3, 842 16 Bratislava, Slovakia
More informationRegion Based Satellite Image Segmentation Using JSEG Algorithm
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 5, May 2015, pg.1012
More informationPrinciples of Autonomy and Decision Making. Brian C. Williams / December 10 th, 2003
Principles of Autonomy and Decision Making Brian C. Williams 16.410/16.413 December 10 th, 2003 1 Outline Objectives Agents and Their Building Blocks Principles for Building Agents: Modeling Formalisms
More informationAn Efficient Method for Vehicle License Plate Detection in Complex Scenes
Circuits and Systems, 011,, 30-35 doi:10.436/cs.011.4044 Published Online October 011 (http://.scirp.org/journal/cs) An Efficient Method for Vehicle License Plate Detection in Complex Scenes Abstract Mahmood
More informationScience Plenary II: Science Missions Enabled by Nuclear Power and Propulsion. Chair / Organizer: Steven D. Howe Center for Space Nuclear Research
Science Plenary II: Science Missions Enabled by Nuclear Power and Propulsion Chair / Organizer: Steven D. Howe Center for Space Nuclear Research Distinguished Panel Space Nuclear Power and Propulsion:
More informationExploration Partnership Strategy. Marguerite Broadwell Exploration Systems Mission Directorate
Exploration Partnership Strategy Marguerite Broadwell Exploration Systems Mission Directorate October 1, 2007 Vision for Space Exploration Complete the International Space Station Safely fly the Space
More informationESA Human Spaceflight Capability Development and Future Perspectives International Lunar Conference September Toronto, Canada
ESA Human Spaceflight Capability Development and Future Perspectives International Lunar Conference 2005 19-23 September Toronto, Canada Scott Hovland Head of Systems Unit, System and Strategy Division,
More informationAmes Research Center Improving Lunar Surface Science with Robotic Recon
Ames Research Center Improving Lunar Surface Science with Robotic Recon Terry Fong, Matt Deans, Pascal Lee, Jen Heldmann, David Kring, Essam Heggy, and Rob Landis Apollo Lunar Surface Science Jack Schmitt
More informationVOYAGER IMAGE DATA COMPRESSION AND BLOCK ENCODING
VOYAGER IMAGE DATA COMPRESSION AND BLOCK ENCODING Michael G. Urban Jet Propulsion Laboratory California Institute of Technology 4800 Oak Grove Drive Pasadena, California 91109 ABSTRACT Telemetry enhancement
More informationUrban Feature Classification Technique from RGB Data using Sequential Methods
Urban Feature Classification Technique from RGB Data using Sequential Methods Hassan Elhifnawy Civil Engineering Department Military Technical College Cairo, Egypt Abstract- This research produces a fully
More informationCoding and Analysis of Cracked Road Image Using Radon Transform and Turbo codes
Coding and Analysis of Cracked Road Image Using Radon Transform and Turbo codes G.Bhaskar 1, G.V.Sridhar 2 1 Post Graduate student, Al Ameer College Of Engineering, Visakhapatnam, A.P, India 2 Associate
More informationOn January 14, 2004, the President announced a new space exploration vision for NASA
Exploration Conference January 31, 2005 President s Vision for U.S. Space Exploration On January 14, 2004, the President announced a new space exploration vision for NASA Implement a sustained and affordable
More informationSmall-Body Design Reference Mission (DRM)
2018 Workshop on Autonomy for Future NASA Science Missions October 10-11, 2018 Small-Body Design Reference Mission (DRM) Issa Nesnas and Tim Swindle Small-Body DRM Participants Name Sarjoun Skaff Shyam
More informationA DEEP SPACE COMPANY BY A WORLD TEAM THE FED EXPRESS OF THE 21ST CENTURY TONY SPEAR OCTOBER 2007
A DEEP SPACE COMPANY BY A WORLD TEAM THE FED EXPRESS OF THE 21ST CENTURY TONY SPEAR OCTOBER 2007 1 PURPOSE OF THIS PRESENTATION TO INFORM YOU OF AN EXCITING SPACE OPPORTUNITY IN 2007 HUMANS CELEBRATE 50
More informationNASA Keynote to International Lunar Conference Mark S. Borkowski Program Executive Robotic Lunar Exploration Program
NASA Keynote to International Lunar Conference 2005 Mark S. Borkowski Program Executive Robotic Lunar Exploration Program Our Destiny is to Explore! The goals of our future space flight program must be
More informationRADIOMETRIC CALIBRATION OF MARS HiRISE HIGH RESOLUTION IMAGERY BASED ON FPGA
RADIOMETRIC CALIBRATION OF MARS HiRISE HIGH RESOLUTION IMAGERY BASED ON FPGA Yifan Hou a, b, *, Xun Geng a, Shuai Xing a, Yonghe Tang b,qing Xu a a Zhengzhou Institute of Surveying and Mapping, Zhongyuan
More informationCS594, Section 30682:
CS594, Section 30682: Distributed Intelligence in Autonomous Robotics Spring 2003 Tuesday/Thursday 11:10 12:25 http://www.cs.utk.edu/~parker/courses/cs594-spring03 Instructor: Dr. Lynne E. Parker ½ TA:
More informationImproving the Safety and Efficiency of Roadway Maintenance Phase II: Developing a Vision Guidance System for the Robotic Roadway Message Painter
Improving the Safety and Efficiency of Roadway Maintenance Phase II: Developing a Vision Guidance System for the Robotic Roadway Message Painter Final Report Prepared by: Ryan G. Rosandich Department of
More information