The Cityscapes Dataset for Semantic Urban Scene Understanding SUPPLEMENTAL MATERIAL


Marius Cordts 1,2, Mohamed Omran 3, Sebastian Ramos 1,4, Timo Rehfeld 1,2, Markus Enzweiler 1, Rodrigo Benenson 3, Uwe Franke 1, Stefan Roth 2, Bernt Schiele 3
1 Daimler AG R&D, 2 TU Darmstadt, 3 MPI Informatics, 4 TU Dresden

A. Related Datasets

In Tab. 7 we provide a comparison to other related datasets in terms of the type of annotations, the meta information provided, the camera perspective, the type of scenes, and their size. The selected datasets are either of large scale or focus on street scenes.

B. Class Definitions

Table 8 provides precise definitions of our annotated classes. These definitions were used to guide our labeling process as well as quality control. In addition, we include a typical example for each class. The annotators were instructed to make use of the depth ordering and occlusions of the scene to accelerate labeling, analogously to LabelMe [59]; see Fig. 6 for an example. In doing so, distant objects are annotated first, while occluded parts are annotated with a coarser, conservative boundary (possibly larger than the actual object). Subsequently, the occluder is annotated with a polygon that lies in front of the occluded part. Thus, the boundary between these objects is shared and consistent. Holes in an object through which a background region can be seen are considered to be part of the object. This keeps the labeling effort within reasonable bounds, such that objects can be described via simple polygons forming simply-connected sets.

C. Example Annotations

Figure 7 presents several examples of annotated frames from our dataset that exemplify its diversity and difficulty. All examples are taken from the train and val splits and were chosen by searching for the extremes in terms of the number of traffic participant instances in the scene; see Fig. 7 for details.

Figure 6. Exemplary labeling process.
Distant objects are annotated first and subsequently their occluders. This ensures that the boundary between these objects is shared and consistent.

D. Detailed Results

In this section, we present additional details regarding our control experiments and baselines. Specifically, we give individual class scores that complement the aggregated scores in the main paper. Moreover, we provide details on the training procedure for all baselines. Finally, we show additional qualitative results of all methods.

D.1. Semantic labeling

Tables 9 and 11 list all individual class-level IoU scores for all control experiments and baselines. Tables 10 and 12 give the corresponding instance-normalized iIoU scores. In addition, Figs. 8 and 9 contain qualitative examples of these methods.

Basic setup. All baselines relied on single-frame, monocular LDR images and were pretrained on ImageNet [58], i.e. their underlying CNN was generally initialized with ImageNet VGG weights [67]. Subsequently, the CNNs were finetuned on Cityscapes using the respective portions listed in Tab. 4. In our own FCN [40] experiments, we additionally investigated first pretraining on PASCAL-Context [44], but found this not to influence performance given a sufficiently large number of training iterations. Most baselines applied a subsampling of the input image, cf. Tab. 4, probably due to time or memory constraints. Only Adelaide [36], Dilation10 [78], and our FCN experiments were conducted on the full-resolution images. In the first case, a new random patch of size pixels was drawn at each iteration. In our FCN training, we split each image into two halves (left and right) with an overlap that is sufficiently large considering the network's receptive field.

Dataset | Labels | Depth | Camera | Scene | #images | #classes
[58] | B | | Mixed | Mixed | 150 k | 1000
[13] | B, C | | Mixed | Mixed | 20 k (B), 10 k (C) | 20
[44] | D | | Mixed | Mixed | 20 k | 400
[37] | C | | Mixed | Mixed | 300 k | 80
[68] | D, C | Kinect | Pedestrian | Indoor | 10 k | 37
[18] | B, D (a) | Laser, Stereo | Car | Suburban | 15 k (B), 700 (D) | 3 (B), 8 (D)
[6] | D | | Car | Urban | |
[34] | D | Stereo, Manual | Car | Urban | 70 | 7
[60] | D | Stereo | Car | Urban | |
[2] | D | | Pedestrian | Urban | |
[64] | C | Stereo | Car | Facades | |
[55] | D | 3D mesh | Pedestrian | Urban | |
[74] | D | Laser | Car | Suburban | 400 k | 27
Ours | D, C | Stereo | Car | Urban | 5 k (D), 20 k (C) | 30
(a) Including the annotations of 3rd-party groups [21, 28, 31, 32, 57, 63, 76, 79]

Table 7. Comparison to related datasets. We list the type of labels provided, i.e. object bounding boxes (B), dense pixel-level semantic labels (D), and coarse labels (C) that do not aim to label the whole image. Further, we mark if color, video, and depth information are available. We list the camera perspective, the scene type, the number of images, and the number of semantic classes.

Own baselines. The training procedure of all our FCN experiments follows [40]. We use three-stage training with subsequently smaller strides, i.e. first FCN-32s, then FCN-16s, and then FCN-8s, always initializing with the parameters from the previous stage. We add a 4th stage for which we reduce the learning rate by a factor of 10. The training parameters are identical to those publicly available for training on PASCAL-Context [44], except that we reduce the learning rate to account for the increased image resolution.
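The overlapping-halves scheme used in our FCN training can be sketched as follows; the crop arithmetic and the particular receptive-field value are illustrative assumptions, not the exact implementation used for the experiments.

```python
def split_with_overlap(width, receptive_field):
    """Split [0, width) into left/right crops whose overlap is at least
    the receptive field, so every kept output pixel sees full context."""
    margin = receptive_field // 2        # context needed on each side of the seam
    mid = width // 2
    return (0, mid + margin), (mid - margin, width)

def stitch(width, left, right):
    """Keep each half's prediction only on its side of the midpoint."""
    mid = width // 2
    return [("left", left[0], mid), ("right", mid, right[1])]
```

For a 2048-pixel-wide Cityscapes frame and an assumed receptive field of 400 pixels, this yields crops (0, 1224) and (824, 2048) with a 400-pixel overlap, and the stitched prediction covers the full width without a seam gap.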
Each stage is trained until convergence on the validation set; pixels with void ground truth are ignored such that they do not induce any gradient. Eventually, we retrain on train and val together with the same number of epochs, yielding , , , and 5950 iterations for stages 1 through 4. Note that each iteration corresponds to half of an image (see above). For the variant with factor 2 downsampling, no image splitting is necessary, yielding , , , and 5950 iterations in the respective stages. The variant only trained on val (full resolution) uses train for validation, leading to , , , and 0 iterations in the 4 stages. Our last FCN variant is trained using the coarse annotations only, with , , , and 0 iterations in the respective stages; pixels with void ground truth are ignored here as well.

3rd-party baselines. Note that for the following descriptions of the 3rd-party baselines, we have to rely on author-provided information. SegNet [3] training for both the basic and extended variants was performed until convergence, yielding approximately 50 epochs. Inference takes 0.12 s per image. DPN [39] was trained using the original procedure, while using all available Cityscapes annotations. For training CRF as RNN [80], an FCN-32s model was trained for 3 days on train using a GPU. Subsequently, an FCN-8s model was trained for 2 days, and eventually the model was further finetuned including the CRF-RNN layers. Testing takes 0.7 s on half-resolution images. For training DeepLab on the fine annotations, denoted DeepLab-LargeFOV-Strong, the authors applied the training procedure from [8]. The model was trained on train for iterations until convergence on val. Then val was included in the training set for another iterations. In both cases, a mini-batch size of 10 was applied. Each training iteration lasts 0.5 s, while inference including the dense CRF takes 4 s per image. The DeepLab variant including our coarse annotations, termed DeepLab-LargeFOV-StrongWeak, followed the protocol in [47] and is initialized from the DeepLab-LargeFOV-Strong model.
Each mini-batch consists of 5 finely and 5 coarsely annotated images, and training is performed for iterations until convergence on val. Then, training was continued for another iterations on train and val. Adelaide [36] was trained for 8 days using random crops of the input image as described above. Inference on a single image takes 35 s. The best-performing baseline, Dilation10 [78], is a convolutional network that consists of a front-end prediction module and a context aggregation module. The front-end module is an adaptation of the VGG-16 network based on dilated convolutions. The context module uses dilated convolutions

to systematically expand the receptive field and aggregate contextual information. This module is derived from the "Basic" network, where each layer has C = 19 feature maps. The total number of layers in the context module is 10, hence the name Dilation10. The increased number of layers in the context module (10 for Cityscapes versus 8 for PASCAL VOC) is due to the higher input resolution. The complete Dilation10 model is a pure convolutional network: there is no CRF and no structured prediction. The Dilation10 network was trained in three stages. First, the front-end prediction module was trained for iterations on randomly sampled crops of size , with learning rate 10⁻⁴, momentum 0.99, and batch size 8. Second, the context module was trained for iterations on whole (uncropped) images, with learning rate 10⁻⁴, momentum 0.99, and batch size 100. Third, the complete model (front-end + context) was jointly trained for iterations on halves of images (input size , including padding), with learning rate 10⁻⁵, momentum 0.99, and batch size 1.

D.2. Instance-level semantic labeling

For our instance-level semantic labeling baselines and control experiments, we rely on Fast R-CNN [19] and proposal regions from either MCG (Multiscale Combinatorial Grouping [1]) or from the ground-truth annotations. We use the standard training and testing parameters for Fast R-CNN. Training starts with a model pretrained on ImageNet [58]. We use a learning rate of and stop when the validation error plateaus after iterations. At test time, one score per class is assigned to each object proposal. Subsequently, thresholding and non-maximum suppression are applied, and either the bounding boxes, the original proposal regions, or their convex hulls are used to generate the predicted masks of each instance. Quantitative results for all classes can be found in Tables 13 to 16 and qualitative results in Fig. 12.
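The thresholding and non-maximum suppression step described above can be sketched as standard greedy NMS over class-scored boxes; the threshold values here are illustrative, not the ones used in the experiments.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, score_thr=0.05, iou_thr=0.3):
    """Drop low-scoring boxes, then greedily keep the highest-scoring
    remaining box while suppressing boxes that overlap it too much."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_thr),
                   key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep
```

The kept indices then select which proposals (bounding boxes, original regions, or convex hulls) are turned into instance masks.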

Category Class Definition Examples

human
person 1: All humans that would primarily rely on their legs to move if necessary. Consequently, this label includes people who are standing/sitting or otherwise stationary. This class also includes babies, people pushing a bicycle, or standing next to it with both legs on the same side of the bicycle.
rider 1: Humans relying on some device for movement. This includes drivers, passengers, or riders of bicycles, motorcycles, scooters, skateboards, horses, Segways, (inline) skates, wheelchairs, road cleaning cars, or convertibles. Note that a visible driver of a closed car can only be seen through the window. Since holes are considered part of the surrounding object, the human is included in the car label.

vehicle
car 1: This includes cars, jeeps, SUVs, and vans with a continuous body shape (i.e. the driver's cabin and cargo compartment are one). Does not include trailers, which have their own separate class.
truck 1: This includes trucks, vans with a body that is separate from the driver's cabin, pickup trucks, as well as their trailers.
bus 1: This includes buses that are intended for 9+ persons for public or long-distance transport.
train 1: All vehicles that move on rails, e.g. trams, trains.

1 Single instance annotation available. 2 Not included in challenges.
Table 8. List of annotated classes including their definition and typical example.

Category Class Definition Examples

vehicle
motorcycle 1: This includes motorcycles, mopeds, and scooters without the driver or other passengers. The latter receive the label rider.
bicycle 1: This includes bicycles without the cyclist or other passengers. The latter receive the label rider.
caravan 1,2: Vehicles that (appear to) contain living quarters. This also includes trailers that are used for living; this class has priority over the trailer class.
trailer 1,2: Includes trailers that can be attached to any vehicle, but excludes trailers attached to trucks. The latter are included in the truck label.

nature
vegetation: Trees, hedges, and all kinds of vertically growing vegetation. Plants attached to buildings/walls/fences are not annotated separately and receive the same label as the surface they are supported by.
terrain: Grass, all kinds of horizontally spreading vegetation, soil, or sand. These are areas that are not meant to be driven on. This label may also include a possibly adjacent curb. Single grass stalks or very small patches of grass are not annotated separately and thus are assigned to the label of the region they are growing on.

1 Single instance annotation available. 2 Not included in challenges.
Table 8. List of annotated classes including their definition and typical example. (continued)

Category Class Definition Examples

construction
building: Includes structures that house/shelter humans, e.g. low-rises, skyscrapers, bus stops, car ports. Translucent buildings made of glass still receive the label building. Also includes scaffolding attached to buildings.
wall: Individually standing walls that separate two (or more) outdoor areas and do not provide support for a building.
fence: Structures with holes that separate two (or more) outdoor areas, sometimes temporary.
guard rail 2: Metal structure located on the side of the road to prevent serious accidents. Rare in inner cities, but occurs sometimes in curves. Includes the bars holding the rails.
bridge 2: Bridges (on which the ego-vehicle is not driving) including everything (fences, guard rails) permanently attached to them.
tunnel 2: Tunnel walls and the (typically dark) space encased by the tunnel, but excluding vehicles.

1 Single instance annotation available. 2 Not included in challenges.
Table 8. List of annotated classes including their definition and typical example. (continued)

Category Class Definition Examples

object
traffic sign: Front part of signs installed by the state/city authority with the purpose of conveying information to drivers/cyclists/pedestrians, e.g. traffic signs, parking signs, direction signs, or warning reflector posts.
traffic light: The traffic light box without its poles, in all orientations and for all types of traffic participants, e.g. regular traffic light, bus traffic light, train traffic light.
pole: Small, mainly vertically oriented poles, e.g. sign poles or traffic light poles. This does not include objects mounted on the pole that have a larger diameter than the pole itself (e.g. most street lights).
pole group 2: Multiple poles that are cumbersome to label individually, but where the background can be seen in their gaps.

sky
sky: Open sky (without tree branches/leaves).

1 Single instance annotation available. 2 Not included in challenges.
Table 8. List of annotated classes including their definition and typical example. (continued)

Category Class Definition Examples

flat
road: Horizontal surfaces on which cars usually drive, including road markings. Typically delimited by curbs, rail tracks, or parking areas. However, road is not delimited by road markings and thus may include bicycle lanes or roundabouts.
sidewalk: Horizontal surfaces designated for pedestrians or cyclists. Delimited from the road by some obstacle, e.g. curbs or poles (might be small), but not only by markings. Often elevated compared to the road and often located at the side of a road. The curbs are included in the sidewalk label. Also includes the walkable part of traffic islands, as well as pedestrian-only zones where cars are not allowed to drive during regular business hours. If it is an all-day mixed pedestrian/car area, the correct label is ground.
parking 2: Horizontal surfaces that are intended for parking and separated from the road, either via elevation or via a different texture/material, but not merely by markings.
rail track 2: Horizontal surfaces on which only rail cars can normally drive. If rail tracks for trams are embedded in a standard road, they are included in the road label.

1 Single instance annotation available. 2 Not included in challenges.
Table 8. List of annotated classes including their definition and typical example. (continued)

Category Class Definition Examples

void
ground 2: All other forms of horizontal ground-level structures that do not match any of the above, for example mixed zones (cars and pedestrians), roundabouts that are flat but delimited from the road by a curb, or in general a fallback label for horizontal surfaces that are difficult to classify, e.g. due to having a dual purpose.
dynamic 2: Movable objects that do not correspond to any of the other non-void categories and might not be in the same position in the next day/hour/minute, e.g. movable trash bins, buggies, luggage, animals, chairs, or tables.
static 2: This includes areas of the image that are difficult to identify/label due to occlusion/distance, as well as non-movable objects that do not match any of the non-void categories, e.g. mountains, street lights, reverse sides of traffic signs, or permanently mounted commercial signs.
ego vehicle 2: Since a part of the vehicle from which our data was recorded is visible in all frames, it is assigned to this special label. This label is also available at test time.
unlabeled 2: Pixels that were not explicitly assigned to a label.
out of roi 2: Narrow strip of 5 pixels along the image borders that is not considered for training or evaluation. This label is also available at test time.
rectification border 2: Areas close to the image border that contain artifacts resulting from the stereo pair rectification. This label is also available at test time.

1 Single instance annotation available. 2 Not included in challenges.
Table 8. List of annotated classes including their definition and typical example. (continued)
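The footnotes of Table 8 (single-instance annotations, classes excluded from the challenges) suggest a small label registry; the sketch below encodes a few classes from the table, while the `Label` structure and the id scheme are assumptions for illustration (the official tooling may differ):

```python
from collections import namedtuple

Label = namedtuple("Label", ["name", "has_instances", "in_challenge"])

LABELS = [
    Label("road",        False, True),
    Label("car",         True,  True),
    Label("person",      True,  True),
    Label("caravan",     True,  False),  # footnote 2: not included in challenges
    Label("ego vehicle", False, False),  # special label, available at test time
]

def eval_ids(labels, void_id=255):
    """Assign consecutive ids to challenge classes; map the rest to a void id."""
    ids, next_id = {}, 0
    for lb in labels:
        if lb.in_challenge:
            ids[lb.name] = next_id
            next_id += 1
        else:
            ids[lb.name] = void_id
    return ids
```

Keeping the footnote flags machine-readable makes it easy to restrict both training and evaluation to the challenge classes.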

Figure 7. Examples of our annotations on various images of our train and val sets. The images were selected based on criteria overlaid on each image: largest number of instances and persons, largest number of riders, largest number of cars, largest number of bicycles, largest number of buses, largest number of trucks, largest number of motorcycles, large spatial variation of persons, and fewest number of instances.
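Selecting Fig. 7's examples by searching for extremes in the per-image instance counts can be sketched as follows (the data layout is hypothetical):

```python
def select_extremes(counts, cls):
    """counts: {image_id: {class_name: instance_count}}.
    Return the images with the most and the fewest instances of cls."""
    key = lambda img: counts[img].get(cls, 0)
    imgs = sorted(counts)           # deterministic tie-breaking by image id
    return max(imgs, key=key), min(imgs, key=key)
```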

Table 9. Detailed results of our control experiments for the pixel-level semantic labeling task in terms of the IoU score on the class level (road, sidewalk, building, wall, fence, pole, traffic light, traffic sign, vegetation, terrain, sky, person, rider, car, truck, bus, motorcycle, bicycle, and mean IoU). Rows: static fine (SF), static coarse (SC), GT segmentation with SF, GT segmentation with SC, GT segmentation with [40], GT subsampled by various factors, and nearest training neighbor. All numbers are given in percent. See the main paper for details on the listed methods.

Table 10. Detailed results of our control experiments for the pixel-level semantic labeling task in terms of the instance-normalized iIoU score on the class level (person, rider, car, truck, bus, motorcycle, bicycle, and mean iIoU), with the same rows as Table 9. All numbers are given in percent. See the main paper for details on the listed methods.
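The class-level IoU scores reported in Table 9 follow the standard definition IoU = TP / (TP + FP + FN) over pixels; a minimal sketch (the void id of 255 excluded here is an assumed convention for illustration):

```python
def class_iou(gt, pred, cls, void=255):
    """IoU of one class over flattened ground-truth/prediction id lists;
    pixels whose ground truth is void are excluded from the counts."""
    tp = fp = fn = 0
    for g, p in zip(gt, pred):
        if g == void:
            continue
        tp += (g == cls and p == cls)
        fp += (g != cls and p == cls)
        fn += (g == cls and p != cls)
    denom = tp + fp + fn
    return tp / denom if denom else float("nan")
```

The iIoU variant of Table 10 additionally reweights each instance's pixels so that large instances do not dominate the score; see the main paper for its exact definition.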

Table 11. Detailed results of our baseline experiments for the pixel-level semantic labeling task in terms of the IoU score on the class level (road, sidewalk, building, wall, fence, pole, traffic light, traffic sign, vegetation, terrain, sky, person, rider, car, truck, bus, motorcycle, bicycle, and mean IoU). Rows: FCN-32s, FCN-16s, four FCN-8s variants, [3] extended, [3] basic, [39], [80], [8], [47], [36], and [78]. All numbers are given in percent, and we indicate the training data used for each method, i.e. train fine, val fine, coarse extra, as well as a potential downscaling factor (sub) of the input image. See the main paper and Appendix D.1 for details on the listed methods.

Table 12. Detailed results of our baseline experiments for the pixel-level semantic labeling task in terms of the instance-normalized iIoU score on the class level (person, rider, car, truck, bus, motorcycle, bicycle, and mean iIoU), with the same rows and training-data indicators as Table 11. All numbers are given in percent. See the main paper and Appendix D.1 for details on the listed methods.

Table 13. Detailed results of our baseline experiments for the instance-level semantic labeling task in terms of the region-level average precision score AP on the class level (person, rider, car, truck, bus, motorcycle, bicycle, and mean AP). Rows pair proposals (MCG regions, MCG bboxes, MCG hulls, GT bboxes, GT regions) with an FRCN classifier, plus MCG regions/bboxes/hulls scored with GT. All numbers are given in percent. See the main paper and Appendix D.2 for details on the listed methods.

Table 14. Detailed results of our baseline experiments for the instance-level semantic labeling task in terms of the region-level average precision score AP 50% for an overlap value of 50 %, with the same classes and proposal/classifier combinations as Table 13. All numbers are given in percent. See the main paper and Appendix D.2 for details on the listed methods.

Table 15. Detailed results of our baseline experiments for the instance-level semantic labeling task in terms of the region-level average precision score AP 100m for objects within 100 m, with the same classes and proposal/classifier combinations as Table 13. All numbers are given in percent. See the main paper and Appendix D.2 for details on the listed methods.

Table 16. Detailed results of our baseline experiments for the instance-level semantic labeling task in terms of the region-level average precision score AP 50m for objects within 50 m, with the same classes and proposal/classifier combinations as Table 13. All numbers are given in percent. See the main paper and Appendix D.2 for details on the listed methods.
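The AP variants in Tables 13 to 16 differ only in how a prediction is counted as a match (region overlap, a 50 % threshold, or distance cut-offs at 100 m and 50 m); given match outcomes for score-ranked predictions of one class, an uninterpolated average precision can be sketched as follows (interpolated AP variants differ slightly):

```python
def average_precision(scored_matches, num_gt):
    """scored_matches: (score, is_true_positive) pairs for all predictions
    of one class; num_gt: number of ground-truth instances.
    Uninterpolated AP: mean of the precision at each true positive."""
    if num_gt == 0:
        return float("nan")
    ranked = sorted(scored_matches, key=lambda t: -t[0])
    tp, precisions = 0, []
    for rank, (_, is_tp) in enumerate(ranked, start=1):
        if is_tp:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / num_gt
```

For example, two true positives at ranks 1 and 3 out of two ground-truth instances give precisions 1 and 2/3, hence AP = 5/6.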

Figure 8. Exemplary output of our control experiments for the pixel-level semantic labeling task; panels show the image, the annotation, static fine (SF), static coarse (SC), GT segmentation w/ SF, GT segmentation w/ SC, GT segmentation w/ [40], GT subsampled by 2, 8, 32, and 128, and the nearest training neighbor. See the main paper for details. The image is part of our test set and has both the largest number of instances and the largest number of persons.

Figure 9. Exemplary output of our baselines for the pixel-level semantic labeling task; panels show the image, the annotation, FCN-32s, FCN-8s, FCN-8s at half resolution, FCN-8s trained on coarse, SegNet basic [3], DPN [39], CRF as RNN [80], DeepLab-LargeFOV-StrongWeak [47], Adelaide [36], and Dilation10 [78]. See the main paper for details. The image is part of our test set and has both the largest number of instances and the largest number of persons.

Figure 10. Exemplary output of our control experiments for the pixel-level semantic labeling task; panels show the image, the annotation, static fine (SF), static coarse (SC), GT segmentation w/ SF, GT segmentation w/ SC, GT segmentation w/ [40], GT subsampled by 2, 8, 32, and 128, and the nearest training neighbor. See the main paper for details. The image is part of our test set and has the largest number of car instances.

Figure 11. Exemplary output of our baseline experiments for the pixel-level semantic labeling task; panels show the image, the annotation, FCN-32s, FCN-8s, FCN-8s at half resolution, FCN-8s trained on coarse, SegNet basic [3], DPN [39], CRF as RNN [80], DeepLab-LargeFOV-StrongWeak [47], Adelaide [36], and Dilation10 [78]. See the main paper for details. The image is part of our test set and has the largest number of car instances.

Figure 12. Exemplary output of our control experiments and baselines for the instance-level semantic labeling task; for the image with the largest number of instances and persons and for the image with the largest number of cars, panels show the annotation, FRCN + MCG bboxes, FRCN + MCG regions, FRCN + GT bboxes, and FRCN + GT regions. See the main paper for details.


More information

Coimisiún na Scrúduithe Stáit State Examinations Commission. Leaving Certificate Marking Scheme. Design and Communication Graphics

Coimisiún na Scrúduithe Stáit State Examinations Commission. Leaving Certificate Marking Scheme. Design and Communication Graphics Coimisiún na Scrúduithe Stáit State Examinations Commission Leaving Certificate 2016 Marking Scheme Design and Communication Graphics Ordinary Level Note to teachers and students on the use of published

More information

Moving Game X to YOUR Location In this tutorial, you will remix Game X, making changes so it can be played in a location near you.

Moving Game X to YOUR Location In this tutorial, you will remix Game X, making changes so it can be played in a location near you. Moving Game X to YOUR Location In this tutorial, you will remix Game X, making changes so it can be played in a location near you. About Game X Game X is about agency and civic engagement in the context

More information

Semantic Segmented Style Transfer Kevin Yang* Jihyeon Lee* Julia Wang* Stanford University kyang6

Semantic Segmented Style Transfer Kevin Yang* Jihyeon Lee* Julia Wang* Stanford University kyang6 Semantic Segmented Style Transfer Kevin Yang* Jihyeon Lee* Julia Wang* Stanford University kyang6 Stanford University jlee24 Stanford University jwang22 Abstract Inspired by previous style transfer techniques

More information

Real Time Traffic Light Control System Using Image Processing

Real Time Traffic Light Control System Using Image Processing Real Time Traffic Light Control System Using Image Processing Darshan J #1, Siddhesh L. #2, Hitesh B. #3, Pratik S.#4 Department of Electronics and Telecommunications Student of KC College Of Engineering

More information

AUTOMATIC DETECTION OF HEDGES AND ORCHARDS USING VERY HIGH SPATIAL RESOLUTION IMAGERY

AUTOMATIC DETECTION OF HEDGES AND ORCHARDS USING VERY HIGH SPATIAL RESOLUTION IMAGERY AUTOMATIC DETECTION OF HEDGES AND ORCHARDS USING VERY HIGH SPATIAL RESOLUTION IMAGERY Selim Aksoy Department of Computer Engineering, Bilkent University, Bilkent, 06800, Ankara, Turkey saksoy@cs.bilkent.edu.tr

More information

Suveying Lectures for CE 498

Suveying Lectures for CE 498 Suveying Lectures for CE 498 SURVEYING CLASSIFICATIONS Surveying work can be classified as follows: 1- Preliminary Surveying In this surveying the detailed data are collected by determining its locations

More information

Seeing Behind the Camera: Identifying the Authorship of a Photograph (Supplementary Material)

Seeing Behind the Camera: Identifying the Authorship of a Photograph (Supplementary Material) Seeing Behind the Camera: Identifying the Authorship of a Photograph (Supplementary Material) 1 Introduction Christopher Thomas Adriana Kovashka Department of Computer Science University of Pittsburgh

More information

Multi-task Learning of Dish Detection and Calorie Estimation

Multi-task Learning of Dish Detection and Calorie Estimation Multi-task Learning of Dish Detection and Calorie Estimation Department of Informatics, The University of Electro-Communications, Tokyo 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585 JAPAN ABSTRACT In recent

More information

Challenges for Deep Scene Understanding

Challenges for Deep Scene Understanding Challenges for Deep Scene Understanding BoleiZhou MIT Bolei Zhou Hang Zhao Xavier Puig Sanja Fidler (UToronto) Adela Barriuso Aditya Khosla Antonio Torralba Aude Oliva Objects in the Scene Context Challenge

More information

Contrast enhancement with the noise removal. by a discriminative filtering process

Contrast enhancement with the noise removal. by a discriminative filtering process Contrast enhancement with the noise removal by a discriminative filtering process Badrun Nahar A Thesis in The Department of Electrical and Computer Engineering Presented in Partial Fulfillment of the

More information

Single Frequency Precise Point Positioning: obtaining a map accurate to lane-level

Single Frequency Precise Point Positioning: obtaining a map accurate to lane-level Single Frequency Precise Point Positioning: obtaining a map accurate to lane-level V.L. Knoop P.F. de Bakker C.C.J.M. Tiberius B. van Arem Abstract Modern Intelligent Transport Solutions can achieve improvement

More information

Date Requested, 200_ Work Order No. Funding source Name of project Project limits: Purpose of the project

Date Requested, 200_ Work Order No. Funding source Name of project Project limits: Purpose of the project Bureau of Engineering SURVEY DIVISION REQUEST FOR TOPOGRAPHIC SURVEY Date Requested, 200_ Work Order No. Funding source Name of project Project limits: Purpose of the project Caltrans involvement (must

More information

8.2 IMAGE PROCESSING VERSUS IMAGE ANALYSIS Image processing: The collection of routines and

8.2 IMAGE PROCESSING VERSUS IMAGE ANALYSIS Image processing: The collection of routines and 8.1 INTRODUCTION In this chapter, we will study and discuss some fundamental techniques for image processing and image analysis, with a few examples of routines developed for certain purposes. 8.2 IMAGE

More information

arxiv: v1 [cs.cv] 19 Jun 2017

arxiv: v1 [cs.cv] 19 Jun 2017 Satellite Imagery Feature Detection using Deep Convolutional Neural Network: A Kaggle Competition Vladimir Iglovikov True Accord iglovikov@gmail.com Sergey Mushinskiy Open Data Science cepera.ang@gmail.com

More information

Situational Awareness A Missing DP Sensor output

Situational Awareness A Missing DP Sensor output Situational Awareness A Missing DP Sensor output Improving Situational Awareness in Dynamically Positioned Operations Dave Sanderson, Engineering Group Manager. Abstract Guidance Marine is at the forefront

More information

Domain Adaptation & Transfer: All You Need to Use Simulation for Real

Domain Adaptation & Transfer: All You Need to Use Simulation for Real Domain Adaptation & Transfer: All You Need to Use Simulation for Real Boqing Gong Tecent AI Lab Department of Computer Science An intelligent robot Semantic segmentation of urban scenes Assign each pixel

More information

PRECISE GRADING PLAN CHECKLIST

PRECISE GRADING PLAN CHECKLIST PRECISE GRADING PLAN CHECKLIST PUBLIC WORKS DEPARTMENT / ENGINEERING DIVISION 8130 Allison Avenue, La Mesa, CA 91942 Phone: 619. 667.1166 Fax: 619. 667.1380 Grading plans shall address both rough grading

More information

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov Technical Lead Machine Learning 12 Oct 2017

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov Technical Lead Machine Learning 12 Oct 2017 23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS Sergii Bykov Technical Lead Machine Learning 12 Oct 2017 Product Vision Company Introduction Apostera GmbH with headquarter in Munich, was

More information

MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS

MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS INFOTEH-JAHORINA Vol. 10, Ref. E-VI-11, p. 892-896, March 2011. MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS Jelena Cvetković, Aleksej Makarov, Sasa Vujić, Vlatacom d.o.o. Beograd Abstract -

More information

Plan Preparation Checklist

Plan Preparation Checklist Appendix D Plan Preparation Checklist It is the responsibility of the Designer to complete and submit this checklist along with all required drawings for OUC (EFP) Review. All drawings submitted for OUC

More information

Driver Education Classroom and In-Car Curriculum Unit 3 Space Management System

Driver Education Classroom and In-Car Curriculum Unit 3 Space Management System Driver Education Classroom and In-Car Curriculum Unit 3 Space Management System Driver Education Classroom and In-Car Instruction Unit 3-2 Unit Introduction Unit 3 will introduce operator procedural and

More information

Evaluation of Image Segmentation Based on Histograms

Evaluation of Image Segmentation Based on Histograms Evaluation of Image Segmentation Based on Histograms Andrej FOGELTON Slovak University of Technology in Bratislava Faculty of Informatics and Information Technologies Ilkovičova 3, 842 16 Bratislava, Slovakia

More information

Experiments with An Improved Iris Segmentation Algorithm

Experiments with An Improved Iris Segmentation Algorithm Experiments with An Improved Iris Segmentation Algorithm Xiaomei Liu, Kevin W. Bowyer, Patrick J. Flynn Department of Computer Science and Engineering University of Notre Dame Notre Dame, IN 46556, U.S.A.

More information

SOLIDWORKS 2018 Basic Tools

SOLIDWORKS 2018 Basic Tools SOLIDWORKS 2018 Basic Tools Getting Started with Parts, Assemblies and Drawings Paul Tran CSWE, CSWI SDC PUBLICATIONS Better Textbooks. Lower Prices. www.sdcpublications.com Powered by TCPDF (www.tcpdf.org)

More information

CONFIGURATION AND GENERATION OF ROAD SEGMENTS AND JUNCTIONS FOR VERIFICATION OF AUTONOMOUS SYSTEMS

CONFIGURATION AND GENERATION OF ROAD SEGMENTS AND JUNCTIONS FOR VERIFICATION OF AUTONOMOUS SYSTEMS CONFIGURATION AND GENERATION OF ROAD SEGMENTS AND JUNCTIONS FOR VERIFICATION OF AUTONOMOUS SYSTEMS Kick-Off Workshop ASAM OpenDRIVE 2018-10 Martin Herrmann, Martin Butz Bosch Corporate Research Verification

More information

Travel Photo Album Summarization based on Aesthetic quality, Interestingness, and Memorableness

Travel Photo Album Summarization based on Aesthetic quality, Interestingness, and Memorableness Travel Photo Album Summarization based on Aesthetic quality, Interestingness, and Memorableness Jun-Hyuk Kim and Jong-Seok Lee School of Integrated Technology and Yonsei Institute of Convergence Technology

More information

Caatinga - Appendix. Collection 3. Version 1. General coordinator Washington J. S. Franca Rocha (UEFS)

Caatinga - Appendix. Collection 3. Version 1. General coordinator Washington J. S. Franca Rocha (UEFS) Caatinga - Appendix Collection 3 Version 1 General coordinator Washington J. S. Franca Rocha (UEFS) Team Diego Pereira Costa (UEFS/GEODATIN) Frans Pareyn (APNE) José Luiz Vieira (APNE) Rodrigo N. Vasconcelos

More information

Advanced Techniques for Mobile Robotics Location-Based Activity Recognition

Advanced Techniques for Mobile Robotics Location-Based Activity Recognition Advanced Techniques for Mobile Robotics Location-Based Activity Recognition Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Activity Recognition Based on L. Liao, D. J. Patterson, D. Fox,

More information

CS 7643: Deep Learning

CS 7643: Deep Learning CS 7643: Deep Learning Topics: Toeplitz matrices and convolutions = matrix-mult Dilated/a-trous convolutions Backprop in conv layers Transposed convolutions Dhruv Batra Georgia Tech HW1 extension 09/22

More information

Post Hike Log Guide St. Joseph s Pelandok Scout Group

Post Hike Log Guide St. Joseph s Pelandok Scout Group Post Hike Log Guide St. Joseph s Pelandok Scout Group 1 POST HIKE LOG GUIDE - ST. JOSEPH S PELANDOK SCOUT GROUP - Overview 1.0 PREFACE... 3 2.0 AREA SKETCH... 4 2.1 Information... 4 2.2 Assessment... 4

More information

Image Deblurring. This chapter describes how to deblur an image using the toolbox deblurring functions.

Image Deblurring. This chapter describes how to deblur an image using the toolbox deblurring functions. 12 Image Deblurring This chapter describes how to deblur an image using the toolbox deblurring functions. Understanding Deblurring (p. 12-2) Using the Deblurring Functions (p. 12-5) Avoiding Ringing in

More information

Landscaping Tutorial

Landscaping Tutorial Landscaping Tutorial This tutorial describes how to use Home Designer Essentials s Terrain Tools. In it, you will learn how to add elevation information to your terrain, how to create terrain features,

More information

Lecture 7: Scene Text Detection and Recognition. Dr. Cong Yao Megvii (Face++) Researcher

Lecture 7: Scene Text Detection and Recognition. Dr. Cong Yao Megvii (Face++) Researcher Lecture 7: Scene Text Detection and Recognition Dr. Cong Yao Megvii (Face++) Researcher yaocong@megvii.com Outline Background and Introduction Conventional Methods Deep Learning Methods Datasets and Competitions

More information

Landscaping Tutorial. Chapter 5:

Landscaping Tutorial. Chapter 5: Chapter 5: Landscaping Tutorial This tutorial was written to help you learn how to use Home Designer Landscape and Deck s Terrain tools. In this tutorial, you will learn how to add elevation information

More information

An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques

An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques Kevin Rushant, Department of Computer Science, University of Sheffield, GB. email: krusha@dcs.shef.ac.uk Libor Spacek,

More information

1 st Keypoints Challenge. ImageNet and COCO Visual Recognition Challenges Workshop. Yin Cui, Tsung-Yi Lin, Matteo Ruggero Ronchi, Genevieve Patterson

1 st Keypoints Challenge. ImageNet and COCO Visual Recognition Challenges Workshop. Yin Cui, Tsung-Yi Lin, Matteo Ruggero Ronchi, Genevieve Patterson 1 st Keypoints Challenge Yin Cui, Tsung-Yi Lin, Matteo Ruggero Ronchi, Genevieve Patterson ImageNet and COCO Visual Recognition Challenges Workshop Sunday, October 9th, ECCV 2016 Dataset Dataset Statistics

More information

Revision of the EU General Safety Regulation and Pedestrian Safety Regulation

Revision of the EU General Safety Regulation and Pedestrian Safety Regulation AC.nl Revision of the EU General Safety Regulation and Pedestrian Safety Regulation 11 September 2018 ETSC isafer Fitting safety as standard Directorate-General for Internal Market, Automotive and Mobility

More information

Advances in Vehicle Periphery Sensing Techniques Aimed at Realizing Autonomous Driving

Advances in Vehicle Periphery Sensing Techniques Aimed at Realizing Autonomous Driving FEATURED ARTICLES Autonomous Driving Technology for Connected Cars Advances in Vehicle Periphery Sensing Techniques Aimed at Realizing Autonomous Driving Progress is being made on vehicle periphery sensing,

More information

Numerical Modeling of Grouted Soil Nails

Numerical Modeling of Grouted Soil Nails Numerical Modeling of Grouted Soil Nails Dr. Haider S. Al -Jubair Department of Civil Engineering University of Basrah-College of Engineering Basrah, Iraq Afaf A. Maki Department of Civil Engineering University

More information

Mixed Pixels Endmembers & Spectral Unmixing

Mixed Pixels Endmembers & Spectral Unmixing Mixed Pixels Endmembers & Spectral Unmixing Mixed Pixel Analysis 1 Mixed Pixels and Spectral Unmixing Spectral Mixtures Areal Aggregate Intimate TYPES of MIXTURES Areal Aggregate Intimate Pixel 1 Pixel

More information

DSNet: An Efficient CNN for Road Scene Segmentation

DSNet: An Efficient CNN for Road Scene Segmentation DSNet: An Efficient CNN for Road Scene Segmentation Ping-Rong Chen 1 Hsueh-Ming Hang 1 1 National Chiao Tung University {james50120.ee05g, hmhang}@nctu.edu.tw Sheng-Wei Chan 2 Jing-Jhih Lin 2 2 Industrial

More information

Virtual Worlds for the Perception and Control of Self-Driving Vehicles

Virtual Worlds for the Perception and Control of Self-Driving Vehicles Virtual Worlds for the Perception and Control of Self-Driving Vehicles Dr. Antonio M. López antonio@cvc.uab.es Index Context SYNTHIA: CVPR 16 SYNTHIA: Reloaded SYNTHIA: Evolutions CARLA Conclusions Index

More information

Contents. Notes on the use of this publication

Contents. Notes on the use of this publication Contents Preface xxiii Scope Notes on the use of this publication xxv xxvi 1 Layout of drawings 1 1.1 General 1 1.2 Drawing sheets 1 1.3 Title block 2 1.4 Borders and frames 2 1.5 Drawing formats 2 1.6

More information

THE problem of automating the solving of

THE problem of automating the solving of CS231A FINAL PROJECT, JUNE 2016 1 Solving Large Jigsaw Puzzles L. Dery and C. Fufa Abstract This project attempts to reproduce the genetic algorithm in a paper entitled A Genetic Algorithm-Based Solver

More information

Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications )

Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications ) Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications ) Why is this important What are the major approaches Examples of digital image enhancement Follow up exercises

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Deep Learning Barnabás Póczos Credits Many of the pictures, results, and other materials are taken from: Ruslan Salakhutdinov Joshua Bengio Geoffrey Hinton Yann LeCun 2

More information

Lesson 08. Convolutional Neural Network. Ing. Marek Hrúz, Ph.D. Katedra Kybernetiky Fakulta aplikovaných věd Západočeská univerzita v Plzni.

Lesson 08. Convolutional Neural Network. Ing. Marek Hrúz, Ph.D. Katedra Kybernetiky Fakulta aplikovaných věd Západočeská univerzita v Plzni. Lesson 08 Convolutional Neural Network Ing. Marek Hrúz, Ph.D. Katedra Kybernetiky Fakulta aplikovaných věd Západočeská univerzita v Plzni Lesson 08 Convolution we will consider 2D convolution the result

More information

Analyzing Hemispherical Photographs Using SLIM software

Analyzing Hemispherical Photographs Using SLIM software Analyzing Hemispherical Photographs Using SLIM software Phil Comeau (April 19, 2010) [Based on notes originally compiled by Dan MacIsaac November 2002]. Program Version: SLIM V2.2M: June 2009 Notes on

More information

MATLAB 및 Simulink 를이용한운전자지원시스템개발

MATLAB 및 Simulink 를이용한운전자지원시스템개발 MATLAB 및 Simulink 를이용한운전자지원시스템개발 김종헌차장 Senior Application Engineer MathWorks Korea 2015 The MathWorks, Inc. 1 Example : Sensor Fusion with Monocular Vision & Radar Configuration Monocular Vision installed

More information

PASS Sample Size Software

PASS Sample Size Software Chapter 945 Introduction This section describes the options that are available for the appearance of a histogram. A set of all these options can be stored as a template file which can be retrieved later.

More information

Research on Hand Gesture Recognition Using Convolutional Neural Network

Research on Hand Gesture Recognition Using Convolutional Neural Network Research on Hand Gesture Recognition Using Convolutional Neural Network Tian Zhaoyang a, Cheng Lee Lung b a Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China E-mail address:

More information

Human Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc.

Human Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc. Human Vision and Human-Computer Interaction Much content from Jeff Johnson, UI Wizards, Inc. are these guidelines grounded in perceptual psychology and how can we apply them intelligently? Mach bands:

More information

Laboratory 1: Uncertainty Analysis

Laboratory 1: Uncertainty Analysis University of Alabama Department of Physics and Astronomy PH101 / LeClair May 26, 2014 Laboratory 1: Uncertainty Analysis Hypothesis: A statistical analysis including both mean and standard deviation can

More information

PART XII: TOPOGRAPHIC SURVEYS

PART XII: TOPOGRAPHIC SURVEYS PART XII: TOPOGRAPHIC SURVEYS 12.1 Purpose and Scope The purpose of performing topographic surveys is to map a site for the depiction of man-made and natural features that are on, above, or below the surface

More information

Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model

Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model Yuzhou Hu Departmentof Electronic Engineering, Fudan University,

More information

OBJECTIVE OF THE BOOK ORGANIZATION OF THE BOOK

OBJECTIVE OF THE BOOK ORGANIZATION OF THE BOOK xv Preface Advancement in technology leads to wide spread use of mounting cameras to capture video imagery. Such surveillance cameras are predominant in commercial institutions through recording the cameras

More information

Derek Allman a, Austin Reiter b, and Muyinatu Bell a,c

Derek Allman a, Austin Reiter b, and Muyinatu Bell a,c Exploring the effects of transducer models when training convolutional neural networks to eliminate reflection artifacts in experimental photoacoustic images Derek Allman a, Austin Reiter b, and Muyinatu

More information

DEEP LEARNING ON RF DATA. Adam Thompson Senior Solutions Architect March 29, 2018

DEEP LEARNING ON RF DATA. Adam Thompson Senior Solutions Architect March 29, 2018 DEEP LEARNING ON RF DATA Adam Thompson Senior Solutions Architect March 29, 2018 Background Information Signal Processing and Deep Learning Radio Frequency Data Nuances AGENDA Complex Domain Representations

More information

Example of Analysis of Yield or Landsat Data Based on Assessing the Consistently Lowest 20 Percent by Using

Example of Analysis of Yield or Landsat Data Based on Assessing the Consistently Lowest 20 Percent by Using GIS Ag Maps www.gisagmaps.com Example of Analysis of Yield or Landsat Data Based on Assessing the Consistently Lowest 20 Percent by Using Soil Darkness, Flow Accumulation, Convex Areas, and Sinks Two aspects

More information

Introduction to DSP ECE-S352 Fall Quarter 2000 Matlab Project 1

Introduction to DSP ECE-S352 Fall Quarter 2000 Matlab Project 1 Objective: Introduction to DSP ECE-S352 Fall Quarter 2000 Matlab Project 1 This Matlab Project is an extension of the basic correlation theory presented in the course. It shows a practical application

More information

Exploiting Geo-fences to Document Truck Activity Times at the Ambassador and Blue Water Bridge Gateways

Exploiting Geo-fences to Document Truck Activity Times at the Ambassador and Blue Water Bridge Gateways Exploiting Geo-fences to Document Truck Activity Times at the Ambassador and Blue Water Bridge Gateways Mark R. McCord The Ohio State University Columbus, OH Ohio Freight Conference Toledo, Ohio September

More information

Close-Range Photogrammetry for Accident Reconstruction Measurements

Close-Range Photogrammetry for Accident Reconstruction Measurements Close-Range Photogrammetry for Accident Reconstruction Measurements iwitness TM Close-Range Photogrammetry Software www.iwitnessphoto.com Lee DeChant Principal DeChant Consulting Services DCS Inc Bellevue,

More information

ESD 4.0 Quick Start Lessons

ESD 4.0 Quick Start Lessons ESD 4.0 Quick Start Lessons Overview The following lessons will teach you the skills needed to draw most traffic accident scenes. Using Easy Street Draw, follow the step-by-step instructions to create

More information

Contents Modeling of Socio-Economic Systems Agent-Based Modeling

Contents Modeling of Socio-Economic Systems Agent-Based Modeling Contents 1 Modeling of Socio-Economic Systems... 1 1.1 Introduction... 1 1.2 Particular Difficulties of Modeling Socio-Economic Systems... 2 1.3 Modeling Approaches... 4 1.3.1 Qualitative Descriptions...

More information

Eyedentify MMR SDK. Technical sheet. Version Eyedea Recognition, s.r.o.

Eyedentify MMR SDK. Technical sheet. Version Eyedea Recognition, s.r.o. Eyedentify MMR SDK Technical sheet Version 2.3.1 010001010111100101100101011001000110010101100001001000000 101001001100101011000110110111101100111011011100110100101 110100011010010110111101101110010001010111100101100101011

More information

CHAPTER 11 SURVEY CADD

CHAPTER 11 SURVEY CADD CHAPTER 11 SURVEY CADD Chapter Contents Sec. 11.01 Sec. 11.02 Sec. 11.03 Sec. 11.04 Sec. 11.05 Sec. 11.06 Sec. 11.07 Sec. 11.08 Sec. 11.09 Sec. 11.10 General Description of Survey File Contents of Survey

More information

Deep Learning for Infrastructure Assessment in Africa using Remote Sensing Data

Deep Learning for Infrastructure Assessment in Africa using Remote Sensing Data Deep Learning for Infrastructure Assessment in Africa using Remote Sensing Data Pascaline Dupas Department of Economics, Stanford University Data for Development Initiative @ Stanford Center on Global

More information

LD20558-L Parking Lots with AutoCAD Civil 3D Corridors

LD20558-L Parking Lots with AutoCAD Civil 3D Corridors LD20558-L Parking Lots with AutoCAD Civil 3D Corridors Steven Hill CAD Manager, Civil Designer / Geosyntec Consultants.NET Application Developer / Red Transit Consultants, LLC Learning Objectives Discover

More information

State Road A1A North Bridge over ICWW Bridge

State Road A1A North Bridge over ICWW Bridge Final Report State Road A1A North Bridge over ICWW Bridge Draft Design Traffic Technical Memorandum Contract Number: C-9H13 TWO 5 - Financial Project ID 249911-2-22-01 March 2016 Prepared for: Florida

More information

SOLIDWORKS 2017 Basic Tools

SOLIDWORKS 2017 Basic Tools SOLIDWORKS 2017 Basic Tools Getting Started with Parts, Assemblies and Drawings Paul Tran CSWE, CSWI SDC PUBLICATIONS Better Textbooks. Lower Prices. www.sdcpublications.com Powered by TCPDF (www.tcpdf.org)

More information

Structure Inference Net: Object Detection Using Scene-Level Context and Instance-Level Relationships

Structure Inference Net: Object Detection Using Scene-Level Context and Instance-Level Relationships Structure Inference Net: Object Detection Using Scene-Level Context and Instance-Level Relationships Yong Liu,2, Ruiping Wang,2,3, Shiguang Shan,2,3, Xilin Chen,2,3 Key Laboratory of Intelligent Information

More information

Landscaping Tutorial

Landscaping Tutorial Landscaping Tutorial This tutorial describes how to use Home Designer Architectural s Terrain Tools. In it, you will learn how to add elevation information to your terrain, how to create terrain features,

More information

Contents Technical background II. RUMBA technical specifications III. Hardware connection IV. Set-up of the instrument Laboratory set-up

Contents Technical background II. RUMBA technical specifications III. Hardware connection IV. Set-up of the instrument Laboratory set-up RUMBA User Manual Contents I. Technical background... 3 II. RUMBA technical specifications... 3 III. Hardware connection... 3 IV. Set-up of the instrument... 4 1. Laboratory set-up... 4 2. In-vivo set-up...

More information