E190Q Lecture 15 Autonomous Robot Navigation

E190Q Lecture 15 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2014 Figures courtesy of Probabilistic Robotics (Thrun et al.)

Control Structures Planning Based Control [Figure: control architecture block diagram relating Prior Knowledge, Operator Commands, Localization, Cognition, Perception, and Motion Control]

MP: Outline 1. Multi-Query PRMs 2. Graph Search 3. Artificial Potential Fields

Multi-Query PRMs Multi-Query Strategy 1. Learning Phase: Generate the PRM with two steps: Construction Expansion 2. Query Phase: Connect start and goal configurations to PRM Perform a graph search to find path 5

Multi-Query PRMs [Figure: roadmap of milestones connected by local paths in the free space; milestones m_g and m_b labelled] [Kavraki, Svestka, Latombe, Overmars, 95]

Multi-Query PRMs Nomenclature: R = (N, E) is the roadmap, N the set of nodes, E the set of edges, c a configuration (node), e an edge.

Multi-Query PRMs Learning Phase Construction Step Algorithm:
Start with empty R = (N, E)
while (not done) {
    Generate a random free config c and add it to N
    Choose a subset N_c of candidate neighbors around c from N
    Try to connect c to each node in N_c with the local planner, in order of increasing distance from c
    Add each edge found to E
}
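
For illustration, a minimal Python sketch of this construction step; sample_free_config() and local_planner_connects() are assumed helpers (not from the slides), returning a random collision-free configuration and collision-checking a connection between two configurations, respectively.

    import math

    def prm_construction(num_nodes, k, sample_free_config, local_planner_connects):
        N, E = [], []                                  # roadmap R = (N, E)
        while len(N) < num_nodes:
            c = sample_free_config()                   # random free configuration (tuple)
            # candidate neighbors: the k nodes closest to c, in increasing distance
            N_c = sorted(N, key=lambda n: math.dist(n, c))[:k]
            N.append(c)
            for n in N_c:
                if local_planner_connects(c, n):       # local planner with collision check
                    E.append((c, n))                   # add the edge found to E
        return N, E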

Multi-Query PRMs Learning Phase Construction Step [Figure: construction step example in a 2D C-space; one candidate connection is rejected due to a collision. Sidebar: efficiency-driven; robots with many dofs (high-dim C-spaces); static environments.] Courtesy of C. Allocco

Multi-Query PRMs Learning Phase Local Planner Used to connect two nodes. Must contain collision-check. For good performance, the LP must be: 1. Deterministic - Eliminates the need for storing local plans. 2. Fast - To ensure quick planning queries. 10
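
A deterministic straight-line local planner could be sketched as below; the collision_free() predicate and the resolution value are illustrative assumptions.

    import math

    def local_planner_connects(c1, c2, collision_free, resolution=0.05):
        # Interpolate along the straight line between c1 and c2 and
        # collision-check each intermediate configuration.
        steps = max(1, int(math.dist(c1, c2) / resolution))
        for i in range(steps + 1):
            t = i / steps
            q = tuple(a + t * (b - a) for a, b in zip(c1, c2))   # interpolated config
            if not collision_free(q):
                return False
        return True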

Multi-Query PRMs Learning Phase Expansion Step 1. Find the nodes in difficult regions using heuristic weight function w(c) 2. Expand c using random-bounce walks 3. Repeat as necessary 11

Multi-Query PRMs Learning Phase Expansion Step Several options to define weight function w(c) Inversely proportional to the number of nodes within some predefined distance from c Inversely proportional to the distance from c to the nearest connected component not containing c Proportional to the failure ratio of the local planner 12

Multi-Query PRMs Learning Phase Expansion Step 1. Loop 1. Pick a random direction of motion in C-space 2. Move in the direction until an obstacle is hit 3. Check for connection with another node 4. Repeat until the path can be connected to another node 13

Multi-Query PRMs Learning Phase Expansion Step [Figure: roadmap nodes labelled with weights w(c) between 0.25 and 1.00, highlighting nodes in difficult regions. Sidebar: efficiency-driven; robots with many dofs (high-dim C-spaces); static environments.] Courtesy of C. Allocco

Multi-Query PRMs Learning Phase Expansion Step 15 1. Loop 1. Pick a random direction of motion in C-space 2. Move in the direction until an obstacle is hit 3. Check for connection with another node 4. Repeat until the path can be connected to another node 2. Store the final config n and the edge (c, n) in R 3. Store the computed path (non-deterministic) 4. Record that n belongs to the same connected component as c
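
A random-bounce walk for this expansion step might look like the following sketch for a 2D C-space; the step size, iteration limits, and the collision_free() and try_connect() helpers are assumptions.

    import math, random

    def random_bounce_walk(c, collision_free, try_connect, step=0.05,
                           max_bounces=50, max_steps=1000):
        # Walk from c in random directions, bouncing whenever an obstacle is hit,
        # until the walk can be connected to another roadmap node.
        q, path = c, [c]
        for _ in range(max_bounces):
            theta = random.uniform(0.0, 2.0 * math.pi)    # random direction of motion
            d = (math.cos(theta), math.sin(theta))
            for _ in range(max_steps):
                q_next = (q[0] + step * d[0], q[1] + step * d[1])
                if not collision_free(q_next):
                    break                                 # obstacle hit: pick a new direction
                q = q_next
                path.append(q)
            n = try_connect(q)                            # roadmap node reachable from q, or None
            if n is not None:
                return q, path, n                         # final config, its path, connected node
        return None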

Multi-Query PRMs Query Phase Algorithm: 1. Given the start and goal configurations s and g, calculate feasible paths P_s and P_g to nodes s~ and g~ on the roadmap (with the local planner). 2. Calculate the path P from s to g using the roadmap and a tree search planner.
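
A sketch of the query phase, reusing the roadmap and local planner from the sketches above; the graph_search() routine (e.g. the A* example later in this lecture) is assumed to return a list of roadmap nodes.

    import math

    def prm_query(s, g, N, E, local_planner_connects, graph_search):
        def attach(q):
            # Nearest roadmap node reachable from q via the local planner, or None.
            for n in sorted(N, key=lambda n: math.dist(n, q)):
                if local_planner_connects(q, n):
                    return n
            return None
        s_tilde, g_tilde = attach(s), attach(g)
        if s_tilde is None or g_tilde is None:
            return None                                   # query configs cannot reach the roadmap
        roadmap_path = graph_search(N, E, s_tilde, g_tilde)
        if roadmap_path is None:
            return None
        return [s] + roadmap_path + [g]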

Multi-Query PRMs Query Phase [Figure: start s and goal g connected to roadmap nodes s~ and g~. Sidebar: efficiency-driven; robots with many dofs (high-dim C-spaces); static environments.] Courtesy of C. Allocco

Probabilistic Road Maps Two Tenets: 1. Checking sampled configurations and connections between samples for collision can be done efficiently. 2. A relatively small number of milestones and local paths are sufficient to capture the connectivity of the free space. 18

Probabilistic Road Maps: Discrete and Continuous Planning Courtesy of T. Bretl

MP: Outline 1. Multi-Query PRMs 2. Graph Search 3. Artificial Potential Fields 20

Graph Search Cell decomposition Decompose the free space into simple cells and represent the connectivity of the free space by the adjacency graph of these cells 21

Graph Search Given a discretization of C, a search can be carried out using a graph search or gradient descent, etc. Example: find a path from D to G. [Figure: a cell decomposition with cells A-G and its adjacency graph]
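
For illustration, the adjacency graph of such a cell decomposition can be stored as an adjacency list; the specific cells and edges below are hypothetical, since the original figure does not survive transcription.

    # Hypothetical adjacency graph of cells A-G (edges are illustrative only).
    adjacency = {
        "A": ["B", "C"],
        "B": ["A", "D", "E"],
        "C": ["A", "F"],
        "D": ["B", "E"],
        "E": ["B", "D", "G"],
        "F": ["C", "G"],
        "G": ["E", "F"],
    }
    # A path from D to G is then found by any graph search over this structure,
    # e.g. breadth-first search (see the sketch after the BFS summary below).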

Tree Search Tree nomenclature: Parent Node Child Node Algorithms differ in the order in which they search the branches (edges) of the tree 23

Data Structures The Fringe (or Frontier) is the collection of nodes waiting to be expanded. [Figure: partially expanded search tree with the fringe nodes highlighted]

Tree Search Search Algorithms 1. Breadth First Search 2. Depth First Search 3. A* 25

Breadth-First All the nodes at depth d in the search tree are expanded before nodes at depth d+1 26

Breadth-First Snapshot 1 1 2 3 Initial Visited Fringe Current Visible Goal 27 Fringe: [] + [2,3]

Breadth-First Snapshot 2 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 28 Fringe: [3] + [4,5]

Breadth-First Snapshot 3 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 29 Fringe: [4,5] + [6,7]

Breadth-First Snapshot 4 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 30 Fringe: [5,6,7] + [8,9]

Breadth-First Snapshot 5 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 31 Fringe: [6,7,8,9] + [10,11]

Breadth-First Snapshot 6 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 32 Fringe: [7,8,9,10,11] + [12,13]

Breadth-First Snapshot 7 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 33 Fringe: [8,9,10,11,12,13] + [14,15]

Breadth-First Snapshot 8 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 34 16 17 Fringe: [9,10,11,12,13,14,15] + [16,17]

Breadth-First Snapshot 9 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 35 16 17 18 19 Fringe: [10,11,12,13,14,15,16,17] + [18,19]

Breadth-First Snapshot 10 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 36 16 17 18 19 20 21 Fringe: [11,12,13,14,15,16,17,18,19] + [20,21]

Breadth-First Snapshot 11 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 37 16 17 18 19 20 21 22 23 Fringe: [12, 13, 14, 15, 16, 17, 18, 19, 20, 21] + [22,23]

Breadth-First Snapshot 12 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 Note: The goal node is visible here, but we cannot perform the goal test yet. 38 16 17 18 19 20 21 22 23 24 25 Fringe: [13,14,15,16,17,18,19,20,21,22,23] + [24,25]

Breadth-First Snapshot 13 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 39 16 17 18 19 20 21 22 23 24 25 26 27 Fringe: [14,15,16,17,18,19,20,21,22,23,24,25] + [26,27]

Breadth-First Snapshot 14 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 40 16 17 18 19 20 21 22 23 24 25 26 27 28 29 Fringe: [15,16,17,18,19,20,21,22,23,24,25,26,27] + [28,29]

Breadth-First Snapshot 15 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 41 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [16,17,18,19,20,21,22,23,24,25,26,27,28,29] + [30,31]

Breadth-First Snapshot 16 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 42 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]

Breadth-First Snapshot 17 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 43 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [18,19,20,21,22,23,24,25,26,27,28,29,30,31]

Breadth-First Snapshot 18 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 44 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [19,20,21,22,23,24,25,26,27,28,29,30,31]

Breadth-First Snapshot 19 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 45 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [20,21,22,23,24,25,26,27,28,29,30,31]

Breadth-First Snapshot 20 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 46 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [21,22,23,24,25,26,27,28,29,30,31]

Breadth-First Snapshot 21 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 47 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [22,23,24,25,26,27,28,29,30,31]

Breadth-First Snapshot 22 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 48 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [23,24,25,26,27,28,29,30,31]

Breadth-First Snapshot 23 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 49 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [24,25,26,27,28,29,30,31]

Breadth-First Snapshot 24 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 14 15 Note: The goal test is positive for this node, and a solution is found in 24 steps. 50 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Fringe: [25,26,27,28,29,30,31]

Breadth First Search: Complete. Optimal if path cost is a nondecreasing function of depth (e.g., all step costs equal). Computational complexity O(b^d), where b is the branching factor and d is the depth of the shallowest goal. Space (memory) complexity O(b^d).
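
A minimal breadth-first search sketch over an adjacency-list graph; the graph representation and function name are illustrative, not from the slides.

    from collections import deque

    def breadth_first_search(adjacency, start, goal):
        # Expand all nodes at depth d before any node at depth d+1.
        fringe = deque([start])                    # FIFO queue of nodes waiting to be expanded
        parent = {start: None}
        while fringe:
            node = fringe.popleft()
            if node == goal:                       # goal test when the node is expanded
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]                  # path from start to goal
            for child in adjacency.get(node, []):
                if child not in parent:            # avoid revisiting states
                    parent[child] = node
                    fringe.append(child)
        return None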

Tree Search Search Algorithms 1. Breadth First Search 2. Depth First Search 3. A* 52

Depth-First Expands one of the nodes at the deepest level of the tree 53

Depth-First Snapshot 1 1 2 3 Initial Visited Fringe Current Visible Goal 54

Depth-First Snapshot 2 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 55

Depth-First Snapshot 3 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 8 9 56

Depth-First Snapshot 4 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 8 9 57 16 17

Depth-First Snapshot 5 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 8 9 58 16 17

Depth-First Snapshot 6 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 8 9 59 16 17

Depth-First Snapshot 7 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 8 9 60 16 17 18 19

Depth-First Snapshot 8 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 8 9 61 16 17 18 19

Depth-First Snapshot 1 2 3 Initial Visited Fringe Current Visible Goal 4 5 6 7 8 9 10 11 12 13 62 16 17 18 19 20 21 22 23 24 25

Depth First Search: Complete if the tree has finite depth. NOT optimal if we take the first goal found. Computational complexity O(b^m), where b is the branching factor and m is the maximum depth. Space (memory) complexity O(bm).
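
A corresponding depth-first search sketch; it differs from the BFS sketch above mainly in using a LIFO stack for the fringe (names are illustrative).

    def depth_first_search(adjacency, start, goal):
        # Always expand one of the deepest nodes in the tree.
        fringe = [(start, [start])]                # LIFO stack of (node, path so far)
        visited = set()
        while fringe:
            node, path = fringe.pop()              # take the most recently added node
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for child in adjacency.get(node, []):
                if child not in visited:
                    fringe.append((child, path + [child]))
        return None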

Graph Search: Outline Search Algorithms 1. Breadth First Search 2. Depth First Search 3. A* 64

Motion Planning: A* Search There is a family of algorithms called Best-First Search; they always expand the node that currently looks best. A* is a best-first search algorithm: it attempts to choose as the best node the one that will lead to the optimal solution, and to do so in less time.

Motion Planning: A* Search A* is optimal and complete, but can take time. Its complexity depends on the heuristic, but in the worst case it is exponential in the size of the graph.

Motion Planning: A* Search We evaluate a node n for expansion based on the function f(n) = g(n) + h(n), where g(n) = path cost from the start node to n and h(n) = estimated cost of the cheapest path from node n to the goal.

Motion Planning: A* Search Example: cost for one particular node n between n_start and n_goal, with g(n) = 1 and h(n) = 2, so f(n) = g(n) + h(n) = 3.

Motion Planning: A* Search Example: cost f(n) = g(n) + h(n) for each node. [Figure: graph from n_start to n_goal with each node labelled by its g and h values (g=1 h=2, g=2 h=3, g=3 h=2, g=4 h=1, g=1 h=2, g=2 h=1)]

Motion Planning: A* Search The strategy is to expand the node with the cheapest path (lowest f ). This is proven to be complete and optimal, if h(n) is an admissible heuristic. 70

Motion Planning: A* Search Here, an admissible heuristic h(n) is one that never overestimates the cost to the goal. Example: the Euclidean distance.
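
An A* sketch over a weighted graph using a Euclidean-distance heuristic; the neighbors/coords representation is an assumption for illustration.

    import heapq, math

    def a_star(neighbors, coords, start, goal):
        # neighbors[n] -> list of (child, edge_cost); coords[n] -> (x, y).
        # Expands the fringe node with the lowest f(n) = g(n) + h(n).
        def h(n):                                   # admissible: straight-line distance to goal
            return math.dist(coords[n], coords[goal])
        fringe = [(h(start), start)]                # priority queue ordered by f
        g = {start: 0.0}
        parent = {start: None}
        while fringe:
            f, node = heapq.heappop(fringe)
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for child, cost in neighbors.get(node, []):
                g_new = g[node] + cost
                if g_new < g.get(child, float("inf")):   # cheaper path to child found
                    g[child] = g_new
                    parent[child] = node
                    heapq.heappush(fringe, (g_new + h(child), child))
        return None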

Motion Planning: A* Search Search example: Iteration 1 Fringe set = {f_1 = 2.4, f_2 = 3} f=3 n_goal n_start f=2.4

Motion Planning: A* Search Search example: Iteration 2 Fringe set = {f_2 = 3, f_3 = 3} f=3 n_goal n_start f=2.4 f=3

Motion Planning: A* Search Search example: Iteration 3 Fringe set = {f_3 = 3, f_4 = 3.8} f=3.8 f=3 n_goal n_start f=2.4 f=3

Motion Planning: A* Search Search example: Iteration 4 f=3.8 f=3 n_goal n_start f=2.4 f=3

Motion Planning: Final Notes A* is often used as a global planner, while a planner that considers kinematic/dynamic constraints is used for local planning.

MP: Outline 1. Multi-Query PRMs 2. Graph Search 3. Artificial Potential Fields 77

Artificial Potential Fields Potential field: define a function over the free space that has a global minimum at the goal configuration, and follow its steepest descent.

Artificial Potential Fields Electric Potentials The electric potential V_E (J C^-1) created by a point charge Q, at a distance r from the charge (relative to the potential at infinity), can be shown to be V_E = Q / (4πε_0 r)

Artificial Potential Fields Electric Fields The electric field intensity E is defined as the force per unit positive charge that would be experienced by a point charge. It is obtained by taking the negative gradient of the electric potential: E = -∇V_E

Artificial Potential Fields Electric Potential Fields Different arrangements of charges can lead to various fields 81

Artificial Potential Fields In APFs, the robot is treated as a point under the influence of an artificial potential field. Electrical analogy: the generated robot movement is similar to an electric charge moving under the force of an electric field. Mechanical analogy: the generated robot movement is similar to a ball rolling down a hill.

Artificial Potential Fields In APFs, goals generate attractive forces and obstacles generate repulsive forces.

Artificial Potential Fields For a given configuration space and desired goal, place potentials on obstacles and goals q goal 84 q

Artificial Potential Fields For a given configuration space and desired goal, place potentials on obstacles and goals q goal 85 q

Artificial Potential Fields For any robot configuration q, the forces felt by the robot can be calculated to steer the robot towards the goal. [Figure: configuration q with attractive force F_attraction toward q_goal]

Artificial Potential Fields [figure-only slide]

Potential Field Generation Given potential functions U, generate the artificial force field F(q): sum all potentials (repulsive and attractive), then differentiate to determine the forces. Note: the functions must be differentiable. F(q) = -∇U(q) = -∇U_att(q) - ∇U_rep(q) = (-∂U/∂x, -∂U/∂y)

Attractive Potential Fields Parabolic function of the Euclidean distance ρ_goal(q) = ||q - q_goal|| to the goal: U_att(q) = (1/2) k_att ρ_goal²(q). The attracting force converges linearly towards 0 at the goal: F_att(q) = -∇U_att(q) = -k_att (q - q_goal)

Repulsive Potential Fields Generate a barrier around the obstacle; it does not influence the robot if the robot is far from the obstacle: U_rep(q) = (1/2) k_rep (1/ρ(q) - 1/ρ_0)² if ρ(q) ≤ ρ_0, and 0 if ρ(q) > ρ_0, where ρ(q) = ||q - q_obst|| is the minimum distance to the obstacle

Repulsive Potential Fields The field is positive or zero and tends to infinity as q gets closer to the obstacle: F_rep(q) = -∇U_rep(q) = k_rep (1/ρ(q) - 1/ρ_0) (q - q_obst)/ρ³(q) if ρ(q) ≤ ρ_0, and 0 if ρ(q) > ρ_0

Artificial Potential Fields Given the current configuration of the robot q: 1. Sum the total force vector F(q) generated by the potential fields. 2. Set the desired robot velocity (v, w) proportional to the force F(q). [Figure: F_attraction toward q_goal, F_repulsion from an obstacle, and the resulting F_total at q]
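
A sketch of this force computation for a 2D configuration, using the attractive and repulsive fields defined above; the gain values and the obstacle representation are illustrative assumptions.

    import math

    def apf_force(q, q_goal, obstacles, k_att=1.0, k_rep=0.5, rho_0=1.0):
        # Total artificial force at configuration q = (x, y):
        # attractive toward q_goal, repulsive from each obstacle within distance rho_0.
        # Attractive force: F_att = -k_att * (q - q_goal)
        fx = -k_att * (q[0] - q_goal[0])
        fy = -k_att * (q[1] - q_goal[1])
        for q_obst in obstacles:
            rho = math.dist(q, q_obst)             # distance to this obstacle point
            if 0.0 < rho <= rho_0:
                # Repulsive force: k_rep * (1/rho - 1/rho_0) * (q - q_obst) / rho^3
                scale = k_rep * (1.0 / rho - 1.0 / rho_0) / rho**3
                fx += scale * (q[0] - q_obst[0])
                fy += scale * (q[1] - q_obst[1])
        return fx, fy

    # The desired robot velocity (v, w) can then be set proportional to this force,
    # e.g. v from the force magnitude and w from the heading error to the force direction.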

Artificial Potential Fields Local minima: if objects are not convex (i.e. concave), there exist situations where several minimal distances exist, which can result in oscillations. The method is therefore not complete.

Artificial Potential Fields Extended Potential Fields Many modifications to potential fields have been made in order to improve completeness and optimality. Example: orientation-based potentials can increase the potential depending on the orientation of the robot. [Figure: robot, object, and the resulting repulsion force]

Artificial Potential Fields Extended Potential Fields Also, rotational fields in one direction can be used. [Figure: linear source vs. rotational source]

Artificial Potential Fields Example: http://www.youtube.com/watch?v=r9fd7p76zjs 96