D2.3 Safety evaluation and standardization


Project Acronym: ColRobot
Project full title: Collaborative Robotics for Assembly and Kitting in Smart Manufacturing
Project No:
Call: H2020-ICT-2015
Coordinator: ENSAM
Project start date: February 1, 2016
Project duration: 36 months

Abstract

This public deliverable describes the evaluation of the safety components and presents work relevant for standardization bodies.

Document control sheet

Title of Document: Safety evaluation and standardization
Work Package: WP2 Safety and standardization
Last version date:
Status: Final
Document Version: v.8
File Name: ColRobot D2.3
Dissemination Level: Public
Partner Responsible: IFF

Versioning and contribution history

| Version | Date | Revision Description | Partner |
|---|---|---|---|
| v.1 | 15/12/2017 | First draft version | IFF |
| v.2 | | Second draft with input from UC and CITC | UC, CITC |
| v.3 | | Revised version | IFF |
| v.4 | | Revisions | IFF |
| v.5 | | Final version ready for internal review | IFF |
| v.6 | | Coordinator review | ENSAM |
| v.7 | | Changes from Technaid and formatting | Technaid, IFF |
| v.8 | | Changes from UC and IFF | UC, IFF |

Disclaimer

This document is provided "as is" with no warranties whatsoever, including any warranty of merchantability, non-infringement, fitness for any particular purpose, or any warranty otherwise arising out of any proposal, specification or sample. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein. The members of the ColRobot project do not accept any liability for actions or omissions of ColRobot members or third parties and disclaim any obligation to enforce the use of this document. This document reflects only the authors' view and the Commission is not responsible for any use that may be made of the information it contains. This document is subject to change without notice.

Index

1. Introduction
2. Safety evaluation
2.1. Experimental evaluation of the workspace monitoring system for safety (Objective, Description, Results)
2.2. Evaluation of the approach speed of humans (CITC) (Objective, Description, Results)
2.3. Evaluation of hand speed estimation (UC) (Objective, Description, Results)
3. High resolution camera with structured aperture evaluations
3.1. Evaluation of soft safety functionality (Objective, Description, Results)
3.2. Evaluation of process support functionality (Objective, Description, Results)
3.3. Discussion of the high resolution camera system with a structured aperture (Competing requirements, Outlook)
4. Standardization
4.1. Relevant standards
4.2. Standardization-relevant issues
4.3. Activities to coordinate with standards
5. Summary

1. Introduction

In this document, we present testing results of the safety system in use in ColRobot, including specific results which will contribute to standards and best practices. The safety components used and developed in the ColRobot project are specific to mobile manipulators and seek to advance the state of the art regarding human detection and dynamic safety areas. This is particularly important for mobile manipulators, as a high degree of flexibility is needed to accommodate the robot moving around in a given workspace. As a recap, the overall objectives of work package 2 can be summarized as follows:

- Set up a system for human and environment detection
- Detect human operators working in the robot's workspace and track their position relative to the robot in order to influence the robot's velocity
- Provide dynamic safety areas that change in size and position according to the risks and robot states, and investigate how different safeguarding methods can be combined for specific applications
- Evaluate the systems according to international standards

Based upon the risk analysis of the specific use cases, a number of different safety sensors safeguard the mobile platform, the manipulator, and the tool and/or handled parts during specific phases of work. Put simply, laser scanners safeguard the platform during normal platform motion. Additionally, a workspace monitoring system has been adapted to work on the various ColRobot demonstrators in order to safeguard different tools. To support the workspace monitoring system and to better calculate the minimum required safety distance at any time, a combination of UWB and IMU sensors is used to determine the speed and position of human operators and their arms in the workspace relative to the robot. In addition to the safety functionality offered by the workspace monitoring system, further soft safety features have been implemented to support the robot and to maintain high system availability and throughput. An overview of the components of the workspace monitoring system and the functionalities offered by each is shown in Figure 1.

Figure 1: Overview of the workspace monitoring hardware and the functionalities offered by the different components

This document describes the tests and evaluations carried out on the safety systems and on the novel soft safety functionalities of the workspace monitoring system. In particular, the following evaluations have been carried out and will be described here:

- Evaluation of the detection of intruding objects for hard safety by the workspace monitoring system under various lighting conditions according to IEC 61496-4
- Evaluation of the performance of the UWB sensors for measuring the position and speed of humans relative to the mobile manipulator
- Evaluation of the performance of the IMU sensors for measuring the position and speed of human arms
- Evaluation of the performance of the soft safety functionality of the workspace monitoring system for the following tasks:
  o Detection and classification of humans for soft safety applications
  o Measurement of objects for supporting bin-picking or related tasks

Furthermore, this document describes which standards this work could interest and offers insights, questions, and input for specific standardization committees.

2. Safety evaluation

In this section we describe the evaluations carried out related to safety functionalities.

2.1. Experimental evaluation of the workspace monitoring system for safety

An experiment was carried out to test the limitations of the monitoring system under adverse lighting, as described in IEC 61496-4, Safety of machinery - Electro-sensitive protective equipment - Part 4: Particular requirements for equipment using vision based protective devices (VBPD).

Objective

The objective of the experiment is to determine what effect different adverse lighting conditions have on the workspace monitoring system. In particular, we want to know whether the system always moves into a failsafe state, or whether lighting conditions can cause a false negative (a non-alarm despite an intrusion). Furthermore, we aim to determine the limits of external lighting for the current set-up (how much extra light is too much, and which kinds of lighting are challenging).

Description

The test set-up utilized the current demonstrator of the ColRobot workspace monitoring system available at the IFF facilities. This includes a table and the workspace monitoring system mounted approximately 1.5 m above the ground and at a distance of approximately 1.5 m from the table.

Figure 2: Test set-up with workspace monitoring system overseeing the ColRobot work table

For testing purposes we mounted an aluminium rod with a diameter of 12 mm to the table (Figure 3). As per the standard, the rod was colored matte black as a worst case. We then conducted two sets of tests, one with the stationary rod in the workspace, and a second control without any objects in the workspace. A virtual safety zone with dimensions of 50 cm x 50 cm x 60 cm to be monitored was manually positioned on the table surface, so that the stationary rod was either inside the safety zone or outside it (Figure 4).

Figure 3: Test rod with diameter 12 mm attached to a stationary flange for tests (left) and approximate size of the monitored zone (right)
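Conceptually, the hard safety test reduces to checking whether any measured 3D point lies inside this monitored box. A minimal sketch of such a zone test (our own illustration with assumed zone coordinates in the camera frame, not the certified detection logic of the monitoring system):

```python
# Illustrative sketch (ours): the virtual safety zone is an axis-aligned
# 50 x 50 x 60 cm box on the table; an intrusion is any measured 3D point
# that falls inside it. Zone corner coordinates are assumed values.
ZONE_MIN = (0.00, 0.00, 0.00)   # one corner of the zone, in meters (assumed)
ZONE_MAX = (0.50, 0.50, 0.60)   # opposite corner: 50 x 50 x 60 cm

def intrusion(points):
    """True if any (x, y, z) point lies inside the monitored zone."""
    return any(all(lo <= v <= hi for v, lo, hi in zip(p, ZONE_MIN, ZONE_MAX))
               for p in points)

print(intrusion([(0.25, 0.25, 0.30)]))  # True: e.g. the 12 mm rod inside the zone
print(intrusion([(0.90, 0.25, 0.30)]))  # False: empty zone
```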

Figure 4: Tests with the stationary rod in the monitored safety zone (left) and with the monitored safety zone empty (right)

During our tests we logged the camera measurements and the output signal from the workspace monitoring system, which would be passed on to the safety circuit in the ColRobot systems.

Figure 5: Exemplary adverse lighting set-up (with incandescent lamp)

The following table shows the specifications of the different lights used to simulate adverse lighting conditions.

Table 1: Specification of lights used to simulate adverse lighting conditions

| Type of illumination for testing | Specifications | Comments |
|---|---|---|
| a) Incandescent light | Hedler H25s halogen (quartz) lamp; rated power: W; rated voltage: 230 V | The light intensity was adjusted by changing the distance of the interfering light to the test set-up. |
| b) Fluorescent light | Two different light sources. The first was fixed 1200 mm above the workspace and offered an additional 600 lux of lighting: size T8 x 600 mm (25 mm diameter); rated power: 55 W, 230 V; colour temperature: K. The second fluorescent light source was on a tripod: size T8 x 600 mm (25 mm diameter); rated power: 18-20 W; colour temperature: K. | The light intensity was adjusted by changing the distance of the interfering light (on the tripod) to the test set-up. |
| c) Flashing beacon | Nikon SB600 | Manually triggered at approximately 1 Hz |
| d) Stroboscopic light | Cameo Thunder Wash 100 RGB, RGB LEDs, each LED with 0.2 W | Manually programmed with RGB channels at full intensity (white light) and with a 1 Hz strobe frequency |

The ambient lighting was measured at 400 lux, and the accuracy of the light intensity measurement was +/- 5%. The set-up was in a laboratory with no direct sunlight, on an overcast winter day. The tests were carried out such that the light was switched on/off during the measurement, so that not only the stationary behaviour at a specific light intensity was measured, but also the system reaction during sudden changes in light type and intensity. As we will see in the results, besides the overall limit on how much incandescent light the system is able to handle, most errors were due to a sudden change in the light, which resulted in a short false measurement.

Each individual test was repeated 10 times. The results are listed with a result and further information about how many times out of 10 that result was observed (e.g. "No (9/10)" means that in nine out of the ten tests no intrusion was detected).

Results

Table 2 shows the test sequence and the test results. Of particular importance was the transition from one lighting situation to another. We observed that sudden changes in light were often more of a challenge than static lighting conditions. Therefore, for all cases except those with stroboscopic light, there are separate results for the transitions (turning the interfering light on and off) and the static situation (interfering light is on).

Table 2: Test sequence and results

| Test number and description | Lighting conditions | Without test piece (nothing should be detected): intrusion? | With test piece (intrusion should be detected): intrusion? |
|---|---|---|---|
| Q-Tests - Normal operation | | | |
| Test 1: switch on incandescent light with 250 lux increase over ambient light | 400 lux -> 690 lux | No (10/10) | Yes (10/10) |
| incandescent light on with 250 lux increase over ambient light | 690 lux | No (10/10) | Yes (10/10) |
| switch off interfering light | 690 lux -> 400 lux | No (9/10) | Yes (10/10) |
| Test 2: switch on flashing beacon placed at outer limit of sensing zone, at least 3 m from optical axis of sensor and 2 m in height | 800 lux | No (10/10) | Yes (10/10) |
| Test 3: switch on fluorescent light sources (with uniform light intensity increase of 250 lux over ambient light) | 400 lux -> 640 lux | No (10/10) | Yes (10/10) |
| fluorescent light on | 640 lux | No (10/10) | Yes (10/10) |
| switch off interfering light | 640 lux -> 400 lux | No (10/10) | Yes (10/10) |
| Test 4: switch on incandescent light source with a round object in front of the light to cast a shadow on the passive pattern (<50% of the area viewed by the projection system) | 789 lux | No (7/10) | Yes (10/10) |
| interfering light with shadow on | 789 lux | No (10/10) | Yes (10/10) |
| switch off interfering light | 789 lux -> 400 lux | No (9/10) | Yes (10/10) |
| R-Test - Failure to danger caused by indirect light (pattern) | | | |
| Test 5: switch on incandescent light source (should produce a light increase of 1000 lux over 500 lux ambient light) | 400 lux -> 1400 lux | Yes (10/10) | Yes (10/10) |
| incandescent light source on | 1400 lux | No (10/10) | Yes (10/10) |
| switch off interfering light | 1400 lux -> 400 lux | Yes (9/10) | Yes (10/10) |
| Test 6: switch on stroboscopic light source | 400 lux | No (10/10) | Yes (10/10) |
| Test 7: switch on fluorescent light sources (should produce a uniform light intensity increase of 500 lux over 500 lux ambient light) | 400 lux -> 1000 lux | No (9/10) | Yes (10/10) |
| fluorescent light on | 1000 lux | No (10/10) | Yes (10/10) |
| switch off interfering light | 1000 lux -> 400 lux | No (10/10) | Yes (10/10) |
| S-Test - Failure to danger caused by direct light interference (sensor) | | | |
| Test 8: switch on incandescent light source (should produce a light increase of 3000 lux over 500 lux ambient light); stroboscopic light source placed at outer limit of sensing zone, at least 3 m from optical axis of sensor and 2 m in height | 400 lux -> 3500 lux | Yes (10/10) | Yes (10/10) |
| incandescent light and stroboscopic light on | 3500 lux | Yes (10/10) | Yes (10/10) |
| switch off interfering light | 3500 lux -> 400 lux | Yes (10/10) | Yes (10/10) |
| Test 9: switch on fluorescent light sources (should produce a uniform light intensity increase of 1000 lux over ambient light) | 400 lux -> 1400 lux | Yes (6/10) | Yes (10/10) |
| fluorescent light on | 1400 lux | No (10/10) | Yes (10/10) |
| switch off interfering light | 1400 lux -> 400 lux | No (10/10) | Yes (10/10) |
| T-Test - Failure to danger due to fading ambient light | | | |
| Test 10: reduce ambient light to 250 lux | 250 lux | No (10/10) | Yes (10/10) |
| Test 11: lights completely out | 5 lux | No (10/10) | Yes (10/10) |

Two main insights were gained during the tests.

1) There is an absolute limit to how much additional incandescent light the system is able to handle. This was measured at approximately 1100 lux over ambient light. The result was very reproducible (the light intensity was adjusted by manually moving the incandescent light source closer to and further away from the table).

2) The system reacts badly to quick changes in lighting (both fluorescent in Test 9 and incandescent in Test 5), resulting in false positive detections.

From a safety standpoint, it is important that any failure leads to a failsafe mode. Therefore, a false positive is less critical than a false negative (a situation where the workspace monitoring system does not detect a real intrusion). There were no false negative situations. False positives occurred either during a transition in the lighting (a sudden change in either direction) or, in the case of Test 8, when there was generally too much light in the scene.
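The bookkeeping behind entries such as "No (9/10)" can be illustrated with a short sketch (our own, not the test software): over ten repetitions of one lighting condition, spurious detections without the test piece are false positives, while missed detections with the test piece would be false negatives.

```python
# Minimal sketch (our illustration) of tallying one test condition.
# With the test piece present, a missed detection is a false negative;
# without it, a spurious detection is a false positive (the failsafe,
# hence less critical, failure direction).
def summarize(intrusion_reported, test_piece_present):
    n = len(intrusion_reported)          # number of repetitions (10 per test)
    yes = sum(intrusion_reported)        # trials in which an intrusion was reported
    if test_piece_present:
        return f"Yes ({yes}/{n}), false negatives: {n - yes}"
    return f"No ({n - yes}/{n}), false positives: {yes}"

# Test 1, "switch off interfering light", empty zone: one spurious detection.
print(summarize([False] * 9 + [True], test_piece_present=False))
# -> "No (9/10), false positives: 1"
```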

2.2. Evaluation of the approach speed of humans (CITC)

Objective

The objective of the experiment is to evaluate the performance of the UWB sensors for measuring the speed and position of humans relative to the robot in a shared workspace. Relevant KPIs include:
- Accuracy
- System frequency
- Latency for sending the signal to the safety controller

Description

A UWB geolocation solution (provided by the Ubisense company) is deployed within the ENSAM platform, composed of 6 UWB antennas, 1 controller, and UWB tags: 2 on the mobile robot, to identify its direction of movement, and 1 in the helmet of the human operator. A simple schematic drawing of this is shown in Figure 6.

Figure 6: Schematic drawing of the UWB sensor set-up with multiple antennas in the room, two receivers on the robot, and one in the helmet of the human operator

Two positioning algorithms are used, AOA (angle of arrival) and TDOA (time difference of arrival), to obtain better accuracy and reliability of the geolocation information. The theoretical spatial resolution is between 150 and 500 mm and the frequency of measurements reaches 100 Hz. We developed a web visualization interface to supervise the position of the mobile robot and the human operator (Figure 7) and to evaluate their instantaneous and average speeds. We also developed a web API able to send instantaneous geolocation information of the human and the robot to other applications / the safety controller:

[operator, x=., y=., time=.]
[Robot_Front, x=., y=., time=.]
[Robot_Back, x=., y=., time=.]
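To illustrate how such messages can be used, the following sketch (our own; it does not use the actual Ubisense API) estimates the operator's speed from two consecutive fixes and derives a simplified protective separation distance in the spirit of ISO/TS 15066 speed-and-separation monitoring. All timing and allowance values are assumptions for illustration.

```python
import math

def speed_mps(fix1, fix2):
    """Instantaneous speed from two (x, y, t) fixes, in meters and seconds."""
    (x1, y1, t1), (x2, y2, t2) = fix1, fix2
    return math.hypot(x2 - x1, y2 - y1) / (t2 - t1)

def protective_distance_m(v_human, v_robot,
                          t_react=0.1,      # sensing + processing reaction time (assumed)
                          t_stop=0.3,       # robot stopping time (assumed)
                          d_stop=0.15,      # robot stopping distance (assumed)
                          c_allowance=0.2,  # intrusion allowance (assumed)
                          z_position=0.5):  # UWB position uncertainty (500 mm, see Results below)
    """Simplified protective separation distance: human approach during
    reaction + stopping time, robot motion during reaction time, robot
    stopping distance, plus allowances for intrusion and position error."""
    return (v_human * (t_react + t_stop) + v_robot * t_react
            + d_stop + c_allowance + z_position)

# Example: two operator fixes 100 ms apart -> 1.6 m/s walking speed.
operator = [(2.00, 0.00, 0.00), (2.00, 0.16, 0.10)]
v_h = speed_mps(*operator)
print(protective_distance_m(v_h, v_robot=0.5))  # ~1.54 m required separation
```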

Figure 7: Web visualization interface to supervise the position of the mobile robot and the human operator

Different tests and experiments were carried out by the CITC and Ubisense teams to benchmark the UWB geolocation solution. The main results are presented in the following.

Results

For the UWB solution deployed within the ENSAM platform, the frequency of the tags is defined as 30 Hz maximum. The web visualization interface we developed shows an accuracy of 500 mm. A second benchmark platform was developed by Ubisense to determine the accuracy for a running person within an area (Figure 8). The frequency of the tags is defined as 100 Hz and the accuracy is about 200 mm. A video presenting this test is available.

Figure 8: Test of the UWB geolocation solution with a running person, evaluating accuracy and latency

Finally, we tested a geofencing solution developed by Ubisense, dedicated to the traceability of objects in bins or on shelves (Figure 9). The frequency of the tags is defined as 100 Hz and the accuracy is less than 100 mm (the bins are spaced 100 mm apart). A video presenting this test is available.

Figure 9: Test of the UWB geofencing solution, evaluating accuracy and latency

The latency required to display the geolocation information (or to send this information to a controller) was verified for all three tests. The Ubisense geolocation solutions proved very efficient in terms of latency, even at high tag speeds (a running person, or an object thrown from one place to another).

In conclusion, our tests showed that the UWB solution developed by Ubisense is efficient and reliable and can be used to evaluate the safety zone of the human operator, on the condition that the frequency of the tags is sufficiently high; the lifetime of the tag batteries is of course inversely proportional to the tag frequency.

2.3. Evaluation of hand speed estimation (UC)

Objective

The objective of the experiment is to evaluate the performance of the IMU sensors for measuring the position (and the speed derived from positional data) of human arms relative to their bodies. Relevant KPIs include:
- Accuracy of the arm pose estimate in space in different configurations;
- System speed (including communication latencies).

Description

Human arm position estimation is required to avoid collisions in human-robot collaborative operations, as in the ColRobot prototype (Figure 10). Knowledge of the human arm position can be used to speed up robot task execution and at the same time improve safety conditions for human workers. To capture human arm movement, the Tech-MCS V3 IMU system from Technaid is used. The Tech-MCS V3 incorporates 3D inertial sensors called "Tech-IMU" (each containing 3D accelerometer, gyroscope, magnetometer and temperature sensors) and a hub device called "Tech-HUB V3" that organizes and sends the data obtained from the Tech-IMUs to a PC. Data is transmitted by USB cable or by Bluetooth. These IMUs can be attached directly to the human body. Five Tech-IMUs are used to capture arm movements, placed on different body segments: left forearm, left arm, right forearm, right arm, and chest.

Several kinds of data can be extracted from the IMU system; in this case the system provides the 3D orientation of each Tech-IMU, which is then transformed relative to the Tech-IMU placed on the chest of the human. Taking into account the human torso kinematics, the distance between the human chest and each human hand can be obtained. Nevertheless, these distances are subject to errors. There are two main sources of error: those related to the estimation of the IMU orientation provided by each Tech-IMU, and those related to the rough estimation of the human's body measures (as well as the differences from human to human in terms of body dimensions). The body measurements we considered are: (1) the distance between the human belly and the human shoulder, (2) the length of the human arm (the distance between shoulder and elbow), and (3) the length of the human forearm (the distance between elbow and wrist). Note that the length of the human hand was not taken into account in this approach. The velocity is estimated by differentiating the estimated arm position. From the human upper-body kinematics, the following equation is obtained to calculate the positioning error (e_max) when estimating the position of the human wrist:

e_max = θ_max (d_3 + d_4 + d_7 + d_10)

where:
- θ_max is the maximum angular error of each Tech-IMU (in radians);
- d_10 is the forearm length (distance between the elbow and the wrist);
- d_7 is the arm length (distance between the shoulder and the elbow);
- d_3 is the distance between the belly and the throat;
- d_4 is the distance between the throat and the shoulder.

These variables are illustrated in Figure 11.

Figure 10: Schematic drawing of the use of IMUs to track the speed of human arms, to be combined with information from the UWB sensors for improved human position and speed tracking.
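The following sketch (our illustration, not Technaid code) evaluates this small-angle worst-case bound numerically with assumed body segment lengths, and shows the finite-difference speed estimate mentioned above:

```python
import math

# Worst-case wrist position error from the per-IMU angular error, using the
# small-angle bound e_max = theta_max * (d3 + d4 + d7 + d10) given above.
# Segment lengths below are assumed values for a person of ~1.75 m.
theta_max = math.radians(0.7)               # 0.7 deg RMS per Tech-IMU (static, per vendor)
d3, d4, d7, d10 = 0.50, 0.20, 0.30, 0.27    # belly-throat, throat-shoulder, arm, forearm (m)

e_max = theta_max * (d3 + d4 + d7 + d10)
print(f"worst-case wrist error: {e_max * 100:.1f} cm")  # ~1.6 cm, consistent with ~2 cm observed

# Wrist speed by differentiating successive position estimates at 25 Hz.
def wrist_speed(p_prev, p_curr, dt=1 / 25):
    """Speed (m/s) between two (x, y, z) wrist estimates dt seconds apart."""
    return math.dist(p_prev, p_curr) / dt

print(wrist_speed((0.30, 0.10, 0.40), (0.31, 0.12, 0.41)))  # m/s
```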

Figure 11: Schematic drawing illustrating the error in each Tech-IMU (with the segment lengths d_3, d_4, d_7, d_10 and the resulting wrist error e_max)

Results

The provider of the IMU system estimates the angular error θ_max of each Tech-IMU at 0.7 degrees RMS in static conditions. We conducted experiments to evaluate the real system error by placing humans in known configurations (in static conditions) and comparing the achieved minimum distance with the ground truth. Humans between 1.65 and 1.85 meters tall, whose body measures differed by up to 5%, were considered in these experiments. The wrist position was estimated with an average error of about ±2 centimetres. This error is affected by the IMU angle errors and the human physiognomy. We receive human wrist position estimates at 25 Hz.

3. High resolution camera with structured aperture evaluations

As described in the introduction, the workspace monitoring system combines a number of features in one system. In this section, we evaluate the functionalities offered by the high resolution camera with a structured aperture, focusing on the tasks of human detection and process support.

3.1. Evaluation of soft safety functionality

In this section we describe the evaluation of the soft safety functionality of the workspace monitoring system, namely the ability of the system to detect humans and distinguish humans from other objects in the workspace. This is an evaluation of the software we use with the camera system; our approach uses RGB images and deep learning.

For the experiment, we used the mobile robot Annie equipped with a head-mounted camera system similar to the one we developed in ColRobot; since we are focusing on the software aspects and the image quality is comparable, the results transfer to the ColRobot system.

Objective

The objective of the experiment is to determine the performance of the human detection algorithm. We wanted to know whether it is possible to use an arbitrary dataset of persons for the detection of persons and human body parts in our specific laboratory and production environment. The set-up is meant to test variations in the number of persons, their clothes, head cover, gloves, and occlusion of body parts. For the evaluation, we determined the true positives and false positives based on the Intersection over Union of the detected bounding box and a ground truth box. The relevant KPIs include the hit rate, the precision, and the mean average precision.

Description

For the human detection, we used the Tensorflow Object Detection API, which provides several models of neural networks and detection methods. For this application, we chose the ResNet101 network, winner of the 2015 COCO dataset challenge, and the Faster-RCNN detection architecture. For the Tensorflow API, a Faster-RCNN-ResNet101 model with weights pre-trained on the COCO dataset is available. To develop the soft safety functionality, we labeled a subset of the ImageNet dataset and created bounding boxes for safety-relevant categories. These seven categories are person, head, body, arm, hand, leg and foot, and they can occur multiple times in one image. Figure 12 shows example images of the labeled dataset, illustrating the arbitrary content regarding people. In total, we labeled 200 images; the number of objects per category ranges from 511 in the head category down to only 154 foot objects (Table 3). We split the labeled dataset into a training set and a test set, which were processed with the transfer-learning method of the Tensorflow Object Detection API. The training was run for 100,000 steps.

Figure 12: Variation in training and testing datasets (panels a-c)
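For illustration, inference with such a fine-tuned Faster-RCNN-ResNet101 model follows the standard pattern of the TF1-era Tensorflow Object Detection API. The sketch below is ours, with a hypothetical checkpoint path; the tensor names are the API's standard ones for frozen detection graphs.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x

GRAPH_PB = "faster_rcnn_resnet101_colrobot/frozen_inference_graph.pb"  # hypothetical path
CATEGORIES = {1: "person", 2: "head", 3: "body", 4: "arm", 5: "hand", 6: "leg", 7: "foot"}

# Load the frozen inference graph once.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PB, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

def detect(image, score_threshold=0.5):
    """Return [(label, score, box)] for one H x W x 3 uint8 image.
    Boxes are [ymin, xmin, ymax, xmax] in normalized image coordinates."""
    with tf.Session(graph=graph) as sess:  # a sketch; reuse the session in real code
        out = sess.run(
            {k: graph.get_tensor_by_name(k + ":0")
             for k in ("detection_boxes", "detection_scores", "detection_classes")},
            feed_dict={"image_tensor:0": image[None, ...]})
    return [(CATEGORIES[int(c)], float(s), b)
            for b, s, c in zip(out["detection_boxes"][0],
                               out["detection_scores"][0],
                               out["detection_classes"][0])
            if s >= score_threshold]
```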

Table 3: Partitioning of the dataset

| Category | Subset for training | Subset for testing | Total |
|---|---|---|---|
| person | | | |
| head | | | 511 |
| body | | | |
| arm | | | |
| hand | | | |
| leg | | | |
| foot | | | 154 |
| Images total | | | 200 |

For our experiment we used the mobile robot Annie equipped with a camera system similar to the one developed for ColRobot (Figure 13). The ground truth dataset was hand-labelled (Table 4).

Figure 13: Mobile robot Annie equipped with a head-mounted camera system similar to the ColRobot workspace monitoring system.

Table 4: Labelled data ground truth

| Category | Objects |
|---|---|
| person | 246 |
| head | 216 |
| body | 316 |
| arm | 305 |
| hand | 216 |
| leg | 250 |
| foot | 262 |
| Objects total | 1811 |
| Images total | 184 |

We captured different types of images focusing on the following variations:
- single and multiple persons,
- changes in illumination,
- partly hidden persons and body parts (gloves, helmets).

Moreover, we used different clothes (color of jackets, lab coat) and varied the body posture (sitting, standing, walking). Figure 14 illustrates a subset of these images.

Figure 14: Image subset of experiment (panels a-f)

We determined the performance of the algorithm on these datasets by calculating the Intersection over Union (IoU) of the bounding box of each detected object and a ground truth object. If the IoU is greater than a certain threshold, the detection is counted as a true positive (TP), otherwise as a false positive (FP). If the detector misses an object which is in the ground truth, it is counted as a false negative (FN). We used 0.5 as the threshold for the IoU. For the evaluation, we determine the hit rate (R) and the precision (P) for each class, and the mean average precision (MAP) of the classifier. The hit rate indicates whether relevant objects are detected (and not left out), whereas the precision describes whether the detection of an object is relevant. We computed these metrics as follows:

Hit rate R of category c: R(c) = TP(c) / (TP(c) + FN(c))

Precision P of category c: P(c) = TP(c) / (TP(c) + FP(c))

Mean average precision: MAP = (1/|C|) Σ_{c ∈ C} P(c)
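A minimal sketch (our illustration) of this evaluation procedure, with greedy IoU matching of detections to ground truth boxes:

```python
from collections import defaultdict

def iou(a, b):
    """IoU of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def evaluate(detections, ground_truth, threshold=0.5):
    """detections / ground_truth: {class: [box, ...]} over the test images."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for c, det_boxes in detections.items():
        unmatched = list(ground_truth.get(c, []))
        for d in det_boxes:
            best = max(unmatched, key=lambda g: iou(d, g), default=None)
            if best is not None and iou(d, best) > threshold:
                tp[c] += 1
                unmatched.remove(best)        # each ground-truth box matches once
            else:
                fp[c] += 1
    for c, gt_boxes in ground_truth.items():
        fn[c] = len(gt_boxes) - tp[c]         # remaining unmatched ground truth
    classes = set(ground_truth) | set(detections)
    hit_rate = {c: tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0 for c in classes}
    precision = {c: tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0 for c in classes}
    map_score = sum(precision.values()) / len(classes)
    return hit_rate, precision, map_score
```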

Results

The method is able to detect objects and classify them into the seven categories. We obtained correct detections mostly for persons and heads (Figure 15 (a) and (b)), although the lighting was at times challenging (e.g. humans not well lit, or backlighting; Figure 15 (d)). The classifier is, however, erroneous in some images: it did not detect all categories (Figure 15 (c): arms and hands) and misclassifies objects which are not in the ground truth (Figure 15 (e): the robotic arm is not a human arm). The overall performance for the detection of the seven categories is measured by the MAP using an IoU of 0.5 (Table 5). Our results show that the detection of persons is stable in most images (Table 5). Furthermore, the algorithm detects the head and body more reliably, but for the other body parts it has a lower performance.

Figure 15: Image subset of experimental results (panels a-f)

Table 5: Detection results (IoU = 0.5)

| Category | TP | FN | FP | Hit rate | Precision |
|---|---|---|---|---|---|
| person | | | | | |
| head | | | | | |
| arm | | | | | |
| hand | | | | | |
| body | | | | | |
| leg | | | | | |
| foot | | | | | |
| Objects total | | | | | |
| MAP | | | | | |

When considering safety, a false negative signal is more critical than a false positive. However, given that the rationale behind the human detection here is to avoid unnecessary machine stops, false positive classifications can be considered the more critical case and the one to be minimized. It is therefore a good result that the number of false positive signals is much lower than the number of false negatives, especially for the extremities (arms, hands, legs, and feet). We can also see that the detection of the head, the body, and of a person as a whole is much better than the detection of the extremities. This is due to the low contrast between the extremities and the environment in our tests, as well as the much larger variation in the poses the extremities can take in a given picture. This points to a need for a larger training dataset for these body parts. However, given our goal of soft safety, detection of a whole person is sufficient. This means that the image needs to be large enough that the body and head can be captured (a requirement on the optics).

In the future we will test the results against another neural network (as a benchmark), and during testing at end-user sites we will also be able to generate more training data. Since light and environmental conditions play a large role in detection, it will also be good to have tests in end-user facilities. In particular, we believe that these conditions will be more favourable in the Thales facilities, with one large room (vs. smaller rooms with different lighting at the IFF facilities) and without daylight (which can lead to strong variations in lighting).

Given that the high resolution camera with a structured aperture is used both for soft safety functionality and for process support, an overall conclusion regarding the complete camera system is discussed in Section 3.3.

3.2. Evaluation of process support functionality

Objective

The objective of the experiment is to determine the performance of the high resolution camera with a structured aperture for supporting the picking processes. We originally chose a high resolution camera with a structured aperture in order to detect objects and their positions with the same camera system that is already being used for safety purposes and which has a good view of the scene. The functional principle of the camera is described in detail in an earlier deliverable.

Description

To test the measurement error, we took images of a-priori known objects with the high resolution camera with a structured aperture. The objects had been previously measured by hand.

The objects were placed on a table at a distance of approximately 1 m from the camera system. This distance corresponds to the planned distance between the camera and the objects during normal operation. The images were taken with ambient lighting of approximately 800 lux, which also corresponds to the lighting conditions to be expected. The objects were moved to different orientations and positions on the table, and in total over 10 images were taken. The images were processed according to the stereo techniques used, and a measurement of a length of the object was taken. These measurements were compared to the ground truth measurements to determine the error.

Results

For the first test, a wooden box with side lengths of 172 x 172 mm was placed on the table surface ca. 1000 mm from the camera system (Figure 16). Given the basis length of 24 mm from the calibration of the high resolution camera with a structured aperture, and given the 50 mm lens used, we expected quite a high error in the depth component of a measurement. The high resolution camera with a structured aperture measured a side length of 138 mm (averaged over multiple measurements), resulting in an error of approximately 20%.

Figure 16: Wooden box with length and width of 172 mm

A further measurement was carried out with an aluminium profile (Figure 17) with a side length of 200 mm, also placed at a distance of ca. 1000 mm from the camera system. Here the average measurement was 215 mm, resulting in an error of approximately 5%.
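These errors are consistent with the usual stereo depth-sensitivity relation dz ≈ z²·Δp / (f·b). A rough sketch (our own; the ~10 µm disparity resolution is an assumption, since the sensor's pixel pitch is not stated in this document):

```python
# Depth sensitivity of a stereo measurement: dz = z^2 * dp / (f * b).
# The 24 mm basis length, the 50 mm lens, and the ~1000 mm working distance
# come from the text; the ~10 um disparity resolution is our assumption.
def depth_resolution_mm(z_mm, f_mm, b_mm, dp_mm):
    return (z_mm ** 2) * dp_mm / (f_mm * b_mm)

print(depth_resolution_mm(z_mm=1000.0, f_mm=50.0, b_mm=24.0, dp_mm=0.010))  # ~8.3 mm
# A 150-200 mm tele-lens at the same distance would cut this to roughly 2-3 mm,
# which is why it was considered for a process-support-only configuration.
```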

Figure 17: Aluminium profile with length of 200 mm

The differences in error can be attributed to the low depth resolution and the different viewing angles of the measured objects. The length measurement of the wooden box lay more along the z-direction of the image and is therefore more affected by the depth resolution. The aluminium profile was oriented more parallel to the image plane of the camera, so its length measurement depends less on the z-component. In general, given the combination of the relatively small basis length of the stereo system (24 mm), the wide angle of view (a 50 mm lens was used) and the distance between the camera and the object (ca. 1000 mm), we expected a relatively high uncertainty in the depth information. Rough calculations of the sensitivity of the depth measurement show that the system should have a depth resolution of approximately 8 mm in the given configuration. In the original conception of the camera system solely for process information, the use of a tele-lens (150-200 mm) was considered, which would have a smaller field of view but a much higher depth resolution.

One particular challenge for the measurement was the fact that our structured aperture set-up resulted in individual images with different gray values. This made the use of traditional stereo matching techniques very hard. A further challenge was the fact that, again due to the structured aperture, the depth of field was quite shallow and objects outside the focal plane were quite unsharp. Both of these challenges also contributed to the large error we measured.

3.3. Discussion of the high resolution camera system with a structured aperture

In this section we discuss the results from Sections 3.1 and 3.2 in the context of the overall ColRobot system. We first highlight competing requirements arising from the various functionalities assigned to the high resolution camera with a structured aperture. Then we discuss how these competing requirements, and the ensuing compromises made in the design, contributed to the results achieved in this evaluation.

Competing requirements

We would first like to point out that the two main tasks assigned to the high resolution camera with a structured aperture impose a number of competing requirements. These competing requirements influence the lens angle of view and the overall dimensions and size of the system. With regard to the lens, we have seen that it should have as wide an angle of view as possible in order to detect humans for the soft safety functionality. On the other hand, to support process information, the lens should have a much smaller angle of view. The compromise of using a 50 mm lens was in practice not optimal for either situation. A further requirement on the system design was to make it as small and lightweight as possible. In particular, this drove us to try a structured aperture so that only one camera was needed. This saved both overall size (normal stereo systems have a larger base length between the individual cameras) and weight (only one camera was needed instead of two or more).

Outlook

Due to the strongly diverging requirements and the compromises made with regard to the high resolution camera with a structured aperture, we propose focusing on one capability to see how we can improve the results within the time available in the project. We therefore suggest focusing on how the high resolution camera with a structured aperture can be used to increase the performance of the human detection algorithms. We propose to segment the images using the depth information provided by the camera system. This pre-processing step would mask the distracting features of the background. With this, we think that the results of the human detection algorithm can be improved, due to fewer distracting features in the images. The depth resolution, even with a wider field of view and longer viewing distances, is sufficient for image segmentation. In this case, only relevant areas near the robot and tools are considered for human detection.

4. Standardization

4.1. Relevant standards

Relevant standards for mobile manipulation and robotic safety include:
- ISO 10218-1, -2
- ISO/TS 15066
- ISO
- EN
- RIA (currently in development)

4.2. Standardization-relevant issues

Specific issues and/or questions regarding these standards:

- ISO/TS 15066 / ISO 10218:
  o Measuring the speed of humans: what specific requirements apply to this measurement (performance level, etc.)? Is this separate system to be viewed as part of a complete system (sensor for human detection + sensor for speed measurement + all communication pathways + all software = the sensor that needs to fulfil PL d, Cat. 3 according to 61508)? Considering current system integration approaches using off-the-shelf components, this is an extreme barrier to innovation. What could be a sufficient pathway forward here?
  o We would like to inform the community about a novel use of a tactile sensor as a 3-position enabling switch for hand-guided movement. Currently only the traditional 3-position enabling switch is used, which can be ergonomically awkward for the user. Our approach uses a tactile sensor ring which provides 3-position functionality (through pressure thresholds) and which further requires contact at two cells (not neighbors) to ensure that a full grip around the wrist is made. This allows a user to change their grip between movements to the most comfortable and ergonomically viable position.
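A minimal sketch of this enabling logic (our own illustration, not the ColRobot implementation; the cell count and threshold values are assumptions):

```python
# Per-cell pressure thresholds give the off / enabled / stop states of a
# 3-position enabling device; enabling additionally requires contact on at
# least two non-adjacent cells of the ring to ensure a full grip.
N_CELLS = 8       # tactile cells around the wrist ring (assumed)
T_ENABLE = 0.15   # normalized pressure for the middle (enabling) position (assumed)
T_PANIC = 0.85    # normalized pressure for the fully-pressed (stop) position (assumed)

def enabling_state(pressures):
    """Return 'off', 'enabled', or 'stop' from one ring sample."""
    if any(p >= T_PANIC for p in pressures):
        return "stop"   # fully pressed -> stop, as with a 3-position switch
    active = [i for i, p in enumerate(pressures) if p >= T_ENABLE]
    # Full-grip check: two active cells that are not ring neighbors.
    for a in active:
        for b in active:
            if min((a - b) % N_CELLS, (b - a) % N_CELLS) >= 2:
                return "enabled"
    return "off"

print(enabling_state([0.0, 0.3, 0.0, 0.0, 0.4, 0.0, 0.0, 0.0]))  # enabled (full grip)
print(enabling_state([0.3, 0.4] + [0.0] * 6))                    # off (neighbors only)
print(enabling_state([0.3, 0.0, 0.9] + [0.0] * 5))               # stop (fully pressed)
```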

4.3. Activities to coordinate with standards

Standardizing bodies such as the ISO are consensus-based organizations that make their decisions based on input from members of national committees. It is very difficult to influence a standardizing committee from the outside, and the best means to interact with the standards committees is through individuals who are on the committee and who can raise concerns based on new input. The ColRobot consortium has identified a number of ways to interact with standardizing committees to raise awareness of the issues mentioned above:

1) The Fraunhofer IFF is on the national committee for the working group that is responsible for the ISO/TS 15066. In national meetings, the IFF can raise questions and issues that have been identified in ColRobot.

2) ColRobot can interact with other individuals who are involved in standardization through specific workshops. The euRobotics-sponsored ERF often has workshops focusing on standardization issues and is attended by many individuals who are active on ISO standards at the international level. ColRobot representatives from the Fraunhofer IFF and ENSAM will be present at the ERF 2018 workshop on standardization and will discuss issues relevant to ColRobot there.

3) Dissemination of best practices, especially with regard to the issues mentioned above.

4) The EU project COVR, which started in January 2018, focuses on issues regarding safety and shared safety facilities. The Fraunhofer IFF is a consortium member. This project provides a larger base for sharing best practices, and it will provide a further opportunity for the best practices developed in ColRobot regarding risk analysis for mobile platforms to be disseminated among relevant stakeholders in the community.

5. Summary

In summary, we have presented the results of various evaluations of the systems used for safety in the ColRobot project, as well as of the high resolution camera with a structured aperture and the software developed for further soft-safety and process support functionalities. Furthermore, we have identified standardization-relevant issues that have arisen in ColRobot and listed activities to coordinate with standards bodies and disseminate best practice among various stakeholders (not just standardization bodies, but also robotics end-users, component manufacturers, and system integrators).