
A Ball Catching 7 DOF Robot Arm

Reflection and Conclusion

With the final implementation, we met most of our design criteria, with the exceptions of high accuracy and of operating at optimal distances given joint constraints (software limits for the joints had not yet been fully implemented). The catches were made robust by the attached end effector, and we chose an implementation that operated in real time. As mentioned above, resolving the convergence rate of the vision system/Kalman filter combination would greatly improve accuracy while preserving real-time operation. Given more time, adding trajectory optimization, in conjunction with software limits, would solve the constrained-movement problem.

Difficulties

Ball Detection Vision System

Because a thrown ball is in flight for only ~1.5 to 2.0 seconds, the major difficulty was maximizing the ball position publishing rate to ensure that the Kalman filter would have enough data to estimate a final position. This rate issue stemmed from determining the 3D point (X, Y, Z) of the ball in the camera frame rather than merely detecting the (u, v) center of the ball. Since the Kinect2 has a maximum publishing rate of 30 Hz, we knew our solutions would be upper bounded by that rate. Interestingly, our initial, inefficient method of reading directly from the time-synchronized complete point cloud ran at ~25 Hz for one week, but for some yet-to-be-determined reason (possibly other CPU-intensive programs running independently) it deteriorated on our computer to ~12 Hz, even after multiple restarts. While investigating the issue, we found that the time synchronization was now discarding a number of valid points and had high latency in synchronizing the point cloud and rgb_images. As a workaround to boost the rate, we removed the time synchronization of the point cloud and simply used the most recent one. We then tried an initial calibration for manual Z-depth detection, using camera_info for the manual X, Y calculation. Finally, we used the PinholeCameraModel from the ROS image_geometry package to find the 3D ray directly from the raw_rgb image, raw_depth image, and camera_info; this was the final, most efficient method, used for all qhd/sd images (a sketch of this approach follows).
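As an illustration, here is a minimal sketch of that final pixel-to-3D approach using image_geometry's PinholeCameraModel. The function name is ours, and we assume the depth image is registered to the RGB frame and reports depth in millimeters (as the Kinect2 bridge typically does); this is a sketch, not the project's exact code.

```python
import numpy as np
from image_geometry import PinholeCameraModel


def pixel_to_camera_point(u, v, depth_image, camera_info):
    """Back-project a detected (u, v) ball center to an (X, Y, Z) point
    in the camera frame, given a registered depth image and CameraInfo."""
    model = PinholeCameraModel()
    model.fromCameraInfo(camera_info)

    # Unit ray through the pixel (projectPixelTo3dRay returns a unit vector).
    ray = np.array(model.projectPixelTo3dRay((u, v)))

    # Assumed: depth image stores millimeters; convert to meters.
    z = depth_image[int(v), int(u)] / 1000.0
    if z <= 0:  # invalid depth reading at this pixel
        return None

    # Scale the ray so its Z component equals the measured depth.
    return ray * (z / ray[2])
```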

Kalman Filter and Ball Intersection Trajectory Calculation

In the actual demonstration, light glare interfered with detecting the ball, so the Kalman filter/ball detection combination did not compute the trajectory quickly enough to converge. The target position of the end effector was therefore continuously updated with the ball tracking code's estimate of the current ball position. This was a noisier outcome, but one with a quicker reaction rate (a sketch of this fallback follows).
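A minimal sketch of that fallback logic, assuming the filter state stores position first and using a placeholder convergence threshold; the names here are ours, not the project's:

```python
import numpy as np

CONVERGENCE_THRESHOLD = 0.05  # placeholder bound on position covariance


def choose_target(P, predicted_intercept, latest_measurement):
    """Pick the end-effector target: the Kalman filter's predicted
    intercept once its position covariance P has shrunk enough,
    otherwise fall back to the raw ball-position estimate
    (noisier, but with a quicker reaction rate)."""
    if np.trace(P[:3, :3]) < CONVERGENCE_THRESHOLD:
        return predicted_intercept
    return latest_measurement
```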

Robot Controller

Implementing the robot control was very difficult. The most blatant problem with the controller is that it does not take joint constraints into account. In some situations, the robot therefore tried to move to a position past its joint limits, because in Cartesian space that was the best thing to do according to the controller (a sketch of a simple software limit follows).
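A minimal sketch of the kind of software limit mentioned above: clamping each commanded joint angle to a safe range before it reaches the hardware. The limit values here are placeholders, not the robot's actual specifications, which would come from its URDF or datasheet.

```python
import numpy as np

# Placeholder joint limits in radians for a 7 DOF arm.
JOINT_LOWER = np.array([-2.8, -1.7, -2.8, -3.0, -2.8, -1.5, -2.8])
JOINT_UPPER = np.array([ 2.8,  1.7,  2.8,  0.0,  2.8,  1.5,  2.8])


def clamp_joint_command(q_cmd):
    """Clip a commanded joint configuration to the software limits so the
    controller can never ask the arm to move past a physical joint stop."""
    return np.clip(q_cmd, JOINT_LOWER, JOINT_UPPER)
```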

Future Improvements

Ball Detection Vision System

A potential flaw is that detection of the ball relies on bitmasking on the color of the ball (a sketch of this approach follows this paragraph). This not only requires an initial calibration of the ball color, but also risks incorrectly detecting a similarly colored item in the camera view, and it is heavily reliant on the lighting in the room. With additional time, we would attempt to train a CNN to detect the (u, v) center of a colored ball of unknown size (lower bounded by some minimum size) in an image. Another improvement we hoped to add was utilizing multiple sensors with a sensor fusion method to determine a more accurate 3D position of the ball. This could also increase the frequency if we used webcams running at offset frequencies and applied the initial calibration method that relies only on the rgb_image and the focal lengths and principal points of the camera.
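For reference, a minimal sketch of the color-bitmask detection described above, using OpenCV; the HSV bounds are placeholder calibration values, not the ones used in the project:

```python
import cv2
import numpy as np

# Placeholder HSV bounds from an initial color calibration of the ball.
LOWER_HSV = np.array([20, 100, 100])
UPPER_HSV = np.array([35, 255, 255])


def detect_ball_center(bgr_image):
    """Return the (u, v) pixel center of the largest blob matching the
    calibrated ball color, or None if nothing is detected."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)

    # OpenCV 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```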

Kalman Filter and Ball Trajectory Calculation

One of the main hacks that helped convergence time was resetting the measurement variance to a large number once nothing had been detected for longer than one camera frame period (see the sketch below). The biggest drawback is the case where the ball is tracked continuously before it is thrown and is never lost from sight during the throw, so the reset never fires. This wasn't significant for us, as the camera's detection distance was always shorter than the distance we were throwing from, but it would become a new challenge once we improve our detection distance.
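A minimal sketch of that reset hack, assuming a filter object that exposes a covariance matrix P and an update() method; the constants, and treating the reset as an inflation of the filter's covariance, are our assumptions:

```python
import numpy as np
import rospy  # assumed: the project runs under ROS

CAMERA_PERIOD = 1.0 / 30.0   # one Kinect2 frame period at 30 Hz
RESET_VARIANCE = 1e6         # placeholder "large number"


class VarianceReset:
    """Inflate the filter's variance after a detection gap longer than one
    frame period, so stale pre-throw estimates do not slow convergence
    once the ball reappears in flight."""

    def __init__(self, kalman_filter):
        self.kf = kalman_filter
        self.last_detection_time = rospy.get_time()

    def on_frame(self, detection):
        now = rospy.get_time()
        if detection is None:
            # Detection gap exceeded: reset so the next measurement
            # dominates the estimate.
            if now - self.last_detection_time > CAMERA_PERIOD:
                self.kf.P = RESET_VARIANCE * np.eye(self.kf.P.shape[0])
            return
        self.last_detection_time = now
        self.kf.update(detection)
```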

Robot Controller

This could be mitigated by using a motion planner and integrating MoveIt into our robot control system. The worry is that the computation required to plan a trajectory could take too long, preventing the robot from actually catching the ball (a sketch of how to bound this appears below). Another issue we had with control is the inherent limitation of the hardware we were working with, which updated at 60 Hz. A higher update rate for our low-level controller would lead to smoother control, making it much more robust and less likely to go unstable.
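As an illustration of the MoveIt integration, here is a minimal sketch using the moveit_commander Python API with a tight planning-time budget to address the latency worry; the group name "right_arm" and the 0.1 s budget are our assumptions, and would depend on the robot's MoveIt configuration.

```python
import sys

import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)

# Assumed planning group name; taken from the robot's MoveIt config.
group = moveit_commander.MoveGroupCommander("right_arm")

# Cap planning time so a slow plan cannot eat the ~1.5 s flight window.
group.set_planning_time(0.1)


def move_to_catch_pose(target_pose):
    """Plan to the predicted catch pose and execute without blocking,
    so the target can be re-planned as the ball estimate updates."""
    group.set_pose_target(target_pose)
    return group.go(wait=False)
```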
