

A Ball Catching 7 DOF Robot Arm

Implementation: Software

Overview of System Operation

Detection and TF broadcast of the current ball position in the camera frame occur via the Kinect2 sensor and OpenCV. The Kalman Filter node listens for the current ball position, then calculates the ball trajectory and solves for the intersection point between the trajectory and a sphere of a chosen radius centered at the shoulder joint. The filter publishes a PoseStamped message for the estimated trajectory intersection point, to which the arm should immediately move. The 7-DOF end effector PD controller takes this target pose, compares it to the current pose, and drives the arm to the correct position using current/torque control.

We also wrote two launch files: one to run all simulation files and the Koko robot controller, and one to initialize the Kinect2, ball detection node, static world transform, and Kalman Filter. Below is the final system transform tree [rqt_tf_tree].

Ball Detection Node

Taking the rgb_image from the Kinect2, we transform the image to the HSV color space, then apply a bitmask for the neon tennis-ball color. Using OpenCV, we find the maximum contour and the minimum enclosing circle around that contour to obtain the (u, v) center pixel of the ball in the 2D image. We then use the PinholeCameraModel from the image_geometry ROS package, together with the raw RGB image, raw depth image, and camera_info, to project that pixel to a 3D ray and recover the camera-frame (X, Y, Z) position of the ball. This camera-frame position is broadcast as a TF frame named "ball," which the Kalman Filter node listens for. A minimal sketch of this pipeline is shown below.
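
The sketch below illustrates this detection step, assuming OpenCV and the image_geometry PinholeCameraModel; the HSV thresholds and the per-pixel depth lookup are illustrative placeholders rather than our exact tuned values.

```python
import cv2
import numpy as np
from image_geometry import PinholeCameraModel

# Illustrative HSV bounds for the neon tennis-ball color (the real thresholds
# were tuned by hand for our lighting).
LOWER_HSV = np.array([25, 100, 100])
UPPER_HSV = np.array([45, 255, 255])

def locate_ball(rgb_image, depth_image, camera_info):
    """Return the camera-frame (X, Y, Z) of the ball, or None if not found."""
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # [-2] keeps this compatible with both OpenCV 3 and 4 return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (u, v), _ = cv2.minEnclosingCircle(largest)        # ball center pixel

    model = PinholeCameraModel()
    model.fromCameraInfo(camera_info)
    ray = np.array(model.projectPixelTo3dRay((u, v)))  # unit ray through (u, v)
    depth = float(depth_image[int(v), int(u)])         # depth assumed to be in meters
    return ray / ray[2] * depth                        # scale ray so Z equals the depth
```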

Kalman Filter and Ball Intersection Trajectory Node

The Kalman Filter ball tracking node is a custom publisher-subscriber node that listens for published ball positions from the vision system and publishes a PoseStamped end effector pose for any predicted intersection with the boundary of the robot’s workspace. The software can be divided into three main parts: pruning for the filter, the Kalman filter itself, and the intersection solver. The Kalman filter operates in the camera frame. Because the gravity vector was known beforehand and noise was assumed to be decoupled, estimating the positions and velocities in x, y, and z could be decoupled, resulting in faster computation. The entire node runs once for every new ball PointStamped that is published, following these steps:

Pruning:
  1. Calculate the change in time (dt) since the last estimate received. If this change in time is greater than some threshold, the estimated state’s variance is reset to a large number to indicate that the filter has no idea where anything actually is, and will therefore listen very intently to any subsequent data points.
  2. Outlier points are then pruned in an attempt to pass only clean data to the filter.
Kalman Filter:
  3. Run a predict step of the Kalman filter using a discrete double integrator parametrized by dt, adding an affine term due to the effects of gravity.
  4. Run a measurement update of the filter. If the data is bad, just propagate the dynamics and skip the measurement update; otherwise, perform a standard measurement update to x, y, and z (see the first sketch after this list).
Intersection Solver:
  5. Plug the latest estimates of x, y, z and their respective velocities into a custom quartic solver to determine whether the trajectory will intersect a sphere around the robot’s workspace (see the second sketch after this list). If there is no intersection, exit and publish nothing. If there is an intersection, calculate how much time remains until the trajectory collides with the sphere.
  6. Take the earliest intersection time, propagate the current state’s trajectory forward by that amount of time, and pass this point (position, velocity) forward.
  7. Calculate a quaternion aligned with the velocity vector: project a stationary reference vector onto the plane normal to the velocity vector, obtain a third orthonormal vector via the cross product of these two vectors, then form an orthonormal basis in the base frame from the three vectors and convert it to a quaternion.
  8. Publish a PoseStamped target (position + quaternion), transformed into the base frame, and close out.
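
The first sketch below illustrates the decoupled, per-axis predict and measurement update of steps 3 and 4; it is a minimal sketch assuming a [position, velocity] state per axis, and the noise parameters and gravity component are illustrative placeholders rather than our tuned values.

```python
import numpy as np

def predict(x, P, dt, g_axis, q=1e-2):
    """Predict step for one axis: a discrete double integrator parametrized by
    dt, plus an affine term for the known gravity component along this axis."""
    A = np.array([[1.0, dt],
                  [0.0, 1.0]])               # double-integrator dynamics
    b = np.array([0.5 * g_axis * dt**2,      # affine gravity term
                  g_axis * dt])
    Q = q * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                      [dt**2 / 2.0, dt]])    # assumed process-noise form
    return A @ x + b, A @ P @ A.T + Q

def update(x, P, z, r=1e-3):
    """Measurement update for one axis with a scalar position measurement z."""
    H = np.array([[1.0, 0.0]])
    s = float(H @ P @ H.T) + r               # innovation variance
    K = P @ H.T / s                          # Kalman gain, shape (2, 1)
    x_new = x + K @ (z - H @ x)              # innovation has shape (1,)
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new
```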

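The second sketch covers steps 5 and 7: the sphere-intersection test and the velocity-aligned orientation. It assumes numpy and tf.transformations; the sphere center, radius, and fixed reference vector are placeholders.

```python
import numpy as np
import tf.transformations as tft

def time_to_sphere(p0, v0, g, center, radius):
    """Earliest positive time at which the ballistic trajectory
    p(t) = p0 + v0*t + 0.5*g*t**2 crosses the workspace sphere, or None.
    Expanding ||p(t) - center||**2 = radius**2 gives a quartic in t."""
    d = p0 - center
    a = 0.5 * g
    coeffs = [np.dot(a, a),
              2.0 * np.dot(a, v0),
              np.dot(v0, v0) + 2.0 * np.dot(a, d),
              2.0 * np.dot(v0, d),
              np.dot(d, d) - radius**2]
    roots = np.roots(coeffs)
    hits = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0.0]
    return min(hits) if hits else None

def quaternion_from_velocity(v, ref=np.array([0.0, 0.0, 1.0])):
    """Orthonormal basis whose first axis is the unit velocity vector,
    converted to a quaternion (x, y, z, w)."""
    x = v / np.linalg.norm(v)
    y = ref - np.dot(ref, x) * x      # project the reference onto the plane normal to x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    T = np.eye(4)
    T[:3, :3] = np.column_stack((x, y, z))
    return tft.quaternion_from_matrix(T)
```
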
Robot Controller

We had to write custom controller code to actuate the robot. Exposed to us were the encoder values read from the robot and the input torques to each of the robot’s joints. Control is quite challenging due to the robot’s extreme nonlinearities, as well as the speed at which the robot must actuate to catch the ball: the robot must act virtually instantly once the ball trajectory is known, and therefore cannot spend time computing optimal trajectories. We ended up controlling the arm with a more direct control method in the end effector space, as it allowed for a more natural translation between the robot control and our end objective. Seen below is the block diagram of the robot controller.

At each control loop, the following occurs:

  1. The current joint angles are read from the encoders and the current end effector pose is computed using forward kinematics.
  2. The pose difference is computed between the current end effector frame and the commanded pose (this is the relative change in position and rotation from one frame to the other).
  3. This difference is essentially the error in our tracking objective, represented in Cartesian space. We can treat it as a twist that we want the end effector to move along, and we scale this pose difference by P and D constants. Notably, because the Cartesian position and the orientation are in different units, the gains on the respective twist vector components had to be tuned separately and ended up being orders of magnitude apart.
  4. This Cartesian-space difference is converted to output torques on the motors by applying the Jacobian transpose. We take the current joint configuration at the update stage and find the relative twists using the URDF model, giving us the robot Jacobian for the current joint state (a minimal sketch of this loop follows the list).
  5. The respective torques are then sent to the robot hardware interface to actuate the robot.
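
Below is a minimal sketch of one control iteration under the assumptions above; fk(q) and jacobian(q) stand in for the URDF-based forward kinematics and Jacobian helpers (hypothetical names), and the gains are purely illustrative.

```python
import numpy as np

def rotation_error(R_cur, R_cmd):
    """Orientation error of the relative rotation R_cur^T * R_cmd,
    taken from its skew-symmetric part (small-angle approximation)."""
    R_err = R_cur.T @ R_cmd
    return 0.5 * np.array([R_err[2, 1] - R_err[1, 2],
                           R_err[0, 2] - R_err[2, 0],
                           R_err[1, 0] - R_err[0, 1]])

def control_step(q, q_dot, p_cmd, R_cmd, fk, jacobian,
                 kp_pos=50.0, kp_rot=1.0, kd=1.0):
    """One PD iteration: forward kinematics -> Cartesian error (a twist) ->
    PD scaling -> Jacobian-transpose mapping to joint torques."""
    p_cur, R_cur = fk(q)                                   # current end effector pose
    err = np.hstack((p_cmd - p_cur,                        # position error in base frame
                     R_cur @ rotation_error(R_cur, R_cmd)))  # orientation error in base frame
    J = jacobian(q)                                        # 6xN geometric Jacobian
    twist = np.hstack((kp_pos * err[:3],
                       kp_rot * err[3:])) - kd * (J @ q_dot)  # P term minus D damping
    return J.T @ twist                                     # joint torques to command
```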

Robot Controller Simulation

Testing our code in simulation was vital for making sure the individual components of our project were working before everything was integrated together and before the real robot hardware was running. We were able to test the integration of ball tracking, ball trajectory estimation, and catching entirely in simulation. We used Gazebo as the simulator, which let us simulate the effects of gravity on the arm, and we were able to feed sensory information from the real world into the simulated robot. This also served as a sanity check to ensure the robot control worked in a perfect environment before debugging the code in a real-world setting.
