Hand-eye Calibration

Jonathon Gibbs

Keywords: Camera calibration, Three-Dimensional, Models, OpenCV, Python

For my first tutorial I want to dive into an area I am particularly familiar with: hand-eye calibration, which is the combination of robot and camera calibration. In this tutorial we go one step further and add a turntable to the mix. For clarity I have split the tutorial into two parts. In Part 1 we look at camera calibration. I am aware there are already a lot of tutorials on this, so I will provide a brief overview and references for those of you with less experience. Camera calibration is an essential part of the hand-eye calibration problem, so it is worth recapping if you have forgotten any of it. Part 2 looks at the calibration of a robot and camera together, which involves solving the homogeneous matrix equation AX=YB for two unknowns.

Part 1: Camera Calibration

I expect that if you are reading this you already have some idea of what camera calibration is and why it is needed. In simple terms, it is the process of estimating the intrinsic (K) and extrinsic ([R|\overrightarrow{t}]) camera parameters. These parameters are crucial when attempting to acquire metric information about a scene, e.g., for three-dimensional models. The intrinsic parameters represent the camera's internal characteristics, such as focal length and skew coefficient, and can usually be found in the documentation or on the manufacturer's website.

K = \begin{bmatrix}a_x&\gamma&u_0 \\ 0&a_y&v_0\\0&0&1\end{bmatrix}

where a_x and a_y represent the focal length, \gamma is the skew coefficient between the x and y axes, and u_0 and v_0 are the coordinates of the principal point, typically the centre of the image.
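As a concrete (purely illustrative, not measured) example, a 1920x1080 camera with square pixels, no skew, and its principal point at the image centre might have

K = \begin{bmatrix}1400&0&960 \\ 0&1400&540\\0&0&1\end{bmatrix}

where the focal lengths are expressed in pixels.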

The extrinsic parameters describe the camera's position and orientation in world space, where R is the rotation matrix and \overrightarrow{t} is the translation: the position of the origin of the world coordinate system expressed in the camera-centred coordinate system. This is represented as

A = [R|\overrightarrow{t}] = \begin{bmatrix}R_{11}&R_{12}&R_{13}&t_1\\R_{21}&R_{22}&R_{23}&t_2\\R_{31}&R_{32}&R_{33}&t_3\\0&0&0&1\end{bmatrix}

A calibrated camera allows some 3D homogeneous point [x_w,y_w,z_w,1]^T to be mapped to a 2D image point [u,v,1]^T, where a 2D point m is formed by an optical ray passing through the optical centre, C, and intersecting the image plane.
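OpenCV exposes this mapping directly through cv2.projectPoints. The sketch below uses made-up intrinsics and extrinsics purely to show the mechanics:

import cv2
import numpy as np

# Illustrative (made-up) intrinsics for a 1920x1080 camera
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])

# Extrinsics: no rotation, camera offset 1 m along the world z axis
rvec = np.zeros(3)                 # rotation R expressed as an axis-angle vector
tvec = np.array([0.0, 0.0, 1.0])   # translation t in metres

# A single 3D world point [x_w, y_w, z_w]
p_w = np.array([[0.1, 0.05, 2.0]])

# Map the 3D point to 2D pixel coordinates [u, v]
m, _ = cv2.projectPoints(p_w, rvec, tvec, K, None)
print(m.ravel())   # -> approximately [1006.67, 563.33]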

There are many methods for camera calibration, all of which produce the parameters we need, though in this tutorial I will focus specifically on the checkerboard method, one of the most common. For the implementation I use Python, Atom and Anaconda to manage the packages. There are many tutorials for those of you who are new to Python, and a quick Google search will provide you with all the necessary tools to set up on your Windows, Linux or Mac PC.

I would like to point out that I am by no means a Python expert, and you may be aware of more efficient ways of doing this; however, it should provide novice Python users with a good starting point for dipping their toes into calibration methods.

import numpy as np
import cv2, os
import glob
from shutil import copyfile


class calibrateCamera(object):
    def __init__(self, file_path, output_path, cb_w, cb_h, cb_size):
        self.image_source = file_path
        self.output_destination = output_path
        self.calibration_destination = ""
        self.cb_w = int(cb_w)       # inner corners across the board
        self.cb_h = int(cb_h)       # inner corners down the board
        self.cb_size = float(cb_size)  # square size in your chosen unit
        self.images = glob.glob(self.image_source + '*.jpg') \
            + glob.glob(self.image_source + '*.png') \
            + glob.glob(self.image_source + '*.tif')

    def create_folders(self, paths):
        for path in paths:
            if not os.path.exists(path):
                os.makedirs(path)

    # Output path is the same as input; the output is a series of transformation matrices
    def calibrate(self):
        np.set_printoptions(precision=5, suppress=True)

        print("==> Performing camera calibration")

        # Termination criteria for the sub-pixel corner refinement
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)

        # Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (cb_w-1, cb_h-1, 0)
        objp = np.zeros((self.cb_h * self.cb_w, 3), np.float32)
        objp[:, :2] = np.mgrid[0:self.cb_w, 0:self.cb_h].T.reshape(-1, 2)

        # Arrays to store object points and image points from all the images
        objpoints = []    # 3d points in real world space
        imgpoints = []    # 2d points in the image plane
        used_images = []  # paths of images where the checkerboard was detected

        found, unfound = 0, 0
        image_count = len(self.images)

        if image_count < 3:
            print("==> Calibration error: insufficient images. 3 required, %s provided." % image_count)
            return

        for i in range(image_count):
            img = cv2.imread(self.images[i])
            # Adjust gamma - under some light conditions the checkerboard quality is
            # reduced; increasing/decreasing this value can give more stable estimates
            #img = self.adjust_gamma(img, 0.7)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

            # Find the chessboard corners
            ret, corners = cv2.findChessboardCorners(gray, (self.cb_w, self.cb_h),
                                                     flags=cv2.CALIB_CB_ADAPTIVE_THRESH)

            # If found, add object points and image points (after refining them)
            if ret:
                # Scale object points by the physical square size
                objpoints.append(objp * self.cb_size)

                corners2 = cv2.cornerSubPix(gray, corners, (14, 14), (-1, -1), criteria)
                imgpoints.append(corners2)
                used_images.append(self.images[i])

                found += 1
                # Draw and display the corners
                img = cv2.drawChessboardCorners(img, (self.cb_w, self.cb_h), corners2, ret)
                cv2.imshow('img', cv2.resize(img, (0, 0), fx=0.3, fy=0.3))
                cv2.waitKey(500)
            else:
                print("==> Error: Unable to detect checkerboard at " + self.images[i])
                unfound += 1

        cv2.destroyAllWindows()
        print("==> Found %s checkerboards, didn't find %s checkerboards." % (found, unfound))

        # Extract the camera parameters
        retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(
            objpoints, imgpoints, gray.shape[::-1], None, None,
            flags=cv2.CALIB_RATIONAL_MODEL)

        # Create all of the necessary folders
        self.create_folders([self.output_destination,
                             self.output_destination + "/txt/",
                             self.output_destination + "/images/"])

        # Export one extrinsic (camera pose) matrix per successfully detected image;
        # rvecs/tvecs only exist for images where the checkerboard was found
        for i in range(len(used_images)):
            file_name = str(i).zfill(8)
            camera_matrix = np.append(cv2.Rodrigues(rvecs[i])[0], tvecs[i], axis=1)
            camera_matrix = np.append(camera_matrix, [[0, 0, 0, 1]], axis=0)
            np.savetxt(self.output_destination + "/txt/" + file_name + ".txt",
                       camera_matrix, fmt='%.6f')
            copyfile(used_images[i], self.output_destination + "/images/" + file_name + ".jpg")

    def adjust_gamma(self, image, gamma=1.0):
        # Build a lookup table mapping the pixel values [0, 255] to
        # their adjusted gamma values
        invGamma = 1.0 / gamma
        table = np.array([((i / 255.0) ** invGamma) * 255
                          for i in np.arange(0, 256)]).astype("uint8")

        # Apply gamma correction using the lookup table
        return cv2.LUT(image, table)
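A minimal driver for the class might look like the following; the paths and the 9x6, 25 mm checkerboard geometry are assumptions to replace with your own:

if __name__ == "__main__":
    # 9x6 inner corners, 25 mm squares - match this to your printed board
    calib = calibrateCamera("./calibration_images/", "./calibration_output",
                            cb_w=9, cb_h=6, cb_size=25.0)
    calib.calibrate()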

Part 2: Hand-Eye Calibration

When I first investigated the hand-eye calibration problem I was rather overwhelmed (it does not take much!) by the process. I also found very little information, particularly with respect to performing this with a Universal Robots arm (the UR5). I hope that this tutorial, despite being designed for the UR5, offers a glimpse of how simple it is.

The problem of determining the position of the camera relative to the end effector, known as hand-eye calibration, has been studied extensively, with research going back to the late 1980s, first by (Shiu & Ahmad 1987) and then (Tsai & Lenz 1989). Hand-eye calibration requires four transformations to be known, two of which can be calculated initially: A, the camera position, and B, the robot forward kinematics. A can be obtained by performing camera calibration as discussed in Part 1, and B can be computed from the robot joint angles. Kinematics refers to the mechanics of the robot: it describes the motion of the joints without considering the cause of the movement, and provides the transformations between joint space and Cartesian space known as forward and inverse kinematics. X, the pose of the camera relative to the end effector, and Y, the world position of the object in relation to the robot base, are the two unknowns which need to be calculated. A, X, Y and B are all homogeneous transformation matrices consisting of a rotation and a translation. In order to calculate the unknowns, the robot needs to move at least n=3 times to obtain A_1, A_2, A_3 and B_1, B_2, B_3.
[INSERT AX=YB IMAGE HERE]
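As an aside, recent versions of OpenCV (4.5+) ship a solver for exactly this AX=YB formulation, cv2.calibrateRobotWorldHandEye. The sketch below is one way to wrap it, assuming you have matched lists of 4x4 pose matrices: the A_i from Part 1 and the B_i from the forward kinematics described next.

import cv2
import numpy as np

def solve_hand_eye(A_list, B_list):
    # A_list: world (checkerboard) -> camera 4x4 transforms, from Part 1
    # B_list: base -> end-effector 4x4 transforms, from forward kinematics
    R_w2c = [A[:3, :3] for A in A_list]
    t_w2c = [A[:3, 3] for A in A_list]
    R_b2g = [B[:3, :3] for B in B_list]
    t_b2g = [B[:3, 3] for B in B_list]

    # Solve AX = YB for both unknowns at once
    R_b2w, t_b2w, R_g2c, t_g2c = cv2.calibrateRobotWorldHandEye(
        R_w2c, t_w2c, R_b2g, t_b2g)

    Y = np.vstack([np.hstack([R_b2w, t_b2w]), [0.0, 0.0, 0.0, 1.0]])  # base -> world
    X = np.vstack([np.hstack([R_g2c, t_g2c]), [0.0, 0.0, 0.0, 1.0]])  # gripper -> camera
    return Y, X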

To solve the forward kinematics of the robot, Denavit–Hartenberg parameters are used, in which only one of the four necessary parameters is variable, the joint angle \theta_i; the other three parameters, link length a_i, link twist \alpha_i and link offset d_i, are all constant. The \theta_i can be acquired from the UR5, and most traditional robots, as a vector of the 6 joint angles in radians. From the joint angles at the current position we are able to obtain the transformation matrix B=[R|\overrightarrow{t}] from base to end effector as B=T^0_6= T^0_1T^1_2T^2_3T^3_4T^4_5T^5_6

where T^{i-1}_i denotes the homogeneous transformation from joint i-1 to joint i and T^{i-1}_i=Trans_{z_{i-1}}(d_i) \cdot Rot_{z_{i-1}}(\theta_i) \cdot Trans_{x_i}(a_i) \cdot Rot_{x_i}(\alpha_i)

and where,

Trans_{z_{i-1}} (d_i)=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&d_i\\0&0&0&1\end{bmatrix}
Rot_{z_{i-1}} (\theta_i)=\begin{bmatrix}cos(\theta_i)&-sin(\theta_i)&0&0\\sin(\theta_i)&cos(\theta_i)&0&0\\0&0&1&0 \\0&0&0&1\end{bmatrix}
Trans_{x_i} (a_i)=\begin{bmatrix}1&0&0&a_i\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}
Rot_{x_i} (\alpha_i)=\begin{bmatrix}1&0&0&0\\0&cos(\alpha_i)&-sin(\alpha_i)&0\\0&sin(\alpha_i)&cos(\alpha_i)&0 \\0&0&0&1\end{bmatrix}

which can be written as:

T^{i-1}_i=\begin{bmatrix}cos(\theta_i)&-sin(\theta_i)cos(\alpha_i)&sin(\theta_i)sin(\alpha_i)&a_{i}cos(\theta_i)\\sin(\theta_i)&cos(\theta_i)cos(\alpha_i)&-cos(\theta_i)sin(\alpha_i)&a_{i}sin(\theta_i)\\0&sin(\alpha_i)&cos(\alpha_i)&d_i \\0&0&0&1\end{bmatrix}
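This chain translates almost directly into code. The sketch below uses the nominal UR5 DH constants published by Universal Robots; treat them as an assumption and verify them against your own arm.

import numpy as np

# Nominal UR5 Denavit-Hartenberg constants (metres, radians)
A_UR5     = [0.0, -0.425, -0.39225, 0.0, 0.0, 0.0]          # link lengths a_i
D_UR5     = [0.089159, 0.0, 0.0, 0.10915, 0.09465, 0.0823]  # link offsets d_i
ALPHA_UR5 = [np.pi/2, 0.0, 0.0, np.pi/2, -np.pi/2, 0.0]     # link twists alpha_i

def dh_transform(theta, d, a, alpha):
    # Homogeneous transform T^{i-1}_i for one joint, as derived above
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles):
    # B = T^0_6: base -> end-effector transform from the 6 joint angles
    T = np.eye(4)
    for theta, d, a, alpha in zip(joint_angles, D_UR5, A_UR5, ALPHA_UR5):
        T = T @ dh_transform(theta, d, a, alpha)
    return T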

On a side note for those using a Universal Robots arm (e.g., the UR5): the UR5 uses axis-angles, a unit vector indicating the direction of an axis of rotation together with an angle describing the magnitude of the rotation about that axis, to perform movements, and therefore conversions are required. The axis-angle representation can be calculated from the rotation part R of B as follows:

\theta = arccos\left({trace(R)-1 \over 2}\right), \quad \overrightarrow{r} = {\theta \over 2 \cdot sin(\theta)} \begin{bmatrix} R_{32} - R_{23}\\ R_{13} - R_{31}\\ R_{21} - R_{12}\end{bmatrix}
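In practice this conversion (in both directions) is handled by cv2.Rodrigues, which maps between a rotation matrix and exactly this rotation-vector form; B below is a placeholder pose:

import cv2
import numpy as np

B = np.eye(4)                        # placeholder base -> end-effector pose
rvec, _ = cv2.Rodrigues(B[:3, :3])   # 3x1 axis-angle (rotation) vector
tvec = B[:3, 3]

# The UR5 expects poses as [x, y, z, rx, ry, rz]
ur_pose = np.concatenate([tvec, rvec.ravel()])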
