
I have a Keyence barcode scanner that can give me the coordinates of a detected barcode in pixels. I have a 6 DOF robot whose end effector pose I know at any given time. My ultimate goal is to place a new label on top of an existing label on a box. I am really looking for (x, y, z, theta) from the sensor and passing it on to the robot. I can work without the Z value by incorporating a sensor to know when to stop exploring in Z.

I am figuring out how to calibrate this robot end effector to the barcode scanner. AFAIK, the scanner cannot detect a checkerboard pattern.

If I mount a barcode onto a sheet, attach it to the robot, and record a set of (pixels, 6D pose) readings, how would I figure out the 4x4 transform between the arm and the scanner?

trycatch22
    Which pixel position is the scanner giving you? Center of barcode? Bounding box? – FooTheBar Apr 26 '19 at 08:04
  • It gives the center and the four vertices. – trycatch22 Apr 26 '19 at 17:52
  • How many joints exist between the robot body and the end effector? What is the freedom of each joint? What is the freedom of the end effector? – koverman47 Apr 26 '19 at 21:54
  • 6 DOF. I haven't decided on the robot yet, but it could be a UR5 or the Meca500 (https://www.mecademic.com/products/Meca500-small-robot-arm)

    The end effector is passive. It'll most probably be a vacuum cup.

    – trycatch22 Apr 26 '19 at 22:50
  • Are 500 g of payload and this tiny workspace enough for the scanner and your label printer? – FooTheBar Apr 29 '19 at 08:19

1 Answer


As the scanner gives you the corner points of the barcode, you can compute the pose of the scanner relative to the barcode (e.g. with OpenCV and solvePnP) and, with that information, run most hand-eye calibration algorithms. The normal checkerboard pattern is also only a tool to easily locate known features on a planar surface, so there is no big difference between using the corners of the checkerboard and the corners of your barcode.

FooTheBar
  • This is useful. Thanks. A couple of questions. The input parameters for it are:

    objectPoints – Array of object points in the object coordinate space,

    imagePoints – Array of corresponding image points,

    cameraMatrix - Input camera matrix

    Say, I collected a dozen points of the barcode center and the corresponding four vertices.

    I assume those would be the imagePoints (2x12). What would the objectPoints be? the corresponding translation of the robot's end effector pose?

    and how do I determine the cameraMatrix?

    Thanks once again for your response

    – trycatch22 Apr 29 '19 at 18:48
  • You should put that in a separate question as it's no longer restricted to the barcode-camera calibration but is a general question of hand-eye calibration. – FooTheBar Apr 30 '19 at 07:15