This will be the saga of my search for an "easy" way to give my robots some capability to see their surroundings. Commercial systems are apparently pretty capable, but are often quite expensive: the CognaChrome, for example, runs about $3500.
I have seen a couple of approaches to getting vision capability. Steve Vorres of San Diego took a small black-and-white camera from a baby monitor and, with some relatively simple circuitry, demonstrated the ability to read one selected line in the frame and detect where a black line (for line following) falls on it. x, y, and z from the RSSC have added a color camera with a frame grabber to their PC-based system and have shown some success in line following.
I've also heard, from some respectable experts, the advice not to waste your time trying vision: "You'll never find time to build robots anymore." However, I figured that if I limited my goals for vision to a very basic initial task, maybe I could have some success. So, my initial goal is a vision system for my 2002 Trinity Firefighting contest robot which can detect the yellow furniture cylinders and provide enough data to permit the robot to plan navigation around the furniture. Detecting yellow against a white background; how hard can that be?
As design requirements go, a color camera seems highly desirable: detecting yellow against white should be much easier in color than picking out a light-colored object against a white wall in black and white. The current crop of WebCams shows that a color camera with its output already converted to digital is available at a reasonable cost ($50 or so). WebCams generally offer 320x240 resolution with streaming video, and often 640x480 at lower frame rates. This seems to be excellent resolution for robotic purposes.
So, my first approach was to use a WebCam (Creative WebCam Plus, $49.99). Hooking it up to my 866 MHz PC running Windows ME gave a great picture using Creative's proprietary software. However, I really didn't want to run under Windows since I don't trust it. So I set up Linux on a 400 MHz PC and used the software supplied with Linux. It was capable of getting and displaying a picture, but appeared to update the picture MUCH less often than the Windows system did. In any case, I wasn't too excited about trying to fit a PC-based system onto my firefighting robot.
Some internet research determined that the chip used in the Creative WebCam is made by OmniVision: their OV7620. Find the spec sheets on this chip here. (add link) This chip has a high-speed parallel interface which connects to another OmniVision chip, the OV511, which bridges the camera chip to a USB bus. I liked the idea of a parallel interface since I could then hook the camera up to a MiniRoboMind card. The MRM is a Motorola 68332-based robotics controller made by Mark Castellucci. (add link) Hacking the WebCam to get at the camera chip didn't seem practical due to the very tight wiring layout. However, more internet searching found a PC-board implementation of the OV7620 camera that comes with a lens built in and all the pins brought out to an IDC connector. Even better, there is an evaluation board available which reads the data from the camera, stores it in RAM, and lets another computer easily read the RAM through a simple parallel interface. The camera card and the evaluation board are available from www.electronic123.com at US$78.95 and $83.95 respectively. They cost more than a WebCam, but are much easier to work with.
The OV7620 appears to have many advantages for robotics use. Some of the problems with doing vision are the large amount of memory required to store the picture, the time required to download such large amounts of data, and the time to process the data to find whatever you're looking for. Considering these problems, the usefulness of the camera becomes a tradeoff between how thoroughly the picture can be analyzed and how old the data is by the time the analysis is done.
The quantity of memory is the receiving processor's problem; the MRM can be procured with 512K of RAM, which is plenty to hold a picture.
There are two download delays to consider. The camera downloads its picture to the evaluation board RAM at up to 30 Hz. This means that at least some of the data is already 33 milliseconds old before it even gets to the evaluation board. The download to the MRM is the second chunk of time. My implementation of reading data from the evaluation board into MRM RAM is much slower, taking up to ???? milliseconds to download a full 131K.
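The two delays above add up to a worst-case "data age" that simple arithmetic can estimate. A sketch, with the MRM transfer rate as a hypothetical placeholder until it is actually measured:

```c
/* Worst-case age (in ms) of the oldest pixel by the time the MRM has
   the full frame: one camera frame period plus the parallel download
   time. mrm_bytes_per_sec is a placeholder value to be replaced by a
   measured figure. */
unsigned long oldest_pixel_age_ms(unsigned frame_hz,
                                  unsigned long frame_bytes,
                                  unsigned long mrm_bytes_per_sec) {
    unsigned long frame_period_ms = 1000UL / frame_hz;
    unsigned long transfer_ms = (frame_bytes * 1000UL) / mrm_bytes_per_sec;
    return frame_period_ms + transfer_ms;
}
```

For example, a 131K (131072-byte) frame at 30 Hz moved at a guessed 1 MB/s comes out to roughly 33 + 131 = 164 ms.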
The OV7620, however, has features which can help with the download-time problem. The chip can be directed to provide just a subset (a window) of the full picture. So if you are only interested in part of the picture, like a horizontal sweep, you can direct the camera to read just a few lines, reducing the download from 131K to just a few K. See some samples of this capability. (add link)
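The savings from windowing are easy to quantify. A sketch, assuming 2 bytes per pixel (YUV 4:2:2) and treating the window dimensions as whatever gets programmed into the camera:

```c
/* Bytes that must be downloaded for a window of the picture, assuming
   2 bytes per pixel. A few lines of a horizontal sweep is a tiny
   fraction of the full frame. */
unsigned long window_bytes(unsigned width, unsigned lines) {
    return (unsigned long)width * lines * 2UL;
}
```

A 4-line sweep at 640 pixels wide is only 5120 bytes, versus over 600K for a full 640x480 frame at the same 2 bytes per pixel.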
The camera has many features which reduce the need for the processor to control it. It has AutoExposure, AutoBrightness, and AutoWhiteBalance. I don't know much about cameras (yet), but having these features enabled makes it possible to get a consistently good picture under all kinds of lighting conditions. These features can also be turned off, which allows other tricks; for instance, by turning off the auto modes and setting brightness very low and contrast high, a candle flame (in the firefighting contest) can be made to appear as a small white patch in an otherwise completely black background. That should be easy to spot (I hope).
The camera does NOT have autofocus, but a fixed setting gives pretty good pictures of anything two feet away or farther, and blurry pictures of closer objects. And I don't think my picture-analysis software will care if the picture is a little blurry anyway.
Connecting the Evaluation board to the MRM
description, wiring and picture.
The evaluation board's interface to the outside world is a standard 25-pin parallel plug. This plug brings out all the wires needed to control the camera chip and download a picture. For reasons unknown, the evaluation board designers didn't bring out the capability for the PC (or MRM) to TAKE a picture. The picture is taken by moving a switch on the evaluation board. Since the switch just changes a logic signal from +5 to zero and back, it should be possible to let the MRM control that signal. That change is still to come.
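Since the switch just pulls one logic line low and releases it, the eventual MRM version could be as simple as pulsing one output bit. A sketch, where the port and bit position are hypothetical placeholders for whichever MRM output pin the signal actually gets wired to:

```c
/* Hypothetical "take picture" pulse: drive the snap line low (as if
   the switch were pressed), then release it back high. SNAP_BIT is a
   placeholder for the real wired-up output bit; a real version would
   also delay between the two writes long enough for the evaluation
   board to register the edge. */
#define SNAP_BIT 0x01u

void snap_picture(volatile unsigned char *port) {
    *port &= (unsigned char)~SNAP_BIT;  /* line to 0 V: "switch pressed" */
    *port |= (unsigned char)SNAP_BIT;   /* line back to +5 V: released */
}
```

Written against a pointer so the same routine works whether the line ends up on a 68332 port or elsewhere.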
One thing which is rather deceptive is the picture resolution. The camera chip is spec'd at 640x480, with a total pixel count of 664x492, and the pictures as presented by the included software and processed on a PC look like full VGA (640x480) quality. However, this chip takes color pictures using a Bayer filter, which overlays the 664x492 matrix of pixels with a matrix of colored filters. The first line may be green, red, green, red, etc., and the second line blue, green, blue, green, etc. So for each two-by-two batch of pixels, you get one red pixel, one blue pixel, and two green pixels. (add picture of bayer filter) For any particular color, then, you only have 1/4 of the total resolution (except for green, which is 1/2). If you read in the data, download it to a PC, and draw just the individual pixels at their respective R, G, and B intensities, the picture doesn't appear nearly as good as the pictures generated by the supplied software, presumably because that software interpolates the missing colors at each pixel. However, clearly the same information is there, and it still looks good enough for a robot to use.
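The Bayer layout described above (even rows G, R, G, R...; odd rows B, G, B, G...) boils down to a simple coordinate lookup. This sketch just answers "which color filter sits over this photosite"; a demosaicing step would then interpolate each pixel's two missing colors from its neighbors:

```c
/* Which Bayer filter color covers pixel (x, y), for a pattern whose
   even rows are G R G R ... and odd rows are B G B G ... (x and y
   both counted from 0). Every 2x2 block yields one R, one B, and two
   G samples. */
char bayer_color(unsigned x, unsigned y) {
    if ((y & 1u) == 0u)                    /* even row: G R G R ... */
        return ((x & 1u) == 0u) ? 'G' : 'R';
    else                                   /* odd row:  B G B G ... */
        return ((x & 1u) == 0u) ? 'B' : 'G';
}
```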
RGB vs. YUV
The camera chip supports several different output formats, including RGB and YUV. I sort of understood how RGB works, so I intended to use it. But when I selected RGB mode on the camera chip, I found that the pictures weren't nearly as good as in YUV mode. Often the color was way off...very red. After some time (a minute or so), the picture would get better (for unknown reasons), but it was still inferior to the YUV pictures. Rereading the camera chip spec, I noted that RGB mode is also referred to as "raw data" mode. I finally concluded that in RGB mode the camera just puts out the raw pixel readings and provides none of the auto features of YUV mode. While I COULD do white balance and brightness adjustments in the MRM, I'm sure it would take a lot of processing time.
So, what is YUV? It is a different way to encode the same information as RGB; in fact, there are standard equations to convert RGB to YUV and back again. And YUV even seems to have some advantages for picture processing. The INTENSITY of the picture is represented by the Y term, while the color is coded by the U and V terms. So, for my purposes, I only have to look at two numbers (rather than three) to distinguish yellow from white.
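One common set of conversion equations (the ITU-R BT.601 form, with U and V offset to center on 128) looks like this; the OV7620's exact scaling may differ slightly, but the structure is the same, with Y carrying intensity and U and V carrying scaled color differences:

```c
/* RGB to YUV per one common (BT.601-style) definition:
   Y = 0.299 R + 0.587 G + 0.114 B   (intensity)
   U = 0.492 (B - Y) + 128           (blue minus intensity)
   V = 0.877 (R - Y) + 128           (red minus intensity)
   Inputs and outputs in the 0..255 range. */
void rgb_to_yuv(double r, double g, double b,
                double *y, double *u, double *v) {
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *u = 0.492 * (b - *y) + 128.0;
    *v = 0.877 * (r - *y) + 128.0;
}
```

For pure white (255, 255, 255), this gives Y = 255 with U and V both exactly 128; any gray likewise lands at U = V = 128, which is what makes the U/V pair so convenient for color checks.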
add a picture of a U/V graph showing where colors are.
The color yellow is generated by a combination of red and green with relatively little blue. Since U is (roughly) a scaled B minus Y and V a scaled R minus Y, yellow can be detected (in a simplistic way) as U falling well below the mid value of 128 while V stays near, or slightly above, 128; white, by contrast, sits at U and V both near 128.
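As a sketch of that simplistic test, using the convention above (U tracking blue-minus-intensity, V tracking red-minus-intensity, both centered on 128), and with threshold values that are illustrative guesses rather than tuned numbers:

```c
/* Crude yellow detector in U/V space. Yellow has almost no blue, so
   U falls well below 128, while V sits near or somewhat above 128.
   The 60/120/190 thresholds are untuned guesses; white (128, 128)
   and saturated red (very high V) both fail the test. */
int is_yellow(int u, int v) {
    return u < 60 && v >= 120 && v <= 190;
}
```

This is exactly the payoff of YUV for this task: a per-pixel yellow/not-yellow decision is two comparisons on U and one range check on V, with no white-balance math needed on the MRM.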
Add a picture showing color bar vs Y,U & V plots.