During my doctoral research, I used the "classic" vision system consisting of a camera, a frame grabber, and a personal computer.
Although this is a cost-effective development platform, it has its limitations. Like many machine-vision researchers before me, I found that low-level image processing is a bottleneck. It limits the number of images per second the system can process, which in turn limits the maximum speed at which the robot can safely move.
The bottleneck is that all pixels of all images have to be processed by a single processor, however powerful. The amount of data that has to be transferred back and forth is huge: in the SIMERO system, the four grayscale cameras produced more than 40 MBytes/s.
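A quick back-of-the-envelope calculation shows how such a data rate arises. The resolution and frame rate below are assumptions (PAL-like values chosen for illustration), not figures taken from the SIMERO documentation:

```python
# Back-of-the-envelope data-rate estimate for a four-camera setup.
# Resolution and frame rate are assumed PAL-like values, not
# documented SIMERO parameters.
CAMERAS = 4
WIDTH, HEIGHT = 768, 576   # assumed resolution per camera
FPS = 25                   # assumed frame rate
BYTES_PER_PIXEL = 1        # 8-bit grayscale

rate = CAMERAS * WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
print(f"{rate / 1e6:.1f} MB/s")  # ≈ 44.2 MB/s with these assumptions
```

With values in this range, the raw video stream alone already exceeds 40 MBytes/s before any processing has taken place.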
In the first stages of image processing, the amount of data is usually reduced significantly. If this data reduction could be done locally in the camera, the rest of the system would have far more processing power and bandwidth available, and the robot could safely move much faster.
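As a toy illustration of this kind of early-stage reduction, consider background subtraction: a full frame shrinks to a short list of changed pixels. The scene contents and function name here are made up for the example:

```python
# Toy illustration of early-stage data reduction via background
# subtraction: a full frame is reduced to the coordinates of the
# pixels that changed. All values are illustrative.
def changed_pixels(frame, background, threshold=10):
    """Return (x, y) coordinates whose value differs from the
    background by more than `threshold`."""
    return [(x, y)
            for y, (row, bg_row) in enumerate(zip(frame, background))
            for x, (p, b) in enumerate(zip(row, bg_row))
            if abs(p - b) > threshold]

background = [[0] * 8 for _ in range(8)]   # static empty scene
frame = [row[:] for row in background]
frame[3][4] = 200                          # a single "moving" object
print(changed_pixels(frame, background))   # [(4, 3)]
```

Here 64 pixel values collapse to one coordinate pair; doing this step inside the camera would mean only the reduced data ever crosses the bus.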
»Prof. Ishikawa of the »Ishikawa-Namiki laboratory at »Tokyo University developed a »vision-chip that has one arithmetic logic unit (ALU) with local memory for each pixel. It can therefore process all pixels of an image simultaneously.
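Conceptually, such a chip works like a SIMD machine: one instruction stream is broadcast to every pixel's ALU. The sketch below emulates this model with an ordinary loop; on the actual chip each pixel executes the instruction in parallel. The function and instruction names are my own, not the chip's programming interface:

```python
# Conceptual sketch of the SIMD vision-chip model: the same
# instruction is applied to every pixel's local ALU. On the chip
# this happens in parallel; here it is emulated sequentially.
# Names are illustrative, not the chip's actual API.
def simd_step(image, instruction):
    """Apply one broadcast instruction to every pixel 'at once'."""
    return [[instruction(p) for p in row] for row in image]

image = [[10, 200],
         [90, 250]]
# One broadcast instruction: binarize at threshold 128.
binary = simd_step(image, lambda p: 1 if p >= 128 else 0)
print(binary)  # [[0, 1], [0, 1]]
```

Because every pixel is processed in the same step, the time per image is independent of the number of pixels, which is exactly what removes the low-level processing bottleneck.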
During my postdoctoral research from April 2004 to March 2005, I am focusing on using this vision-chip for fast and safe human-robot coexistence. My homepage at the laboratory is »here. A »JSPS »postdoctoral grant finances the research, with the »Alexander-von-Humboldt Foundation as the nominating organization.