A friend of mine once showed me how layers work in Photoshop, and that's when I 'got' how cool graphic design is and realized its potential. When I see a well-detailed image, the shadows, colors, and perceived depth are evident. Could we take this layered approach to building a robotic vision system? What would be required? First, let's scrap the limitation of the eyeball's physical size; we just aren't there yet in terms of shrinking the tech. For now, let's think of video layers, each with its own sensor input. What could we use as layer data?
Layer 1: HD camera input
Layer 2: IR sensor input
Layer 3: Thermal input
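To make the layered idea concrete, here's a minimal sketch of compositing those three sensor layers, the way Photoshop blends stacked layers with opacities. Everything here is hypothetical: the frames are random stand-ins for aligned HD, IR, and thermal inputs, and the weights are made up. A real system would first need to register (align) the sensors to a common frame.

```python
import numpy as np

def composite(layers, weights):
    """Blend per-sensor frames into one view, like stacked Photoshop layers.

    Each layer is a normalized 2D array (values in [0, 1]), already aligned.
    Weights act like per-layer opacity and are normalized to sum to 1.
    """
    total = sum(weights)
    out = np.zeros_like(layers[0], dtype=float)
    for frame, w in zip(layers, weights):
        out += (w / total) * frame
    return out

# Stand-ins for the three layer inputs (4x4 test frames):
hd = np.random.rand(4, 4)       # Layer 1: HD camera
ir = np.random.rand(4, 4)       # Layer 2: IR sensor
thermal = np.random.rand(4, 4)  # Layer 3: thermal sensor

# Weights are arbitrary here; a real system might tune them per lighting condition.
view = composite([hd, ir, thermal], weights=[0.6, 0.25, 0.15])
print(view.shape)  # (4, 4)
```

One nice property of this layered design: swapping a sensor in or out is just adding or removing an entry in the list, without touching the blending logic.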
Bedtime, to be continued....