Title
On Specifying and Performing Visual Tasks with Qualitative Object Models
Abstract
Vision-based control has aimed to develop general-purpose, high-accuracy systems for manipulating objects. While much of the scientific and technological infrastructure needed to accomplish this aim is now in place, several stumbling blocks remain. One continuing issue is accuracy and its relationship to system calibration. We describe a generative task structure for vision-based control of motion that admits a simple, geometric approach to task specification. At the same time, this approach allows one to state precisely what types of miscalibration lead to errors in task performance. A second hurdle has been the programmability of hand-eye systems. We argue, however, that a structured object representation sufficient for flexible hand-eye coordination is achievable. The result is a high-level, object-centered language for expressing hand-eye tasks.
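The "simple, geometric approach to task specification" mentioned above can be pictured as image-space error functions over tracked features, in the style common to visual-servoing task specification. The sketch below is a generic illustration of that idea only; the function names, constraint types, and tolerance are illustrative assumptions, not the paper's actual language or API.

```python
import numpy as np

# A hedged sketch: a visual task as a geometric error function over
# tracked image features, which is zero exactly when the task is done.
# Names and thresholds are illustrative assumptions, not the paper's API.

def point_to_point(f_obs, f_goal):
    """Error for bringing one tracked image point onto another."""
    return np.asarray(f_obs, dtype=float) - np.asarray(f_goal, dtype=float)

def point_to_line(f_obs, line):
    """Signed distance of an image point from a line (a, b, c): ax + by + c = 0."""
    a, b, c = line
    x, y = f_obs
    return np.array([(a * x + b * y + c) / np.hypot(a, b)])

def task_done(error, tol=1.0):
    """A task is complete when its image-space error is within tolerance (pixels)."""
    return np.linalg.norm(error) < tol

# Example: drive a gripper fiducial onto a target corner in the image.
e = point_to_point(f_obs=(312.4, 201.9), f_goal=(310.0, 200.0))
print(task_done(e, tol=1.0))  # False until the tracker reports coincidence
```

Specifying tasks this way keeps them in image coordinates, which is why certain kinds of camera miscalibration can leave task performance unaffected while others provably cannot.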
Year
2000
DOI
10.1109/ROBOT.2000.844124
Venue
ICRA
Keywords
control systems, computer science, motion control, feedback control, robot kinematics, object model, visual tracking, calibration, artificial intelligence, object recognition
Field
Social robot, Motion control, 3D single-object recognition, Robot vision, Robot calibration, Computer science, Control engineering, Human–computer interaction, Optical tracking, Artificial intelligence, Generative grammar, Cognitive neuroscience of visual object recognition
DocType
Conference
Volume
1
Issue
1
ISSN
1050-4729
Citations
3
PageRank
0.45
References
14
Authors
2
Name | Order | Citations | PageRank
Gregory D. Hager | 1 | 3871 | 400.32
Zachary Dodds | 2 | 224 | 33.70