The mechanical structure of a robot must be controlled to perform tasks. The control
of a robot involves three distinct phases – perception, processing, and action. Sensors give
information about the environment or about the robot itself. This information
is then processed to calculate the appropriate signals to the actuators which
move the mechanical structure.
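As a rough illustration of this perception-processing-action cycle, the sketch below wires three placeholder functions into a simple control loop. All of the function names and values here are hypothetical stand-ins, not a real robot API; a real system would replace them with its own sensor and actuator interfaces.

```python
# Minimal perception-processing-action loop (illustrative only).
import time

def read_sensors():
    """Perception: return raw measurements, e.g. a distance reading in metres."""
    return {"distance_m": 1.2}  # stand-in value for illustration

def compute_command(measurements, target_distance_m=0.5):
    """Processing: turn a measurement into an actuator command."""
    error = measurements["distance_m"] - target_distance_m
    return {"forward_speed": 0.8 * error}  # simple proportional rule

def apply_command(command):
    """Action: send the command to the actuators (here, just printed)."""
    print(f"drive at {command['forward_speed']:.2f} m/s")

if __name__ == "__main__":
    for _ in range(3):   # a few iterations of the loop
        apply_command(compute_command(read_sensors()))
        time.sleep(0.1)  # fixed loop period
```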
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands. At higher levels, sensor fusion is first used to estimate parameters of interest from the noisy sensor data, an immediate task is inferred from these estimates, and techniques from control theory convert the task into commands that drive the actuators.
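The short sketch below illustrates this estimate-then-control pattern: two hypothetical range sensors are fused into a single estimate, and a proportional control law turns a simple task (hold a fixed distance from a wall) into an actuator command. The sensor names, fusion weights, and gain are assumed for illustration only.

```python
def fuse_range(sonar_m, lidar_m, sonar_weight=0.3, lidar_weight=0.7):
    """Sensor fusion (simplest form): a weighted average of two noisy range
    readings, trusting the lidar more than the sonar."""
    return sonar_weight * sonar_m + lidar_weight * lidar_m

def wall_follow_command(estimated_range_m, desired_range_m=0.5, gain=2.0):
    """Control: a proportional law that turns the task 'hold 0.5 m from the
    wall' into a steering command for the actuators."""
    error = estimated_range_m - desired_range_m
    return gain * error  # positive -> steer toward the wall

if __name__ == "__main__":
    estimate = fuse_range(sonar_m=0.62, lidar_m=0.58)
    steering = wall_follow_command(estimate)
    print(f"fused range: {estimate:.3f} m, steering command: {steering:+.2f}")
```

In practice the fusion step would use a filter (for example a Kalman filter) and the controller would be tuned to the robot's dynamics, but the division of labour is the same: estimate first, then control.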
Control systems may also have varying levels of autonomy.
1. Direct interaction is used for haptic or tele-operated devices, and the human has nearly complete control over the robot's motion.
2. Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
3. An autonomous robot may go for extended periods of time without human interaction. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous, but operate in a fixed pattern.
Another classification takes into account the interaction between human control and the machine's motions.
1. Teleoperation. A human controls each movement; every change of a machine actuator is specified by the operator.
2. Supervisory control. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
3. Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
4. Full autonomy. The machine will create and complete all its tasks without human interaction.