An Assistive Navigation Paradigm for Semi-Autonomous Wheelchairs Using Force Feedback and Goal Prediction
John Staton, MS, Manfred Huber, PhD
Illustration 1 - Outer Loop Diagram, showing the progression from the external user preference system, to the goal selection system, to the harmonic function path planner, and into the run-time system.
Photograph - The implementation setup: two monitors, one for the simulation window and one for the "Dashboard" GUI interface, and the Microsoft Sidewinder Force Feedback Pro joystick.
The objective of the outer procedural loop is to estimate the user's desired navigation goal from the information available to the system and to provide this estimate to the run-time loop, enabling it to direct the user toward that goal. The outer loop combines run-time data about the user's position and behavior with information about the set of potential goals in the environment, provided by an external user preference system, to predict the intended goal location. This prediction is made by comparing a set of recent user actions to the actions necessary for approaching each individual goal: the more similar the user's actions are to the path that would approach a goal, the more likely that goal is the user's intended destination. Once the most likely goal is selected, the system calculates the harmonic function for that goal and passes it, in a discretized grid format, to the Run-Time System. This process is repeated whenever the user's intended goal needs to be recalculated based on new location, orientation, and behavioral data; the repetition can occur periodically at a fixed rate, or can potentially be event-driven, triggered when the user's actions no longer match the path to the selected goal.
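As a rough illustration of this prediction step, the sketch below scores each candidate goal by how closely the user's recent headings match the direction needed to approach it. This is a minimal stand-in for the comparison described above, not the actual system: the function names, the pose representation, and the use of mean angular error as the similarity measure are all assumptions made here for clarity.

```python
import math

def heading_toward(position, goal):
    """Bearing (radians) from a position to a candidate goal."""
    return math.atan2(goal[1] - position[1], goal[0] - position[0])

def angular_error(a, b):
    """Smallest absolute difference between two angles (radians)."""
    d = (a - b + math.pi) % (2 * math.pi) - math.pi
    return abs(d)

def predict_goal(recent_poses, goals):
    """Pick the goal whose approach direction best matches the user's
    recent behavior.
    recent_poses: list of (x, y, heading) samples of the wheelchair.
    goals: list of (x, y) candidate goal locations."""
    best_goal, best_score = None, float("inf")
    for goal in goals:
        # Mean angular error between what the user did and what
        # approaching this goal would require; lower is more likely.
        score = sum(
            angular_error(heading, heading_toward((x, y), goal))
            for x, y, heading in recent_poses
        ) / len(recent_poses)
        if score < best_score:
            best_goal, best_score = goal, score
    return best_goal
```

For example, a user who has been driving due east from the origin would be matched to a goal lying to the east rather than one to the north. An event-driven recomputation, as mentioned above, could simply re-run this scoring whenever the best goal's error exceeds some threshold.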
Illustration 2 - Run-Time Loop Diagram, showing the progression of the wheelchair's location and orientation into a system that generates the force effect, then to the joystick for force-effect playback; the joystick's position is translated into motor commands, which in turn affect the wheelchair's location and orientation.
Screenshot 1 - The "Dashboard" GUI interface, with the path-planning display.
The Run-Time Loop runs as the user is directing the wheelchair around the environment. In this loop, location and orientation data are first acquired from the wheelchair and then used to produce a force vector, derived in terms of a direction and a "risk" factor, which are discussed in the next sections. The direction of the force vector is a translation of the gradient of the harmonic function at the user's location. The amount of "risk" that the user's action incurs is a heuristic whose factors incorporate the velocity of the wheelchair, the potential value of the harmonic function at the user's location, and the next potential value of the harmonic function in the direction the user is heading. This vector is then translated into a force-feedback effect that is played on the user's joystick. The joystick's position is finally used to drive the wheelchair's motors, and the loop repeats. In this process, the path prediction of the autonomous system influences the wheelchair's behavior only indirectly, by providing guidance to the user. The actual drive commands are always provided by the user (although the user could opt to simply follow the force vector, and thus follow the harmonic function path).