Watch the full demo showing the complete robot stack in action:
https://github.com/Kushion32/security_robot/raw/master/media/fulldemo.mov
More demonstration videos and photos appear throughout the feature sections below.
This project implements a full ROS-based software stack for a MOVO service robot, featuring:
- Navigation to waypoints with UI integration
- Person-following safety behavior using leg detection
- Automatic head/screen flipping interactions
- Real-time person counting using YOLOv8 on RealSense camera feed
- Face authentication for unauthorized user detection
- Easy map management for SLAM and room labeling
- Operator UI (`movo-nav-ui`) and attendee UI (`secure-a-bot-ui/robot-ui`)
This repository combines ROS navigation, interaction behaviors, and two web UIs:

- `movo-nav-ui`: operator-facing dashboard for navigation and map tools
- `secure-a-bot-ui/robot-ui`: attendee-facing touchscreen UI for selecting destinations
- `src/kinova-movo/movo_nav`: waypoint backend (`goto_points.py`) and nav integration
- `src/kinova-movo/movo_people`: people detection, person-following monitor, head/screen flip nodes
## Features

### Waypoint Navigation

- Send robot to named waypoints from UI or terminal
- Return robot to home after user confirmation
- Publishes UI-ready status and arrival topics: `/ui_nav_ready`, `/ui_waypoints_list`, `/ui_robot_arrived`
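Per the UI topics section below, `/ui_waypoints_list` carries the waypoint names as a JSON array serialized into a `std_msgs/String`, so the UI only needs a JSON decode to populate its destination buttons. A minimal sketch of that round trip (the exact payload layout beyond a flat array of names is an assumption):

```python
import json

def encode_waypoints(names):
    """Pack waypoint names for publishing on /ui_waypoints_list."""
    return json.dumps(list(names))

def decode_waypoints(data):
    """Unpack the names on the UI side."""
    return json.loads(data)

payload = encode_waypoints(["washroom", "elevator", "stairs"])
# decode_waypoints(payload) -> ["washroom", "elevator", "stairs"]
```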
### Person Following

- Uses leg detections and monitor logic to ensure the user remains with the robot
- Robot pauses if the person is lost and resumes when the person returns
- Key topics: `/leg_tracker_measurements`, `/movo/person_following_status`
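The pause/resume behavior can be sketched as a small state machine: follow while detections arrive, pause after the person has been lost for longer than a timeout, and resume on the next detection. This is an illustration, not the actual `movo_people` node; the class name and the 2-second timeout are assumptions:

```python
import time

class PersonFollowingMonitor:
    """Pause when the tracked person is lost; resume when they return.

    Simplified sketch of the monitor that, in the real stack, consumes
    /leg_tracker_measurements and publishes /movo/person_following_status.
    """

    def __init__(self, lost_timeout_s=2.0):
        self.lost_timeout_s = lost_timeout_s  # assumed value, not from the repo
        self.last_seen = None
        self.status = "WAITING"

    def update(self, person_detected, now=None):
        now = time.monotonic() if now is None else now
        if person_detected:
            self.last_seen = now
            self.status = "FOLLOWING"
        elif self.last_seen is None or now - self.last_seen > self.lost_timeout_s:
            self.status = "PAUSED"  # stop navigating until the person returns
        return self.status

monitor = PersonFollowingMonitor()
monitor.update(True, now=0.0)   # person seen: FOLLOWING
monitor.update(False, now=3.5)  # lost past the timeout: PAUSED
```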
### Head/Screen Flipping

- Auto mode: detects which side a person is standing on from depth and rotates the head/tablet accordingly
- Supports return-to-front behavior when the navigation trip is complete and the user sends the robot back
- Interfaces:
  - Service mode: `/movo/flip_to_back_screen`, `/movo/flip_to_front_screen`
  - Auto mode topics: `/movo/auto_screen_flip/command`, `/movo/auto_screen_flip/state`
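The `trigger_distance_m` and `clear_distance_m` parameters on `auto_screen_flip.launch` (0.40 m and 0.60 m in the example later in this README) imply hysteresis: the flip triggers when a person comes closer than the trigger distance and only clears once they move beyond the clear distance, so the screen doesn't oscillate near the boundary. A minimal sketch of that logic (illustrative; the real node reads depth images and drives the pan joint):

```python
class ScreenFlipHysteresis:
    """Hysteresis on person distance, mirroring trigger_distance_m / clear_distance_m."""

    def __init__(self, trigger_m=0.40, clear_m=0.60):
        assert clear_m > trigger_m, "clear distance must exceed trigger distance"
        self.trigger_m = trigger_m
        self.clear_m = clear_m
        self.flipped = False

    def update(self, person_distance_m):
        if not self.flipped and person_distance_m < self.trigger_m:
            self.flipped = True   # person is close: flip screen toward them
        elif self.flipped and person_distance_m > self.clear_m:
            self.flipped = False  # person moved away: return screen to front
        return self.flipped
```

Between 0.40 m and 0.60 m the state simply holds, which is the point of keeping the two thresholds apart.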
### Person Counting (YOLOv8)

- Uses YOLOv8 object detection on the RealSense color camera stream
- Continuously counts the number of people in the camera's field of view
- Publishes the count to `/people/count` (`std_msgs/Int32`)
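Counting people from YOLOv8 output reduces to filtering each frame's detections by the COCO `person` class (id 0) and a confidence threshold. A sketch over plain `(class_id, confidence)` tuples (the real node runs `ultralytics` inference on the RealSense stream; the 0.5 threshold is an assumption):

```python
PERSON_CLASS_ID = 0  # "person" in the COCO class list used by YOLOv8

def count_people(detections, min_conf=0.5):
    """Count confident person detections in one frame.

    detections: iterable of (class_id, confidence) pairs.
    """
    return sum(1 for cls, conf in detections
               if cls == PERSON_CLASS_ID and conf >= min_conf)

frame = [(0, 0.91), (0, 0.47), (56, 0.88), (0, 0.76)]  # two confident people, one chair
# count_people(frame) -> 2
```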
### Face Authentication

- Validates faces against authorized profiles using `face_recognition`
- Identifies whether a detected person is an authorized user or unauthorized
- Publishes status to `/face_auth/status` (`std_msgs/String`)
- Publishes an annotated camera feed with bounding boxes to `/face_auth/overlay`
- Authorized face encodings are loaded from a local directory (e.g., `config/known_faces/`) that should be pre-populated with images of authorized users (e.g., `me.jpg`)
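The `face_recognition` library represents each face as a 128-dimension encoding and, by default, treats two faces as a match when the Euclidean distance between encodings is at most 0.6. The authorization decision can therefore be sketched as follows (illustrative, using short vectors instead of real 128-d encodings; the real node also handles detection and overlay drawing):

```python
import math

def is_authorized(known_encodings, candidate, tolerance=0.6):
    """Return True if candidate matches any known encoding.

    Mirrors face_recognition.compare_faces: Euclidean distance <= tolerance.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return any(dist(k, candidate) <= tolerance for k in known_encodings)

known = [[0.1, 0.2, 0.3]]  # would be loaded from config/known_faces/ images
# is_authorized(known, [0.1, 0.2, 0.35]) -> True  (distance 0.05)
# is_authorized(known, [0.9, 0.9, 0.9])  -> False (distance ~1.22)
```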
### Map Management

- Handles dynamic SLAM map saving and loading at runtime
- Manages room labels and mapping statuses dynamically
- Interface: JSON command inputs on `/map_manager/command`
- Publishes JSON status on `/map_manager/status`
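The `/map_manager/command` payload is a JSON object carried inside a `std_msgs/String`. Building and parsing the two commands shown later in this README (`start_mapping` and `list_maps`) is just a `json` round trip; any fields beyond `action` are not documented here:

```python
import json

def make_command(action):
    """Serialize a map-manager command for publishing on /map_manager/command."""
    return json.dumps({"action": action})

def parse_command(data):
    """Decode an incoming command string and return the action name."""
    return json.loads(data)["action"]

msg = make_command("start_mapping")
# msg == '{"action": "start_mapping"}'
# parse_command(msg) -> "start_mapping"
```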
### Operator UI

- ROS-connected operator dashboard
- Waypoint/nav control and map utilities
- Path: `movo-nav-ui`
### Attendee UI

- Touch-friendly destination selection for attendees
- Hardcoded destination buttons mapped to waypoint names (e.g. `washroom`, `elevator`, `stairs`)
- Path: `secure-a-bot-ui/robot-ui`
## Repository Layout

```
catkin_ws/
  src/kinova-movo/
    movo_nav/
    movo_people/
    movo_demos/
    ...
  movo-nav-ui/
  secure-a-bot-ui/robot-ui/
  COMMANDS.md
```
## Prerequisites

- Ubuntu + ROS Noetic
- Catkin workspace configured at `~/catkin_ws`
- MOVO robot network access (`movo1`/`movo2` reachable)
- Node.js and npm for both UIs
## Startup

- Power on the MOVO main system (movo1).
- Power on the secondary computer (movo2) from the back panel.
- Verify connectivity:

  ```
  ping movo1
  ping movo2
  ```

- Source the ROS workspace:

  ```
  cd ~/catkin_ws
  source devel/setup.bash
  ```

- Follow the rest of the setup instructions in `src/kinova-movo/README.md` for extra networking setup.
## ROS Networking

Set ROS networking in each terminal (update the values for your network). The values below work when connected to the `movo` network broadcast by the robot:

```
export ROS_MASTER_URI=http://10.66.171.2:11311
export ROS_IP=10.66.171.254
export ROS_HOSTNAME=10.66.171.254
source ~/catkin_ws/devel/setup.bash
```

Notes:

- Use the interface/IP reachable by the robot network.
- If running over a wired tether, prefer the robot-subnet IP (for example `10.66.x.x`) instead of hotspot-only IPs.
## Build

```
cd ~/catkin_ws
catkin build
source devel/setup.bash
```

Once you've run the launch commands on each of movo1 and movo2 as detailed in `src/kinova-movo/README.md`, you can run the following core commands for each feature.
## Running the Features

### Navigation

Run the navigation stack:

```
roslaunch movo_demos map_nav.launch sim:=false local:=true map_file:=movo_map
```

Open the RViz navigation view:

```
roslaunch movo_viz view_robot.launch function:=map_nav
```

Save a map:

```
rosrun map_server map_saver -f ~/movo_map
```

Start the waypoint navigator (UI mode):

```
rosrun movo_nav goto_points.py _interactive:=false _ui_mode:=true
```

Other modes:

```
# No UI confirmation required
rosrun movo_nav goto_points.py _interactive:=false _ui_mode:=false

# Interactive terminal mode
rosrun movo_nav goto_points.py

# Custom auto-return delay
rosrun movo_nav goto_points.py _ui_mode:=false _auto_return_delay:=10.0
```

### Person Following

Start the person-following system (can be used while navigating with waypoints):
```
roslaunch movo_people person_following_navigation.launch
```

Verify detections/status:

```
rostopic echo /leg_tracker_measurements
rostopic echo /movo/person_following_status
```

### Screen Flip

Manual service-based node:
```
rosrun movo_people screen_flip_node.py
rosservice call /movo/flip_to_back_screen
rosservice call /movo/flip_to_front_screen
```

Automatic depth-based node:

```
roslaunch movo_people auto_screen_flip.launch trigger_distance_m:=0.40 clear_distance_m:=0.60
```

Monitor/send commands:

```
rostopic echo /movo/auto_screen_flip/state
rostopic pub -1 /movo/auto_screen_flip/command std_msgs/String "data: 'home'"
```

### Person Counting

Run the YOLOv8 person-counting node (connects to `/camera/color/image_raw`):
```
rosrun movo_people realsense_person_counter.py
```

Monitor the count:

```
rostopic echo /people/count
```

### Face Authentication

Run the face authentication node (requires a pre-configured known face image, `me.jpg`, in the config directory):
```
roslaunch movo_nav face_authentication.launch
```

Monitor the status and view the video feed:

```
rostopic echo /face_auth/status
rosrun image_view image_view image:=/face_auth/overlay
```

### Map Manager

Start the runtime mapping session and label manager node:
```
rosrun movo_nav map_manager.py
```

Send a command manually to start mapping or list saved maps:

```
rostopic pub -1 /map_manager/command std_msgs/String "data: '{\"action\": \"start_mapping\"}'"
rostopic pub -1 /map_manager/command std_msgs/String "data: '{\"action\": \"list_maps\"}'"
```

### Operator UI (movo-nav-ui)

```
cd ~/catkin_ws/movo-nav-ui
npm install
npm start
```

Default local URL: http://localhost:3000
### Attendee UI (secure-a-bot-ui/robot-ui)

```
cd ~/catkin_ws/secure-a-bot-ui/robot-ui
npm install
npm run dev
```

If needed, set the ROSBridge URL in the UI settings or via an environment variable before launch:

```
export VITE_ROSBRIDGE_URL=ws://<rosbridge_host>:9090
```

## UI Topics

Published to the UI:

- `/ui_nav_ready` (`std_msgs/Bool`)
- `/ui_waypoints_list` (`std_msgs/String`, JSON array)
- `/ui_robot_arrived` (`std_msgs/String`)

Command topic: `/ui_navigation_command` (`std_msgs/String`), with payloads `go:<waypoint_name>` and `confirm:<waypoint_name>`.

Example:

```
rostopic pub -1 /ui_navigation_command std_msgs/String "data: 'go:washroom'"
rostopic pub -1 /ui_navigation_command std_msgs/String "data: 'confirm:washroom'"
```

## Media

Add your recorded media here.
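On the backend side, the waypoint navigator has to split these `go:`/`confirm:` payloads into a verb and a waypoint name. A minimal parser for the documented format (a sketch; the actual parsing code in `goto_points.py` may differ):

```python
def parse_ui_command(data):
    """Split 'go:<waypoint>' / 'confirm:<waypoint>' into (verb, waypoint).

    Raises ValueError for anything that doesn't match the documented format.
    """
    verb, sep, waypoint = data.partition(":")
    if sep != ":" or verb not in ("go", "confirm") or not waypoint:
        raise ValueError(f"unrecognized UI command: {data!r}")
    return verb, waypoint

# parse_ui_command("go:washroom")      -> ("go", "washroom")
# parse_ui_command("confirm:washroom") -> ("confirm", "washroom")
```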
- Notes: Navigation testing
- Notes: Shows the safety behavior using leg detection
  https://github.com/Kushion32/security_robot/raw/master/media/headmoving.MOV
- Notes: Evaluates YOLOv8 counts
  https://github.com/Kushion32/security_robot/raw/master/media/facedetection.mp4
- Notes: Live SLAM demonstrations
## Troubleshooting

Robot does not move to waypoints:

- Check that `goto_points.py` is running.
- Check nav readiness: `rostopic echo /ui_nav_ready`
- Confirm the action server exists: `rostopic list | grep move_base`
- Confirm the waypoint file contains the exact key name: `cat ~/catkin_ws/src/kinova-movo/movo_nav/movo_waypoints.yaml`
- Confirm the UI command is arriving: `rostopic echo /ui_navigation_command`

Localization problems:

- In RViz, use `2D Pose Estimate` to set the initial pose.
- Rotate the robot slowly in place to let AMCL converge.

Screen flip not responding:

- Ensure either of these is active:
  - `screen_flip_node.py` (service path)
  - `auto_screen_flip_node.py` (topic command path)
## Acknowledgments

- Kinova MOVO ROS packages and navigation stack
- ROS Noetic ecosystem packages (`move_base`, `amcl`, `map_server`, etc.)



