Foundation Models for Robotics (HawAII)
Vertically Integrated Project
Embodied Foundation Models for Robotics is a Vertically Integrated Project focused on using foundation models to control robots across many physical forms, including robotic hands, wheeled platforms, quadrupeds, humanoids, marine robots, and aerial vehicles. The project explores how vision-language-action models, multimodal policies, and embodiment-aware representations can help robots transfer skills across different sensors, actuators, body structures, and task environments.
Goals
Develop general-purpose robot policies that transfer across platforms and tasks.
Represent robot embodiments in terms of sensors, actuators, kinematics, and constraints.
Enable rapid adaptation to new robots and environments with limited task-specific engineering.
Build a reusable research pipeline for data collection, training, evaluation, and deployment.
Methods
Students will work on:
Data collection: teleoperation; simulation; classical, learned, and scripted controllers; physical robot trials; and human demonstrations.
Model training: imitation learning, reinforcement learning, offline learning, vision-language-action model fine-tuning, and sim-to-real transfer.
Embodiment modeling: conditioning policies on robot morphology, action spaces, sensors, and task constraints.
Deployment: testing policies in simulation and on available physical robots.
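The embodiment-modeling step above can be illustrated with a toy policy that conditions on a robot descriptor, so one model serves bodies with different observation and action spaces. This is a minimal sketch in plain Python/NumPy, not the project's actual architecture; the names (EmbodimentSpec, ConditionedPolicy) and the descriptor embedding are hypothetical.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class EmbodimentSpec:
    """Describes one robot body: sensor and actuator dimensions."""
    name: str
    obs_dim: int     # flattened sensor reading size
    action_dim: int  # number of actuated degrees of freedom

class ConditionedPolicy:
    """One policy reused across embodiments: the embodiment descriptor
    is embedded and concatenated with the (padded) observation."""

    def __init__(self, embed_dim=8, obs_pad=64, max_action_dim=12, seed=0):
        rng = np.random.default_rng(seed)
        self.embed_dim = embed_dim
        self.obs_pad = obs_pad
        self.max_action_dim = max_action_dim
        # One shared weight matrix sized for the largest action space.
        self.W = rng.normal(0.0, 0.1, size=(max_action_dim, embed_dim + obs_pad))

    def embed(self, spec: EmbodimentSpec) -> np.ndarray:
        # Toy descriptor embedding: normalized scalar features, zero-padded.
        v = np.zeros(self.embed_dim)
        v[0] = spec.obs_dim / 100.0
        v[1] = spec.action_dim / self.max_action_dim
        return v

    def act(self, obs: np.ndarray, spec: EmbodimentSpec) -> np.ndarray:
        # Pad observations to a shared width, prepend the embedding,
        # then mask the output down to this robot's action dimension.
        x = np.zeros(self.obs_pad)
        n = min(len(obs), self.obs_pad)
        x[:n] = obs[:n]
        inp = np.concatenate([self.embed(spec), x])
        action = np.tanh(self.W @ inp)
        return action[:spec.action_dim]

policy = ConditionedPolicy()
arm = EmbodimentSpec("arm", obs_dim=14, action_dim=7)
quad = EmbodimentSpec("quadruped", obs_dim=48, action_dim=12)
a1 = policy.act(np.ones(14), arm)   # shape (7,)
a2 = policy.act(np.ones(48), quad)  # shape (12,)
```

The same weights produce valid actions for both bodies; in practice the descriptor would encode kinematics, sensor suites, and constraints rather than two scalars.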
Tasks and Robot Platforms
Robotic hands and arms: grasping, assembly, packaging, clothing manipulation
Wheeled robots: patrolling, delivery, navigation, inspection
Quadrupeds: rough-terrain navigation, surveillance, industrial inspection
Humanoids: household assistance, whole-body manipulation
Marine robots: monitoring, sampling, underwater inspection, cleaning, repairs
Aerial vehicles: mapping, visual inspection, search, monitoring
Metrics
Progress will be measured by:
Task success rate
Completion time
Generalization to new robots, objects, and environments
Amount of data needed for adaptation
Robustness to noise and disturbances
Safety (collisions and constraint violations)
Real-world reliability and repeatability
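As an illustration of how the first few metrics above might be aggregated from trial logs, here is a minimal sketch in plain Python; the trial-record format is an assumption, not the project's actual logging schema.

```python
from statistics import mean

# Hypothetical trial log: each entry is (succeeded, seconds, collisions).
trials = [
    (True, 12.4, 0),
    (False, 30.0, 2),
    (True, 15.1, 0),
    (True, 11.8, 1),
]

success_rate = sum(ok for ok, _, _ in trials) / len(trials)
# Completion time is usually reported over successful trials only.
mean_time = mean(t for ok, t, _ in trials if ok)
collision_rate = sum(c for _, _, c in trials) / len(trials)

print(f"success rate: {success_rate:.0%}")        # 75%
print(f"mean time (s): {mean_time:.1f}")          # 13.1
print(f"collisions/trial: {collision_rate:.2f}")  # 0.75
```

Generalization and adaptation metrics would repeat this aggregation across held-out robots, objects, and environments.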
Expected Outcomes
The project will produce reusable datasets, simulation environments, trained robot policies, evaluation benchmarks, deployment tools, technical reports, and research publications.
Prerequisites by Student Level
Sophomores
Comfortable programming in Python; basic Git/Linux experience is helpful.
Some coursework in linear algebra, statistics, data structures, ML, AI, robotics, or controls.
Interest in robotics, machine learning, computer vision, or autonomous systems.
Expected role: data collection, simulation setup, annotation, benchmarking, and software support.
Juniors
Solid Python skills; exposure to C++, ROS/ROS 2, embedded systems, or robotics software is helpful.
Coursework or project experience in ML, robotics, controls, computer vision, AI, optimization, or signals.
Some familiarity with PyTorch, simulation, perception, planning, imitation learning, reinforcement learning, or robot experiments.
Expected role: model training, embodiment modeling, simulation, robot testing, and evaluation.
Seniors
Strong interest or experience in robotics, ML, controls, computer vision, or autonomous systems.
Prior project, research, internship, or advanced coursework related to robot learning, perception, planning, control, or multimodal AI.
Ability to work independently, read technical papers, define experiments, and contribute reusable code.
Expected role: technical leadership, research implementation, deployment, benchmarking, and mentoring.
Join
To sign up, email molybog@hawaii.edu.