Look, Focus, Act:
Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers

¹University of California, Berkeley  ²University of California, Davis  ³Tongji University

We introduce a human-inspired foveated vision framework for robot learning that integrates human gaze with foveated Vision Transformers and robotic control, enabling efficient and robust policies.

Abstract

Human vision is a highly active process driven by gaze, which directs attention and fixation to task-relevant regions and dramatically reduces visual processing. In contrast, robot learning systems typically rely on passive, uniform processing of raw camera images. In this work, we explore how incorporating human-like active gaze into robotic policies can enhance both efficiency and performance. We build on recent advances in foveated image processing and apply them to an Active Vision robot system that emulates both human head movement and eye tracking. Extending prior work on the AV-ALOHA robot simulation platform, we introduce a framework for simultaneously collecting eye-tracking data and robot demonstrations from a human operator as well as a simulation benchmark and dataset for training robot policies that incorporate human gaze. Given the widespread use of Vision Transformers (ViTs) in robot learning, we integrate gaze information into ViTs using a foveated patch tokenization scheme inspired by recent work in image segmentation. Compared to uniform patch tokenization, this significantly reduces the number of tokens—and thus computation—without sacrificing visual fidelity near regions of interest. We also explore two approaches to gaze imitation and prediction from human data. The first is a structured, hierarchical two-stage model that first predicts gaze, which is then used to guide foveation and action prediction. The second is a novel method that treats gaze as an extension of whole-body control, integrating it into the robot's action space such that the policy directly predicts both future gaze and actions in an end-to-end manner. Our results show that our method for foveated robot vision not only drastically reduces computational overhead, but also improves performance for high precision tasks and robustness to unseen distractors. Together, these findings suggest that human-inspired visual processing offers a useful inductive bias for robotic vision systems.

Data Collection



We use the AV-ALOHA simulation platform to collect bimanual robot demonstrations with human eye-tracking data. The robot streams stereo camera images to the VR headset for visual feedback, while the headset transmits head and hand controller poses to control the robot, along with human gaze data.
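For concreteness, the sketch below shows one way a single timestep of such a demonstration could be stored, pairing robot observations and commands with the operator's gaze. The field names and shapes are illustrative assumptions, not the released AV-ALOHA data schema.

```python
# Hypothetical per-timestep record for a gaze-annotated demonstration.
# Shapes and field names are assumptions for illustration only.
from dataclasses import dataclass
import numpy as np

@dataclass
class DemoFrame:
    stereo_images: np.ndarray    # (2, H, W, 3) left/right camera frames streamed to the headset
    head_pose: np.ndarray        # (7,) VR headset pose (xyz + quaternion) driving the camera arm
    hand_poses: np.ndarray       # (2, 7) left/right controller poses driving the two manipulator arms
    gaze_xy: np.ndarray          # (2,) eye-tracker gaze point in image coordinates
    joint_positions: np.ndarray  # robot proprioception at this timestep
    actions: np.ndarray          # commanded joint targets recorded for imitation learning
```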

Human Demonstrations with Eye-Tracking

Cube Transfer

Peg Insertion

Slot Insertion

Hook Package

Pour Test Tube

Thread Needle

Policy Architecture



Gaze Prediction: We predict gaze with two approaches. Fov-UNet is a hierarchical two-stage model that first predicts gaze with a UNet and then uses it to guide the policy. Fov-Act is a novel end-to-end method that treats gaze as part of the robot's action space, so the policy predicts gaze and actions together.

Tokenization: Fov-UNet and Fov-Act use foveated tokenization centered on the predicted gaze. The baseline methods, Fine and Coarse, do not predict gaze and use standard uniform tokenization.

Policy Architecture: We use a Transformer-based flow matching policy. Image observations O_img are tokenized, processed by a ViT, and compressed by a Q-Former module into tokens c_img, which condition the Flow Transformer (FT) via cross-attention. Proprioception is encoded by an MLP into tokens c_proprio and added to the FT input sequence. The flow timestep t is embedded and conditions the FT via AdaLN. The FT predicts the flow matching velocity v_θ from the noisy action latent z_t, c_img, c_proprio, and t, and actions are generated via Euler integration.
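The sketch below shows the Euler-integration step of flow matching inference in the abstract. It assumes a trained velocity network v_theta with signature v_theta(z, t, c_img, c_proprio); the timestep convention (noise at t=0, actions at t=1) and the number of integration steps are assumptions for illustration, not the paper's exact settings.

```python
# Minimal sketch of flow-matching action generation via forward Euler integration.
import torch

@torch.no_grad()
def sample_actions(v_theta, c_img, c_proprio, action_shape, n_steps=10):
    """Integrate dz/dt = v_theta(z_t, t, c_img, c_proprio) from noise to an action sample."""
    z = torch.randn(action_shape)                    # z_0: Gaussian noise
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((action_shape[0],), i * dt)   # per-sample flow timestep in [0, 1)
        v = v_theta(z, t, c_img, c_proprio)          # predicted velocity field
        z = z + dt * v                               # Euler step toward the action distribution
    return z  # action chunk; under Fov-Act, future gaze would be extra action dimensions
```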

Foveated Tokenization



(Left) The input image is divided into patches using either the standard uniform tokenization (Middle) or foveated tokenization (Right). Foveated tokenization mimics the human retina by assigning high-resolution patches near the gaze point and lower resolution in the periphery. This reduces the number of tokens from 324 (uniform) to just 20 (foveated), greatly lowering computational cost while preserving detail where it matters most.
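As a rough illustration of the token savings, the sketch below builds a foveated token set from one fine crop around the gaze point plus a coarse, downsampled view of the full image. The specific resolutions, patch size, and two-level layout are our own assumptions, chosen so the counts match the 324-vs-20 example above; the paper's actual foveation scheme may differ.

```python
# Illustrative foveated tokenization: fine patches near the gaze, coarse patches elsewhere.
import numpy as np

def patchify(img, patch):
    """Split an HxWxC image into non-overlapping flattened patch tokens."""
    H, W, C = img.shape
    img = img.reshape(H // patch, patch, W // patch, patch, C)
    return img.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)

def foveated_tokens(img, gaze_xy, patch=16, fovea=64, periph=32):
    """High-resolution patches around the gaze point plus a low-resolution periphery."""
    H, W, _ = img.shape
    gx, gy = gaze_xy
    # Clamp a fovea x fovea crop centered on the gaze point inside the image bounds.
    x0 = int(np.clip(gx - fovea // 2, 0, W - fovea))
    y0 = int(np.clip(gy - fovea // 2, 0, H - fovea))
    fovea_tokens = patchify(img[y0:y0 + fovea, x0:x0 + fovea], patch)       # (fovea/patch)^2 tokens
    # Downsample the full image (nearest-neighbor for brevity) for the periphery.
    step = H // periph
    periph_tokens = patchify(img[::step, ::step][:periph, :periph], patch)  # (periph/patch)^2 tokens
    return np.concatenate([fovea_tokens, periph_tokens], axis=0)

img = np.random.rand(288, 288, 3).astype(np.float32)
tokens = foveated_tokens(img, gaze_xy=(150, 120))
print(tokens.shape)  # (20, 768): 16 foveal + 4 peripheral tokens, versus 324 uniform 16x16 patches
```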

Autonomous Rollout with Foveated Vision

Cube Transfer

Peg Insertion

Slot Insertion

Hook Package

Pour Test Tube

Thread Needle

BibTeX

@misc{chuang2025lookfocusactefficient,
      title={Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers}, 
      author={Ian Chuang and Andrew Lee and Dechen Gao and Jinyu Zou and Iman Soltani},
      year={2025},
      eprint={2507.15833},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2507.15833}, 
}