VideoGUI: A Benchmark for GUI Automation
from Instructional Videos

Show Lab, National University of Singapore, Microsoft

Can AI assistants recreate these animation effects in PowerPoint?

These effects include a 3D model, rotation, and transitions. Learn more from this instructional video.

How do Humans and Agents behave?

👨‍💻 Human demonstration: recorded after watching the video.

🤖️ GPT-4o Agent: workflow automation using PyAutoGUI (see the sketch below).

Challenges: The agent fails to complete the full task, struggles with each milestone, and executes slowly (shown at 2× speed).
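For illustration, here is a minimal sketch of such a screenshot-to-action loop, assuming an OpenAI-style client and PyAutoGUI for execution. The prompt, the JSON action format, and the helper names are hypothetical simplifications, not the benchmark's actual agent harness.

import base64
import io
import json

import pyautogui
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def screenshot_b64() -> str:
    """Capture the current screen and return it as a base64-encoded PNG."""
    buf = io.BytesIO()
    pyautogui.screenshot().save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

def next_action(goal: str) -> dict:
    """Ask GPT-4o for a single action as JSON, given the goal and the screen."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Goal: {goal}\nReply with exactly one JSON action: "
                         '{"op": "click|type|scroll", "x": ..., "y": ..., '
                         '"text": ..., "amount": ...}'},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{screenshot_b64()}"}},
            ],
        }],
    )
    return json.loads(resp.choices[0].message.content)

def execute(action: dict) -> None:
    """Replay one predicted action with PyAutoGUI."""
    if action["op"] == "click":
        pyautogui.click(action["x"], action["y"])
    elif action["op"] == "type":
        pyautogui.write(action["text"], interval=0.05)
    elif action["op"] == "scroll":
        pyautogui.scroll(action["amount"])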

TL;DR

A Multi-modal Benchmark for Visual-centric GUI Automation from Instructional Videos.

What's New?

  • Visual-centric software and tasks: VideoGUI focuses on professional and novel software, such as Premiere Pro (PR) and After Effects (AE) for video editing, or Stable Diffusion and Runway for visual creation. Moreover, each task query emphasizes a visual preview rather than textual instructions.
  • Instructional videos with human demonstration: We source novel tasks from high-quality instructional videos, with annotators replicating these to reproduce effects.
  • Hierarchical planning and actions: We provide detailed annotations with planning procedures and recorded actions for hierarchical evaluation (see the sketch below).
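As a concrete illustration, one annotated task might be structured roughly as follows. The field names and values here are ours, chosen for readability; they are not the benchmark's released annotation schema.

# Illustrative only: hypothetical field names, not the actual VideoGUI schema.
task_annotation = {
    "task": "Recreate the 3D rotation effect shown in the video preview",
    "subtasks": [            # high-level planning: procedural subtasks
        {
            "description": "Insert the 3D model onto the slide",
            "actions": [     # mid-level narrations paired with atomic actions
                {"narration": "Click the 'Insert' tab on the ribbon",
                 "atomic": {"op": "click", "x": 312, "y": 48}},
                {"narration": "Type the model name into the search box",
                 "atomic": {"op": "type", "text": "astronaut"}},
            ],
        },
    ],
}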


A brief illustration of VideoGUI. VideoGUI focuses on professional and novel software, such as Premiere Pro and After Effects for video editing, and Stable Diffusion and Runway for visual creation. We source tasks from high-quality instructional videos, with annotators replicating them to reproduce the effects. We provide detailed annotations with planning procedures and recorded actions for hierarchical evaluation.




Abstract

Graphical User Interface (GUI) automation holds significant promise for enhancing human productivity by assisting with computer tasks. Existing task formulations primarily focus on simple tasks that can be specified by a single, language-only instruction, such as “Insert a new slide.” In this work, we introduce VideoGUI, a novel multi-modal benchmark designed to evaluate GUI assistants on visual-centric GUI tasks. Sourced from high-quality web instructional videos, our benchmark focuses on tasks involving professional and novel software (e.g., Adobe Photoshop or Stable Diffusion WebUI) and complex activities (e.g., video editing). VideoGUI evaluates GUI assistants through a hierarchical process, allowing for identification of the specific levels at which they may fail: (i) high-level planning: reconstruct procedural subtasks from visual conditions without language descriptions; (ii) middle-level planning: generate sequences of precise action narrations based on visual state (i.e., screenshot) and goals; (iii) atomic action execution: perform specific actions such as accurately clicking designated elements. For each level, we design evaluation metrics across individual dimensions to provide clear signals, such as individual performance in clicking, dragging, typing, and scrolling for atomic action execution. Our evaluation on VideoGUI reveals that even the SoTA large multimodal model GPT-4o performs poorly on visual-centric GUI tasks, especially for high-level planning.
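To make the atomic level concrete, a per-action scorer could look roughly like the sketch below, assuming a pixel tolerance for clicks and drags and string matching for typed text. The paper defines the actual metrics, so treat this purely as an illustration.

import math

def score_atomic(pred: dict, gold: dict, click_tol: float = 10.0) -> bool:
    """Hypothetical per-action check; VideoGUI's real metrics are in the paper."""
    if pred.get("op") != gold["op"]:
        return False
    if gold["op"] in ("click", "drag"):
        # Correct if the predicted point lands within click_tol pixels of gold.
        return math.dist((pred["x"], pred["y"]), (gold["x"], gold["y"])) <= click_tol
    if gold["op"] == "type":
        return pred["text"].strip() == gold["text"].strip()
    if gold["op"] == "scroll":
        # Only the scroll direction is checked in this sketch.
        return (pred["amount"] > 0) == (gold["amount"] > 0)
    return False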

Data Statistics

(Figure: dataset statistics.)

More Examples

  • SD: Image Editing by Stable Diffusion WebUI
  • PS: Image Editing by Adobe Photoshop
  • RW: Video Creation by Runway
  • CC: Video Editing by CapCut

Main Results on SoTA Multi-modal Language Models

(Figure: main results of SoTA multi-modal language models on VideoGUI.)

BibTeX

@article{lin2024videogui,
      title = {VideoGUI: A Benchmark for GUI Automation from Instructional Videos},
      author = {Kevin Qinghong Lin and Linjie Li and Difei Gao and Qinchen Wu and Mingyi Yan and Zhengyuan Yang and Lijuan Wang and Mike Zheng Shou},
      year = {2024},
}