High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks

Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V. Le, and Honglak Lee

NeurIPS 2019

Paper, Code (coming soon), arXiv

Abstract

Predicting future video frames is extremely challenging, as there are many factors of variation that make up the dynamics of how frames change through time. Previously proposed solutions require complex inductive biases inside network architectures with highly specialized computation, including segmentation masks, optical flow, and foreground and background separation. In this work, we question whether such handcrafted architectures are necessary and instead propose a different approach: finding minimal inductive bias for video prediction while maximizing network capacity. We investigate this question by performing the first large-scale empirical study and demonstrate state-of-the-art performance by learning large models on three different datasets: one for modeling object interactions, one for modeling human motion, and one for modeling car driving.
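To make the "minimal inductive bias, maximal capacity" idea concrete, the sketch below is a minimal PyTorch rendering of the core recurrence in a stochastic recurrent video predictor in the spirit of SVG': a learned prior and a posterior over a per-step latent z_t feed a recurrent frame predictor, and capacity comes from widening the standard components rather than from specialized modules. This is not the authors' code (which is "coming soon" above); the encoder/decoder layouts and the sizes H and Z are illustrative assumptions, standing in for the capacity knobs the paper scales up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H, Z = 256, 32  # hidden width and latent size: the capacity knobs (illustrative values)

enc = nn.Linear(64 * 64, H)                                     # frame -> feature (toy encoder)
dec = nn.Sequential(nn.Linear(H, 64 * 64), nn.Sigmoid())        # feature -> frame (toy decoder)
predictor = nn.LSTMCell(H + Z, H)                               # recurrent frame predictor
prior = nn.LSTMCell(H, H); prior_head = nn.Linear(H, 2 * Z)     # learned prior p(z_t | x_{<t})
posterior = nn.LSTMCell(H, H); post_head = nn.Linear(H, 2 * Z)  # posterior q(z_t | x_{<=t})

def train_step(frames):
    """frames: (T, B, 64*64) clip with values in [0, 1]; returns an ELBO-style loss."""
    T, B, _ = frames.shape
    state_p = (torch.zeros(B, H), torch.zeros(B, H))  # predictor state
    state_q = (torch.zeros(B, H), torch.zeros(B, H))  # posterior state
    state_r = (torch.zeros(B, H), torch.zeros(B, H))  # prior state
    loss = 0.0
    for t in range(1, T):
        f_prev, f_curr = enc(frames[t - 1]), enc(frames[t])
        # The posterior sees the target frame; the prior sees only the past.
        state_q = posterior(f_curr, state_q)
        mu_q, lv_q = post_head(state_q[0]).chunk(2, dim=-1)
        state_r = prior(f_prev, state_r)
        mu_p, lv_p = prior_head(state_r[0]).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * lv_q).exp()  # reparameterized sample
        state_p = predictor(torch.cat([f_prev, z], dim=-1), state_p)
        x_hat = dec(state_p[0])
        # KL(q || p) between the two diagonal Gaussians, plus pixel reconstruction.
        kl = 0.5 * (lv_p - lv_q + (lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp() - 1).sum(-1).mean()
        loss = loss + F.mse_loss(x_hat, frames[t]) + 1e-4 * kl
    return loss

# Toy usage: a 10-frame batch of 4 random 64x64 clips.
loss = train_step(torch.rand(10, 4, 64 * 64))
loss.backward()
```

At generation time, one would instead sample z_t from the learned prior and feed predicted frames back in; scaling this design up means widening H and Z and deepening the (here trivial) encoder and decoder, which is the study's central knob.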

Supplemental Videos

128×128 VIDEOS (all videos below are generated):

Human 3.6M

KITTI Driving

VIDEO COMPARISONS (64×64):

GREEN indicates input frames; RED indicates predicted frames.

SVG'

Towel pick (more videos)

Human 3.6M (more videos)

KITTI Driving (more videos)

LSTM

Towel pick (more videos)

Human 3.6M (more videos)

KITTI Driving (more videos)

Contact

For questions, please contact rubville@umich.edu