VideoPoet

VideoPoet
Developer(s): Google
Initial release: February 8, 2024
Type: Large language model

VideoPoet is a large language model developed by Google Research in 2023 for video generation.[1][2][3][4] Among other capabilities, it can animate still images.[5] The model accepts text, images, and videos as inputs, allowing content to be generated from any of these input types. As of its public announcement on February 8, 2024, VideoPoet was in a private test phase.[6] It is based on an autoregressive language model.
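
The article describes VideoPoet only at a high level, as an autoregressive language model over multimodal inputs. In general, such models represent their inputs and outputs (including video) as sequences of discrete tokens and generate output one token at a time, with each step conditioned on everything produced so far. The sketch below illustrates that generic decoding loop in Python; the `model` callable, its vocabulary, and the greedy selection rule are hypothetical placeholders for illustration, not details of Google's system.

    # Illustrative sketch of greedy autoregressive decoding over discrete tokens,
    # the generic mechanism behind decoder-only language models. This is not
    # Google's VideoPoet code; `model` and its vocabulary are hypothetical.

    def generate_tokens(model, prompt_tokens, max_new_tokens=256, eos_id=None):
        """Extend a token sequence one token at a time (greedy decoding).

        `model(tokens)` is assumed to return a list of scores over the
        vocabulary for the next token, conditioned on all tokens so far.
        """
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            scores = model(tokens)               # next-token score per vocabulary entry
            next_id = max(range(len(scores)), key=scores.__getitem__)
            tokens.append(next_id)
            if eos_id is not None and next_id == eos_id:  # optional end-of-sequence stop
                break
        return tokens

In a video setting, the generated token ids would then be decoded back into pixels by a separate video tokenizer; that component is outside the scope of this sketch.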

References

  1. ^ Krithika, K. L. (December 20, 2023). "Google Unveils VideoPoet, a New LLM for Video Generation". Analytics India Magazine.
  2. ^ "Google has introduced VideoPOET breaking new ground in coherent video generation - Gizmochina".
  3. ^ Kondratyuk, Dan; Yu, Lijun; Gu, Xiuye; Lezama, José; Huang, Jonathan; Hornung, Rachel; Adam, Hartwig; Akbari, Hassan; Alon, Yair; Birodkar, Vighnesh; Cheng, Yong; Chiu, Ming-Chang; Dillon, Josh; Essa, Irfan; Gupta, Agrim; Hahn, Meera; Hauth, Anja; Hendon, David; Martinez, Alonso; Minnen, David; Ross, David; Schindler, Grant; Sirotenko, Mikhail; Sohn, Kihyuk; Somandepalli, Krishna; Wang, Huisheng; Yan, Jimmy; Yang, Ming-Hsuan; Yang, Xuan; Seybold, Bryan; Jiang, Lu (December 21, 2023). "VideoPoet: A Large Language Model for Zero-Shot Video Generation". arXiv:2312.14125 [cs.CV].
  4. ^ "VideoPoet – Google Research". VideoPoet – Google Research.
  5. ^ Franzen, Carl (December 20, 2023). "Google's new multimodal AI video generator VideoPoet looks incredible". VentureBeat.
  6. ^ "VideoPoet – Google Research". VideoPoet – Google Research. Retrieved February 20, 2024.

External links

  • Media related to VideoPoet at Wikimedia Commons