fredzhang7/Astro-Diffusion

Drone View V2

The Drone View V2 feature enables you to create a video with a duration of your choice from a drone's perspective by providing a description or prompt for a scene. While the drone is set to autopilot mode, you can modify its movements and responses to obstacles inside the DroneArgs class. See the playlist below for examples of video outputs.

drone view diagram
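As a rough illustration of how the autopilot overrides could be wired up, here is a minimal sketch. The DroneArgs class name comes from the description above, but every field, default value, and the generate_drone_video helper are assumptions made for this example, not the repository's actual API.

# Minimal sketch only: DroneArgs is named in the README, but the fields below
# and the generate_drone_video helper are hypothetical stand-ins -- consult the
# repository source for the real interface.
from dataclasses import dataclass


@dataclass
class DroneArgs:
    prompt: str                      # scene description used to render frames
    duration_seconds: int = 10       # length of the output video
    fps: int = 24                    # frames per second
    speed: float = 1.0               # hypothetical forward-speed multiplier
    obstacle_avoidance: bool = True  # hypothetical autopilot response to obstacles


def generate_drone_video(args: DroneArgs) -> int:
    """Hypothetical entry point: report how many frames a run would render."""
    total_frames = args.duration_seconds * args.fps
    print(f"Would render {total_frames} frames for prompt: {args.prompt!r}")
    return total_frames


if __name__ == "__main__":
    generate_drone_video(DroneArgs(prompt="a misty pine forest at sunrise",
                                   duration_seconds=15))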

Just a heads up: the art style in the videos is, for the most part, determined by the text-to-image model you choose and is not influenced by Astro Stable Diffusion methods.

Text-to-video generation for Drone View V3, Virtual Reality, Panorama Photography, and Pan Shot is currently being developed.

Examples

Astro Stable Diffusion Examples

Setup

Download the required packages and repositories.

pip install -r requirements.txt
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

Download and save a Stable Diffusion model to the ./stable-diffusion-webui/models/Stable-diffusion folder. Lastly, launch webui-user.bat in ./stable-diffusion-webui before running Astro Stable Diffusion plugins.
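Since the plugins rely on the web UI running in the background, it can help to confirm it is reachable before starting a long generation. The snippet below is only a sketch under assumptions not stated in this README: it presumes the web UI was launched with the --api flag (for example via COMMANDLINE_ARGS in webui-user.bat) and is listening on AUTOMATIC1111's default address, and it calls the standard /sdapi/v1/txt2img endpoint; adjust the URL and payload to match your setup.

# Connectivity check for the local web UI -- a sketch, assuming the web UI was
# started with --api and is listening on the default address.
import base64
import requests

WEBUI_URL = "http://127.0.0.1:7860"  # default AUTOMATIC1111 address; change if needed

payload = {
    "prompt": "aerial photo of a coastline at golden hour",
    "steps": 20,
    "width": 512,
    "height": 512,
}

response = requests.post(f"{WEBUI_URL}/sdapi/v1/txt2img", json=payload, timeout=300)
response.raise_for_status()

# The API returns base64-encoded images; save the first one to disk as a test.
image_b64 = response.json()["images"][0]
with open("txt2img_test.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
print("web UI reachable; test image written to txt2img_test.png")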


Notes

This repository is similar to Deforum Stable Diffusion in that both are based on the image-to-image and text-to-image methods of Stable Diffusion. However, Astro Stable Diffusion differs in that it uses non-interpolation methods to create videos.
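To make that distinction concrete, the sketch below outlines the general idea behind a non-interpolation approach: each new frame is produced by running image-to-image on the previous frame, rather than interpolating between pre-generated keyframes. The txt2img_frame and img2img_frame callables are hypothetical placeholders for whatever backend you use, not functions from this repository.

# Conceptual sketch of non-interpolation video generation: every frame is an
# img2img step seeded with the previous frame instead of an interpolation
# between keyframes. txt2img_frame and img2img_frame are hypothetical stand-ins
# for a real Stable Diffusion backend.
from typing import Callable, List

Image = object  # placeholder type for whatever image object the backend returns


def chain_frames(
    prompt: str,
    num_frames: int,
    txt2img_frame: Callable[[str], Image],
    img2img_frame: Callable[[str, Image, float], Image],
    denoising_strength: float = 0.45,
) -> List[Image]:
    """Generate the first frame from text, then evolve it frame by frame."""
    frames = [txt2img_frame(prompt)]
    for _ in range(num_frames - 1):
        # A low denoising strength keeps consecutive frames visually coherent.
        frames.append(img2img_frame(prompt, frames[-1], denoising_strength))
    return frames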

Earlier work has been moved to the previous folder, but it still provides useful AI Art helpers.

Inspirations

@article{Forsgren_Martiros_2022,
  author = {Forsgren, Seth* and Martiros, Hayk*},
  title  = {{Riffusion - Stable diffusion for real-time music generation}},
  url    = {https://riffusion.com/about},
  year   = {2022}
}

@article{DBLP:journals/corr/abs-2005-12872,
  author        = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and
                   Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko},
  title         = {End-to-End Object Detection with Transformers},
  journal       = {CoRR},
  volume        = {abs/2005.12872},
  year          = {2020},
  url           = {https://arxiv.org/abs/2005.12872},
  archivePrefix = {arXiv},
  eprint        = {2005.12872},
  timestamp     = {Thu, 28 May 2020 17:38:09 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}

About

Introducing new text-to-video methods

