Stability AI has unveiled its latest innovation in image generation technology: Stable Cascade. The new model promises not only to produce photorealistic images from text or existing images, but also to do so faster and more efficiently than its predecessors.

Unlike earlier diffusion models such as Stable Diffusion, Stable Cascade operates on a three-stage architecture, passing its output through a cascade of stages that progressively refine a compressed representation into a full-resolution image. This approach allows for easier fine-tuning and customization, making it well suited to companies that want to adapt the model to their specific needs or train it on licensed and restricted image libraries.

Built on the Würstchen architecture, Stable Cascade works in a highly compressed latent space, which prioritizes cost-effectiveness without compromising performance at scale. This design choice lets the model generate high-quality images with remarkable speed while maintaining high resolution.
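To make the cascade idea concrete, here is a minimal toy sketch (not the actual model) of how a three-stage pipeline can move from a small, cheap-to-generate latent up to full resolution. The stage names mirror Stable Cascade's Stage C/B/A convention, but the shapes, upscaling factors, and random "generation" are illustrative assumptions only:

```python
import numpy as np

def stage_c(rng, latent_shape=(24, 24)):
    # Stage C (illustrative): produce a highly compressed latent,
    # standing in for the expensive text-conditional generation step.
    return rng.standard_normal(latent_shape)

def stage_b(latent, factor=4):
    # Stage B (illustrative): expand the compressed latent toward
    # image space; np.kron repeats each value into a factor x factor block.
    return np.kron(latent, np.ones((factor, factor)))

def stage_a(latent, factor=4):
    # Stage A (illustrative): final decode to full pixel resolution.
    return np.kron(latent, np.ones((factor, factor)))

rng = np.random.default_rng(0)
c = stage_c(rng)    # (24, 24) compressed latent
b = stage_b(c)      # (96, 96) intermediate
img = stage_a(b)    # (384, 384) "image"
print(c.shape, b.shape, img.shape)
```

The point of the sketch is the cost structure: the hardest step (Stage C) operates on a tiny latent, while the cheaper decoding stages do the upscaling.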

Stability AI Introduces Stable Video Diffusion Model
The current release of Stable Video Diffusion offers two image-to-video models, generating either 14 or 25 frames. Users can customize the frame rate anywhere from 3 to 30 frames per second.
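Since the frame count is fixed per model variant, the chosen frame rate determines the clip's duration. A small helper illustrates the arithmetic (the function name and validation are our own, not part of any Stability AI API):

```python
def clip_duration(frames: int, fps: int) -> float:
    """Return the duration in seconds of a clip with a fixed frame count.

    `frames` is 14 or 25 for the released Stable Video Diffusion variants;
    `fps` must fall in the supported 3-30 frames-per-second range.
    """
    if not 3 <= fps <= 30:
        raise ValueError("frame rate must be between 3 and 30 fps")
    return frames / fps

print(clip_duration(14, 7))   # 14-frame model at 7 fps -> 2.0 seconds
print(clip_duration(25, 25))  # 25-frame model at 25 fps -> 1.0 second
```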

The company has released the model and its weights under a non-commercial license, allowing developers to train, fine-tune, and customize it to suit their specific requirements. Training and inference code are available on the Stability AI GitHub page.