- OpenAI has unveiled its text-to-video model, Sora, which generates detailed videos from simple text prompts, extends existing videos, and creates scenes from a still image.

- Like its predecessor DALL-E 3, Sora is based on a diffusion model; it can create movie-like scenes at up to 1080p resolution featuring multiple characters, specific types of motion, and accurate details.

- OpenAI acknowledges Sora's weaknesses: it struggles to simulate physics accurately, which can produce errors in cause-and-effect relationships and spatial details.

- The model is currently available to "red teamers" for security and risk assessment, and to select designers, visual artists, and filmmakers to gather feedback.

- OpenAI CEO Sam Altman invited custom video-generation requests on X, sharing seven Sora-generated videos that drew positive reactions from users.

- Despite the praise, concerns have been raised about ethical implications, especially after revelations that some AI image-generation tools were trained on illegal material.

- Nvidia senior researcher Jim Fan describes Sora as a "data-driven physics engine" rather than a simple creative tool, since it simulates the physics of objects in the scenes it generates.

#OpenAI #Sora