Google AI introduces MediaPipe Diffusion plug-ins that enable controllable on-device text-to-image generation

https://ai.googleblog.com/2023/06/on-device-diffusion-plugins-for.html

Diffusion models have been used extensively and with considerable success for text-to-image generation in recent years, leading to major improvements in image quality, inference performance, and the range of creative possibilities. However, effective control over generation remains a challenge, especially for conditions that are difficult to describe in words.

MediaPipe diffusion plug-ins, developed by Google researchers, allow text-to-image generation to run on-device under user control. This work extends Google's previous work on GPU inference for large on-device generative models and presents low-cost solutions for controllable text-to-image generation that can be plugged into existing diffusion models and their low-rank adaptation (LoRA) variants.

Diffusion models generate images through iterative denoising. Each iteration starts with an image corrupted by noise and ends with an image of the target concept. Language understanding via text prompts has greatly improved the generation process: text embeddings are connected to the text-to-image model through cross-attention layers. However, some details, such as the position and pose of an object, are harder to convey with text prompts. To address this, researchers inject control information from a conditioning image into the diffusion process using additional models.
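To make the denoising loop concrete, here is a minimal Python/PyTorch sketch of text-conditioned iterative denoising. It assumes a simplified interface in which `unet` predicts noise while attending to `text_embeddings` through its cross-attention layers, and `scheduler.step` applies one denoising update; these names are illustrative assumptions, not MediaPipe's or any specific library's actual API.

```python
import torch

@torch.no_grad()
def generate_latents(unet, scheduler, text_embeddings, shape=(1, 4, 64, 64)):
    """Iterative denoising: start from pure noise and repeatedly remove the
    noise predicted by the text-conditioned U-Net (illustrative interface)."""
    latents = torch.randn(shape)  # noise-corrupted starting point
    for t in scheduler.timesteps:
        # Cross-attention inside the U-Net conditions the prediction on the text prompt.
        noise_pred = unet(latents, t, encoder_hidden_states=text_embeddings)
        latents = scheduler.step(noise_pred, t, latents)  # one denoising update
    return latents  # in latent diffusion, a VAE decoder turns this into an image
```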


Plug-and-Play, ControlNet, and T2I Adapter are commonly used methods for controlled text-to-image generation. To encode the condition from an input image, Plug-and-Play uses a copy of the diffusion model (860 million parameters for Stable Diffusion 1.5) together with the widely used denoising diffusion implicit model (DDIM) inversion, which reverses the generation process to recover an initial noise input from the input image. Plug-and-Play then extracts spatial self-attention features from the copied diffusion model and injects them into the text-to-image diffusion. ControlNet builds a trainable copy of the diffusion model's encoder and connects it through zero-initialized convolution layers that encode the conditioning information, which is then passed to the decoder layers. This adds about 450 million parameters for Stable Diffusion 1.5, roughly half the size of the diffusion model itself. The T2I Adapter achieves comparable results in controlled generation with a much smaller network (77 million parameters). The conditioning image is the only input to the T2I Adapter, and its output is reused by all subsequent diffusion iterations. However, this type of adapter is not designed for mobile devices.
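As a concrete illustration of the ControlNet mechanism described above, the sketch below shows a zero-initialized 1x1 convolution attaching a trainable encoder copy to a frozen base model. This is a minimal PyTorch sketch with illustrative names, not ControlNet's actual code: because the connection starts at zero, the base model's behavior is unchanged at the beginning of training.

```python
import torch.nn as nn

class ZeroConv(nn.Module):
    """1x1 convolution initialized to zero, used to merge features from a
    trainable conditioning branch into a frozen diffusion model."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, control_features, base_features):
        # At initialization the residual is exactly zero, so the frozen
        # base model's output is preserved until the branch is trained.
        return base_features + self.conv(control_features)
```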

The MediaPipe diffusion plug-in is a standalone network that Google researchers developed to make conditioned generation efficient, flexible, and scalable. It is:

  • Pluggable: it simply connects to a pre-trained base model.
  • Trained from scratch: no weights from the base model are used.
  • Portable: it runs separately from the base model on mobile devices at almost no additional cost.
  • A standalone network whose output can be plugged into an existing text-to-image diffusion model; the features extracted by the plug-in are fed into the corresponding downsampling layers of the diffusion model (see the sketch after this list).
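The sketch below illustrates how such a plug-in's output could be wired into the downsampling stages of a diffusion model's encoder, one feature map per scale. It is a hedged illustration with placeholder modules (`plugin`, `down_blocks`), not MediaPipe's actual implementation.

```python
import torch.nn as nn

class DiffusionEncoderWithPlugin(nn.Module):
    """Illustrative wiring only: multiscale features from a conditioning
    plug-in are added to the matching downsampling levels of the encoder."""
    def __init__(self, down_blocks, plugin):
        super().__init__()
        self.down_blocks = down_blocks  # encoder stages of the diffusion U-Net (stand-in)
        self.plugin = plugin            # returns one feature map per stage (stand-in)

    def forward(self, latents, t, text_emb, condition_image):
        plugin_feats = self.plugin(condition_image)  # list of features, fine to coarse
        h = latents
        for block, feat in zip(self.down_blocks, plugin_feats):
            h = block(h, t, text_emb)  # text conditioning via cross-attention
            h = h + feat               # inject the conditioning signal at the matching scale
        return h
```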

The MediaPipe diffusion plug-in is a portable on-device model for text-to-image generation. It takes a conditioning image and uses multiscale feature extraction to add features at the appropriate scales to the encoder of a diffusion model. When paired with a text-to-image diffusion model, the plug-in adds a conditioning signal to image generation. The plug-in network is a relatively simple model with only about 6 million parameters. It uses depthwise convolutions and inverted bottlenecks from MobileNetV2 for fast inference on mobile devices.
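For reference, an inverted bottleneck in the MobileNetV2 style looks roughly like the following PyTorch sketch: a 1x1 expansion, a depthwise 3x3 convolution, and a linear 1x1 projection, with a residual connection when shapes match. This is a generic sketch of the building block, not the plug-in's actual layers; it shows why such blocks keep parameter count and latency low.

```python
import torch.nn as nn

class InvertedBottleneck(nn.Module):
    """MobileNetV2-style inverted bottleneck: expand, filter depthwise, project."""
    def __init__(self, in_ch, out_ch, expand=4, stride=1):
        super().__init__()
        mid = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            # Depthwise 3x3: one filter per channel, far cheaper than a full convolution.
            nn.Conv2d(mid, mid, 3, stride=stride, padding=1, groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            # Linear 1x1 projection back to the narrow output width.
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out
```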

Key features

  • Easy-to-use abstractions for self-serve machine learning: use a low-code API or a no-code studio to customize, test, prototype, and ship an application.
  • Innovative machine learning (ML) approaches to common problems, built on Google’s ML expertise.
  • Full optimization, including hardware acceleration, while remaining small and efficient enough to run smoothly on battery-powered smartphones.

Check out the Project page and Google blog. Don’t forget to join our 25k+ ML SubReddit, Discord channel, and Email newsletter, where we share the latest news on AI research, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, please do not hesitate to email us at Asif@marktechpost.com



Dhanshree Shenwai is a software engineer with solid experience in FinTech companies covering finance, cards & payments, and banking, and a keen interest in AI applications. She is enthusiastic about exploring new technologies and advancements in today’s changing world, making everyone’s life easier.


Image Source: www.marktechpost.com
