Text-to-image generation and Lenses


Is it feasible, or likely, that a Snap Lens will be able to send a request to a model like Stable Diffusion and then use the newly generated remote resource as part of a dynamic Lens, either by incorporating the resulting image directly or by using it as a texture in some way? [I'm thinking of an overlay, or a mesh like a floaty polygonal orb that displays your freshly approximated AI aberration.] This would require using text input, so data would be outgoing. Can it be done?

Comments

  • Bakari Mustafa Posts: 178 🔥🔥🔥

    There are a few ways you could approach this, depending on the specific requirements of your Snap Lens. One option is to call a machine learning model that takes text input and generates an image, then use that image as a texture or overlay in the Lens. Alternatively, the model could generate some other type of resource, such as a 3D model, and you could use that resource in the Lens instead. A sketch of the first approach follows below.
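
    Here is a minimal Lens Studio JavaScript sketch of the image-as-texture approach. It assumes the text-to-image service is exposed through Snap's Remote Service Gateway as a configured Remote API: the `generateImage` endpoint name, the `prompt` parameter, and the `imageUrl` field in the response are all hypothetical and would depend on how your API spec is set up. Note that outgoing requests from a Lens have to go through an approved Remote API; a Lens cannot call an arbitrary endpoint directly.

    ```js
    //@input Asset.RemoteServiceModule remoteServiceModule
    //@input Asset.RemoteMediaModule remoteMediaModule
    //@input Asset.Material orbMaterial

    // Send the text prompt out through a Remote API configured for the Lens.
    // "generateImage" and the "prompt" parameter are assumptions about the API spec.
    function requestGeneratedImage(prompt) {
        var request = global.RemoteApiRequest.create();
        request.endpoint = "generateImage";
        request.parameters = { "prompt": prompt };
        script.remoteServiceModule.performApiRequest(request, onApiResponse);
    }

    function onApiResponse(response) {
        // For RemoteApiResponse, a statusCode of 1 indicates success.
        if (response.statusCode !== 1) {
            print("Remote API error, status: " + response.statusCode);
            return;
        }
        // Assumption: the service replies with JSON containing a URL
        // pointing at the generated image.
        var imageUrl = JSON.parse(response.body).imageUrl;
        var resource = script.remoteServiceModule.makeResourceFromUrl(imageUrl);
        script.remoteMediaModule.loadResourceAsImageTexture(
            resource,
            function (texture) {
                // Apply the generated image as the material's base texture,
                // e.g. on the mesh of a floating orb.
                script.orbMaterial.mainPass.baseTex = texture;
            },
            function (errorMessage) {
                print("Failed to load texture: " + errorMessage);
            }
        );
    }

    requestGeneratedImage("floaty polygonal orb, AI aberration");
    ```

    The same callback could instead assign the texture to a Screen Image component for a 2D overlay; the material route shown here is just the natural fit for the orb-mesh idea from the question.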