Thread 251 from /f/.
>> Anonymous /#/ 251 []
168846399331.jpg [S] ( 262.58KB , 1280x720 , IMG_20230629_07...jpg )

Looking for a good deepnude app or website, can someone help?

>> Anonymous /#/ 266 [X]
168850608838.png [S] ( 759.13KB , 673x720 , 00098-255219842.png )
Online AI services may not allow you to generate the images you desire.
If you want to alter/create images (like the one attached) you will need to run the Stable Diffusion GUI framework on your computer. The website

https://rentry.co/voldy

has a complete beginner's guide on how to do this. Although no programming is required, it does require some effort. The rewards, however, are immense: no censoring of prompts or images.

The website,

https://civitai.com/

shows a sampling of the many models that can be used within the Stable Diffusion GUI framework. These models are several gigabytes in size and have been "trained" on data sets to generate images of a specific nature (cartoon images, photorealistic images, porn images, etc.).
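
If you'd rather script it than click through a GUI, here's a minimal sketch of the same idea using the Hugging Face diffusers library. The checkpoint filename is a placeholder for whatever model you grab from civitai:

import torch
from diffusers import StableDiffusionPipeline

# Load a local .safetensors checkpoint (placeholder filename), the way the
# web UI loads a model you downloaded from civitai.
pipe = StableDiffusionPipeline.from_single_file(
    "your_model.safetensors",
    torch_dtype=torch.float16,
).to("cuda")  # assumes an NVIDIA GPU

# Positive and negative prompts, same as the web UI's two prompt boxes.
image = pipe(
    prompt="photo of a girl in a field, natural light, detailed",
    negative_prompt="blurry, deformed, extra limbs",
).images[0]
image.save("out.png")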

>> Anonymous /#/ 272 [X]
>>266
Much appreciated, thanks for sharing.

>> Anonymous /#/ 273 [X]
>>266
I've looked around the page you posted, but as I understand it, it's mostly generic images. How can I achieve what you just shared, i.e. create a naked version of a woman based on the images I have?

>> Anonymous /#/ 278 [X]
>>273
What you are looking for is called inpainting. The guide that was posted above links to this guide on inpainting: https://rentry.org/inpainting-guide-SD

It shows some step-by-step examples. It takes time to get used to the software and what all the buttons and sliders mean.

>> Anonymous /#/ 305 [X]
>>278

thx for the details, I shall look into them

>> Anonymous /#/ 307 [X]
168867043948.png [S] ( 566.60KB , 1266x895 , screenshot_1.png )
The Stable Diffusion framework uses textual prompts to generate new images or alter existing ones, which, as the previous poster points out, is referred to as inpainting. A very brief example of inpainting applied to the "girl in a field" image follows.

Screenshot_1 shows a user-painted mask over the area of the image that will be changed. The top box lists the model to be used, in this case the uberRealisticPornMerge_urpmv12 model. Typically for inpainting one would choose a model tuned for inpainting, but any model that generates images consistent with your image can be used.

The positive text prompts (things one wants in the image) and negative text prompts (things one does not want) are clearly listed. Negative prompts are important, as they reduce the probability of AI-induced errors and artifacts. Inpainting can also be used to repair AI-induced errors. The image on the right is the newly generated image.
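
For anyone who'd rather script this step, a rough diffusers equivalent looks like the following. The model, filenames, and prompts are placeholders, not the exact setup from the screenshot:

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# A model tuned for inpainting; any compatible checkpoint can be swapped in.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("girl_in_field.png").convert("RGB")  # placeholder
mask = Image.open("mask.png").convert("RGB")  # white = repaint, black = keep

result = pipe(
    prompt="summer dress, detailed fabric",             # positive prompt
    negative_prompt="deformed, blurry, extra fingers",  # negative prompt
    image=init_image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")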

>> Anonymous /#/ 309 [X]
168867346612.jpg [S] ( 100.15KB , 711x1262 , screenshot2.jpg )
In screenshot_2 the various model parameters are shown. The mask mode parameters instruct the algorithm to alter (paint) the masked area using the text prompts and only the image information under the mask.

The sampling method is applied to the painted area and is chosen from a pull-down menu. For this example, Euler and DPM++ SDE Karras worked well. 20 is a lower limit on the number of sampling steps (more steps take longer, but may result in better images).

Resize affects only the masked painted area used by the algorithm; 512 by 512 is what is commonly used. Larger resolutions take longer and the AI tends to produce more unwanted artifacts, while at lower resolutions the algorithm may not produce sufficient detail. The algorithm will intelligently upscale the painted area so that the original image size is kept (i.e., if the input image is 1600 by 800, the output image will be 1600 by 800).

The CFG parameter affects how much emphasis is given to the text prompts; low numbers give the AI more freedom. The denoising strength determines how much the image is allowed to change during the image synthesis process. The default settings of 7 and 0.75 for those two parameters were used in this example.

The seed sets the initial starting point. If one uses the same seed, model, sampling method, parameters, and text prompts, one should be able to reproduce exactly the same image generated by another user. Different seeds produce different AI-generated images that should be very "close" to one another; in other words, variations on the same theme.
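
To make the mapping concrete, here's how those settings line up with diffusers arguments, reusing pipe, init_image, and mask from the sketch above (the seed value is arbitrary):

import torch

generator = torch.Generator("cuda").manual_seed(12345)  # any fixed integer works

result = pipe(
    prompt="summer dress, detailed fabric",
    negative_prompt="deformed, blurry, extra fingers",
    image=init_image,
    mask_image=mask,
    width=512, height=512,     # working resolution of the painted area
    num_inference_steps=20,    # sampling steps: more is slower, maybe better
    guidance_scale=7.0,        # CFG: lower values give the AI more freedom
    strength=0.75,             # denoising strength
    generator=generator,       # same seed + same settings => same image
).images[0]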

>> Anonymous /#/ 314 [X]
>>308
Interestingly, lately I've started using 'Euler a' as the sampler with only 12 steps! It produces really good results:

>>308

for example...
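
In diffusers terms that switch would look something like this, assuming the pipe from the sketches above ('Euler a' is the ancestral Euler scheduler):

from diffusers import EulerAncestralDiscreteScheduler

# Swap in the "Euler a" scheduler and drop the step count to 12.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
result = pipe(
    prompt="summer dress, detailed fabric",
    image=init_image,
    mask_image=mask,
    num_inference_steps=12,
).images[0]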

>> Anonymous /#/ 319 [X]
>>309

great explanation my man. thank you

>> Anonymous /#/ 449 [X]
I've been having bad results with the textures. They look way off. I'm using the uberRealisticPornMerge_urpmv12 model and the parameters listed here but couldn't get any decent results. Any ideas?

>> Anonymous /#/ 454 [X]
>>449
Matching skin tones with Stable Diffusion (SD) inpainting is a known issue, especially with darker-skinned subjects.

The following should help:
In the SD GUI, go to Settings, then Stable Diffusion, and check the box "Apply color correction to img2img results to match original colors".
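
If the checkbox alone doesn't fix it, you can apply roughly the same correction by hand: match the generated image's color histogram to the original's in LAB space. A sketch, assuming numpy, opencv-python, and scikit-image are installed:

import cv2
import numpy as np
from PIL import Image
from skimage import exposure

def match_colors(generated, original):
    # Convert to LAB so lightness and color are matched separately.
    gen = cv2.cvtColor(np.asarray(generated.convert("RGB")), cv2.COLOR_RGB2LAB)
    ref = cv2.cvtColor(np.asarray(original.convert("RGB")), cv2.COLOR_RGB2LAB)
    # Match the per-channel histograms of the result to the original.
    matched = exposure.match_histograms(gen, ref, channel_axis=-1)
    rgb = cv2.cvtColor(np.clip(matched, 0, 255).astype(np.uint8), cv2.COLOR_LAB2RGB)
    return Image.fromarray(rgb)

fixed = match_colors(Image.open("inpainted.png"), Image.open("original.png"))
fixed.save("color_corrected.png")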

