Comments:
First
Another peek into the crazy world of AI image and film creation.
Using dpm_2 + Karras also works with other models to produce more fine details.
Hello, thank you! I have a small problem: I don't have the String Literal and Int Literal nodes.
This video is SO good! Thanks for the shoutout! 🤟
What do you think about Kling AI?
😮 Amazing info again, ty sir 👌🙌
Looks amazing. Wow, just wow.
Great tutorial as usual. Can you repeat it on WebUI Forge? Thanks!
Make a video for 3D or cartoon characters if you have one. Thank you.
I like spaghetti on my plate, but Comfy local is not it; it's a buzzkill from a creative standpoint. I echo your sentiments. Fooocus someday.
dpm_2 + Karras settings are not available on Replicate?
As far as I know the dev model is not for commercial use, but that's ONLY the model and derivatives of it, NOT the images you produce. It says the images you make are not considered derivatives of the model. So it's still OK to use the images.
Well, I have no problem with Flux images and trained my own LoRA, but the question is how to make a video from the images locally.
Yes, let's outsource more to AI!
Excellent 🎉
LUMA generates much worse results than shown in this video, and in free mode their generations are very slow.
What about achieving this result using Midjourney V6.1?
I don't have a computer, only an iPad. There are about 5 different Flux AI apps. Which app is which? Which one is free for making pictures and videos?
Thank you.
What's the difference between setting the guidance to zero and not using that node at all?
Love your tutorial and workflow. I just purchased your prompt PDF book, thanks a lot. BTW, is there any way I can connect Flux ControlNet to your workflow from this tutorial? The sampler doesn't have a 'controlnet condition' input. Please let me know.
None of those images look realistic. More like game intro quality. But the tech is getting better.
You can tell he's a legitimate film editor, because he looks harried, overworked, underfed, and emaciated.
I don't think AI filmmaking or FLUX can save him. What he needs to do is pivot and get a job in fast food, where he'll actually make some money and can eat some free food on the down low.
If guidance is set to 0, does it make sense to just disable the guidance node completely?
Man, you did great in District 9. We miss seeing you in movies!
Any idea how to get the background more detailed and less blurred?
THANK YOU 🎖🎖
1. How do I get multiple LoRAs to work with your workflow?
2. How do I add an img2img node to your workflow?
👍🏾👍🏾You are Awesome, my Good Brother, thank you!! I instantly subscribed. 😄
The CFG trick does not work on Forge. I tried the distilled CFG at 3.5, 1, and 0, and all the images come out the same... no errors, just identical images.
The XLabs Realism LoRA works well for background details, but it lowers the quality of faces on Forge... bummer. I have read about the same issue before.
Oh wow, that was great! I've also run a lot of tests with ComfyUI and Flux, and I was disappointed by the clean, slick, new look of everything. Now I know how to make it look more real. I would never have thought that playing with such a low guidance value would work, and I also didn't know exactly what ModelSamplingFlux was doing. I was far too conservative in my tests; now it's time to make the CUDA cores warm up my office. Fortunately the weather is cooler now!
Since you mentioned hybrid prompting, you probably know about the 'ClipTextEncodeFlux' node in ComfyUI core, but just in case you don't: it lets you send different prompts to the different text encoders and can help reduce the bleed between your natural-language prompting and your tag prompting.
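For anyone curious what that dual-encoder split looks like in practice, here is a rough sketch of how such a node appears in a workflow exported via ComfyUI's "Save (API Format)". The node id, the link reference, and the exact prompt strings are illustrative placeholders; verify the input names against your own export:

```python
# Sketch of an API-format workflow entry for the ClipTextEncodeFlux node:
# one tag-style prompt for the CLIP-L encoder, one natural-language prompt
# for the T5 encoder. Node id "6" and the ["11", 0] link are placeholders.
node = {
    "6": {
        "class_type": "CLIPTextEncodeFlux",
        "inputs": {
            "clip": ["11", 0],  # link to a DualCLIPLoader output (placeholder)
            "clip_l": "photo, 35mm film, grain, candid",
            "t5xxl": "A candid photo of an astronaut running along a beach at dusk.",
            "guidance": 3.5,
        },
    },
}
```

The point of the split is that the two encoders respond to different prompt styles, so keeping tags and prose in separate fields reduces bleed between them.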
Why are you using the SD3 CLIP files? Is there a reason for that?
I have a 4GB graphics card. Can I run it locally?
One of the best videos for Flux.
You can also "bypass" the ModelSamplingFlux node entirely, like I do. This results in better-looking images without abnormalities.
Will definitely try the FluxGuidance = 0 trick! Thanks!
Great tutorial. Any chance you can make the prompts you used available as well?
Awesome video! Thank you for your hard work!!
What did you use at the end to create a video with the images you generated before? (like the cosmonaut running, for example)
Thank you very much for your incredibly detailed explanations. I need to know the minimum requirements for a computer to be able to generate these images. As much as possible I will add more as I grow into this new medium.
Thanks again, R
Thanks for this video! Why don't you use Fuse locally on your PC?
Can I adapt this workflow via an API as well? I need it for automated image generation in an app.
Thank you for your help!
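On the API question: ComfyUI runs a local HTTP server, and a workflow exported with "Save (API Format)" can be queued programmatically by POSTing it to the `/prompt` endpoint. A minimal sketch, assuming a default local instance at 127.0.0.1:8188 (the one-node workflow fragment below is illustrative, not the tutorial's actual workflow):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumption)

def queue_prompt(workflow: dict, server: str = COMFY_URL) -> dict:
    """POST an API-format workflow to ComfyUI's /prompt endpoint.

    The JSON response includes a prompt_id, which you can poll via
    /history/<prompt_id> to retrieve the finished images.
    """
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Illustrative fragment of an API-format workflow; in a real app you would
# export your full graph with "Save (API Format)" and load it via json.load.
workflow_fragment = {
    "27": {
        "class_type": "EmptyLatentImage",
        "inputs": {"width": 1024, "height": 1024, "batch_size": 1},
    },
}

if __name__ == "__main__":
    # Requires a running ComfyUI server with the full workflow JSON.
    print(queue_prompt(workflow_fragment))
```

For an app backend, the usual pattern is to export the graph once, then patch only the prompt and seed fields of that JSON before each `queue_prompt` call.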
Thank you very much. Please enjoy your coffee.
Thank you!