Unlock Realistic & Film-Like Images in Flux AI

Digital Magic

2 months ago

31,487 views

Comments:

@ACEPandian - 27.08.2024 19:06

First

@SimonWackerle - 27.08.2024 19:17

Another peek into the crazy world of AI image and film creation.

@MikevomMars - 27.08.2024 19:24

Using dpm_2 + Karras also works with other models to bring out more fine detail.
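
[Editor's note: for anyone trying this tip in ComfyUI, below is a minimal sketch of where the sampler and scheduler are chosen in an API-format workflow fragment. It is not taken from the video; the node ids, step count, and links are hypothetical placeholders, and only the sampler_name/scheduler pair reflects the dpm_2 + Karras combination mentioned above.]

```python
# Minimal sketch of an API-format ComfyUI workflow fragment (not the video's workflow).
# Node ids "4", "5", "6", "7" and all other values are placeholders.
ksampler_node = {
    "class_type": "KSampler",
    "inputs": {
        "seed": 0,
        "steps": 25,               # illustrative step count
        "cfg": 1.0,                # Flux dev is typically run at a low CFG
        "sampler_name": "dpm_2",   # sampler from the tip above
        "scheduler": "karras",     # scheduler from the tip above
        "denoise": 1.0,
        "model": ["4", 0],         # placeholder link to a model loader node
        "positive": ["6", 0],      # placeholder link to positive conditioning
        "negative": ["7", 0],      # placeholder link to negative conditioning
        "latent_image": ["5", 0],  # placeholder link to an empty latent node
    },
}
```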

@sebastienmorere6925 - 27.08.2024 19:30

Hello, thank you. I have a small problem: I don't have the string literal and int literal nodes.

@rundiffusion - 27.08.2024 19:38

This video is SO good! Thanks for the shoutout! 🤟

@burskyozbekov - 27.08.2024 19:47

What do you think about Kling ai?

@PixelsVerwisselaar - 27.08.2024 20:11

😮 Amazing info again ty sir 👌🙌

@blakegibbons6128 - 27.08.2024 20:44

Looks amazing. Wow - just wow

@nicdisan - 27.08.2024 20:45

Great tutorial as usual. Can you repeat it on WebUI Forge? Thanks.

@mr.entezaee - 27.08.2024 22:00

Make a video for 3D or cartoon characters if you have one. Thank you.

@FuturisticAgent - 27.08.2024 23:19

I like spaghetti on my plate, but Comfy local is not it; it's a buzzkill from a creative standpoint. I echo your sentiments, Fooocus someday.

@yutu3327 - 28.08.2024 02:14

dpm_2 + Karras settings are not available on Replicate?

@THEJABTHEJAB - 28.08.2024 05:15

As far as I know the dev model is not for commercial use but that’s ONLY the model and derivatives of it. NOT the images you produce. It says the images you make are not considered derivatives of the model. So it’s still ok to use the images.

@mlnima - 28.08.2024 08:20

Well, I have no problem with Flux images and I trained my own LoRA, but the question is how to make a video from the images locally.

@Kiwi-Ahh-Nah - 28.08.2024 10:06

Yes, let's outsource more to AI!

@youtubeccia9276 - 28.08.2024 10:14

Excellent 🎉

@NEURAMos - 28.08.2024 13:44

LUMA generates much worse results than shown in this video, and in free mode their generations are very slow.

@mipiaceiltubo - 28.08.2024 14:05

What about achieving this result using Midjourney V6.1?

@TheXandercage1 - 28.08.2024 20:00

I don't have a computer, only an iPad. There are about 5 different Flux AI apps. Which app is which? Which one is free to make pictures and videos?

@dreamphoenix - 28.08.2024 23:24

Thank you.

@therookiesplaybook - 29.08.2024 04:42

What's the difference between setting the guidance to zero and not using that node at all?

@Pauluz_The_Web_Gnome - 29.08.2024 08:00

AI will fullow ulong! hehehehehehe

@FCCEO - 29.08.2024 08:37

Love your tutorial and workflow. I just purchased your prompt PDF book, thanks a lot. BTW, is there any way I can connect Flux ControlNet to your workflow from this tutorial? The sampler doesn't have a 'controlnet condition' input. Please let me know.

@domehouse79 - 30.08.2024 02:31

None of those images look realistic. More like game intro quality. But the tech is getting better.

@spitfeueranna - 30.08.2024 06:02

You can tell he's a legitimate film editor, because he looks harried, overworked, underfed, and emaciated.

I don't think AI film making or FLUX can save him. What he needs to do is pivot and get a job in Fast Food, where he'll actually make some money and can eat some free food on the down low.

@Poppinthepagne - 30.08.2024 13:53

If guidance is set to 0, does it make sense to just disable the guidance node completely?

@ryan18462 - 31.08.2024 00:48

Man, you did great in District 9. We miss seeing you in movies!

@kair5902 - 31.08.2024 01:06

any idea how to get the background more detailed and less blurred?

@TheGalacticIndian - 31.08.2024 13:50

THANK YOU🎖🎖

@kair5902 - 31.08.2024 15:04

1. How do I get multiple LoRAs to work with your workflow?
2. How do I add an img2img node to your workflow?

@FilmSpook - 31.08.2024 16:04

👍🏾👍🏾You are Awesome, my Good Brother, thank you!! I instantly subscribed. 😄

@liquidmind - 02.09.2024 19:48

The CFG trick does not work on Forge. I tried using the Distilled CFG from 3.5 to 1 to 0 and all the images are the same... no errors, just the same image.

The XLabs Realism LoRA works well for background details, but it lowers the quality of faces on Forge... bummer. I have read about the same issue before.

@kukipett - 04.09.2024 08:57

Oh wow, that was great! I've also done a lot of tests with ComfyUI and Flux, and I was disappointed by the clean, slick, brand-new look of everything. Now I know how to make it look more real. I would never have thought that playing with the guidance at such a low value would work, and I also didn't know exactly what ModelSamplingFlux was doing. I was far too conservative in my tests; now it's time to make the CUDA cores warm up my office. Fortunately the weather is cooler now!

@schonsense - 05.09.2024 22:10

Since you mentioned hybrid prompting, you probably know about the 'ClipTextEncodeFlux' node in comfy core, but just in case you don't: it lets you send different prompts to the different text encoders and can help reduce the bleed between your language prompting and your tag prompting.
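
[Editor's note: as an illustration of that split, here is a minimal sketch of the node as an API-format fragment. The node ids, links, and prompts are placeholders, and the input names (clip_l, t5xxl, guidance) are assumptions about the node as it ships in ComfyUI core, so check them against your install.]

```python
# Minimal sketch: routing a tag-style prompt to CLIP-L and a natural-language
# prompt to T5-XXL via CLIPTextEncodeFlux. Node ids and prompt text are placeholders;
# input names clip_l / t5xxl / guidance are assumed.
clip_text_encode_flux_node = {
    "class_type": "CLIPTextEncodeFlux",
    "inputs": {
        "clip": ["11", 0],  # placeholder link to a DualCLIPLoader node
        # short tag-style prompt sent to the CLIP-L encoder
        "clip_l": "35mm film photo, grainy, natural light",
        # natural-language prompt sent to the T5-XXL encoder
        "t5xxl": "A candid photo of an astronaut jogging down a rainy street at dusk.",
        "guidance": 3.5,
    },
}
```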

@andreh4859 - 06.09.2024 00:13

Why are you using the SD3 CLIP files? Is there a reason for that?

@S_M44Z - 08.09.2024 20:25

I have a 4GB graphics card, can I run it locally?

@mattm7319 - 13.09.2024 17:43

one of the best videos for flux

@2008spoonman - 16.09.2024 00:24

You can also “bypass” the ModelSamplingFlux node entirely like I do. This results in better looking images without abnormalities.
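
[Editor's note: for readers wondering what "bypassing" looks like in practice, here is a minimal sketch under assumed node names and ids, not the commenter's exact workflow: the sampler takes its model directly from the loader, so ModelSamplingFlux never appears in the chain.]

```python
# Hypothetical API-format fragment: with ModelSamplingFlux bypassed, the sampler's
# "model" input is wired straight from the UNet loader instead of through that node.
unet_loader_node = {                      # node id "1" (placeholder)
    "class_type": "UNETLoader",
    "inputs": {
        "unet_name": "flux1-dev.safetensors",  # placeholder filename
        "weight_dtype": "default",
    },
}
ksampler_node = {                         # node id "3" (placeholder)
    "class_type": "KSampler",
    "inputs": {
        "model": ["1", 0],  # direct link to the loader, no ModelSamplingFlux in between
        # ... remaining sampler inputs as in the rest of the workflow
    },
}
```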

@2008spoonman - 16.09.2024 00:26

Will definitely try the FluxGuidance = 0 trick! Thanks!

@ChrisOndrovic - 16.09.2024 22:13

Great tutorial. Any chance you can make the prompts you used available as well?

@ethanholmes9284 - 21.09.2024 16:37

Awesome video! Thank you for your hard work!!
What did you use at the end to create a video from the images you generated before? (Like the cosmonaut running, for example.)

@Richard-p2e5g - 26.09.2024 19:09

Thank you very much for your incredibly detailed explanations. I need to know the minimum requirements for a computer to be able to generate these images. As much as possible, I will add more so I can grow into this new medium.
Thanks again, R

@EnricoPintonello - 08.10.2024 13:53

Thanks for this video! Why don't you use Fuse locally on your PC?

@JakobGross - 22.10.2024 21:46

Can I also adapt this workflow via API? I need it for automated image generation in an app.
Thank you for your help!

@sbfox4795 - 03.11.2024 03:10

Thank you very much - Please enjoy your Coffee.

@MarceloPlaza - 18.11.2024 06:55

Thank you !
