FLUX TOOLS - Run Local - Inpaint, Redux, Depth, Canny

Olivio Sarikas

1 day ago

10,929 views

Comments:

@FrankWildOfficial - 22.11.2024 02:11

Can we use the inpainting model together with a LoRA trained on the regular dev model?
This would be a game changer, because then two consistent, unique characters in one image would be possible 🥳

@Gli7chSec - 22.11.2024 02:18

I just want video generation in Forge with FLUX

@blutacckk - 22.11.2024 02:41

Would my 3070 8GB be able to run Flux?

@CHATHK - 22.11.2024 02:48

On time!!

@forgottenwisdoms - 22.11.2024 02:54

What's the easiest way to run Flux on a Mac in ComfyUI?

@jaywv1981 - 22.11.2024 03:04

Using that same workflow for inpainting, I'm getting an error that it's missing a noise input.

@Skettalee - 22.11.2024 03:21

Great video and I want you to know that I really like your shirt!

@middleman-theory - 22.11.2024 03:51

My dawg, that shirt. Love it.

@mateuszpaciorek7219 - 22.11.2024 04:03

Where can I find all the workflows you're using in this video?

@David.Charles. - 22.11.2024 04:56

I just noticed Olivio has a mouse callus. It is a true badge of honor.

@user-hi3ke6qh7q - 22.11.2024 04:59

Those tabs and that mask shape was wild. Thanks for the info :)

@Darkwing8707 - 22.11.2024 05:15

As always with inpainting, I strongly recommend using the Inpaint-CropAndStitch nodes. That way it's much less likely that you'll need to upscale. It behaves like "inpaint only masked" in A1111/Forge. I've tested it with this new fill model and it works like a charm.
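
For anyone unfamiliar with the nodes mentioned above: the idea is to crop a padded region around the mask, inpaint only that crop at a comfortable resolution, and stitch the result back into the original image. Below is a minimal sketch of the concept in Python with Pillow and NumPy; the `inpaint` callable is a hypothetical stand-in for whatever model call you use, not the actual node code:

```python
import numpy as np
from PIL import Image

def crop_and_stitch(image, mask, inpaint, pad=32):
    """Inpaint only a padded crop around the masked area, then paste it back.

    `inpaint(crop, crop_mask)` is a hypothetical stand-in for a model call;
    it must return an image the same size as `crop`.
    """
    m = np.array(mask.convert("L")) > 127
    ys, xs = np.nonzero(m)
    if xs.size == 0:
        return image  # nothing is masked, nothing to do

    # Mask bounding box, expanded by `pad` pixels and clamped to the image.
    x0, y0 = max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0)
    x1 = min(int(xs.max()) + pad, image.width)
    y1 = min(int(ys.max()) + pad, image.height)

    crop = image.crop((x0, y0, x1, y1))
    crop_mask = mask.convert("L").crop((x0, y0, x1, y1))

    result = inpaint(crop, crop_mask)  # the model only sees the region that matters

    # Paste back using the mask so unmasked pixels stay untouched.
    out = image.copy()
    out.paste(result, (x0, y0), crop_mask)
    return out
```

The actual Inpaint-CropAndStitch nodes additionally rescale the crop to the model's preferred working resolution before stitching, which is presumably what saves the extra upscale pass the commenter mentions.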

@LydianMelody - 22.11.2024 05:35

I need that shirt :O (edit: oh hello link! Thanks!!!)

@thedevilgames8217 - 22.11.2024 06:14

Why is everything in ComfyUI?

@FusionDeveloper - 22.11.2024 06:35

Great video.

@jiexu-j9w - 22.11.2024 07:35

Does Redux work with the GGUF Q4 version? I only have 8GB of VRAM.

@alpaykasal2902 - 22.11.2024 08:02

GREAT t-shirt... and episode, as always.

@geyck - 22.11.2024 08:35

Can you do OpenPose yet for Flux-Forge?

@KK47.. - 22.11.2024 09:02

Thank you again, OV

@AdvancExplorer - 22.11.2024 10:07

Does it work with GGUF Flux models?

@bause6182 - 22.11.2024 10:35

Can you run this with 12GB of VRAM using GGUF Q4 Flux?

@ericpanzer8159 - 22.11.2024 11:04

I would recommend lowering your Flux guidance and trying DEIS/SGM_uniform or Heun/Beta to reduce the plastic skin appearance. The default guidance for Flux in sample workflows is way too high: 3.5 is the default, but 1.6-2.7 yields superior results.
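
As a concrete illustration of that tweak: in a ComfyUI API-format workflow export, guidance lives on the `FluxGuidance` node and the sampler/scheduler on the `KSampler` node, so the change is two fields. A hedged sketch in Python (the file name is hypothetical; the values follow the commenter's suggestion):

```python
import json

# Hypothetical file name; use your own API-format workflow export.
with open("flux_workflow_api.json") as f:
    workflow = json.load(f)

for node in workflow.values():
    # Drop guidance from the common 3.5 default into the suggested range.
    if node.get("class_type") == "FluxGuidance":
        node["inputs"]["guidance"] = 2.0
    # Switch to DEIS with the SGM uniform scheduler (Heun/Beta also works).
    if node.get("class_type") == "KSampler":
        node["inputs"]["sampler_name"] = "deis"
        node["inputs"]["scheduler"] = "sgm_uniform"

with open("flux_workflow_api.json", "w") as f:
    json.dump(workflow, f, indent=2)
```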

@KDawg5000 - 22.11.2024 11:05

Finally playing with this a bit. I wish the depth map nodes would keep the same resolution as the input image. I'm sure I could just use some math nodes to do that, but it seems like it should be automatic, or at least a checkbox on the node. This matters in these setups because the input ControlNet image (depth/canny) drives the size of the latent image, and thus the size of your final image.
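
A small sketch of the bookkeeping the commenter describes, done in Python rather than math nodes: resize the depth/canny control image to the source image's dimensions, rounded to a multiple the latent space accepts (16 is assumed here; the exact multiple depends on the model):

```python
from PIL import Image

def match_control_size(control: Image.Image, reference: Image.Image,
                       multiple: int = 16) -> Image.Image:
    """Resize a depth/canny image to the reference image's dimensions,
    rounded down so both sides are a clean multiple for the latent."""
    w = (reference.width // multiple) * multiple
    h = (reference.height // multiple) * multiple
    return control.resize((w, h), Image.LANCZOS)

# Usage: depth_map = match_control_size(depth_map, source_photo)
```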

@AlexeySeverin - 22.11.2024 12:29

Thanks for sharing! That's great news! Let's see if the native ControlNets work better... As usually happens with FLUX, some things just don't seem to make a lot of sense... Like, what on Earth is with Flux Guidance 10? Or 30?! Also, why do we need a whole 23GB separate model just for inpainting, which we can already do with masking and differential diffusion anyway? Why? So many questions, Black Forest Labs, so many questions...

@Showbiz_CH - 22.11.2024 13:17

What about Forge integration?

@tats5850 - 22.11.2024 16:08

Thank you for the video. The inpaint looks promising. Do you think the 24GB inpainting model will work with a 4060 Ti (16GB of VRAM)?

@ian2593 - 22.11.2024 16:44

Mine threw an error when running through Canny Edge, but not with Depth Anything. If I disconnect it, run the process once, and then reconnect and run again, it works. It says I'm trying to run conflicting models the first time, even though everything exactly matches what you're running. Just letting others who might have the same issue know what to do.

@zebmac - 22.11.2024 17:28

Great video! Redux, Depth and Canny (I have not tried Fill yet) work with the Pixelwave model too.

@researchandbuild1751 - 22.11.2024 18:07

How did you know you need a CLIP vision model?

@bobobaba2080 - 22.11.2024 18:53

I get the error "CLIPVisionLoader: Error(s) in loading state_dict for CLIPVisionModelProjection" while loading CLIP vision, even though I downloaded this file (siglip-so400m-patch14-384.safetensors, 3.4 GB) and this file (sigclip_vision_patch14_384.safetensors, 836 MB) and placed them in my ComfyUI\models\clip_vision directory. Does anyone know what I should do?

@gimperita3035 - 22.11.2024 20:05

I'm using SD 3.5 L for the Ultimate Upscaler, with a detailer LoRA, and it works fantastically!

@mikrobixmikrobix - 22.11.2024 22:09

I have problems installing many nodes (Depth Anything). Which version of Python do you use? I have the 3.12 bundled with ComfyUI, and I often run into this exact problem.

@therookiesplaybook - 23.11.2024 00:03

What am I missing? The output image doesn't match the input at all when I do it.

@Osama-xs8cl - 23.11.2024 03:01

Hello Olivio, what is the minimum GPU VRAM that can run Flux in ComfyUI?

@Zegeeye - 23.11.2024 03:37

To make the Redux model work, you have to add a node to control the strength.
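
For readers wondering what such a strength node amounts to: Redux works by appending image-derived tokens to the text conditioning, and strength controls typically scale those appended tokens down. A rough guess at the mechanism, purely illustrative and not the actual node's code (tensor shapes are assumed to be batch x tokens x dim):

```python
import torch

def apply_redux_strength(text_tokens: torch.Tensor,
                         image_tokens: torch.Tensor,
                         strength: float = 0.5) -> torch.Tensor:
    """Append Redux image tokens to the text conditioning, scaled by
    `strength`: 0.0 effectively ignores the reference image, while 1.0
    approximates the full-strength behavior of the stock Redux node."""
    return torch.cat([text_tokens, image_tokens * strength], dim=1)
```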

@asdfwerqewsd - 23.11.2024 04:01

Are there going to be GGUF versions of these models?

@Kvision25th - 23.11.2024 08:53

Flux is so all over the place :/ guidance 30 :D

@stefanoangeliph - 23.11.2024 10:42

I have been testing the Depth LoRA, but the output is very far from the input image. It does not seem to work the way ControlNet depth does. Even in your video, the two cars have a similar position, but they are not sharing the same "depth": the input red car is seen from a higher position than the output one. In my test (a bedroom), the output image is sometimes "reversed". Is this expected? Does it mean that these Canny and Depth tools are far from how ControlNet works?

@FlyingCowFX - 23.11.2024 11:51

I am seeing very grainy results with the Flux Fill model for inpainting; I wonder if it's my settings or the model.
