Comments:
Can we use the inpainting model together with a LoRA trained on the regular dev model?
This would be a game changer, because that way two consistent, unique characters in one image would be possible 🥳
I just want FLUX video generation in Forge
Would my 3070 8GB be able to run Flux?
On time!!
Easiest way to run Flux on a Mac in Comfy?
Using that same workflow for inpainting, I'm getting an error that it's missing a noise input.
Great video and I want you to know that I really like your shirt!
My dawg, that shirt. Love it.
Where can I find all the workflows that you're using in this video?
I just noticed Olivio has a mouse callus. It is a true badge of honor.
Those tabs and that mask shape were wild. Thanks for the info :)
As always with inpainting, I strongly recommend using the Inpaint-CropAndStitch nodes. That way it's much less likely that you'll need to upscale. It behaves like "inpaint only masked" in A1111/Forge. I've tested it with this new fill model and it works like a charm.
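For anyone curious what "crop and stitch" does conceptually, here is a rough sketch of the idea in plain Python/Pillow (my own illustration, not the node's actual code; `inpaint_fn` is a placeholder for whatever inpainting call you use):

```python
from PIL import Image

def inpaint_only_masked(image: Image.Image, mask: Image.Image, inpaint_fn, pad: int = 64):
    """Crop around the mask, inpaint just that region, then stitch it back.
    `inpaint_fn(image, mask) -> Image` stands in for your real inpaint call."""
    # Bounding box of the non-zero (masked) area, expanded by `pad` pixels of context.
    left, top, right, bottom = mask.getbbox()
    box = (max(left - pad, 0), max(top - pad, 0),
           min(right + pad, image.width), min(bottom + pad, image.height))

    # Inpaint only the cropped region at working resolution...
    patch = inpaint_fn(image.crop(box), mask.crop(box))

    # ...then scale it back to the crop size and paste it into the original.
    result = image.copy()
    result.paste(patch.resize((box[2] - box[0], box[3] - box[1])), box[:2])
    return result
```

The real node also blends only the masked pixels back in; this simplified version pastes the whole crop, but the principle (small crop in, full-resolution stitch out) is the same.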
I need that shirt :O (edit: oh hello link! Thanks!!!)
Great video.
Does Redux work with the GGUF Q4 version? I only have 8GB of VRAM.
GREAT t-shirt... and episode, as always.
Can you do OpenPose yet for Flux-Forge?
Thank you again, OV
Is it working with GGUF Flux models?
Can you run this with 12GB of VRAM and a GGUF Q4 Flux?
I would recommend lowering your Flux guidance and trying DEIS/SGM_uniform or Heun/Beta to reduce the plastic skin appearance. The default guidance for Flux in sample workflows is way too high. For example, 3.5 is the default, but 1.6-2.7 yields superior results.
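If you want to apply that suggestion without editing the graph by hand, here is a small sketch (my own, not from the video) that patches a ComfyUI API-format workflow JSON. The file names and the assumption that your graph uses the stock FluxGuidance and KSampler nodes are mine, so adjust to your setup:

```python
import json

# Workflow exported via "Save (API Format)" in ComfyUI; the file name is a placeholder.
with open("flux_workflow_api.json") as f:
    workflow = json.load(f)

for node in workflow.values():
    if node.get("class_type") == "FluxGuidance":
        node["inputs"]["guidance"] = 2.0              # down from the usual 3.5
    elif node.get("class_type") == "KSampler":
        node["inputs"]["sampler_name"] = "deis"       # or "heun"
        node["inputs"]["scheduler"] = "sgm_uniform"   # or "beta"

with open("flux_workflow_api_tuned.json", "w") as f:
    json.dump(workflow, f, indent=2)
```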
Finally playing with this a bit. I wish the depth map nodes would keep the same resolution as the input image. I'm sure I could just use some math nodes to do that, but it seems like it should be automatic, or at least a checkbox on the node. This matters in these setups because the input controlnet image (depth/canny) drives the size of the latent image, and thus the size of your final image.
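Until that's built in, a quick workaround is to resize the depth map back to the source resolution before it reaches the conditioning node. A minimal Pillow sketch (file names are placeholders; inside ComfyUI an image-scale node can do the same job):

```python
from PIL import Image

# Placeholder file names: the original input and the depth map generated from it.
input_image = Image.open("input.png")
depth_map = Image.open("depth.png")

# Resize the depth map to the input's exact width/height so the latent
# (and therefore the final image) follows the source resolution.
depth_resized = depth_map.resize(input_image.size, Image.Resampling.BICUBIC)
depth_resized.save("depth_resized.png")
```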
Thanks for sharing! That's great news! Let's see if native ControlNets work better... As usually happens with FLUX, some things just don't seem to make a lot of sense... Like, what on Earth is with Flux Guidance 10? Or 30?! Also, why do we need a whole 23GB separate model just for inpainting (which we can already do with masking and differential diffusion anyway)? Why? So many questions, Black Forest Labs, so many questions...
What about Forge integration?
Thank you for the video. The inpaint looks promising. Do you think the 24GB inpainting model will work with a 4060 Ti (16GB of VRAM)?
Mine threw up an error when running through Canny Edge but not with Depth Anything. If I disconnect it, run the process once and then reconnect and run again, it works. It says I'm trying to run conflicting models the first time, even though everything exactly matches what you're running. Just letting others who might have the same issue know what to do.
Great video! Redux, Depth and Canny (I haven't tried Fill yet) work with the Pixelwave model too.
How did you know you need a CLIP Vision model?
I get this error while loading CLIP Vision: "CLIPVisionLoader: Error(s) in loading state_dict for CLIPVisionModelProjection", even though I downloaded this file (siglip-so400m-patch14-384.safetensors, 3.4 GB) and this file (sigclip_vision_patch14_384.safetensors, 836 MB) and placed them in my ComfyUI\models\clip_vision directory. Anyone know what I should do?
I'm using SD 3.5 L for the Ultimate Upscaler - with a detailer Lora - and it works fantastic!
I have problems installing many nodes (Depth Anything). Could you let me know what version of Python you use? I have the 3.12 included with Comfy and I often have this exact problem.
What am I missing? The output image doesn't match the input at all when I do it.
Hello Olivio, what is the minimum GPU VRAM that can run Flux on ComfyUI?
To make the Redux model work, you have to add a node to control the amount of strength.
Are there going to be GGUF versions of these models?
Flux is so all over the place :/ guidance 30 :D
I have been testing the Depth LoRA, but the output is very far from the input image. It does not seem to work the way ControlNet depth does. Even in your video, the two cars have a similar position, but they don't share the same "depth": the input red car is seen from a higher position than the output one. In my test (a bedroom), the output image is sometimes "reversed". Is this expected? Does it mean that Canny and Depth work very differently from ControlNet?
I am seeing very grainy results with the Flux Fill model for inpainting; I wonder if it's my settings or the model.