Comments:
I just downloaded SwarmUI as well as the Hunyuan models shown in the video; however, I keep getting this error: ComfyUI execution error: HunyuanImageToVideo.encode() missing 1 required positional argument: 'guidance_type'. Has anyone else gotten this as well? If I have missed a step, please let me know, as I am fairly new to all of this.
I follow all these tutorials exactly and they don't work. Even Flux on Forge for images doesn't work.
Not working. The best I get is something that looks like Nintendo 1997 quality.
About the GGUF models: where do I need to put these? They don't seem to work in the "diffusion_models" folder.
I feel there is something missing in the video. When I try it, only "Init Image" and "Image To Video" are turned on. In the "Image To Video" tab, hunyuan_video_I2V_fp8_e4m3fn is selected, but when I click Generate I get "[Info] User local requested 1 image with model 'Flux/flux1-dev-fp8.safetensors'..." and "missing 1 required positional argument: 'image_interleave'" in the terminal.
I get this error: ComfyUI execution error: TextEncodeHunyuanVideo_ImageToVideo.encode() missing 1 required positional argument: 'image_interleave' :-(
It doesn't use my init image... it generates random stuff.
Video CFG is at 1, Image Creativity is at 0...
I'm using Stability Matrix (SM) as the installer for ForgeUI and ComfyUI. Can / should I use the same SM for SwarmUI?
Which one would you say is best, Hunyuan or Wan?
Pros/cons between the two?
And which is better to use, Swarm or Comfy?
Pros/cons?
That was awesome information! We are going to use this tool with BeLikeNative. We are always in search of good and cost-effective video editors. Actually, I own a fiction web story writing firm, so not just any AI tool works for us. Still, we use BeLikeNative because it gives us perfectly translated and paraphrased scripts in 11 languages. So far, these scripts are so well localized that even native editors could not identify them. This video editing tool might increase our productivity and earning potential.
Thanks a lot, Seb. Quick question: would it be the same type of settings for Wan i2v in SwarmUI? I couldn't get it working (from your last tutorial). Like "Image Creativity" set to zero for it to work? I didn't do that in Wan; is that likely the fault? (I'm new to making video gen.) Thanks :) P
Thanks for the information!
How much graphics memory do you have?
Generation looks so fast in the video.
Heard it's faster than Wan; hope that's true, because I don't want to wait 20 minutes for a 5-second video.
I didn't realize it was possible to get previews of the video generation... Is this possible in Comfy as well? I wasn't able to find a way with Hunyuan when I checked a few weeks ago, but I may have easily missed something.
Anyone know what's going on with the glitches at the beginning of every video generated by this model? The first few frames always come out weird, regardless of length or scheduler or anything else. Anyone got a solution for this? I'm running a lot of tests myself.
Can I please get some help? I can't figure it out; it says it doesn't have any backends. I moved the Swarm folder from the C drive to another drive because I didn't have any space for models. Has anyone had this problem and knows how to fix it? Much appreciated.
I have an RTX 3060 12GB, and a 4-second image-to-video took 35 minutes to generate as an MP4, and that was with TeaCache enabled. Love the idea of going all local, but for me it's not really viable on my PC, sadly. So it's still Kling / Hailou for me at the moment.
Hunyuan or Wan 2.1? Which one would you say is better? What are the pros and cons?
We're gonna need a bigger comp...
My internal SSD is full and locally my PC can handle SVD and LTX (with low VRAM tweaks), but that's about it. At least with a reasonable processing time.
Just tried Hunyuan i2v, but it is not even close to the superb quality of Wan 2.1. Maybe some wrong settings? I copied your settings exactly.
ComfyUI workflow?
Swarm automatically downloaded the safetensor of llava_llama3_fp8_scaled for me. Did it do that for you, Seb?
This year is kind of insane: three open-source AI video projects going head to head, releasing their newest models.
Wait a second...
...Is this a non-node-based I2V local AI image and video generator???
When I try to install TeaCache, it constantly says: "failed to send request to server. Did the server crash?"
It says it installed; however, when I restart Swarm it tells me to install it again.
Will you make a ComfyUI workflow for this? I don't want to download Swarm :(
Cannot execute because node TextEncodeHunyuanVideo_ImageToVideo does not exist
Darn, followed along exactly, and every time I hit generate it just does a single picture and not a video. The output even lists that I asked for 120 frames, but it's just a still that's instantly created.
It's just creating an image of what I put in. Yes, I have image to video turned on. No, it's a WebP moving image. Hmm, I see the model in the big box below, and when I search for it, but it won't select in the "Video Model" pull-down... Found it: I had to set the architecture of the model.
Thank you so much for sharing this, you make AI accessible for the greater public.
This is all very interesting but requires a lot of resources.
My original comment was about the missing node "TextEncodeHunyuanVideo_ImageToVideo". Just FYI for anyone else running into this issue: I have Swarm set up to pull updates on start, but that's not enough. I also had to go into the Comfy Manager and do an "Update All" for it to properly get the files needed. Doing "Install Missing Nodes" didn't work for some reason. 🤷♂
Sebastian, I'm new to all things video. Have been doing images for a while. I've written music and lyrics for a song. Can this technique be used for creating a music video to sync with the lyrics and music?
So, with Swarm I don't have to download the VAE and CLIP files?
Thank you for pasting the direct download link for Swarm. I wish more creators did this.
Thanks
How does it compare to SkyReels and Wan 2.1? For once this would actually be a really worthwhile comparison. SkyReels can only do 480p, and its image-to-video seems hacky, but it's supposed to have more diverse training data, capable of better motion and maybe quality.
Wan 2.1 had a lot of hype, with people initially saying it beats everything, but the videos they were demoing looked like they were from two generations ago, so perhaps that was just the 1.3B model. Recently I saw one person demo Wan 2.1 results that looked like they were trading blows with VEO 2, so maybe that's because there are 14B quants people can run now.
Where can I see how many GB of RAM it needs?
Lol, I'm assuming I can't run it on a 3060 12GB.
That "Video Boomerang" option, is that seamless video looping?
The quality looks great. I think it's useful for full videos now. I will try it.
I'm experiencing the same issue with that first frame glitch. I'm using the GGUF version.
So quick, thank you! I'm stuck in a square aspect ratio even if I use any of the video resolution options. Any idea?
How does it compare to wan2.1 i2v? Better or worse?
Thanks for getting us the news so fast, Seb 😊
Thanks! What are the GPU requirements?
Amazing, SwarmUI makes things so easy. Thank you for making this tutorial so quickly.
Good tutorial!
WE ARE LIVEEEEE, I was literally sitting refreshing my sub box lol