ComfyUI Layer Style
Click here for the Chinese documentation.
For business cooperation, please contact [email protected].
A set of ComfyUI nodes for compositing layers and masks to achieve Photoshop-like functionality.
It migrates some basic functions of Photoshop to ComfyUI, aiming to centralize the workflow and reduce the frequency of switching between software.
![image](https://images.downcodes.com/uploads/20250202/img_679f586601e4930.png)
*This workflow (title_example_workflow.json) is in the workflow directory.
Example Workflows
There are some JSON workflow files in the workflow directory; these are examples of how to use these nodes in ComfyUI.
How To Install
(Taking the ComfyUI official portable package and the Aki ComfyUI package as examples; for other ComfyUI environments, please modify the dependency directories accordingly)
Install the plugin
Install dependency packages
For the ComfyUI official portable package, double-click install_requirements.bat in the plugin directory; for the Aki ComfyUI package, double-click install_requirements_aki.bat in the plugin directory, and wait for the installation to complete.
Or install the dependency packages manually: open a CMD window in the ComfyUI_LayerStyle plugin directory, such as ComfyUI\custom_nodes\ComfyUI_LayerStyle, then enter the following commands.
For the ComfyUI official portable package, type:
..\..\..\python_embeded\python.exe -s -m pip install .\whl\docopt-0.6.2-py2.py3-none-any.whl
..\..\..\python_embeded\python.exe -s -m pip install .\whl\hydra_core-1.3.2-py3-none-any.whl
..\..\..\python_embeded\python.exe -s -m pip install -r requirements.txt
.\repair_dependency.bat
For the Aki ComfyUI package, type:
..\..\python\python.exe -s -m pip install .\whl\docopt-0.6.2-py2.py3-none-any.whl
..\..\python\python.exe -s -m pip install .\whl\hydra_core-1.3.2-py3-none-any.whl
..\..\python\python.exe -s -m pip install -r requirements.txt
.\repair_dependency.bat
Download Model Files
Chinese domestic users can download from BaiduNetdisk; other users can download from huggingface.co/chflame163/ComfyUI_LayerStyle.
Download all the files and copy them to the ComfyUI\models folder. This link provides all the model files required by this plugin. Alternatively, download the model files according to the instructions of each node.
Common Issues
If the nodes fail to load properly or errors occur during use, please check the error messages in the ComfyUI terminal window. The following are common errors and their solutions.
Warning: xxxx.ini not found, use default xxxx.
This warning message indicates that the ini file cannot be found; it does not affect usage. If you do not want to see these warnings, please rename all *.ini.example files in the plugin directory to *.ini.
ModuleNotFoundError: No module named 'psd_tools'
This error means that psd_tools was not installed correctly.
Solution:
- Close ComfyUI, open a terminal window in the plugin directory, and execute the following command: ../../../python_embeded/python.exe -s -m pip install psd_tools
If an error such as ModuleNotFoundError: No module named 'docopt' occurs while installing psd_tools, download the docopt whl and install it manually. Execute the following command in the terminal window: ../../../python_embeded/python.exe -s -m pip install path/docopt-0.6.2-py2.py3-none-any.whl, where path is the path of the whl file.
Cannot import name 'guidedFilter' from 'cv2.ximgproc'
This error is caused by an incorrect version of the opencv-contrib-python package, or by that package being overwritten by another OpenCV package.
NameError: name 'guidedFilter' is not defined
The cause of this problem is the same as above.
Cannot import name 'VitMatteImageProcessor' from 'transformers'
This error is caused by too low a version of the transformers package.
insightface loads very slowly
This error is caused by too low a version of the protobuf package.
For the above three dependency package issues, double-click repair_dependency.bat (for the official ComfyUI portable) or repair_dependency_aki.bat (for ComfyUI-aki-v1.x) in the plugin folder to fix them automatically.
onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page
Solution: reinstall the onnxruntime dependency package.
Error loading model xxx: We couldn't connect to huggingface.co...
Check the network environment. If you cannot access huggingface.co normally in China, try modifying the huggingface_hub package to force the use of hf_mirror.
ValueError: trimap does not contain foreground values (xxxx...)
This error occurs because the mask area is too large or too small when using the PyMatting method to handle the mask edges.
Solution:
- Please adjust the parameters to change the effective area of the mask, or use another method to handle the edges.
requests.exceptions.ProxyError: HTTPSConnectionPool (xxxx...)
When this error occurs, please check the network environment.
UnboundLocalError: local variable 'clip_processor' referenced before assignment
UnboundLocalError: local variable 'text_model' referenced before assignment
If this error occurs when executing the JoyCaption2 node and you have confirmed that the model files are placed in the correct directory, please check that the version of the transformers dependency package is at least 4.43.2 or higher. If the transformers version is 4.45.0 or higher and there is also an error message:
Error loading models: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
......
please try downgrading the protobuf dependency package to 3.20.3, or set the environment variable: PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python.
Update
**If the dependency packages are broken after updating, double-click repair_dependency.bat (for the official ComfyUI portable) or repair_dependency_aki.bat (for ComfyUI-aki-v1.x) in the plugin folder to reinstall the dependency packages.
Commit BenUltra and LoadBenModel nodes. These two nodes are the ComfyUI implementation of the PramaLLC/BEN project. Download BEN_Base.pth and config.json from huggingface or BaiduNetdisk and copy them to the ComfyUI/models/BEN folder.
Merge the PR submitted by jimlee2048, add the LoadBiRefNetModelV2 node, and support loading RMBG 2.0 models. Download the model files from huggingface or BaiduNetdisk and copy them to the ComfyUI/models/BiRefNet/RMBG-2.0 folder.
Florence2 nodes support base-PromptGen-v2.0 and large-PromptGen-v2.0. Download the base-PromptGen-v2.0 and large-PromptGen-v2.0 folders from huggingface or BaiduNetdisk and copy them to ComfyUI/models/florence2.
SAM2Ultra and ObjectDetector nodes support image batches.
SAM2Ultra and SAM2VideoUltra nodes add support for SAM2.1 models, including kijai's FP16 model. Download the model files from BaiduNetdisk or huggingface.co/Kijai/sam2-safetensors and copy them to the ComfyUI/models/sam2 folder.
Commit JoyCaption2Split and LoadJoyCaption2Model nodes; sharing the model across multiple JoyCaption2 nodes improves efficiency.
SegmentAnythingUltra and SegmentAnythingUltraV2 add the cache_model option, making it easy to manage VRAM usage flexibly.
Due to the high transformers version requirement of the LlamaVision node, which affects the loading of some older third-party plugins, the LayerStyle plugin has lowered its default requirement to 4.43.2. If you need to run LlamaVision, please upgrade to 4.45.0 or higher yourself.
Commit JoyCaption2 and JoyCaption2ExtraOptions nodes. New dependency packages need to be installed. The JoyCaption-alpha-two model is used for local inference and can be used to generate prompt words. This node is the ComfyUI implementation of https://huggingface.co/John6666/joy-caption-alpha-two-cli-mod; thanks to the original author. Download models from BaiduNetdisk and BaiduNetdisk, or huggingface/Orenguteng and huggingface/unsloth, then copy to ComfyUI/models/LLM; download models from BaiduNetdisk or huggingface/google and copy to ComfyUI/models/clip; download the cgrkzexw-599808 folder from BaiduNetdisk or huggingface/John6666 and copy to ComfyUI/models/Joy_caption.
Commit LlamaVision node, using the Llama 3.2 vision model for local inference. It can be used to generate prompt words. Part of this node's code comes from ComfyUI-PixtralLlamaMolmoVision; thanks to the original author. To use this node, transformers needs to be upgraded to 4.45.0 or higher. Download the model from BaiduNetdisk or huggingface/SeanScripts and copy it to ComfyUI/models/LLM.
Commit RandomGeneratorV2 node, adding a minimum random range and seed options.
Commit TextJoinV2 node, adding a delimiter option on top of TextJoin.
Commit GaussianBlurV2 node; the parameter precision has been improved to 0.01.
Commit UserPromptGeneratorTxtImgWithReference node.
Commit GrayValue node, which outputs the grayscale value corresponding to an RGB color value.
LUT Apply, TextImageV2, TextImage, and SimpleTextImage nodes support defining multiple folders in resource-dir.ini, separated by commas, semicolons, or spaces. Real-time refresh updates are also supported.
LUT Apply, TextImageV2, TextImage, and SimpleTextImage nodes support defining multi-directory font and LUT folders, with refresh and real-time updates.
Commit HumanPartsUltra node, used to generate masks for human body parts. It is based on Metal3D/comfyui_human_parts; thanks to the original author. Download the model file from BaiduNetdisk or huggingface and copy it to the ComfyUI\models\onnx\human-parts folder.
ObjectDetector nodes add a sort-by-confidence option.
Commit DrawBBoxMask node, used to convert the BBoxes output by the ObjectDetector nodes into a mask.
Commit UserPromptGeneratorTxtImg and UserPromptGeneratorReplaceWord nodes, used to generate text and image prompts and to replace prompt content.
Commit PhiPrompt node, using the Microsoft Phi 3.5 text and vision models for local inference. It can be used to generate prompt words, process prompt words, or infer prompt words from images. Running this model requires at least 16GB of video memory.
Download the model files from BaiduNetdisk or huggingface.co/microsoft/Phi-3.5-vision-instruct and huggingface.co/microsoft/Phi-3.5-mini-instruct and copy them to the ComfyUI\models\LLM folder.
Commit GetMainColors node, which can obtain the 5 main colors of an image. Commit ColorName node, which can obtain the color name of an input color value.
Duplicate the Brightness & Contrast node as BrightnessContrastV2, the Color of Shadow & Highlight node as ColorofShadowHighlight, and the Shadow & Highlight Mask node as Shadow Highlight Mask V2, to avoid errors in ComfyUI workflow parsing caused by the "&" character in node names.
Commit VQAPrompt and LoadVQAModel nodes.
Download the models from BaiduNetdisk or huggingface.co/Salesforce/blip-vqa-capfilt-large and huggingface.co/Salesforce/blip-vqa-base and copy them to the ComfyUI\models\VQA folder.
Florence2Ultra, Florence2Image2Prompt, and LoadFlorence2Model nodes support the MiaoshouAI/Florence-2-large-PromptGen-v1.5 and MiaoshouAI/Florence-2-base-PromptGen-v1.5 models.
Download the model files from BaiduNetdisk or huggingface.co/MiaoshouAI/Florence-2-large-PromptGen-v1.5 and huggingface.co/MiaoshouAI/Florence-2-base-PromptGen-v1.5 and copy them to the ComfyUI\models\florence2 folder.
Commit BiRefNetUltraV2 and LoadBiRefNetModel nodes, which support the use of the latest BiRefNet model. Download the model file named BiRefNet-general-epoch_244.pth from BaiduNetdisk or GoogleDrive to the ComfyUI/models/BiRefNet/pth folder. You can also download more BiRefNet models and place them here.
ExtendCanvasV2 node supports negative value input, meaning the image will be cropped.
The default title color of the nodes has been changed to blue-green, and the LayerStyle, LayerColor, LayerMask, LayerUtility, and LayerFilter nodes are distinguished by different colors.
ObjectDetector nodes add a sort bbox option, which allows sorting from left to right, top to bottom, or large to small, making object selection more intuitive and convenient. The nodes released yesterday have been abandoned; please manually replace them with the new version nodes (sorry).
Commit SAM2Ultra, SAM2VideoUltra, ObjectDetectorFL2, ObjectDetectorYOLOWorld, ObjectDetectorYOLO8, ObjectDetectorMask, and BBoxJoin nodes. Download the models from BaiduNetdisk or huggingface.co/Kijai/sam2-safetensors and copy them to the ComfyUI/models/sam2 folder; download the models from BaiduNetdisk or GoogleDrive and copy them to the ComfyUI/models/yolo-world folder. This update introduces new dependencies; please reinstall the dependency packages.
Commit RandomGenerator node, used to generate random numbers within a specified range, with int, float, and boolean outputs, and supporting batch generation of different random numbers for image batches.
Commit EVFSamUltra node, which is the ComfyUI implementation of EVF-SAM. Please download the model files from BaiduNetdisk or huggingface/EVF-SAM2 and huggingface/EVF-SAM to the ComfyUI/models/EVF-SAM folder (save the models in their respective subdirectories). Since new dependency packages are introduced, please reinstall the dependency packages after upgrading the plugin.
Commit ImageTaggerSave and ImageAutoCropV3 nodes, used to implement an automatic cropping and tagging workflow for training sets (the workflow image_tagger_save.json is located in the workflow directory).
Commit CheckMaskV2 node, adding a faster simple method for detecting masks.
Commit ImageReel and ImageReelComposite nodes to composite multiple images onto one canvas.
NumberCalculatorV2 and NumberCalculator add min and max methods.
Optimize node loading speed.
Florence2Image2Prompt adds support for the thwri/CogFlorence-2-Large-Freeze and thwri/CogFlorence-2.1-Large models. Please download the model files from BaiduNetdisk or huggingface/CogFlorence-2-Large-Freeze and huggingface/CogFlorence-2.1-Large and copy them to the ComfyUI/models/florence2 folder.
Merge ClownsharkBatwing's branch "Use GPU for color blend mode"; the speed of some layer blend modes is more than ten times faster.
Commit Florence2Ultra, Florence2Image2Prompt, and LoadFlorence2Model nodes.
TransparentBackgroundUltra node adds support for new models. Please download the model files according to the instructions.
Commit SegformerUltraV2, SegformerFashionPipeline, and SegformerClothesPipeline nodes, used for segmenting clothing. Please download the model files according to the instructions.
Commit install_requirements.bat and install_requirements_aki.bat, a one-click solution for installing the dependency packages.
Commit TransparentBackgroundUltra node, which removes the background based on the transparent-background model.
Change the VitMatte model of the Ultra nodes to a local call. Please download all the files of the vitmatte model to the ComfyUI/models/vitmatte folder.
GetColorToneV2 node adds a mask method to the color selection option, which can accurately obtain the main color and average color within the mask.
ImageScaleByAspectRatioV2 node adds the "background_color" option.
LUT Apply adds the "strength" option.
Commit AutoAdjustV2 node, adding optional mask input and support for multiple automatic color adjustment modes.
Due to the upcoming discontinuation of the Gemini-Pro Vision service, PromptTagger and PromptEmbellish add the "gemini-1.5-flash" API to continue using them.
Ultra nodes add an option to run VitMatte on a CUDA device, increasing the running speed by a factor of 5.
Commit QueueStop node, used to terminate queue operations.
Optimize the performance of the VitMatte method in Ultra nodes when processing large-size images.
CropByMaskV2 adds an option to round the cutting size to multiples.
Commit CheckMask node, which detects whether a mask contains enough effective area. Commit HSVValue node, which converts a color value into an HSV value.
BooleanOperatorV2, NumberCalculatorV2, Integer, Float, and Boolean nodes add string output, outputting the value as a string for use with SwitchCase.
Commit SwitchCase node, which switches the output based on a matching string. It can be used for any type of data switching.
Commit String node, used to output a string. This is a simplified version of the TextBox node.
Commit If node, which switches the output based on a boolean condition input. It can be used for any type of data switching.
Commit StringCondition node, which determines whether text contains a substring.
Commit NumberCalculatorV2 node, adding the nth-root operation. Commit BooleanOperatorV2 node, adding greater-than, less-than, greater-than-or-equal, and less-than-or-equal logical judgments. Both nodes accept numeric inputs, and numeric values can also be entered inside the node. Note: numeric inputs take precedence; when there is an input, the value inside the node is ignored.
Commit SD3NegativeConditioning node, encapsulating the four nodes of negative conditioning in SD3 into a single node.
ImageRemoveAlpha node adds optional mask input.
Use low-frequency filtering and high-frequency preservation to restore image details, for a better fusion effect.
Commit AddGrain and MaskGrain nodes, which add noise to a picture or mask.
Commit FilmV2 node, adding a fastgrain method on top of the previous version; noise generation is 10 times faster.
Commit ImageToMask node, which can convert an image into a mask. It supports converting any channel in LAB, RGBA, YUV, or HSV mode into a mask, while providing color-scale adjustment. It supports optional mask input to obtain a mask containing only the valid parts.
The black point and white point options in some nodes have been changed to slider adjustments for a more intuitive display, including MaskEdgeUltraDetailV2, SegmentAnythingUltraV2, RmBgUltraV2, PersonMaskUltraV2, BiRefNetUltra, SegformerB2ClothesUltra, BlendIfMask, and Levels.
ImageScaleRestoreV2 and ImageScaleByAspectRatioV2 nodes add a total_pixel method to scale images.
Commit MediapipeFacialSegment node, used to segment facial features, including left and right eyebrows, eyes, lips, and teeth.
Commit BatchSelector node, used to retrieve specified images or masks from batched images or masks.
LayerUtility adds new subdirectories such as SystemIO, Data, and Prompt. Some nodes are classified into the subdirectories.
Commit MaskByColor node, which generates a mask based on the selected color.
Commit LoadPSD node, which reads the PSD format and outputs layer images. Note that this node requires the psd_tools dependency package to be installed; if an error such as ModuleNotFoundError: No module named 'docopt' occurs while installing psd_tools, please download the docopt whl and install it manually.
Commit SegformerB2ClothesUltra node, used to segment character clothing. The model segmentation code is derived from StartHua; thanks to the original author.
SaveImagePlus node adds a function to output the workflow as JSON, supports %date and %time for embedding the date or time, and adds a preview switch.
Commit SaveImagePlus node, which can customize the directory for saving pictures, add a timestamp to the file name, select the save format, set the image compression rate, set whether to save the workflow, and optionally add an invisible watermark to the picture.
Commit AddBlindWaterMark and ShowBlindWaterMark nodes, which add an invisible watermark to a picture and decode the watermark. Commit CreateQRCode and DecodeQRCode nodes, which can generate a QR code picture and decode a QR code.
ImageScaleRestoreV2, ImageScaleByAspectRatioV2, and ImageAutoCropV2 nodes add width and height options, allowing the width or height to be specified as a fixed value.
Commit PurgeVRAM node, which cleans up VRAM and RAM.
Commit AutoAdjust node, which can automatically adjust image contrast and white balance.
Commit RGBValue node to output a color value as individual decimal values of R, G, and B. This idea comes from vxinhao; thanks.
Commit Seed node to output a seed value. ImageMaskScaleAs, ImageScaleByAspectRatio, ImageScaleByAspectRatioV2, ImageScaleRestore, and ImageScaleRestoreV2 nodes add width and height outputs.
Commit Levels node, which can achieve the same color-level adjustment function as Photoshop. Sharp & Soft adds a "None" option.
Commit BlendIfMask node, which works with ImageBlendV2 or ImageBlendAdvanceV2 to achieve the same Blend If function as Photoshop.
Commit ColorTemperature and ColorBalance nodes, used to adjust the color temperature and color balance of a picture.
Add new types of blend modes V2 between images, now supporting up to 30 blend modes. The new blend modes are available to all V2 versions of nodes that support blend mode, including ImageBlend V2, ImageBlendAdvance V2, DropShadow V2, InnerShadow V2, OuterGlow V2, InnerGlow V2, Stroke V2, ColorOverlay V2, and GradientOverlay V2.
Part of the BlendMode V2 code comes from Virtuoso Nodes for ComfyUI. Thanks to the original authors.
Commit YoloV8Detect node.
Commit QWenImage2Prompt node; this node is a repackaging of the UForm-Gen2 Qwen Node of ComfyUI_VLM_nodes, thanks to the original author.
Commit BooleanOperator, NumberCalculator, TextBox, Integer, Float, and Boolean nodes. These nodes can perform mathematical and logical operations.
Commit ExtendCanvasV2 node, supporting color value input.
Commit AutoBrightness node, which can automatically adjust the brightness of an image.
CreateGradientMask node adds a center option.
Commit GetColorToneV2 node, which can select the main and average colors for the background or the subject.
Commit ImageRewardFilter node, which can filter out poor-quality pictures.
Ultra nodes add a VITMatte(local) method; if you have already downloaded the model, you can choose this method to avoid accessing huggingface.co.
Commit HDR Effects node, which enhances the dynamic range and visual appeal of the input image. This node is a repackaging of HDR Effects (SuperBeasts.AI).
Commit CropBoxResolve node.
Commit BiRefNetUltra node, which uses the BiRefNet model to remove the background, with better recognition ability and ultra-high edge detail.
Commit ImageAutoCropV2 node, which can choose not to remove the background, supports mask input, and can scale by long-side or short-side size.
Commit ImageHub node, supporting up to 9 sets of image and mask switch outputs, with support for random output.
Commit TextJoin node.
Commit PromptEmbellish node. It outputs polished prompt words and supports an image as a reference.
Ultra nodes have been fully upgraded to V2 versions, adding the VitMatte edge processing method, which is suitable for handling semi-transparent areas. This includes the MaskEdgeUltraDetailV2, SegmentAnythingUltraV2, RmBgUltraV2, and PersonMaskUltraV2 nodes.
Commit Color of Shadow & Highlight node, which can separately adjust the colors of the dark and bright parts. Commit Shadow & Highlight Mask node, which can output masks for the dark and bright areas.
Commit CropByMaskV2 node; on the basis of the original node, it supports crop_box input, making it convenient to cut layers of the same size.
Commit SimpleTextImage node, which generates simple typography images and masks from text. This node references some functions and code from ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite.
Commit PromptTagger node, which infers prompts based on an image and can replace keywords in the prompt (requires applying for a Google Studio API key). Upgrade ColorImageV2 and GradientImageV2, supporting user-defined preset sizes and size_as input.
Commit LaMa node, which can erase objects from an image based on a mask. This node is a repackaging.
Commit ImageRemoveAlpha and ImageCombineAlpha nodes; the alpha channel of an image can be removed or merged.
Commit ImageScaleRestoreV2 and ImageScaleByAspectRatioV2 nodes, supporting scaling images to a specified long-side or short-side size.
Commit PersonMaskUltra node, which generates masks for a portrait's face, hair, body skin, clothing, or accessories. The model code of this node comes from a-person-mask-generator.
Commit LightLeak node; this filter simulates the light-leak effect of film.
Commit Film node; this filter simulates film grain, dark edges, and blurred edges, and supports depth-map input to simulate defocus. It is a reorganization and encapsulation of digitaljohn/comfyui-propost.
Commit ImageAutoCrop node, which is designed to generate image materials for training models.
Commit ImageScaleByAspectRatio node, which can scale images or masks according to a frame ratio.
Fix the incorrect color levels in the LUT Apply node's rendering; this node now supports the log color space. *Please load dedicated log LUT files for log color space images.
Commit CreateGradientMask node. Commit LayerImageTransform and LayerMaskTransform nodes.
Commit MaskEdgeUltraDetail node, which processes rough masks into ultra-fine edges.
Commit Sharp & Soft node, which can enhance or smooth image details. Commit MaskByDifferent node, which compares two images and outputs a mask. Commit SegmentAnythingUltra node, which improves the quality of mask edges. *If segment-anything is not installed, the model needs to be downloaded manually.
All nodes now fully support batch images, providing convenience for video creation. (The CropByMask node only supports cuts of the same size. If a batch mask_for_crop is input, the data from the first sheet will be used.)
Commit RemBgUltra and PixelSpread nodes, which significantly improve mask quality. *RemBgUltra requires a manual model download.
Commit TextImage node, which generates text images and masks.
Add new types of blend modes between images, now supporting up to 19 blend modes: color_burn, color_dodge, linear_burn, linear_dodge, overlay, soft_light, hard_light, vivid_light, pin_light, linear_light, and hard_mix have been added. The newly added blend modes apply to all nodes that support blend mode.
Commit ColorMap filter node, which creates a pseudo-color heat map effect.
Commit WaterColor and SkinBeauty nodes. These are image filters that produce watercolor and skin-smoothing effects.
Commit ImageShift node, which shifts an image and outputs a displacement seam mask, making it convenient to create continuous textures.
Commit ImageMaskScaleAs node, which adjusts an image or mask size based on a reference image.
Commit ImageScaleRestore node, which works with CropByMask for local upscale and repair work.
Commit CropByMask and RestoreCropBox nodes. The combination of these two can partially crop and redraw an image before restoring it.
Commit ColorAdapter node, which can automatically adjust the color tone of an image.
Commit MaskStroke node, which can generate mask contour strokes.
Add the LayerColor node group for adjusting image colors. It includes LUT Apply, Gamma, Brightness & Contrast, RGB, YUV, LAB, and HSV.
Commit ImageChannelSplit and ImageChannelMerge nodes.
Commit MaskMotionBlur node.
Commit SoftLight node.
Commit ChannelShake node, a filter that can produce channel dislocation effects similar to the TikTok logo.
Commit MaskGradient node, which can create a gradient in a mask.
Commit GetColorTone node, which can obtain the main color or average color of an image. Commit MaskGrow and MaskEdgeShrink nodes.
Commit MaskBoxDetect node, which can automatically detect the position through the mask and output it to the composite node. Commit XY to Percent node, which converts absolute coordinates to percentage coordinates. Commit GaussianBlur node. Commit GetImageSize node.
Commit ExtendCanvas node.
Commit ImageBlendAdvance node. This node allows compositing a background image with layers of different sizes, providing a freer compositing experience. Commit PrintInfo node as a workflow debugging aid.
Commit ColorImage and GradientImage nodes, used to generate solid-color and gradient-color images.
Commit GradientOverlay and ColorOverlay nodes. Add invalid-mask input judgment; invalid masks are ignored when input.
Commit InnerGlow, InnerShadow, and MotionBlur nodes.
Rename all the completed nodes, dividing them into 4 groups: LayerStyle, LayerColor, LayerMask, and LayerUtility. Workflows containing old-version nodes need to be manually replaced with the new-version nodes.
OuterGlow node underwent significant modifications, adding options for brightness, light_color, and glow_color.
Commit MaskInvert node.
Commit ColorPick node.
Commit Stroke node.
Commit MaskPreview node.
Commit ImageOpacity node.
layer_mask is now an optional input. Layers and masks with different shapes are allowed, but their sizes must be consistent.
Commit ImageBlend node.
Commit OuterGlow node.
Commit DropShadow node.
Description
The nodes are divided into 5 groups according to their functions: LayerStyle, LayerColor, LayerMask, LayerUtility, and LayerFilter.
- The LayerStyle nodes provide layer styles that mimic Adobe Photoshop.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866032eb31.png)
- The LayerColor node group provides color adjustment functions.
![image](https://images.downcodes.com/uploads/20250202/img_679f586603b0732.png)
- The LayerMask nodes provide mask auxiliary tools.
![image](https://images.downcodes.com/uploads/20250202/img_679f58660429633.png)
- The LayerUtility nodes provide auxiliary nodes related to layer compositing tools and workflows.
![image](https://images.downcodes.com/uploads/20250202/img_679f586604c3634.png)
- The LayerFilter nodes provide image effect filters.
![image](https://images.downcodes.com/uploads/20250202/img_679f58660580d35.png)
LayerStyle
![image](https://images.downcodes.com/uploads/20250202/img_679f5866060a336.png)
![image](https://images.downcodes.com/uploads/20250202/img_679f58660714837.png)
DropShadow
Generate a shadow. ![image](https://images.downcodes.com/uploads/20250202/img_679f586607b8438.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58660862939.png)
- background_image 1: The background image.
- layer_image 1: The layer image for the composite.
- layer_mask 1,2: The mask for layer_image; the shadow is generated according to its shape.
- invert_mask: Whether to invert the mask.
- blend_mode 3: The blend mode of the shadow.
- opacity: The opacity of the shadow.
- distance_x: The horizontal offset of the shadow.
- distance_y: The vertical offset of the shadow.
- grow: The shadow expansion amplitude.
- blur: The shadow blur level.
- shadow_color 4: The shadow color.
- Notes
OuterGlow
Generate an outer glow. ![image](https://images.downcodes.com/uploads/20250202/img_679f586608f3a310.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f586609b22311.png)
- background_image 1: The background image.
- layer_image 1: The layer image for the composite.
- layer_mask 1,2: The mask for layer_image; the glow is generated according to its shape.
- invert_mask: Whether to invert the mask.
- blend_mode 3: The blend mode of the glow.
- opacity: The opacity of the glow.
- brightness: The brightness of the glow.
- glow_range: The range of the glow.
- blur: The blur of the glow.
- light_color 4: The color of the center part of the glow.
- glow_color 4: The color of the edge part of the glow.
- Notes
InnerShadow
Generate an inner shadow. ![image](https://images.downcodes.com/uploads/20250202/img_679f58660a4a1312.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58660b0d3313.png)
- background_image 1: The background image.
- layer_image 1: The layer image for the composite.
- layer_mask 1,2: The mask for layer_image; the shadow is generated according to its shape.
- invert_mask: Whether to invert the mask.
- blend_mode 3: The blend mode of the shadow.
- opacity: The opacity of the shadow.
- distance_x: The horizontal offset of the shadow.
- distance_y: The vertical offset of the shadow.
- grow: The shadow expansion amplitude.
- blur: The shadow blur level.
- shadow_color 4: The shadow color.
- Notes
InnerGlow
Generate an inner glow. ![image](https://images.downcodes.com/uploads/20250202/img_679f58660b9b1314.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58660c5c6315.png)
- background_image 1: The background image.
- layer_image 1: The layer image for the composite.
- layer_mask 1,2: The mask for layer_image; the glow is generated according to its shape.
- invert_mask: Whether to invert the mask.
- blend_mode 3: The blend mode of the glow.
- opacity: The opacity of the glow.
- brightness: The brightness of the glow.
- glow_range: The range of the glow.
- blur: The blur of the glow.
- light_color 4: The color of the center part of the glow.
- glow_color 4: The color of the edge part of the glow.
- Notes
Stroke
Generate a stroke of the layer. ![image](https://images.downcodes.com/uploads/20250202/img_679f58660cea3316.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586612755317.png)
- background_image 1: The background image.
- layer_image 1: The layer image for the composite.
- layer_mask 1,2: The mask for layer_image; the stroke is generated according to its shape.
- invert_mask: Whether to invert the mask.
- blend_mode 3: The blend mode of the stroke.
- opacity: The opacity of the stroke.
- stroke_grow: The stroke expansion/contraction amplitude; positive values indicate expansion and negative values indicate contraction.
- stroke_width: The stroke width.
- blur: The blur of the stroke.
- stroke_color 4: The stroke color, described in hexadecimal RGB format.
- Notes
GradientOverlay
Generate a gradient overlay. ![image](https://images.downcodes.com/uploads/20250202/img_679f58661324c318.png)
Node options:
- background_image 1: The background image.
- layer_image 1: The layer image for the composite.
- layer_mask 1,2: The mask for layer_image.
- invert_mask: Whether to invert the mask.
- blend_mode 3: The blend mode of the gradient.
- opacity: The opacity of the gradient.
- start_color: The color at the beginning of the gradient.
- start_alpha: The transparency at the beginning of the gradient.
- end_color: The color at the end of the gradient.
- end_alpha: The transparency at the end of the gradient.
- angle: The gradient rotation angle.
- Notes
ColorOverlay
Generate a color overlay. ![image](https://images.downcodes.com/uploads/20250202/img_679f586613f3b319.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f586614dae320.png)
- background_image 1: The background image.
- layer_image 1: The layer image for the composite.
- layer_mask 1,2: The mask for layer_image.
- invert_mask: Whether to invert the mask.
- blend_mode 3: The blend mode of the color.
- opacity: The opacity of the overlay.
- color: The color of the overlay.
- Notes
LayerColor
![image](https://images.downcodes.com/uploads/20250202/img_679f586615818321.png)
![image](https://images.downcodes.com/uploads/20250202/img_679f586616caf322.png)
LUT Apply
Apply a LUT to an image. Only the .cube format is supported. ![image](https://images.downcodes.com/uploads/20250202/img_679f586617beb323.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58661888f324.png)
- LUT *: Here is a list of the available .cube files in the LUT folder; the selected LUT file will be applied to the image.
- color_space: For regular images, select linear; for images in the log color space, select log.
- strength: Range 0~100, the LUT application strength. The larger the value, the greater the difference from the original image; the smaller the value, the closer to the original image.
* The LUT folder is defined in resource_dir.ini; this file is located in the root directory of the plugin, and its default name is resource_dir.ini.example. To use this file for the first time, you need to change the file suffix to .ini. Open it with text editing software, find the line starting with "LUT_dir=", and enter a custom folder path name after the "=". Multiple folders can be defined in resource-dir.ini, separated by commas, semicolons, or spaces. All the .cube files in the folders will be collected and displayed in the node list during ComfyUI initialization. If the folders set in the ini are invalid, the LUT folder bundled with the plugin will be enabled.
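As a concrete illustration, a minimal LUT entry in this file might look like the following (the folder paths are placeholders, and the exact key name should be checked against the bundled resource_dir.ini.example):

```ini
; resource_dir.ini — located in the plugin's root directory
; (rename resource_dir.ini.example to resource_dir.ini first)
; multiple folders may be separated by commas, semicolons, or spaces
LUT_dir=D:\LUTs\cube_files, E:\shared\luts
```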
AutoAdjust
Automatically adjust the brightness, contrast, and white balance of an image, providing some manual adjustment options to compensate for the shortcomings of automatic adjustment. ![image](https://images.downcodes.com/uploads/20250202/img_679f58661912b325.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586619f61326.png)
- strength: The adjustment strength. The larger the value, the greater the difference from the original image.
- brightness: Manual adjustment of brightness.
- contrast: Manual adjustment of contrast.
- saturation: Manual adjustment of saturation.
- red: Manual adjustment of the red channel.
- green: Manual adjustment of the green channel.
- blue: Manual adjustment of the blue channel.
AutoAdjustV2
Based on AutoAdjust, with mask input added; only the content within the mask is computed for automatic adjustment. Multiple automatic adjustment modes are added. ![image](https://images.downcodes.com/uploads/20250202/img_679f58661a952327.png)
The following changes have been made based on AutoAdjust: ![image](https://images.downcodes.com/uploads/20250202/img_679f58661b742328.png)
- mask: Optional mask input.
- mode: The automatic adjustment mode. "RGB" adjusts automatically according to the three RGB channels, "lum + sat" adjusts according to luminance and saturation, "luminance" adjusts according to luminance, "saturation" adjusts according to saturation, and "mono" adjusts according to grayscale and outputs monochrome.
AutoBrightness
Automatically adjust images that are too dark or too bright to a moderate brightness, with support for mask input. When a mask is input, only the content within the mask is used as the data source for automatic brightness. The output is still the whole adjusted image. ![image](https://images.downcodes.com/uploads/20250202/img_679f58661c0ca329.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58661d0fb330.png)
- strength: The strength of the automatic brightness adjustment. The larger the value, the greater the bias toward the middle value and the greater the difference from the original picture.
- saturation: Color saturation. Changes in brightness usually result in changes in color saturation; appropriate compensation can be adjusted here.
ColorAdapter
Automatically adjust the color tone of an image to resemble a reference image. ![image](https://images.downcodes.com/uploads/20250202/img_679f58661db99331.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58661ef54332.png)
Exposure
Change the exposure of an image. ![image](https://images.downcodes.com/uploads/20250202/img_679f58661fb30333.png)
Color of Shadow & Highlight
Adjust the color of the dark and bright parts of an image. ![image](https://images.downcodes.com/uploads/20250202/img_679f586620cda334.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586621e7a335.png)
- image: The input image.
- mask: Optional input. If there is input, only the colors within the mask range will be adjusted.
- shadow_brightness: The brightness of the dark areas.
- shadow_saturation: The color saturation of the dark areas.
- shadow_hue: The color hue of the dark areas.
- shadow_level_offset: The offset of values in the dark areas; larger values bring more areas closer to the bright side into the dark areas.
- shadow_range: The transition range of the dark areas.
- highlight_brightness: The brightness of the highlight areas.
- highlight_saturation: The color saturation of the highlight areas.
- highlight_hue: The color hue of the highlight areas.
- highlight_level_offset: The offset of values in the highlight areas; larger values bring more areas closer to the dark side into the highlight areas.
- highlight_range: The transition range of the highlight areas.
ColorofShadowHighlight
A duplicate of the Color of Shadow & Highlight node, with the "&" character removed from the node name to avoid ComfyUI workflow parsing errors.
ColorTemperature
![image](https://images.downcodes.com/uploads/20250202/img_679f586622b38336.png)
Change the color temperature of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586623bca337.png)
- temperature: The color temperature value. The range is between -100 and 100. The higher the value, the higher the color temperature (bluer); the lower the value, the lower the color temperature (yellower).
Levels
![image](https://images.downcodes.com/uploads/20250202/img_679f586624548338.png)
Change the levels of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586625212339.png)
- channel: Select the channel to adjust. RGB, red, green, and blue are available.
- black_point *: The input black point value. Value range 0-255, default 0.
- white_point *: The input white point value. Value range 0-255, default 255.
- gray_point: The input gray point value. Value range 0.01-9.99, default 1.
- output_black_point *: The output black point value. Value range 0-255, default 0.
- output_white_point *: The output white point value. Value range 0-255, default 255.
* If the black_point or output_black_point value is greater than the white_point or output_white_point, the two values are swapped, with the larger value used as white_point and the smaller value used as black_point.
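The mapping described above can be sketched as follows. This is a minimal illustration of a Photoshop-style levels adjustment with the swap behavior noted above, not the node's actual code; apply_levels is a hypothetical helper.

```python
import numpy as np

def apply_levels(channel, black_point=0, white_point=255, gray_point=1.0,
                 output_black_point=0, output_white_point=255):
    """Photoshop-style levels mapping on a single uint8 channel."""
    # Swap if the black point exceeds the white point, as described above.
    if black_point > white_point:
        black_point, white_point = white_point, black_point
    if output_black_point > output_white_point:
        output_black_point, output_white_point = output_white_point, output_black_point
    c = channel.astype(np.float64)
    # Normalize the input range to 0..1 and clip out-of-range values.
    c = np.clip((c - black_point) / max(white_point - black_point, 1e-6), 0.0, 1.0)
    # The gray point acts as a gamma on the normalized values.
    c = c ** (1.0 / gray_point)
    # Remap to the output range.
    c = output_black_point + c * (output_white_point - output_black_point)
    return np.clip(c, 0, 255).astype(np.uint8)
```

With the default parameters the mapping is an identity, and raising black_point pushes dark values to 0.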
ColorBalance
![image](https://images.downcodes.com/uploads/20250202/img_679f586625c9e340.png)
Change the color balance of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586626cbc341.png)
- cyan_red: The cyan-red balance. Negative values lean cyan, positive values lean red.
- magenta_green: The magenta-green balance. Negative values lean magenta, positive values lean green.
- yellow_blue: The yellow-blue balance. Negative values lean yellow, positive values lean blue.
Gamma
Change the gamma value of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866277a4342.png)
Brightness & Contrast
Change the brightness, contrast, and saturation of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58662817c343.png)
- brightness: The value of brightness.
- contrast: The value of contrast.
- saturation: The value of saturation.
BrightnessContrastV2
A duplicate of the Brightness & Contrast node, with the "&" character removed from the node name to avoid ComfyUI workflow parsing errors.
RGB
Adjust the RGB channels of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586628d79344.png)
YUV
Adjust the YUV channels of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58662977c345.png)
LAB
Adjust the LAB channels of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58662a0f5346.png)
HSV
Adjust the HSV channels of an image.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58662ab48347.png)
LayerUtility
![image](https://images.downcodes.com/uploads/20250202/img_679f58662b7f9348.png)
ImageBlendAdvance
Used for compositing layers, allowing layers of a different size than the background image to be composited, with position and transform settings. Multiple blend modes are available, and the transparency can be set.
This node provides layer transform methods and an anti-aliasing option, helping to improve the quality of the composited image.
This node provides mask output that can be used for subsequent workflows. ![image](https://images.downcodes.com/uploads/20250202/img_679f58662d497349.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58662e86f350.png)
- background_image: The background image.
- layer_image 5: The layer image for the composite.
- layer_mask 2,5: The mask for layer_image.
- invert_mask: Whether to invert the mask.
- blend_mode 3: The blend mode.
- opacity: The opacity of the blend.
- x_percent: The horizontal position of the layer on the background image, expressed as a percentage, with 0 at the far left and 100 at the far right. It can be less than 0 or greater than 100, indicating that part of the layer's content is off-screen.
- y_percent: The vertical position of the layer on the background image, expressed as a percentage, with 0 at the top and 100 at the bottom. For example, 50 indicates the vertical center, 20 the upper part, and 80 the lower part.
- mirror: Mirror flip. Two flip modes are available: horizontal flip and vertical flip.
- scale: The layer magnification; 1.0 indicates the original size.
- aspect_ratio: The layer aspect ratio. 1.0 is the original ratio; a value greater than this indicates elongation, and a value less than this indicates flattening.
- rotate: The layer rotation in degrees.
- transform_method: The sampling method for layer scaling and rotation, including lanczos, bicubic, hamming, bilinear, box, and nearest. Different sampling methods affect the image quality and processing time of the composited image.
- anti_aliasing: Anti-aliasing, ranging from 0 to 16; the larger the value, the less obvious the aliasing. Excessively high values will significantly reduce the processing speed of the node.
- Notes
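The percent-based positioning above can be sketched as follows. This is a minimal illustration, assuming the percentage refers to the layer's center point relative to the background frame; place_layer is a hypothetical helper, not the node's actual code.

```python
def place_layer(bg_w, bg_h, layer_w, layer_h, x_percent, y_percent):
    """Return the top-left pixel coordinates for pasting the layer."""
    center_x = bg_w * x_percent / 100.0   # 0 = far left, 100 = far right
    center_y = bg_h * y_percent / 100.0   # 0 = top, 100 = bottom
    # Values below 0 or above 100 push part of the layer off-screen.
    return int(center_x - layer_w / 2), int(center_y - layer_h / 2)
```

For example, a 200x100 layer at (50, 50) on a 1000x500 background would be pasted with its top-left corner at (400, 200), centering it.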
CropByMask
Crop the image according to the mask range, and set the size of the surrounding borders to retain. This node can be used in combination with the RestoreCropBox and ImageScaleRestore nodes to crop part of an image, upscale and repair it, and then paste it back in place. ![image](https://images.downcodes.com/uploads/20250202/img_679f58662f786351.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586630d6e352.png)
- image 5: The input image.
- mask_for_crop 5: The mask of the image; the image will be cut automatically according to the mask range.
- invert_mask: Whether to invert the mask.
- detect: The detection method; min_bounding_rect is the minimum bounding rectangle of the mask blocks, max_inscribed_rect is the maximum inscribed rectangle of the mask blocks, and mask_area is the effective area of the mask pixels.
- top_reserve: The reserved size above the cut.
- bottom_reserve: The reserved size below the cut.
- left_reserve: The reserved size on the left of the cut.
- right_reserve: The reserved size on the right of the cut.
- Notes
Outputs:
- croped_image: The image after cropping.
- croped_mask: The mask after cropping.
- crop_box: The crop box data, used when restoring with the RestoreCropBox node.
- box_preview: A preview image of the cutting position; red indicates the detected range, and green indicates the cutting range after adding the reserved borders.
CropByMaskV2
The V2 upgraded version of CropByMask. Supports crop_box input, making it easy to cut layers of the same size.
The following changes have been made based on CropByMask: ![image](https://images.downcodes.com/uploads/20250202/img_679f586631a69353.png)
- The input mask_for_crop is renamed to mask.
- Add the optional input crop_box. If there is input here, mask detection is ignored and this data is used directly for cropping.
- Add the option round_to_multiple to round the cropped edge lengths to multiples. For example, setting it to 8 will force the width and height to be multiples of 8.
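The round_to_multiple behavior above can be sketched as follows. This is a minimal illustration; whether the node rounds up or to the nearest multiple is an assumption, and round_to_multiple here is a hypothetical helper, not the node's actual code.

```python
import math

def round_to_multiple(width, height, multiple=8):
    """Round a crop box's width and height up to the nearest multiple."""
    w = math.ceil(width / multiple) * multiple
    h = math.ceil(height / multiple) * multiple
    return w, h
```

For example, a 513x300 crop box rounded to multiples of 8 becomes 520x304, while a box that is already aligned is unchanged.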
RestoreCropBox
Restore the cropped image into the original image that was cut by CropByMask.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586632589354.png)
- background_image: The original image before cutting.
- croped_image 5: The cropped image. If it was upscaled in between, the size needs to be restored before the restoration.
- croped_mask 5: The cropped mask.
- crop_box: The box data used during cutting.
- invert_mask: Whether to invert the mask.
- Notes
CropBoxResolve
Resolve a crop_box into x, y, width, and height. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866330f5355.png)
ImageScaleRestore
Image scaling. When this node is used in pairs, the image can be automatically restored to its original size on the second node. ![image](https://images.downcodes.com/uploads/20250202/img_679f586633a73356.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586634ea3357.png)
- image 5: The input image.
- mask 2,5: The mask of the image.
- original_size: Optional input, used to restore the image to its original size.
- scale: The scale ratio. When original_size has input or scale_by_longest_side is set to True, this setting is ignored.
- scale_by_longest_side: Allow scaling by the long side size.
- longest_side: When scale_by_longest_side is set to True, this value is used for the long edge of the image. When original_size has input, this setting is ignored.
Outputs:
- image: The scaled image.
- mask: If there is mask input, the scaled mask will be output.
- original_size: The original size data of the image, used for subsequent node restoration.
- width: The width of the output image.
- height: The height of the output image.
ImageScaleRestoreV2
The V2 upgraded version of ImageScaleRestore.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586635870358.png)
The following changes have been made based on ImageScaleRestore:
- scale_by: Allows scaling by the long side, short side, width, height, or total pixels of the specified dimension. When this option is set to by_scale, the scale value is used; for the other options, the scale_by_length value is used.
- scale_by_length: The value here is used as the length of the edge specified by scale_by.
ImageMaskScaleAs
Scale an image or mask to the size of a reference image (or reference mask). ![image](https://images.downcodes.com/uploads/20250202/img_679f58663621e359.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f586637400360.png)
- scale_as *: The reference size. It can be an image or a mask.
- image: The image to be scaled. This option is optional input; if there is no input, a black image will be output.
- mask: The mask to be scaled. This option is optional input; if there is no input, a black mask will be output.
- fit: The scale aspect ratio mode. When the width-to-height ratio of the original image does not match the scaled size, there are three modes to choose from: letterbox mode retains the complete frame and fills the blank space with black; crop mode retains the complete short edge, and any excess on the long edge is cut off; fill mode does not maintain the frame ratio and fills the screen with the width and height.
- method: The scaling sampling method, including lanczos, bicubic, hamming, bilinear, box, and nearest.
* Limited to image and mask inputs. Forcibly connecting other types of inputs will result in node errors.
Outputs:
- image: If there is image input, the scaled image will be output.
- mask: If there is mask input, the scaled mask will be output.
- original_size: The original size data of the image, used for subsequent node restoration.
- width: The width of the output image.
- height: The height of the output image.
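The three fit modes described above can be sketched as follows. This is a minimal illustration of the geometry only; fit_size is a hypothetical helper, not the node's actual code.

```python
def fit_size(src_w, src_h, dst_w, dst_h, fit="letterbox"):
    """Return the (width, height) the source is resized to before being
    padded (letterbox) or cropped (crop) to the target frame."""
    if fit == "fill":
        # Ignore the aspect ratio and stretch to the target frame.
        return dst_w, dst_h
    ratio_w, ratio_h = dst_w / src_w, dst_h / src_h
    # letterbox fits entirely inside the frame; crop covers the whole frame.
    ratio = min(ratio_w, ratio_h) if fit == "letterbox" else max(ratio_w, ratio_h)
    return round(src_w * ratio), round(src_h * ratio)
```

For example, scaling a 200x100 source to a 100x100 frame gives 100x50 in letterbox mode (the rest is padded black), 200x100 in crop mode (the excess long edge is cut off), and 100x100 in fill mode.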
ImageScaleByAspectRatio
Scale an image or mask by aspect ratio. The scaled size can be rounded to a multiple of 8 or 16, and it can be scaled to the long side size. ![image](https://images.downcodes.com/uploads/20250202/img_679f586637c34361.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586638a83362.png)
- aspect_ratio: Several common frame ratios are provided here. You can also choose "original" to keep the original ratio, or "custom" to customize the ratio.
- proportional_width: The proportional width. If the aspect_ratio option is not "custom", this setting is ignored.
- proportional_height: The proportional height. If the aspect_ratio option is not "custom", this setting is ignored.
- fit: The scale aspect ratio mode. When the width-to-height ratio of the original image does not match the scaled size, there are three modes to choose from: letterbox mode retains the complete frame and fills the blank space with black; crop mode retains the complete short edge, and any excess on the long edge is cut off; fill mode does not maintain the frame ratio and fills the screen with the width and height.
- method: The scaling sampling method, including lanczos, bicubic, hamming, bilinear, box, and nearest.
- round_to_multiple: The rounding multiple. For example, setting it to 8 will force the width and height to be multiples of 8.
- scale_by_longest_side: Allow scaling by the long side size.
- longest_side: When scale_by_longest_side is set to True, this value is used for the long edge of the image. When original_size has input, this setting is ignored.
Outputs:
- image: If there is image input, the scaled image will be output.
- mask: If there is mask input, the scaled mask will be output.
- original_size: The original size data of the image, used for subsequent node restoration.
- width: The width of the output image.
- height: The height of the output image.
ImageScaleByAspectRatioV2
The V2 upgraded version of ImageScaleByAspectRatio.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58663954a363.png)
The following changes have been made based on ImageScaleByAspectRatio:
- scale_to_side: Allows scaling by the long side, short side, width, height, or total pixels of the specified dimension.
- scale_to_length: The value here is used as the length of the edge specified by scale_to_side, or the total pixels (in kilopixels).
- background_color 4: The color of the background.
QWenImage2Prompt
Infer prompt words based on an image. This node is a repackaging of the UForm-Gen2 Qwen Node of ComfyUI_VLM_nodes; thanks to the original author. Download the model files from huggingface or BaiduNetdisk to the ComfyUI/models/LLavacheckpoints/files_for_uform_gen2_qwen folder.
![image](https://images.downcodes.com/uploads/20250202/img_679f58663a008364.png)
Node options:
LlamaVision
Use the Llama 3.2 vision model for local inference. Can be used to generate prompt words. Part of this node's code comes from ComfyUI-PixtralLlamaMolmoVision; thanks to the original author. To use this node, transformers needs to be upgraded to 4.45.0 or higher. Download the model from BaiduNetdisk or huggingface/SeanScripts and copy it to ComfyUI/models/LLM. ![image](https://images.downcodes.com/uploads/20250202/img_679f58663aeba365.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58663ce5f366.png)
- image: The image input.
- model: Currently, only "Llama-3.2-11B-Vision-Instruct-nf4" is available.
- system_prompt: The system prompt words for the LLM model.
- user_prompt: The user prompt words for the LLM model.
- max_new_tokens: The max_new_tokens for the LLM model.
- do_sample: The do_sample for the LLM model.
- top_p: The top_p for the LLM model.
- top_k: The top_k for the LLM model.
- stop_strings: The stop strings.
- seed: The seed of the random number.
- control_after_generate: The seed change option. If this option is fixed, the generated random number will always be the same.
- include_prompt_in_output: Whether the output contains the prompt words.
- cache_model: Whether to cache the model.
JoyCaption2
Use the JoyCaption-alpha-two model for local inference. Can be used to generate prompt words. This node is the ComfyUI implementation of https://huggingface.co/John6666/joy-caption-alpha-two-cli-mod; thanks to the original author. Download models from BaiduNetdisk and BaiduNetdisk, or huggingface/Orenguteng and huggingface/unsloth, then copy to ComfyUI/models/LLM; download models from BaiduNetdisk or huggingface/google and copy to ComfyUI/models/clip; download the cgrkzexw-599808 folder from BaiduNetdisk or huggingface/John6666 and copy to ComfyUI/models/Joy_caption. ![image](https://images.downcodes.com/uploads/20250202/img_679f58663dbfe367.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58663f8ac368.png)
- image: The image input.
- extra_options: The extra_options input.
- llm_model: There are two LLM models to choose from: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 and unsloth/Meta-Llama-3.1-8B-Instruct.
- device: The model loading device. Currently, only CUDA is supported.
- dtype: The model precision, nf4 or bf16.
- vlm_lora: Whether to load text_model.
- caption_type: The caption type options, including: "Descriptive", "Descriptive (Informal)", "Training Prompt", "MidJourney", "Booru tag list", "Booru-like tag list", "Art Critic", "Product Listing", and "Social Media Post".
- caption_length: The length of the caption.
- user_prompt: The user prompt words for the LLM model. If there is content here, it will overwrite all the settings of caption_type and extra_options.
- max_new_tokens: The max_new_tokens parameter of the LLM.
- do_sample: The do_sample parameter of the LLM.
- top_p: The top_p parameter of the LLM.
- temperature: The temperature parameter of the LLM.
- cache_model: Whether to cache the model.
JoyCaption2Split
The JoyCaption2 node with model loading and inference separated; when using multiple JoyCaption2 nodes, the model can be shared to improve efficiency.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866405e2369.png)
- image: The image input.
- joy2_model: The JoyCaption model input.
- extra_options: The extra_options input.
- caption_type: The caption type options, including: "Descriptive", "Descriptive (Informal)", "Training Prompt", "MidJourney", "Booru tag list", "Booru-like tag list", "Art Critic", "Product Listing", and "Social Media Post".
- caption_length: The length of the caption.
- user_prompt: The user prompt words for the LLM model. If there is content here, it will overwrite all the settings of caption_type and extra_options.
- max_new_tokens: The max_new_tokens parameter of the LLM.
- do_sample: The do_sample parameter of the LLM.
- top_p: The top_p parameter of the LLM.
- temperature: The temperature parameter of the LLM.
LoadJoyCaption2Model
The model loading node of JoyCaption2, used together with JoyCaption2Split.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586641187370.png)
- llm_model: There are two LLM models to choose from: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 and unsloth/Meta-Llama-3.1-8B-Instruct.
- device: The model loading device. Currently, only CUDA is supported.
- dtype: The model precision, nf4 or bf16.
- vlm_lora: Whether to load text_model.
JoyCaption2ExtraOptions
The extra_options parameter node of JoyCaption2.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586641b48371.png)
- refer_character_name: If there is a person/character in the image, they must be referred to as {name}.
- exclude_people_info: Do not include information about people/characters that cannot be changed (such as ethnicity, gender, etc.), but still include changeable attributes (such as hairstyle).
- include_lighting: Include information about the lighting.
- include_camera_angle: Include information about the camera angle.
- include_watermark: Include information about whether there is a watermark.
- include_JPEG_artifacts: Include information about whether there are JPEG artifacts.
- include_exif: If it is a photo, you must include information about which camera was used and details such as aperture, shutter speed, ISO, etc.
- exclude_sexual: Do not include anything sexual; keep it PG.
- exclude_image_resolution: Do not mention the resolution of the image.
- include_aesthetic_quality: You must include information about the subjective aesthetic quality of the image, from low to very high.
- include_composition_style: Include information about the composition style of the image, such as leading lines, rule of thirds, or symmetry.
- exclude_text: Do not mention any text in the image.
- specify_depth_field: Specify the depth of field and whether the background is in focus or blurred.
- specify_lighting_sources: If applicable, mention the likely use of artificial or natural lighting sources.
- do_not_use_ambiguous_language: Do not use any ambiguous language.
- include_nsfw: Include whether the image is sfw, suggestive, or nsfw.
- only_describe_most_important_elements: Only describe the most important elements of the image.
- character_name: The person/character name, required if refer_character_name is selected.
PhiPrompt
Use the Microsoft Phi 3.5 text and vision models for local inference. Can be used to generate prompt words, process prompt words, or infer prompt words from images. Running this model requires at least 16GB of video memory. Download the model files from BaiduNetdisk or huggingface.co/microsoft/Phi-3.5-vision-instruct and huggingface.co/microsoft/Phi-3.5-mini-instruct and copy them to the ComfyUI\models\LLM folder. ![image](https://images.downcodes.com/uploads/20250202/img_679f586642b15372.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586644ba6373.png)
- image: Optional input. The input image will be used as the input for Phi-3.5-vision-instruct.
- model: Optionally load Phi-3.5-vision-instruct or Phi-3.5-mini-instruct. The default value of auto automatically loads the corresponding model based on whether there is image input.
- device: The model loading device. Supports CPU and CUDA.
- dtype: The model loading precision, with three options: fp16, bf16, and fp32.
- cache_model: Whether to cache the model.
- system_prompt: The system prompt for Phi-3.5-mini-instruct.
- user_prompt: The user prompt words for the LLM model.
- do_sample: The do_sample parameter of the LLM; the default is True.
- temperature: The temperature parameter of the LLM; the default is 0.5.
- max_new_tokens: The max_new_tokens parameter of the LLM; the default is 512.
UserPromptGeneratorTxtImg
UserPrompt preset for generating SD text-to-image prompts.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866456c0374.png)
- template: Prompt template. Currently only "SD txt2img prompt" is available.
- describe: Prompt description. Enter a simple description here.
- limit_word: Maximum length limit for the output prompt. For example, 200 means the output text will be limited to 200 words.
UserPromptGeneratorTxtImgWithReference
UserPrompt preset for generating SD text-to-image prompts based on reference content.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866460c2375.png)
- reference_text: Reference text input. Usually a style description of the image.
- template: Prompt template. Currently only "SD txt2img prompt" is available.
- describe: Prompt description. Enter a simple description here.
- limit_word: Maximum length limit for the output prompt. For example, 200 means the output text will be limited to 200 words.
UserPromptGeneratorReplaceWord
UserPrompt preset for replacing keywords in text with different content. This is not a simple replacement, but a logical sorting of the text based on the context of the prompt, so that the output remains coherent.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586646c2c376.png)
- orig_prompt: Original prompt input.
- template: Prompt template. Currently only "prompt replace word" is available.
- exclude_word: Keywords to be excluded.
- replace_with_word: This word will replace the exclude_word.
PromptTagger
Infer prompts based on the image. It can replace keywords in the prompt. This node currently uses the Google Gemini API as the backend service; please ensure that your network environment can use Gemini normally. Please apply for your API key on Google AI Studio and fill it in api_key.ini. This file is located in the root directory of the plugin, and its default name is api_key.ini.example. To use this file for the first time, you need to change the file suffix to .ini. Open it with a text editor, fill in your API key after google_api_key=, and save it. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866476e2377.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586648b13378.png)
- api: The API to use. Currently there are two options, "gemini-1.5-flash" and "google-gemini".
- token_limit: The maximum token limit for generating prompts.
- exclude_word: Keywords to be excluded.
- replace_with_word: This word will replace the exclude_word.
PromptEmbellish
Enter simple prompt words and output polished prompt words. It supports an input image as a reference, and supports Chinese input. This node currently uses the Google Gemini API as the backend service; please ensure that your network environment can use Gemini normally. Please apply for your API key on Google AI Studio and fill it in api_key.ini. This file is located in the root directory of the plugin, and its default name is api_key.ini.example. To use this file for the first time, you need to change the file suffix to .ini. Open it with a text editor, fill in your API key after google_api_key=, and save it. ![image](https://images.downcodes.com/uploads/20250202/img_679f586649377379.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58664a0da380.png)
- image: Optional, input an image as a reference for the prompt.
- api: The API to use. Currently there are two options, "gemini-1.5-flash" and "google-gemini".
- token_limit: The maximum token limit for generating prompts.
- describe: Enter a simple description here. Chinese text input is supported.
Florence2Image2Prompt
Use the Florence 2 model to infer prompts. Part of the code for this node comes from yiwangsimple/florence_dw; thanks to the original author.
*When used for the first time, the model is downloaded automatically. You can also download the model files from BaiduNetdisk to the ComfyUI/models/florence2 folder. ![image](https://images.downcodes.com/uploads/20250202/img_679f58664a92e381.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58664b4e8382.png)
- florence2_model: Florence2 model input.
- image: Image input.
- task: Select the task for Florence2.
- text_input: Text input for Florence2.
- max_new_tokens: The maximum number of tokens for generating text.
- num_beams: The number of beams for the beam search when generating text.
- do_sample: Whether to use sampling for text generation.
- fill_mask: Whether to use text-marked mask filling.
VQAPrompt
Use the blip-vqa model for visual question answering. Part of this node is referenced from celoron/ComfyUI-VisualQueryTemplate; thanks to the original author.
*Download the model files from BaiduNetdisk or huggingface.co/Salesforce/blip-vqa-capfilt-large and huggingface.co/Salesforce/blip-vqa-base, then copy them to the ComfyUI/models/VQA folder.
![image](https://images.downcodes.com/uploads/20250202/img_679f58664bcdc383.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58664cdba384.png)
- image: Image input.
- vqa_model: VQA model input, from the LoadVQAModel node.
- question: Task text input. A question is enclosed in curly braces "{}", and the answer to that question replaces it at its original position in the text output. Multiple questions can be defined using curly braces in a single question-and-answer text. For example, for a picture of an item placed in a scene, the question could be: "{object color} {object} on the {scene}".
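The curly-brace substitution described above can be sketched as follows. This is an illustrative sketch, not the node's actual implementation; the `answer_fn` callback and the sample answers stand in for the VQA model:

```python
import re

def fill_template(template: str, answer_fn) -> str:
    """Replace each {question} in the template with the model's answer to it."""
    return re.sub(r"\{([^{}]+)\}", lambda m: answer_fn(m.group(1)), template)

# A stand-in for the VQA model: these answers are hypothetical.
answers = {"object color": "red", "object": "apple", "scene": "table"}
result = fill_template("the {object color} {object} on the {scene}",
                       lambda q: answers.get(q, ""))
# result == "the red apple on the table"
```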
LoadVQAModel
Load the blip-vqa model.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58664d9aa385.png)
- model: Currently there are two models to choose from, "blip-vqa-base" and "blip-vqa-capfilt-large".
- precision: Model precision has two options: "fp16" and "fp32".
- device: Model running device has two options: "cuda" and "cpu".
ImageShift
Shift the image. This node supports outputting a displacement seam mask, making it convenient to create continuous textures. ![image](https://images.downcodes.com/uploads/20250202/img_679f58664e532386.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58664f23e387.png)
- image 5 : Input image.
- mask 2,5 : Mask for the image.
- shift_x: Horizontal distance of the shift.
- shift_y: Vertical distance of the shift.
- cyclic: Whether the part shifted beyond the border cycles around to the opposite side.
- background_color 4 : Background color. If cyclic is set to False, this setting is used as the background color.
- border_mask_width: Border mask width.
- border_mask_blur: Border mask blur.
- Note
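A minimal sketch of the cyclic vs. non-cyclic shift, assuming the image is a NumPy array; the real node also produces the seam mask and other outputs not shown here:

```python
import numpy as np

def shift_image(img: np.ndarray, shift_x: int, shift_y: int, cyclic: bool,
                background: float = 0.0) -> np.ndarray:
    """Shift an H x W (x C) image; wrap around when cyclic, else fill with background."""
    if cyclic:
        return np.roll(img, (shift_y, shift_x), axis=(0, 1))
    out = np.full_like(img, background)
    h, w = img.shape[:2]
    ys, ye = max(shift_y, 0), min(h + shift_y, h)   # destination rows
    xs, xe = max(shift_x, 0), min(w + shift_x, w)   # destination columns
    out[ys:ye, xs:xe] = img[ys - shift_y:ye - shift_y, xs - shift_x:xe - shift_x]
    return out

demo = np.arange(9).reshape(3, 3)
wrapped = shift_image(demo, 1, 0, cyclic=True)    # each row rolls right by one
filled = shift_image(demo, 1, 0, cyclic=False)    # vacated column filled with background
```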
ImageBlend
A simple node for compositing a layer image onto a background image. Multiple blend modes can be selected, and transparency can be set. ![image](https://images.downcodes.com/uploads/20250202/img_679f58664fe59388.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58665104f389.png)
- background_image 1 : Background image.
- layer_image 1 : Layer image for compositing.
- layer_mask 1,2 : Mask for layer_image.
- invert_mask: Whether to invert the mask.
- blend_mode 3 : Blend mode.
- opacity: Opacity of the blend.
- Note
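The simplest ("normal") blend mode can be illustrated as a linear mix. This is the generic compositing formula, not the plugin's exact code, and it assumes pixel values, opacity, and mask are normalized to 0..1:

```python
def normal_blend(bg, layer, opacity, mask=1.0):
    """'normal' blend mode: mix layer over background, weighted by mask * opacity."""
    a = opacity * mask
    return bg * (1 - a) + layer * a

half = normal_blend(0.0, 1.0, opacity=0.5)            # halfway between the two
masked_out = normal_blend(0.2, 1.0, opacity=1.0, mask=0.0)  # mask 0 keeps the background
```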
ImageReel
Display multiple images in one reel. A text annotation can be added to each image in the reel. Multiple reels can be combined into one image using the ImageReelComposite node. ![image](https://images.downcodes.com/uploads/20250202/img_679f58665192a390.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586652f4e391.png)
- image1: The first image. This input is required.
- image2: The second image. Optional input.
- image3: The third image. Optional input.
- image4: The fourth image. Optional input.
- image1_text: Text annotation for the first image.
- image2_text: Text annotation for the second image.
- image3_text: Text annotation for the third image.
- image4_text: Text annotation for the fourth image.
- reel_height: Height of the reel.
- border: Border width of the images in the reel.
Outputs:
- reel: A reel for input into the ImageReelComposite node.
ImageReelComposite
Combine multiple reels into one image.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58665384b392.png)
- reel_1: The first reel. This input is required.
- reel_2: The second reel. Optional input.
- reel_3: The third reel. Optional input.
- reel_4: The fourth reel. Optional input.
- font_file ** : A list of available font files in the font folder; the selected font file will be used to generate the image.
- border: Border width of the reels.
- color_theme: Theme color of the output image.
*The font folder is defined in resource_dir.ini, which is located in the root directory of the plugin with the default name resource_dir.ini.example. To use this file for the first time, you need to change the file suffix to .ini. Open it with a text editor, find the line starting with "FONT_dir=", and enter the custom folder path name after the "=". Multiple folders can be defined in resource_dir.ini, separated by commas, semicolons, or spaces. All font files in these folders will be collected and displayed in the node list during ComfyUI initialization. If the folders set in the ini are invalid, the font folder that comes with the plugin will be enabled.
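Parsing a FONT_dir line split on commas, semicolons, or spaces might look like this (an illustrative sketch, not the plugin's code):

```python
import re

def parse_font_dirs(line: str):
    """Take the value after '=' and split it on commas, semicolons, or whitespace."""
    _, _, value = line.partition('=')
    return [p for p in re.split(r'[,;\s]+', value.strip()) if p]

dirs = parse_font_dirs("FONT_dir=C:/fonts;D:/more_fonts")
# dirs == ['C:/fonts', 'D:/more_fonts']
```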
ImageOpacity
Adjust image opacity. ![image](https://images.downcodes.com/uploads/20250202/img_679f586654460393.png)
Node options:
- image 5 : Image input; RGB and RGBA are supported. If RGB, an alpha channel covering the entire image will be added automatically.
- mask 2,5 : Mask input.
- invert_mask: Whether to invert the mask.
- opacity: Opacity of the image.
- Note
ColorPicker
The web extension is modified from the mtb nodes; thanks to the original author. Select a color on the palette and output the RGB values. ![image](https://images.downcodes.com/uploads/20250202/img_679f586654e53394.png)
Node options:
- mode: The output format, available in hexadecimal (HEX) and decimal (DEC).
Output type:
RGBValue
Output a color value as individual R, G, B decimal values. Supports the HEX and DEC formats output by the ColorPicker node.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866557a2395.png)
Node options:
- color_value: Supports hexadecimal (HEX) or decimal (DEC) color values, which should be of string or tuple type. Forcing in other types will result in an error.
HSVValue
Output a color value as individual H, S, and V decimal values (maximum 255). Supports the HEX and DEC formats output by the ColorPicker node. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866561c0396.png)
Node options:
- color_value: Supports hexadecimal (HEX) or decimal (DEC) color values, which should be of string or tuple type. Forcing in other types will result in an error.
GrayValue
Output the grayscale value based on a color value. Supports outputting 256-level and 100-level grayscale values. ![image](https://images.downcodes.com/uploads/20250202/img_679f586656892397.png)
Node options:
- color_value: Supports hexadecimal (HEX) or decimal (DEC) color values, which should be of string or tuple type. Forcing in other types will result in an error.
Outputs:
- gray(256_level): 256-level grayscale value. Integer type, range 0-255.
- gray(100_level): 100-level grayscale value. Integer type, range 0-100.
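A sketch of how a HEX or tuple color value could be parsed and reduced to the two gray levels. The Rec. 601 luma weights used here are an assumption for illustration; the node's exact conversion formula is not documented in this README:

```python
def parse_color(value):
    """Accept '#RRGGBB' hex or an (R, G, B) tuple; return integer R, G, B."""
    if isinstance(value, str):
        v = value.lstrip('#')
        return tuple(int(v[i:i + 2], 16) for i in (0, 2, 4))
    return tuple(int(c) for c in value)

def gray_levels(value):
    """Return (256-level, 100-level) gray, using Rec. 601 luma weights (an assumption)."""
    r, g, b = parse_color(value)
    g256 = round(0.299 * r + 0.587 * g + 0.114 * b)
    return g256, round(g256 * 100 / 255)

# gray_levels('#ffffff') == (255, 100); gray_levels((0, 0, 0)) == (0, 0)
```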
GetColorTone
Get the main color or average color from an image and output the RGB values. ![image](https://images.downcodes.com/uploads/20250202/img_679f586656f75398.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f586657e9a399.png)
Output types:
- RGB color in HEX: An RGB color described in hexadecimal RGB format, e.g. "#fa3d86".
- HSV color in list: An HSV color described in Python list data format.
GetColorToneV2
The V2 upgrade of GetColorTone. You can get the main or average color of the subject or the background. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866589353100.png)
The following changes have been made based on GetColorTone: ![image](https://images.downcodes.com/uploads/20250202/img_679f58665970e3101.png)
- color_of: Four options: mask, entire, background, and subject; choose whether to get the color of the mask area, the entire picture, the background, or the subject.
- remove_background_method: There are two background recognition methods: BiRefNet and RMBG V1.4.
- invert_mask: Whether to invert the mask.
- mask_grow: Mask expansion. For the subject, a larger value brings the obtained color closer to the color at the center of the subject.
Outputs:
- image: Solid color picture output, the same size as the input picture.
- mask: Mask output.
GetMainColors
Get the main colors of the image. It can obtain 5 colors. ![image](https://images.downcodes.com/uploads/20250202/img_679f586659ead3102.png)
![image](https://images.downcodes.com/uploads/20250202/img_679f58665b6ab3103.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58665d1723104.png)
- image: Image input.
- k_means_algorithm: K-Means algorithm options. "lloyd" is the standard K-Means algorithm, and "elkan" is the triangle inequality algorithm, suitable for larger images.
Outputs:
- preview_image: Preview image of the 5 main colors.
- color_1 ~ color_5: Color value outputs, as RGB strings in HEX format.
ColorName
Output the most similar color name in the palette based on the color value. ![image](https://images.downcodes.com/uploads/20250202/img_679f58665dc453105.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58665e6e93106.png)
- color: Color value input, an RGB string in HEX format.
- palette: Color palette. xkcd includes 949 colors, css3 includes 147 colors, and html4 includes 16 colors.
Outputs:
ExtendCanvas
Extend the canvas. ![image](https://images.downcodes.com/uploads/20250202/img_679f58665f1943107.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58666030b3108.png)
- invert_mask: Whether to invert the mask.
- top: Top extension value.
- bottom: Bottom extension value.
- left: Left extension value.
- right: Right extension value.
- color: Color of the canvas.
ExtendCanvasV2
The V2 upgrade of ExtendCanvas.
Based on ExtendCanvas, the color is changed to a string type, external ColorPicker input is supported, and negative value input is supported, which means the image will be cropped. ![image](https://images.downcodes.com/uploads/20250202/img_679f586660e543109.png)
XY to Percent
![image](https://images.downcodes.com/uploads/20250202/img_679f5866619f73110.png)
Convert absolute coordinates to percentage coordinates.
![image](https://images.downcodes.com/uploads/20250202/img_679f586662cab3111.png)
Node options:
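The conversion itself is straightforward; a minimal sketch:

```python
def xy_to_percent(x: int, y: int, width: int, height: int):
    """Convert absolute pixel coordinates to percentages of the image size."""
    return x / width * 100, y / height * 100

# xy_to_percent(256, 128, 512, 512) == (50.0, 25.0)
```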
LayerImageTransform
![image](https://images.downcodes.com/uploads/20250202/img_679f5866635623112.png)
This node is used to transform the layer_image separately; it can scale, rotate, change the aspect ratio, and mirror flip without changing the image size.
![image](https://images.downcodes.com/uploads/20250202/img_679f58666481b3113.png)
Node options:
- x: Value of x.
- y: Value of y.
- mirror: Mirror flip. Two flip modes are provided, horizontal flip and vertical flip.
- scale: Layer magnification; 1.0 means the original size.
- aspect_ratio: Layer aspect ratio. 1.0 is the original ratio; a value greater than this indicates elongation, and a value less than this indicates flattening.
- rotate: Layer rotation in degrees.
- method: Sampling method for layer scaling and rotation, including Lanczos, Bicubic, Hamming, Bilinear, Box, and Nearest. Different sampling methods affect the image quality and processing time of the composited image.
- anti_aliasing: Anti-aliasing, ranging from 0 to 16; the larger the value, the less obvious the aliasing. Excessively high values will significantly reduce the processing speed of the node.
LayerMaskTransform
Similar to the LayerImageTransform node, this node is used to transform the layer_mask separately; it can scale, rotate, and change the aspect ratio without changing the mask size.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866652923114.png)
Node options:
- x: Value of x.
- y: Value of y.
- mirror: Mirror flip. Two flip modes are provided, horizontal flip and vertical flip.
- scale: Layer magnification; 1.0 means the original size.
- aspect_ratio: Layer aspect ratio. 1.0 is the original ratio; a value greater than this indicates elongation, and a value less than this indicates flattening.
- rotate: Layer rotation in degrees.
- method: Sampling method for layer scaling and rotation, including Lanczos, Bicubic, Hamming, Bilinear, Box, and Nearest. Different sampling methods affect the image quality and processing time of the composited image.
- anti_aliasing: Anti-aliasing, ranging from 0 to 16; the larger the value, the less obvious the aliasing. Excessively high values will significantly reduce the processing speed of the node.
ColorImage
![image](https://images.downcodes.com/uploads/20250202/img_679f586665db63115.png)
Generate an image of a specified color and size.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866668db3116.png)
Node options:
- width: Width of the image.
- height: Height of the image.
- color 4 : Color of the image.
ColorImageV2
The V2 upgrade of ColorImage.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866674493117.png)
The following changes have been made based on ColorImage:
- size_as * : Input an image or mask here to generate an image according to its size. Note that this input takes priority over the other size settings.
- size ** : Size preset. The presets can be customized by the user. If there is a size_as input, this option is ignored.
- custom_width: Image width. Valid when size is set to "custom". If there is a size_as input, this option is ignored.
- custom_height: Image height. Valid when size is set to "custom". If there is a size_as input, this option is ignored.
* Only limited to input image and mask; forcing in other types of input will result in a node error. ** The preset sizes are defined in custom_size.ini, which is located in the root directory of the plugin with the default name custom_size.ini.example. To use this file for the first time, you need to change the file suffix to .ini and open it with a text editor. Each line represents a size: the first value is the width and the second is the height, separated by a lowercase "x". To avoid errors, please do not enter extra characters.
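A small sketch of parsing custom_size.ini lines in the "width x height" form described above (illustrative only, not the plugin's code):

```python
def parse_sizes(text: str):
    """Each non-empty line is 'WIDTH x HEIGHT' with a lowercase x; return (w, h) tuples."""
    sizes = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        w, _, h = line.partition('x')   # int() tolerates surrounding whitespace
        sizes.append((int(w), int(h)))
    return sizes

presets = parse_sizes("512 x 768\n1024 x 1024")
# presets == [(512, 768), (1024, 1024)]
```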
GradientImage
![image](https://images.downcodes.com/uploads/20250202/img_679f586667e1c3118.png)
Generate an image with a specified size and color gradient.
![image](https://images.downcodes.com/uploads/20250202/img_679f586668a3a3119.png)
Node options:
- width: Width of the image.
- height: Height of the image.
- angle: Gradient angle.
- start_color 4 : Color of the beginning.
- end_color 4 : Color of the end.
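For intuition, a top-to-bottom gradient (angle 0 is assumed here) can be computed as a per-row linear blend; the node also supports arbitrary angles, which this sketch omits:

```python
import numpy as np

def gradient_image(width, height, start_color, end_color):
    """Top-to-bottom linear gradient between two RGB colors; returns uint8 H x W x 3."""
    t = np.linspace(0.0, 1.0, height)[:, None, None]   # blend factor per row
    start = np.asarray(start_color, dtype=float)
    end = np.asarray(end_color, dtype=float)
    grad = (1 - t) * start + t * end                   # shape (height, 1, 3)
    return np.broadcast_to(grad.round().astype(np.uint8), (height, width, 3)).copy()

img = gradient_image(4, 3, (0, 0, 0), (255, 255, 255))  # black fading to white
```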
GradientImageV2
The V2 upgrade of GradientImage.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866694763120.png)
The following changes have been made based on GradientImage:
- size_as * : Input an image or mask here to generate an image according to its size. Note that this input takes priority over the other size settings.
- size ** : Size preset. The presets can be customized by the user. If there is a size_as input, this option is ignored.
- custom_width: Image width. Valid when size is set to "custom". If there is a size_as input, this option is ignored.
- custom_height: Image height. Valid when size is set to "custom". If there is a size_as input, this option is ignored.
* Only limited to input image and mask; forcing in other types of input will result in a node error. ** The preset sizes are defined in custom_size.ini, which is located in the root directory of the plugin with the default name custom_size.ini.example. To use this file for the first time, you need to change the file suffix to .ini and open it with a text editor. Each line represents a size: the first value is the width and the second is the height, separated by a lowercase "x". To avoid errors, please do not enter extra characters.
ImageRewardFilter
![image](https://images.downcodes.com/uploads/20250202/img_679f586669df23121.png)
Rate batch pictures and output the highest-ranked pictures. It uses [ImageReward](https://github.com/thudm/imagereward) for image scoring; thanks to the original author.
![image](https://images.downcodes.com/uploads/20250202/img_679f58666b2e13122.png)
Node options:
- prompt: Optional input. A prompt entered here is used as the basis for determining how well the pictures match it.
- output_num: Number of pictures to output. This value should be less than the picture batch size.
Outputs:
- images: Batch pictures output in order of rating from high to low.
- obsolete_images: The eliminated pictures, also output in order of rating from high to low.
SimpleTextImage
![image](https://images.downcodes.com/uploads/20250202/img_679f58666bc943123.png)
Generate simple typeset images and masks from text. This node references some functions and code from ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite; thanks to the original author.
![image](https://images.downcodes.com/uploads/20250202/img_679f58666d1103124.png)
Node options:
- size_as * : The input image or mask here will generate the output image and mask according to its size. This input takes priority over the width and height below.
- font_file ** : A list of available font files in the font folder; the selected font file will be used to generate the image.
- align: Alignment options. There are three choices: center, left, and right.
- char_per_line: Number of characters per line; any excess will be automatically wrapped.
- leading: Row leading.
- font_size: Size of the font.
- text_color: The color of text.
- stroke_width: The width of stroke.
- stroke_color: The color of stroke.
- x_offset: The horizontal offset of the text position.
- y_offset: The vertical offset of the text position.
- width: Width of the image. If there is a size_as input, this setting will be ignored.
- height: Height of the image. If there is a size_as input, this setting will be ignored.
* Only limited to input image and mask; forcing in other types of input will result in node errors.
** The font folder is defined in resource_dir.ini, which is located in the root directory of the plug-in with the default name resource_dir.ini.example. To use this file for the first time, you need to change the file suffix to .ini. Open it with a text editor, find the line starting with "FONT_dir=", and enter the custom folder path name after the "=". Multiple folders can be defined in resource_dir.ini, separated by commas, semicolons, or spaces. All font files in these folders will be collected and displayed in the node list during ComfyUI initialization. If the folder set in the ini is invalid, the font folder that comes with the plugin will be enabled.
TextImage
![image](https://images.downcodes.com/uploads/20250202/img_679f58666dce53125.png)
Generate images and masks from text. Supports adjusting the spacing between words and lines, horizontal and vertical layouts, and can apply random changes to each character, including size and position.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f58666eebf3126.png)
Node options:
- size_as * : The input image or mask here will generate the output image and mask according to their size. this input takes priority over the width and height below.
- font_file ** : Here is a list of available font files in the font folder, and the selected font files will be used to generate images.
- spacing: Word spacing. This value is in pixels.
- leading: Row leading. This value is in pixels.
- horizontal_border: Side margin. If the text is horizontal, it is the left margin; if vertical, it is the right margin. This value represents a percentage; for example, 50 indicates that the starting point is located in the center.
- vertical_border: Top margin. This value represents a percentage; for example, 10 indicates that the starting point is located 10% away from the top.
- scale: The overall size of the text. The initial size of the text is automatically calculated based on the image size and text content, with the longest row or column by default adapting to the image width or height. Adjusting the value here scales the text as a whole. This value represents a percentage; for example, 60 represents scaling to 60%.
- variation_range: The range of random changes in characters. When this value is greater than 0, characters undergo random changes in size and position; the larger the value, the greater the magnitude of the change.
- variation_seed: The seed for the randomness. Fix this value so that the individual character changes generated each time do not change.
- layout: Text layout. There are horizontal and vertical options to choose from.
- width: Width of the image. If there is a size_as input, this setting will be ignored.
- height: Height of the image. If there is a size_as input, this setting will be ignored.
- text_color: The color of text.
- background_color 4 : The color of background.
* Only limited to input image and mask; forcing in other types of input will result in node errors.
** The font folder is defined in resource_dir.ini, which is located in the root directory of the plug-in with the default name resource_dir.ini.example. To use this file for the first time, you need to change the file suffix to .ini. Open it with a text editor, find the line starting with "FONT_dir=", and enter the custom folder path name after the "=". Multiple folders can be defined in resource_dir.ini, separated by commas, semicolons, or spaces. All font files in these folders will be collected and displayed in the node list during ComfyUI initialization. If the folder set in the ini is invalid, the font folder that comes with the plugin will be enabled.
TextImageV2
![圖像](https://images.downcodes.com/uploads/20250202/img_679f58666fe093127.png)
This node is merged from a PR by heshengtao, which modifies the scaling of the image text node based on the TextImage node. The font spacing follows the scaling, and the coordinates are no longer based on the top left corner of the text but on the center point of the entire line of text. Thanks to the author for the contribution.
LaMa
![image](https://images.downcodes.com/uploads/20250202/img_679f5866709583128.png)
Erase objects from the image based on the mask. This node is a repackage of IOPaint, powered by state-of-the-art AI models; thanks to the original author.
It offers the LaMa, LDM, ZITS, MAT, FcF, and Manga models, as well as the SPREAD method, for erasing. Please refer to the original link for an introduction to each model.
Please download the model files from lama models (BaiduNetdisk) or lama models (Google Drive) to the ComfyUI/models/lama folder.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f586671c353129.png)
- lama_model: Choose a model or method.
- device: After correctly installing Torch and the Nvidia CUDA drivers, using cuda will significantly improve the running speed.
- invert_mask: Whether to invert the mask.
- grow: Positive values expand outward, while negative values contract inward.
- blur: Blur the edge.
ImageChannelSplit
![image](https://images.downcodes.com/uploads/20250202/img_679f5866726203130.png)
Split the image channels into individual images.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866733ec3131.png)
- mode: Channel mode, including RGBA, YCbCr, LAB, and HSV.
ImageChannelMerge
![image](https://images.downcodes.com/uploads/20250202/img_679f586673f583132.png)
Merge each channel image into one image.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866750363133.png)
- mode: Channel mode, including RGBA, YCbCr, LAB, and HSV.
ImageRemoveAlpha
![image](https://images.downcodes.com/uploads/20250202/img_679f5866759813134.png)
Remove the alpha channel from the image and convert it to RGB mode. You can choose to fill the background and set the background color.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f586676afe3135.png)
- RGBA_image: The input image; supports RGBA or RGB modes.
- mask: Optional input mask. If there is an input mask, it is used first, ignoring the alpha that comes with RGBA_image.
- fill_background: Whether to fill the background.
- background_color 4 : Color of the background.
ImageCombineAlpha
![image](https://images.downcodes.com/uploads/20250202/img_679f5866772323136.png)
Merge the image and mask into an RGBA mode image containing an alpha channel.
ImageAutoCrop
![image](https://images.downcodes.com/uploads/20250202/img_679f586677ae93137.png)
Automatically cut out and crop the image according to the mask. It can specify the background color, aspect ratio, and size for the output image. This node is designed to generate image materials for training models.
*Please refer to the model installation methods for SegmentAnythingUltra and RemBgUltra.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f586678d5f3138.png)
- background_color 4 : The background color.
- aspect_ratio: Several common frame ratios are provided. Alternatively, you can choose "original" to keep the original ratio or customize the ratio using "custom".
- proportional_width: Proportional width. If the aspect_ratio option is not "custom", this setting is ignored.
- proportional_height: Proportional height. If the aspect_ratio option is not "custom", this setting is ignored.
- scale_by_longest_side: Allow scaling by the long edge size.
- longest_side: When scale_by_longest_side is set to True, this value is used for the long edge of the image. When original_size has input, this setting is ignored.
- detect: Detection method; min_bounding_rect is the minimum bounding rectangle, and max_inscribed_rect is the maximum inscribed rectangle.
- border_reserve: Keep the border. Expands the cutting range beyond the detected mask body area.
- ultra_detail_range: Mask edge ultra-fine processing range; 0 means no processing, which saves generation time.
- matting_method: The method for generating masks. Two methods are available: Segment Anything and RMBG 1.4. RMBG 1.4 runs faster.
- sam_model: Select the SAM model used by Segment Anything here.
- grounding_dino_model: Select the Grounding_Dino model used by Segment Anything here.
- sam_threshold: The threshold for Segment Anything.
- sam_prompt: The prompt for Segment Anything.
Outputs: cropped_image: The cropped image with the replaced background. box_preview: Crop position preview. cropped_mask: The cropped mask.
ImageAutoCropV2
The V2 upgrade version of ImageAutoCrop. It has made the following changes based on the previous version: ![image](https://images.downcodes.com/uploads/20250202/img_679f586679b793139.png)
- Add optional input for mask. When there is a mask input, that input is used directly and the built-in mask generation is skipped.
- Add fill_background. When set to False, the background is not processed, and any parts beyond the frame are not included in the output range.
- aspect_ratio adds the original option.
- scale_by: Allow scaling by specified dimensions for the longest side, shortest side, width, or height.
- scale_by_length: The value here is used as the length of the edge specified by scale_by.
ImageAutoCropV3
Automatically crop the image to the specified size. You can input a mask to preserve the specified area of the mask. This node is designed to generate image materials for training the model.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f58667a8533140.png)
- image: The input image.
- mask: Optional input mask. The masking part will be preserved within the range of the cutting aspect ratio.
- aspect_ratio: The aspect ratio of the output. Common frame ratios are provided, with "custom" being a custom ratio and "original" being the original frame ratio.
- proportional_width: Proportional width. If the aspect_ratio option is not "custom", this setting is ignored.
- proportional_height: Proportional height. If the aspect_ratio option is not "custom", this setting is ignored.
- method: Scaling sampling methods include Lanczos, Bicubic, Hamming, Bilinear, Box, and Nearest.
- scale_to_side: Allow scaling to be specified by long side, short side, width, height, or total pixels.
- scale_to_length: The value here is used as the length of the edge specified by scale_to_side, or the total number of pixels (kilopixels).
- round_to_multiple: Round to the nearest multiple. For example, if set to 8, the width and height are forcibly set to multiples of 8.
Outputs: cropped_image: The cropped image. box_preview: Preview of cutting position.
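The round_to_multiple behavior can be sketched as nearest-multiple rounding; whether the node rounds to nearest, floors, or ceils is an assumption here, and the floor of one multiple is added so a valid size is always produced:

```python
def round_to_multiple(value: int, multiple: int) -> int:
    """Round a dimension to the nearest multiple (e.g. multiple=8 keeps SD-friendly sizes)."""
    return max(multiple, round(value / multiple) * multiple)

# round_to_multiple(1023, 8) == 1024
```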
HLFrequencyDetailRestore
Using low-frequency filtering while retaining the high frequencies to restore image details. Compared to kijai's DetailTransfer, this node blends better with the surroundings while retaining details. ![image](https://images.downcodes.com/uploads/20250202/img_679f58667b4813141.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58667c1013142.png)
- image: Background image input.
- detail_image: Detail image input.
- mask: Optional input, if there is a mask input, only the details of the mask part are restored.
- keep_high_freq: Reserved range of high frequency parts. The larger the value, the richer the retained high-frequency details.
- erase_low_freq: The range of low frequency parts of the erasure. The larger the value, the more the low frequency range of the erasure.
- mask_blur: Mask edge blur. Valid only if there is masked input.
GetImageSize
![image](https://images.downcodes.com/uploads/20250202/img_679f58667c9ca3143.png)
Obtain the width and height of the image.
Outputs:
- width: The width of the image.
- height: The height of the image.
- original_size: The original size data of the image, used for subsequent node recovery.
ImageHub
Switch output from multiple input images and masks, supporting 9 sets of inputs. All input items are optional. If there is only an image or a mask in a set of inputs, the missing item is output as None. ![image](https://images.downcodes.com/uploads/20250202/img_679f58667d21e3144.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58667e47a3145.png)
- output: Switch output. The value is the corresponding input group. When the random_output option is True, this setting is ignored.
- random_output: When this is True, the output setting is ignored and a random set is output among all valid inputs.
BatchSelector
Retrieve specified images or masks from batch images or masks. ![image](https://images.downcodes.com/uploads/20250202/img_679f58667f0353146.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58667fa843147.png)
- images: Batch images input. This input is optional.
- masks: Batch masks input. This input is optional.
- select: Select the output image or mask at the batch index value, where 0 is the first image. Multiple values can be entered, separated by any non-numeric character, including but not limited to commas, periods, semicolons, spaces, or letters, and even Chinese characters. Note: if a value exceeds the batch size, the last image is output. If there is no corresponding input, an empty 64x64 image or a 64x64 black mask is output.
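Splitting the select string on any non-numeric run and clamping out-of-range values to the last item, as described, might look like this (an illustrative sketch):

```python
import re

def select_indices(select: str, batch_size: int):
    """Split on runs of non-digits; clamp out-of-range indices to the last item."""
    return [min(int(tok), batch_size - 1)
            for tok in re.split(r'\D+', select) if tok]

picked = select_indices("0, 2; 9", 4)
# picked == [0, 2, 3]   (9 exceeds the batch of 4, so the last index is used)
```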
TextJoin
![image](https://images.downcodes.com/uploads/20250202/img_679f5866801ff3148.png)
Combine multiple paragraphs of text into one.
TextJoinV2
![image](https://images.downcodes.com/uploads/20250202/img_679f586680c3c3149.png)
Adds delimiter options on the basis of TextJoin.
PrintInfo
![image](https://images.downcodes.com/uploads/20250202/img_679f5866815193150.png)
Used to assist workflow debugging. When run, the properties of any object connected to this node are printed to the console.
This node allows any type of input.
TextBox
![image](https://images.downcodes.com/uploads/20250202/img_679f586681c2a3151.png)
Output a string.
String
![image](https://images.downcodes.com/uploads/20250202/img_679f5866823c73152.png)
Output a string. Same as TextBox.
Integer
![image](https://images.downcodes.com/uploads/20250202/img_679f586682aee3153.png)
Output an integer value.
Float
![image](https://images.downcodes.com/uploads/20250202/img_679f5866831e33154.png)
Output a floating-point value with a precision of 5 decimal places.
Boolean
![image](https://images.downcodes.com/uploads/20250202/img_679f586683a143155.png)
Output a boolean value.
RandomGenerator
Used to generate random values within a specified range, with int, float, and boolean outputs. Supports batch and list generation, and supports generating a set of different random number lists based on an image batch. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866841a93156.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866858f63157.png)
- image: Optional input; generates a list of random numbers matching the batch quantity of the image.
- min_value: Minimum value. Random numbers take values from the minimum to the maximum.
- max_value: Maximum value. Random numbers take values from the minimum to the maximum.
- float_decimal_places: Precision of the float value.
- fix_seed: Whether the random number seed is fixed. If fixed, the generated random numbers will always be the same.
Outputs: int: Integer random number. float: Float random number. bool: Boolean random number.
RandomGeneratorV2
Based on RandomGenerator, this adds a minimum random range and seed options.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866865573158.png)
- image: Optional input; generates a list of random numbers matching the batch quantity of the image.
- min_value: Minimum value. Random numbers take values from the minimum to the maximum.
- max_value: Maximum value. Random numbers take values from the minimum to the maximum.
- least: Minimum random range. Random numbers will be at least this value.
- float_decimal_places: Precision of the float value.
- seed: The seed of the random number.
- control_after_generate: Seed change option. If this option is fixed, the generated random number will always be the same.
Outputs: int: Integer random number. float: Float random number. bool: Boolean random number.
NumberCalculator
![image](https://images.downcodes.com/uploads/20250202/img_679f5866870a83159.png)
Performs mathematical operations on two numeric values and outputs integer and floating point results * . Supported operations include +, -, *, /, **, //, and %.
* The input only supports boolean, integer, and floating point numbers; forcing in other data will result in an error.
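The supported operations map directly onto Python's operator module; a sketch of the dual int/float output (illustrative, not the node's actual code):

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': operator.truediv, '**': operator.pow,
       '//': operator.floordiv, '%': operator.mod}

def calculate(a, b, op):
    """Apply the named operation and return the result as both int and float."""
    result = OPS[op](a, b)
    return int(result), float(result)

# calculate(7, 2, '//') == (3, 3.0); calculate(2, 3, '**') == (8, 8.0)
```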
NumberCalculatorV2
![image](https://images.downcodes.com/uploads/20250202/img_679f5866878fe3160.png)
The upgraded version of NumberCalculator adds numerical inputs within the node and a square root operation. The square root operation option is nth_root.
Note: The input takes priority, and when there is an input, the values within the node are invalid.
BooleanOperator
![image](https://images.downcodes.com/uploads/20250202/img_679f5866881fc3161.png)
Perform a Boolean operation on two numeric values and output the result * . Supported operations include ==, !=, and, or, xor, not, min, and max.
* The input only supports boolean, integer, and floating point numbers; forcing in other data will result in an error. The and operation between the values outputs the larger number, and the or operation outputs the smaller number.
BooleanOperatorV2
![image](https://images.downcodes.com/uploads/20250202/img_679f586688ad23162.png)
The upgraded version of BooleanOperator adds numerical inputs within the node and adds judgments for greater than, less than, greater than or equal to, and less than or equal to. Note: The input takes priority, and when there is an input, the values within the node are invalid.
StringCondition
![image](https://images.downcodes.com/uploads/20250202/img_679f5866892533163.png)
Determine whether the text contains or does not contain a substring, and output a Boolean value.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f586689d3b3164.png)
- text: Input text.
- condition: Judgment condition. include determines whether the text contains the substring, and exclude determines whether it does not.
- sub_string: Substring.
CheckMask
Check whether the mask contains enough valid area and output a Boolean value.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58668a4533165.png)
- white_point: The white point threshold used to determine whether a mask pixel is valid; a pixel is considered valid if it exceeds this value.
- area_percent: The percentage of effective area. If the proportion of effective area exceeds this value, True is output.
CheckMaskV2
On the basis of CheckMask, the method option has been added, which allows the selection of different detection methods. The area_percent is changed to a floating point number with an accuracy of 2 decimal places, which can detect smaller effective areas.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58668aba33166.png)
- method: There are two detection methods, simple and detect_percent. The simple method only detects whether the mask is completely black, while the detect_percent method detects the proportion of effective areas.
If
![image](https://images.downcodes.com/uploads/20250202/img_679f58668b48a3167.png)
Switches output based on Boolean conditional input. It can be used for any type of data switching, including but not limited to numeric values, strings, pictures, masks, models, latents, pipe pipelines, etc.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58668c26b3168.png)
- if_condition: Conditional input. Boolean, integer, floating point, and string inputs are supported. When a numeric value is entered, 0 is judged to be False; when a string is entered, an empty string is judged as False.
- when_True: This item is output when the condition is True.
- when_False: This item is output when the condition is False.
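The condition coercion rules described above can be sketched as follows (illustrative, not the node's actual code):

```python
def as_condition(value) -> bool:
    """Numbers: 0 is False. Strings: empty is False. Booleans pass through."""
    if isinstance(value, str):
        return value != ""
    return bool(value)

def if_switch(if_condition, when_true, when_false):
    """Return when_true or when_false depending on the coerced condition."""
    return when_true if as_condition(if_condition) else when_false

# if_switch(0, 'a', 'b') == 'b'; if_switch("x", 'a', 'b') == 'a'
```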
SwitchCase
![圖像](https://images.downcodes.com/uploads/20250202/img_679f58668c9a93169.png)
Switches the output based on the matching string. It can be used for any type of data switching, including but not limited to numeric values, strings, pictures, masks, models, latents, pipe pipelines, etc. Supports up to 3 sets of case switches. Each case is compared to `switch_condition`; if they are the same, the corresponding input is output. If several cases are identical, the earlier one takes priority. If no case matches, the default input is output. Note that the comparison is case-sensitive and distinguishes full-width from half-width characters.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58668d7843170.png)
- input_default: Input entry for default output. This input is required.
- input_1: Input entry used to match `case_1`. This input is optional.
- input_2: Input entry used to match `case_2`. This input is optional.
- input_3: Input entry used to match `case_3`. This input is optional.
- switch_condition: The string compared against each case.
- case_1: case_1 string.
- case_2: case_2 string.
- case_3: case_3 string.
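The matching rules above (first match wins, plain case-sensitive string comparison, fall back to the default) can be sketched as follows. This is a hypothetical helper for illustration, not the node's actual code:

```python
def switch_case(switch_condition, cases, input_default):
    """Return the input of the first case whose string matches exactly.

    cases: list of (case_string, input_value) pairs, checked in order,
    so duplicate case strings resolve to the earlier entry.
    Matching uses plain == comparison, so it is case-sensitive and
    distinguishes full-width from half-width characters.
    """
    for case_string, input_value in cases:
        if case_string == switch_condition:
            return input_value
    return input_default

cases = [("a", 1), ("b", 2), ("a", 3)]
print(switch_case("a", cases, 0))  # 1  (first matching case wins)
print(switch_case("z", cases, 0))  # 0  (falls back to input_default)
```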
QueueStop
![圖像](https://images.downcodes.com/uploads/20250202/img_679f58668df083171.png)
Stop the current queue. When execution reaches this node, the queue will stop. The workflow diagram above illustrates that if the image is larger than 1 megapixel, the queue stops executing.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58668e8993172.png)
- mode: Stop mode. If `stop` is chosen, whether to stop is determined by the input condition. If `continue` is chosen, the condition is ignored and the queue continues executing.
- stop: If True, the queue will stop. If False, the queue will continue to execute.
PurgeVRAM
![圖像](https://images.downcodes.com/uploads/20250202/img_679f58668f04c3173.png)
Clean up GPU VRAM and system RAM. Any type of input can be connected, and when execution reaches this node, the VRAM and garbage objects in RAM are cleaned up. It is usually placed after a node that completes an inference task, such as the VAE Decode node.
Node Options:
- purge_cache: Clean up the cache.
- purge_models: Unload all loaded models.
SaveImagePlus
![圖像](https://images.downcodes.com/uploads/20250202/img_679f58668fb153174.png)
Enhanced save image node. You can customize the directory where the picture is saved, add a timestamp to the file name, select the save format, set the image compression rate, choose whether to save the workflow, and optionally add an invisible watermark to the picture (information is added in a way invisible to the naked eye, and the `ShowBlindWaterMark` node is used to decode it). The workflow can optionally be output as a JSON file.
Node Options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866905c73175.png)
- image: The input image.
- custom_path * : User-defined directory, enter the directory name in the correct format. If empty, it is saved in the default output directory of ComfyUI.
- filename_prefix * : The prefix of file name.
- timestamp: Add a timestamp to the file name, choosing from date, time to the second, or time to the millisecond.
- format: The format to save the image in. Currently `png` and `jpg` are available. Note that only the png format supports RGBA mode pictures.
- quality: Image quality, in the range 10-100. The higher the value, the better the picture quality and the larger the file.
- meta_data: Whether to save metadata (i.e. workflow information) to the png file. Set this to False if you do not want the workflow to be leaked.
- blind_watermark: The text entered here (multilingual text is not supported) will be converted into a QR code and saved as an invisible watermark. The `ShowBlindWaterMark` node can decode the watermark. Note that watermarked pictures should be saved in png format; the lossy jpg format will cause watermark information to be lost.
- save_workflow_as_json: Whether to also output the workflow as a json file (the json is saved in the same directory as the picture).
- preview: Preview switch.
* Enter `%date` for the current date (YY-mm-dd) and `%time` for the current time (HH-MM-SS). You can enter `/` for subdirectories. For example, `%date/name_%time` will output the image to the `YY-mm-dd` folder, with `name_HH-MM-SS` as the file name prefix.
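The token substitution described above can be sketched as follows. This is a hypothetical helper, not the node's actual code; the formats follow the documented YY-mm-dd / HH-MM-SS patterns, and the real node may differ in edge cases:

```python
import datetime

def expand_tokens(template, now=None):
    """Expand the %date and %time placeholders in a path/prefix template."""
    now = now or datetime.datetime.now()
    return (template
            .replace("%date", now.strftime("%y-%m-%d"))    # YY-mm-dd
            .replace("%time", now.strftime("%H-%M-%S")))   # HH-MM-SS

stamp = datetime.datetime(2024, 9, 3, 15, 26, 25)
print(expand_tokens("%date/name_%time", stamp))  # 24-09-03/name_15-26-25
```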
ImageTaggerSave
![圖像](https://images.downcodes.com/uploads/20250202/img_679f586690d0f3176.png)
A node used to save training-set images and their text labels, where the image files and text label files share the same file name. It supports a customizable directory for saving images, adding timestamps to file names, selecting save formats, and setting image compression rates. *The workflow image_tagger_save.json is located in the workflow directory.
Node Options: ![image](https://images.downcodes.com/uploads/20250202/img_679f586691e013177.png)
- image: The input image.
- tag_text: Text label of image.
- custom_path * : User-defined directory, enter the directory name in the correct format. If empty, it is saved in the default output directory of ComfyUI.
- filename_prefix * : The prefix of file name.
- timestamp: Add a timestamp to the file name, choosing from date, time to the second, or time to the millisecond.
- format: The format to save the image in. Currently `png` and `jpg` are available. Note that only the png format supports RGBA mode pictures.
- quality: Image quality, in the range 10-100. The higher the value, the better the picture quality and the larger the file.
- preview: Preview switch.
* Enter `%date` for the current date (YY-mm-dd) and `%time` for the current time (HH-MM-SS). You can enter `/` for subdirectories. For example, `%date/name_%time` will output the image to the `YY-mm-dd` folder, with `name_HH-MM-SS` as the file name prefix.
AddBlindWaterMark
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866926883178.png)
Add an invisible watermark to a picture. The watermark image is added in a way invisible to the naked eye; use the `ShowBlindWaterMark` node to decode it.
Node Options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866934cf3179.png)
- image: The input image.
- watermark_image: Watermark image. The image entered here will automatically be converted to a square black and white image as a watermark. It is recommended to use a QR code as a watermark.
ShowBlindWaterMark
Decode the invisible watermark added by the `AddBlindWaterMark` and `SaveImagePlus` nodes. ![image](https://images.downcodes.com/uploads/20250202/img_679f586693ab73180.png)
CreateQRCode
Generate a square QR code picture.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866940963181.png)
- size: The side length of image.
- border: The size of the border around the QR code, the larger the value, the wider the border.
- text: Enter the text content of the QR code here; multilingual text is not supported.
DecodeQRCode
Decoding the QR code.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866946c03182.png)
- image: The input QR code image.
- pre_blur: Pre-blurring, you can try to adjust this value for QR codes that are difficult to identify.
LoadPSD
![圖像](https://images.downcodes.com/uploads/20250202/img_679f586694d213183.png)
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866958543184.png)
Load a PSD format file and export the layers. Note that this node requires the `psd_tools` dependency package. If an error such as `ModuleNotFoundError: No module named 'docopt'` occurs while installing psd_tools, please download the docopt whl file and install it manually.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58669638d3185.png)
- image: Lists the *.psd files under `ComfyUI/input`, where previously loaded psd images can be selected.
- file_path: The complete path and file name of the psd file.
- include_hidden_layer: Whether to include hidden layers.
- find_layer_by: The method for finding layers; select by layer index number or by layer name. Layer groups are treated as one layer.
- layer_index: The layer index number, where 0 is the bottom layer, incrementing upward. If include_hidden_layer is set to False, hidden layers are not counted. Set to -1 to output the top layer.
- layer_name: Layer name. Note that capitalization and punctuation must match exactly.
Outputs:
- flat_image: The PSD preview image.
- layer_image: The layer found.
- all_layers: A batch image containing all layers.
SD3NegativeConditioning
![圖像](https://images.downcodes.com/uploads/20250202/img_679f586696c3c3186.png)
Encapsulate the four nodes of Negative Condition in SD3 into a separate node.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866974a03187.png)
- zero_out_start: Set the ConditioningSetTimestepRange start value for Negative ConditioningZeroOut, which is the same as the ConditioningSetTimestepRange end value for Negative.
LayerMask
![圖像](https://images.downcodes.com/uploads/20250202/img_679f586697a1d3188.png)
BlendIfMask
Reproduction of Photoshop's Layer Style - Blend If function. This node outputs a mask for layer compositing on the ImageBlend or ImageBlendAdvance nodes. `mask` is an optional input; if a mask is entered here, it acts on the output. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866989513189.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866996e43190.png)
- invert_mask: Whether to invert the mask.
- blend_if: Channel selection for Blend If. There are four options: `gray`, `red`, `green`, and `blue`.
- black_point: Black point value, ranging from 0-255.
- black_range: Dark part transition range. The larger the value, the richer the transition level of the dark part mask.
- white_point: White point values, ranging from 0-255.
- white_range: Bright part transition range. The larger the value, the richer the transition levels of the bright part mask.
MaskBoxDetect
Detect the area where the mask is located and output its position and size. ![圖像](https://images.downcodes.com/uploads/20250202/img_679f586699ece3191.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58669ad9f3192.png)
- detect: Detection method. `min_bounding_rect` is the minimum bounding rectangle of the block shape, `max_inscribed_rect` is the maximum inscribed rectangle of the block shape, and `mask_area` is the effective area of mask pixels.
- x_adjust: Adjust the horizontal offset after detection.
- y_adjust: Adjust the vertical offset after detection.
- scale_adjust: Adjust the scaling offset after detection.
Outputs:
- box_preview: Preview image of the detection result. Red represents the detected result; green represents the adjusted output result.
- x_percent: Horizontal position output in percentage.
- y_percent: Vertical position output in percentage.
- width: Width.
- height: Height.
- x: The x-coordinate of the top left corner position.
- y: The y-coordinate of the top left corner position.
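The relationship between the box outputs and the percentage outputs can be sketched in plain Python. This is a toy min-bounding-rect version on a 2D list, not the node's actual code, and treating x_percent/y_percent as the box centre is an assumption made for illustration:

```python
def mask_box_detect(mask, threshold=0):
    """Find the minimum bounding box of all pixels above threshold.

    mask: 2D list of rows of 0-255 values.
    Returns (x, y, width, height, x_percent, y_percent), where the
    percentages are the box centre relative to the image size, or
    None if the mask is empty.
    """
    coords = [(x, y) for y, row in enumerate(mask)
                     for x, v in enumerate(row) if v > threshold]
    if not coords:
        return None
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    x, y = min(xs), min(ys)
    w, h = max(xs) - x + 1, max(ys) - y + 1
    img_h, img_w = len(mask), len(mask[0])
    x_percent = (x + w / 2) / img_w * 100
    y_percent = (y + h / 2) / img_h * 100
    return x, y, w, h, x_percent, y_percent

mask = [[0,   0,   0, 0],
        [0, 255, 255, 0],
        [0, 255, 255, 0],
        [0,   0,   0, 0]]
print(mask_box_detect(mask))  # (1, 1, 2, 2, 50.0, 50.0)
```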
Ultra Nodes
![圖像](https://images.downcodes.com/uploads/20250202/img_679f58669b60f3193.png)
The latest versions of these nodes include SegmentAnythingUltraV2, RmBgUltraV2, BiRefNetUltra, SegformerUltraV2, SegformerB2ClothesUltra, and MaskEdgeUltraDetailV2. There are three edge processing methods for these nodes:
- `PyMatting` optimizes the edges of the mask by applying closed-form matting to the mask trimap.
- `GuidedFilter` uses the opencv guidedfilter to feather edges based on color similarity, and performs best when edges have strong color separation.
  The code for the above two methods comes from the Alpha Matte in spacepxl's ComfyUI-Image-Filters; thanks to the original author.
- `VITMatte` uses the transformer ViT model for high-quality edge processing, preserving edge details and even generating semi-transparent masks. Note: when running for the first time, the VITMatte model file needs to be downloaded; wait for the automatic download to complete. If the download cannot complete, you can run the command `huggingface-cli download hustvl/vitmatte-small-composition-1k` to download it manually. After the model is downloaded successfully, `VITMatte(local)` can be used without network access.
- VITMatte's options: `device` sets whether to use CUDA for the VITMatte operation, which is about 5 times faster than CPU. `max_megapixels` sets the maximum image size for the VITMatte operation; oversized images will be reduced in size. For 16GB VRAM, a value of 3 is recommended.
*Download all model files from BaiduNetdisk or Huggingface to the `ComfyUI/models/vitmatte` folder.
The following figure is an example of the difference in output between the three methods. ![image](https://images.downcodes.com/uploads/20250202/img_679f58669c3d93194.png)
SegmentAnythingUltra
Improvements to ComfyUI Segment Anything, thanks to the original author.
*Please refer to the installation of ComfyUI Segment Anything to install the model. If ComfyUI Segment Anything has been correctly installed, you can skip this step.
- From here download the config.json, model.safetensors, tokenizer_config.json, tokenizer.json, and vocab.txt files (5 files) to the `ComfyUI/models/bert-base-uncased` folder.
- Download the GroundingDINO_SwinT_OGC config file, GroundingDINO_SwinT_OGC model, GroundingDINO_SwinB config file, and GroundingDINO_SwinB model to the `ComfyUI/models/grounding-dino` folder.
- Download sam_vit_h, sam_vit_l, sam_vit_b, sam_hq_vit_h, sam_hq_vit_l, sam_hq_vit_b, and mobile_sam to the `ComfyUI/models/sams` folder. *Or download them from GroundingDino models on BaiduNetdisk and SAM models on BaiduNetdisk. ![image](https://images.downcodes.com/uploads/20250202/img_679f58669d75a3195.png)
![圖像](https://images.downcodes.com/uploads/20250202/img_679f58669e9633196.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f58669fa9e3197.png)
- sam_model: Select the SAM model.
- ground_dino_model: Select the Grounding DINO model.
- threshold: The threshold of SAM.
- detail_range: Edge detail range.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set this to False to skip edge processing and save runtime.
- prompt: Input for SAM's prompt.
- cache_model: Set whether to cache the model.
SegmentAnythingUltraV2
The V2 upgraded version of SegmentAnythingUltra adds the VITMatte edge processing method. (Note: images larger than 2K processed with this method will consume a large amount of memory.) ![image](https://images.downcodes.com/uploads/20250202/img_679f5866a05433198.png)
On the basis of SegmentAnythingUltra, the following changes have been made: ![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866a169d3199.png)
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, VITMatte(local) can be used without network access.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- device: Set whether VITMatte uses CUDA.
- max_megapixels: Set the maximum image size for the VITMatte operation.
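The detail_erode / detail_dilate pair effectively defines an "unknown" band around the mask edge, which the chosen matting method then refines. A toy sketch of that trimap construction follows (pure Python with a square kernel; the node itself applies real morphology to full-resolution masks, so treat this as illustration only):

```python
def make_trimap(mask, erode=1, dilate=1):
    """Build a trimap from a binary 2D mask (lists of 0/1 values).

    Pixels that survive erosion are confident foreground (255),
    pixels outside the dilated mask are confident background (0),
    and the band in between is unknown (128) - the region that
    matting methods such as PyMatting or VITMatte refine.
    """
    h, w = len(mask), len(mask[0])

    def window(x, y, r):
        # values in an r-radius square window; out-of-bounds counts as 0
        return [mask[y + dy][x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else 0
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)]

    trimap = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(v == 1 for v in window(x, y, erode)):
                trimap[y][x] = 255   # confident foreground (survives erosion)
            elif all(v == 0 for v in window(x, y, dilate)):
                trimap[y][x] = 0     # confident background (outside dilation)
            else:
                trimap[y][x] = 128   # unknown edge band, refined by matting
    return trimap

mask = [[0,0,0,0,0],
        [0,1,1,1,0],
        [0,1,1,1,0],
        [0,1,1,1,0],
        [0,0,0,0,0]]
t = make_trimap(mask)
print(t[2][2], t[0][0])  # 255 128
```

Larger erode/dilate values widen the unknown band, giving the matting method more room to repair the edge at the cost of more work.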
SAM2Ultra
This node is modified from kijai/ComfyUI-segment-anything-2. Thanks to kijai for making significant contributions to the ComfyUI community.
The SAM2 Ultra node only supports a single image. If you need to process multiple images, please first convert the image batch to an image list.
*Download models from BaiduNetdisk or huggingface.co/Kijai/sam2-safetensors and copy them to the `ComfyUI/models/sam2` folder.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866a23633200.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866a40083201.png)
- image: The image to segment.
- bboxes: Input recognition box data.
- sam2_model: Select the SAM2 model.
- precision: Model precision. Can be selected from fp16, bf16, and fp32.
- bbox_select: Select the input box data. There are three options: "all" to select all, "first" to select the box with the highest confidence, and "by_index" to specify the indexes of the boxes.
- select_index: This option is valid when bbox_select is "by_index". 0 is the first box. Multiple values can be entered, separated by any non-numeric character, including but not limited to commas, periods, semicolons, spaces, letters, or even Chinese characters.
- cache_model: Whether to cache the model. Caching the model saves time on model loading.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, VITMatte(local) can be used without network access.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set this to False to skip edge processing and save runtime.
- device: Set whether VITMatte uses CUDA.
- max_megapixels: Set the maximum image size for the VITMatte operation.
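The flexible separator rule for select_index (digits separated by any non-numeric characters) can be sketched with a simple regex. This is a hypothetical parser for illustration, not the node's actual code:

```python
import re

def parse_select_index(select_index):
    """Extract box indexes from a string where digits may be separated
    by any non-numeric characters: commas, spaces, letters, even Chinese."""
    return [int(n) for n in re.findall(r"\d+", select_index)]

print(parse_select_index("0, 2;5"))  # [0, 2, 5]
print(parse_select_index("1和3"))    # [1, 3]
```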
SAM2VideoUltra
The SAM2 Video Ultra node supports processing multiple frames of images or video sequences. Please define the recognition box data on the first frame of the sequence to ensure correct recognition.
sam2_video_ultra_example.mp4
2024-09-03.152625.mp4
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866a4b553202.png)
- image: The image to segment.
- bboxes: Optional input of recognition bbox data. At least one of `bboxes` and `first_frame_mask` must be input; if first_frame_mask is input, bboxes is ignored.
- first_frame_mask: Optional input of the first-frame mask, which is used as the first-frame recognition object. At least one of `bboxes` and `first_frame_mask` must be input; if first_frame_mask is input, bboxes is ignored.
- pre_mask: Optional input mask, which serves as a propagation focus range limitation and helps improve recognition accuracy.
- sam2_model: Select the SAM2 model.
- precision: Model precision. Can be selected from fp16 and bf16.
- cache_model: Whether to cache the model. Caching the model saves time on model loading.
- individual_object: When set to True, it will focus on identifying a single object. When set to False, attempts will be made to generate recognition boxes for multiple objects.
- mask_preview_color: The display color of non-masked areas in the preview output.
- detail_method: Edge processing method. Only the VITMatte method can be used.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set this to False to skip edge processing and save runtime.
- device: Only CUDA can be used.
- max_megapixels: Set the maximum image size for the VITMatte operation. A larger size results in finer mask edges but significantly decreases computation speed.
ObjectDetectorFL2
Use the Florence2 model to identify objects in images and output recognition box data.
*Download models from BaiduNetdisk and copy them to the `ComfyUI/models/florence2` folder.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866a57a33203.png)
- image: The image to segment.
- florence2_model: The Florence2 model, output from the LoadFlorence2Model node.
- prompt: Describe the object that needs to be identified.
- sort_method: The selection box sorting method has 4 options: "left_to_right", "top_to_bottom", "big_to_small" and "confidence".
- bbox_select: Select the input box data. There are three options: "all" to select all, "first" to select the box with the highest confidence, and "by_index" to specify the index of the box.
- select_index: This option is valid when bbox_select is "by_index". 0 is the first box. Multiple values can be entered, separated by any non-numeric character, including but not limited to commas, periods, semicolons, spaces, letters, or even Chinese characters.
ObjectDetectorYOLOWorld
Use the YOLO-World model to identify objects in images and output recognition box data.
*Download models from BaiduNetdisk or GoogleDrive and copy them to the `ComfyUI/models/yolo-world` folder.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866a60643204.png)
- image: The image to segment.
- confidence_threshold: The threshold of confidence.
- nms_iou_threshold: The threshold of Non-Maximum Suppression.
- prompt: Describe the object that needs to be identified.
- sort_method: The selection box sorting method has 4 options: "left_to_right", "top_to_bottom", "big_to_small" and "confidence".
- bbox_select: Select the input box data. There are three options: "all" to select all, "first" to select the box with the highest confidence, and "by_index" to specify the index of the box.
- select_index: This option is valid when bbox_select is "by_index". 0 is the first box. Multiple values can be entered, separated by any non-numeric character, including but not limited to commas, periods, semicolons, spaces, letters, or even Chinese characters.
ObjectDetectorYOLO8
Use the YOLOv8 model to identify objects in images and output recognition box data.
*Download models from GoogleDrive or BaiduNetdisk and copy them to the `ComfyUI/models/yolo` folder.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866a697b3205.png)
- image: The image to segment.
- yolo_model: Choose the yolo model.
- sort_method: The selection box sorting method has 4 options: "left_to_right", "top_to_bottom", "big_to_small" and "confidence".
- bbox_select: Select the input box data. There are three options: "all" to select all, "first" to select the box with the highest confidence, and "by_index" to specify the index of the box.
- select_index: This option is valid when bbox_select is "by_index". 0 is the first box. Multiple values can be entered, separated by any non-numeric character, including but not limited to commas, periods, semicolons, spaces, letters, or even Chinese characters.
ObjectDetectorMask
Use a mask as recognition box data. Every region enclosed by white areas on the mask is recognized as an object; multiple enclosed regions are identified separately.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866a723e3206.png)
- object_mask: The mask input.
- sort_method: The selection box sorting method has 4 options: "left_to_right", "top_to_bottom", "big_to_small" and "confidence".
- bbox_select: Select the input box data. There are three options: "all" to select all, "first" to select the box with the highest confidence, and "by_index" to specify the index of the box.
- select_index: This option is valid when bbox_select is "by_index". 0 is the first box. Multiple values can be entered, separated by any non-numeric character, including but not limited to commas, periods, semicolons, spaces, letters, or even Chinese characters.
BBoxJoin
Merge recognition box data.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866a79bb3207.png)
- bboxes_1: Required input. The first set of identification boxes.
- bboxes_2: Optional input. The second set of identification boxes.
- bboxes_3: Optional input. The third set of identification boxes.
- bboxes_4: Optional input. The fourth set of identification boxes.
DrawBBoxMask
Draw the recognition BBoxes data output by the Object Detector node as a mask.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866a81683208.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866a96873209.png)
- image: The image input. It must be consistent with the image recognized by the Object Detector node.
- bboxes: Input recognition BBoxes data.
- grow_top: Each BBox expands upwards as a percentage of its height, positive values indicate upward expansion and negative values indicate downward expansion.
- grow_bottom: Each BBox expands downwards as a percentage of its height, positive values indicating downward expansion and negative values indicating upward expansion.
- grow_left: Each BBox expands to the left as a percentage of its width, positive values expand to the left and negative values expand to the right.
- grow_right: Each BBox expands to the right as a percentage of its width, positive values indicate expansion to the right and negative values indicate expansion to the left.
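The four grow_* options can be sketched as simple percentage arithmetic on an (x1, y1, x2, y2) box. This is a hypothetical helper following the sign conventions described above, not the node's actual code:

```python
def grow_bbox(bbox, grow_top=0, grow_bottom=0, grow_left=0, grow_right=0):
    """Expand a (x1, y1, x2, y2) box by percentages of its own size.

    Percentages are 0-100; negative values shrink the box on that side.
    y grows downward, so grow_top moves y1 up and grow_bottom moves y2 down.
    """
    x1, y1, x2, y2 = bbox
    w, h = x2 - x1, y2 - y1
    return (x1 - w * grow_left / 100,
            y1 - h * grow_top / 100,
            x2 + w * grow_right / 100,
            y2 + h * grow_bottom / 100)

# grow a 100x100 box by 10% on every side
print(grow_bbox((100, 100, 200, 200), 10, 10, 10, 10))
# (90.0, 90.0, 210.0, 210.0)
```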
EVF-SAMUltra
This node is an implementation of EVF-SAM in ComfyUI.
*Please download the model files from BaiduNetdisk or huggingface/EVF-SAM2, huggingface/EVF-SAM to the `ComfyUI/models/EVF-SAM` folder (save the models in their respective subdirectories). ![image](https://images.downcodes.com/uploads/20250202/img_679f5866a9fb23210.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866ab43d3211.png)
- image: The input image.
- model: Select the model. Currently, there are options for evf-sam2 and evf-sam.
- precision: Model precision. Can be selected from fp16, bf16, and fp32.
- load_in_bit: Load the model at the specified bit precision. You can choose from full, 8, and 4.
- prompt: Prompt words used for segmentation.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, VITMatte(local) can be used without network access.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set this to False to skip edge processing and save runtime.
- device: Set whether VITMatte uses CUDA.
- max_megapixels: Set the maximum image size for the VITMatte operation.
Florence2Ultra
Uses the segmentation function of the Florence2 model, while also providing ultra-high edge detail. The code for this node section is from spacepxl/ComfyUI-Florence-2; thanks to the original author. *Download the model files from BaiduNetdisk to the `ComfyUI/models/florence2` folder.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866abf033212.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866ac85b3213.png)
- florence2_model: Florence2 model input.
- image: The image input.
- task: Select the task for florence2.
- text_input: Text input for florence2.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, VITMatte(local) can be used without network access.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set this to False to skip edge processing and save runtime.
- device: Set whether VITMatte uses CUDA.
- max_megapixels: Set the maximum image size for the VITMatte operation.
LoadFlorence2Model
Florence2 model loader. *When using it for the first time, the model will be automatically downloaded.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866acf743214.png)
At present, there are base, base-ft, large, large-ft, DocVQA, SD3-Captioner and base-PromptGen models to choose from.
RemBgUltra
Remove background. Compared to similar background removal nodes, this node has ultra-high edge detail.
This node combines the Alpha Matte node from spacepxl's ComfyUI-Image-Filters and the functionality of ZHO-ZHO-ZHO's ComfyUI-BRIA_AI-RMBG; thanks to the original authors.
*Download model files from BRIA Background Removal v1.4 or BaiduNetdisk to the `ComfyUI/models/rmbg/RMBG-1.4` folder. This model may be used for non-commercial purposes only.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866ad53f3215.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866aec1a3216.png)
- detail_range: Edge detail range.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set this to False to skip edge processing and save runtime.
RmBgUltraV2
The V2 upgraded version of RemBgUltra adds the VITMatte edge processing method. (Note: images larger than 2K processed with this method will consume a large amount of memory.)
On the basis of RemBgUltra, the following changes have been made: ![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866af4713217.png)
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, VITMatte(local) can be used without network access.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- device: Set whether VITMatte uses CUDA.
- max_megapixels: Set the maximum image size for the VITMatte operation.
BenUltra
This is an implementation of the PramaLLC/BEN project in ComfyUI. Thanks to the original author.
Download `BEN_Base.pth` and `config.json` from huggingface or BaiduNetdisk, then copy them to the `ComfyUI/models/BEN` folder.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866afbfc3218.png)
Node Options: ![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866b0e013219.png)
- ben_model: Ben model input.
- image: The image input.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, VITMatte(local) can be used without network access.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set this to False to skip edge processing and save runtime.
- max_megapixels: Set the maximum image size for the VITMatte operation.
LoadBenModel
Load the BEN model.
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b173a3220.png)
- model: Select the model. Currently, only the BEN_Base model is available for selection.
BiRefNetUltra
Using the BiRefNet model to remove backgrounds gives better recognition ability and ultra-high edge detail. The code for the model part of this node comes from ViperYX's ComfyUI-BiRefNet; thanks to the original author.
*Download the `BiRefNet-ep480.pth`, `pvt_v2_b2.pth`, `pvt_v2_b5.pth`, `swin_base_patch4_window12_384_22kto1k.pth`, and `swin_large_patch4_window12_384_22kto1k.pth` files (5 files) from https://huggingface.co/ViperYX/BiRefNet or BaiduNetdisk to the `ComfyUI/models/BiRefNet` folder.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866b1f693221.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b2ebc3222.png)
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, VITMatte(local) can be used without network access.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set this to False to skip edge processing and save runtime.
- device: Set whether VITMatte uses CUDA.
- max_megapixels: Set the maximum image size for the VITMatte operation.
BiRefNetUltraV2
This node supports the use of the latest BiRefNet model. *Download the model file named `BiRefNet-general-epoch_244.pth` from BaiduNetdisk or GoogleDrive to the `ComfyUI/models/BiRefNet/pth` folder. You can also download more BiRefNet models and place them here.
![圖像](https://images.downcodes.com/uploads/20250202/img_679f5866b35633223.png)
Node Options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b46c93224.png)
- image: The input image.
- birefnet_model: The BiRefNet model input, output from the LoadBiRefNetModel node.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, VITMatte(local) can be used without network access.
- detail_erode: The erosion range of the mask inward from the edge. The larger the value, the larger the range of inward repair.
- detail_dilate: The expansion range of the mask outward from the edge. The larger the value, the wider the range of outward repair.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Due to BiRefNet's excellent edge processing, this is set to False by default here.
- device: Set whether VITMatte uses CUDA.
- max_megapixels: Set the maximum image size for the VITMatte operation.
LoadBiRefNetModel
Load the BiRefNet model.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b4f533225.png)
- model: Select the model. Lists the files in the `ComfyUI/models/BiRefNet/pth` folder for selection.
LoadBiRefNetModelV2
This node is a PR submitted by jimlee2048 and supports loading RMBG-2.0 models.
Download the model files from huggingface or BaiduNetdisk and copy them to the `ComfyUI/models/BiRefNet/RMBG-2.0` folder.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b55d33226.png)
- model: Select the model. There are two options: `BiRefNet-General` and `RMBG-2.0`.
TransparentBackgroundUltra
Using the transparent-background model to remove background has better recognition ability and speed, while also having ultra-high edge details.
*Download all files from GoogleDrive or BaiduNetdisk to the `ComfyUI/models/transparent-background` folder.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b5da63227.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b670f3228.png)
- model: Select the model.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting and GuidedFilter. The VITMatte model is downloaded on first use; after that, VITMatte(local) can be used.
- detail_erode: How far the mask erodes inward from the edge. The larger the value, the wider the inward repair range.
- detail_dilate: How far the mask edge expands outward. The larger the value, the wider the outward repair range.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set to false to skip edge processing and save runtime.
- device: Whether VITMatte runs on CUDA.
- max_megapixels: The maximum size (in megapixels) for VITMatte operations.
PersonMaskUltra
Generate masks for a portrait's face, hair, body skin, clothing, or accessories. Compared to the previous A Person Mask Generator node, this node has ultra-high edge detail. The model code for this node comes from a-person-mask-generator, and the edge processing code from ComfyUI-Image-Filters; thanks to the original authors. *Download the model files from BaiduNetdisk to the `ComfyUI/models/mediapipe` folder.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b6de83229.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b7d043230.png)
- face: Face recognition.
- hair: Hair recognition.
- body: Body skin recognition.
- clothes: Clothing recognition.
- accessories: Identification of accessories (such as backpacks).
- background: Background recognition.
- confidence: Recognition threshold; lower values output a larger mask range.
- detail_range: Edge detail range.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set to false to skip edge processing and save runtime.
PersonMaskUltraV2
The V2 upgraded version of PersonMaskUltra adds the VITMatte edge processing method. (Note: images larger than 2K processed with this method consume huge amounts of memory.)
On the basis of PersonMaskUltra, the following changes have been made: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866b875f3231.png)
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting and GuidedFilter. The VITMatte model is downloaded on first use; after that, VITMatte(local) can be used.
- detail_erode: How far the mask erodes inward from the edge. The larger the value, the wider the inward repair range.
- detail_dilate: How far the mask edge expands outward. The larger the value, the wider the outward repair range.
- device: Whether VITMatte runs on CUDA.
- max_megapixels: The maximum size (in megapixels) for VITMatte operations.
SegformerB2ClothesUltra
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b8f2e3232.png)
Generate masks for a character's face, hair, arms, legs, and clothing, mainly used for segmenting clothes. The model segmentation code is from StartHua; thanks to the original author. Compared to comfyui_segformer_b2_clothes, this node has ultra-high edge detail. (Note: generating images with edges exceeding 2K using the VITMatte method consumes a lot of memory.)
*Download all model files from huggingface or BaiduNetdisk to the `ComfyUI/models/segformer_b2_clothes` folder.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866b9a953233.png)
- face: Facial recognition switch.
- hair: Hair recognition switch.
- hat: Hat recognition switch.
- sunglass: Sunglass recognition switch.
- left_arm: Left arm recognition switch.
- right_arm: Right arm recognition switch.
- left_leg: Left leg recognition switch.
- right_leg: Right leg recognition switch.
- skirt: Skirt recognition switch.
- pants: Pants recognition switch.
- dress: Dress recognition switch.
- belt: Belt recognition switch.
- shoe: Shoes recognition switch.
- bag: Bag recognition switch.
- scarf: Scarf recognition switch.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting and GuidedFilter. The VITMatte model is downloaded on first use; after that, VITMatte(local) can be used.
- detail_erode: How far the mask erodes inward from the edge. The larger the value, the wider the inward repair range.
- detail_dilate: How far the mask edge expands outward. The larger the value, the wider the outward repair range.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set to false to skip edge processing and save runtime.
- device: Whether VITMatte runs on CUDA.
- max_megapixels: The maximum size (in megapixels) for VITMatte operations.
SegformerUltraV2
![image](https://images.downcodes.com/uploads/20250202/img_679f5866ba2e73234.png)
![image](https://images.downcodes.com/uploads/20250202/img_679f5866bae423235.png)
Using the segformer model to segment clothing with ultra-high edge detail. Currently supports segformer b2 clothes, segformer b3 clothes and segformer b3 fashion.
*Download model files from huggingface or BaiduNetdisk to the `ComfyUI/models/segformer_b2_clothes` folder.
*Download model files from huggingface or BaiduNetdisk to the `ComfyUI/models/segformer_b3_clothes` folder.
*Download model files from huggingface or BaiduNetdisk to the `ComfyUI/models/segformer_b3_fashion` folder.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866bbafe3236.png)
- image: The input image.
- segformer_pipeline: Segformer pipeline input, output by the SegformerClothesPipeline or SegformerFashionPipeline node.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting and GuidedFilter. The VITMatte model is downloaded on first use; after that, VITMatte(local) can be used.
- detail_erode: How far the mask erodes inward from the edge. The larger the value, the wider the inward repair range.
- detail_dilate: How far the mask edge expands outward. The larger the value, the wider the outward repair range.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set to false to skip edge processing and save runtime.
- device: Whether VITMatte runs on CUDA.
- max_megapixels: The maximum size (in megapixels) for VITMatte operations.
SegformerClothesPipeline
Select the segformer clothes model and choose the segmentation content.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866bc2133237.png)
- model: Model selection. Two models are currently available: segformer b2 clothes and segformer b3 clothes.
- face: Facial recognition switch.
- hair: Hair recognition switch.
- hat: Hat recognition switch.
- sunglass: Sunglass recognition switch.
- left_arm: Left arm recognition switch.
- right_arm: Right arm recognition switch.
- left_leg: Left leg recognition switch.
- right_leg: Right leg recognition switch.
- left_shoe: Left shoe recognition switch.
- right_shoe: Right shoe recognition switch.
- skirt: Skirt recognition switch.
- pants: Pants recognition switch.
- dress: Dress recognition switch.
- belt: Belt recognition switch.
- bag: Bag recognition switch.
- scarf: Scarf recognition switch.
SegformerFashionPipeline
Select the segformer fashion model and choose the segmentation content.
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866bcbae3238.png)
- model: Model selection. Currently only one model is available: segformer b3 fashion.
- shirt: shirt and blouse switch.
- top: top, t-shirt, sweatshirt switch.
- sweater: sweater switch.
- cardigan: cardigan switch.
- jacket: jacket switch.
- vest: vest switch.
- pants: pants switch.
- shorts: shorts switch.
- skirt: skirt switch.
- coat: coat switch.
- dress: dress switch.
- jumpsuit: jumpsuit switch.
- cape: cape switch.
- glasses: glasses switch.
- hat: hat switch.
- hairaccessory: headband, head covering, hair accessory switch.
- tie: tie switch.
- glove: glove switch.
- watch: watch switch.
- belt: belt switch.
- legwarmer: leg warmer switch.
- tights: tights and stockings switch.
- sock: sock switch.
- shoe: shoes switch.
- bagwallet: bag and wallet switch.
- scarf: scarf switch.
- umbrella: umbrella switch.
- hood: hood switch.
- collar: collar switch.
- lapel: lapel switch.
- epaulette: epaulette switch.
- sleeve: sleeve switch.
- pocket: pocket switch.
- neckline: neckline switch.
- buckle: buckle switch.
- zipper: zipper switch.
- applique: applique switch.
- bead: bead switch.
- bow: bow switch.
- flower: flower switch.
- fringe: fringe switch.
- ribbon: ribbon switch.
- rivet: rivet switch.
- ruffle: ruffle switch.
- sequin: sequin switch.
- tassel: tassel switch.
HumanPartsUltra
Generates masks for human body parts. It is based on a wrapper of metal3d/ComfyUI_Human_Parts; thanks to the original author. This node adds ultra-fine edge processing on top of the original work. Download the model file from BaiduNetdisk or HuggingFace and copy it to the `ComfyUI/models/onnx/human-parts` folder.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866bd7383239.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866be88e3240.png)
- image: The input image.
- face: Recognize face switch.
- hair: Recognize hair switch.
- glasses: Recognize glasses switch.
- top_clothes: Recognize top clothes switch.
- bottom_clothes: Recognize bottom clothes switch.
- torso_skin: Recognize torso skin switch.
- left_arm: Recognize left arm switch.
- right_arm: Recognize right arm switch.
- left_leg: Recognize left leg switch.
- right_leg: Recognize right leg switch.
- left_foot: Recognize left foot switch.
- right_foot: Recognize right foot switch.
- detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting and GuidedFilter. The VITMatte model is downloaded on first use; after that, VITMatte(local) can be used.
- detail_erode: How far the mask erodes inward from the edge. The larger the value, the wider the inward repair range.
- detail_dilate: How far the mask edge expands outward. The larger the value, the wider the outward repair range.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
- process_detail: Set to false to skip edge processing and save runtime.
- device: Whether VITMatte runs on CUDA.
- max_megapixels: The maximum size (in megapixels) for VITMatte operations.
MaskEdgeUltraDetail
Process rough masks into ultra-fine edges. This node combines the Alpha Matte and Guided Filter Alpha node functions of Spacepxl's ComfyUI-Image-Filters; thanks to the original author. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866bf3963241.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c025a3242.png)
- method: Provides two edge processing methods: PyMatting and OpenCV-GuidedFilter. PyMatting is slower, but for video it is recommended in order to obtain smoother mask sequences.
- mask_grow: Mask expansion amplitude. Positive values expand outward, negative values contract inward. For rougher masks, negative values are usually used to shrink the edges for better results.
- fix_gap: Repair gaps in the mask. If there are obvious gaps, increase this value appropriately.
- fix_threshold: The threshold of fix_gap.
- detail_range: Edge detail range.
- black_point: Edge black sampling threshold.
- white_point: Edge white sampling threshold.
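The fix_gap idea can be pictured as a morphological closing: dilate the mask so small holes seal shut, then erode it back so the outline returns to roughly its original position. A rough numpy sketch under that assumption (not the node's actual implementation):

```python
import numpy as np

def _filter3(mask, fn):
    # 3x3 max or min filter built from the 9 shifted views of the mask.
    p = np.pad(mask, 1, mode='edge')
    h, w = mask.shape
    return fn(np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)]), axis=0)

def fix_gap(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    # Morphological closing: dilate then erode, which fills small holes
    # and gaps without growing the overall mask outline much.
    for _ in range(iterations):
        mask = _filter3(mask, np.max)
    for _ in range(iterations):
        mask = _filter3(mask, np.min)
    return mask

m = np.ones((5, 5), dtype=np.float32)
m[2, 2] = 0.0          # a one-pixel hole in the mask
closed = fix_gap(m, 1)
```

More iterations close larger gaps, which mirrors why raising fix_gap repairs more obvious holes.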
MaskEdgeUltraDetailV2
The V2 upgraded version of MaskEdgeUltraDetail adds the VITMatte edge processing method. (Note: images larger than 2K processed with this method consume huge amounts of memory.)
This method is suitable for handling semi-transparent areas.
On the basis of MaskEdgeUltraDetail, the following changes have been made: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866c0b9f3243.png)
- method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting and GuidedFilter. The VITMatte model is downloaded on first use; after that, VITMatte(local) can be used.
- edge_erode: How far the mask erodes inward from the edge. The larger the value, the wider the inward repair range.
- edge_dilate: How far the mask edge expands outward. The larger the value, the wider the outward repair range.
- device: Whether VITMatte runs on CUDA.
- max_megapixels: The maximum size (in megapixels) for VITMatte operations.
YoloV8Detect
Use the YoloV8 model to detect face and hand box areas or to segment characters. Supports outputting a selected number of channels. Download the model files from GoogleDrive or BaiduNetdisk to the `ComfyUI/models/yolo` folder.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c12903244.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c21aa3245.png)
- yolo_model: Yolo model selection. Models with `seg` in the name can output segmented masks; others can only output box masks.
- mask_merge: Select which masks to merge. `all` merges all mask outputs; a number selects that many masks, sorted by recognition confidence, to merge into the output.
Outputs:
- mask: The output mask.
- yolo_plot_image: Preview of yolo recognition results.
- yolo_masks: For all masks identified by yolo, each individual mask is output as a mask.
MediapipeFacialSegment
Use the Mediapipe model to detect facial features and segment the left and right eyebrows, eyes, lips, and teeth. *Download the model files from BaiduNetdisk to the `ComfyUI/models/mediapipe` folder.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c28863246.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c31de3247.png)
- left_eye: Recognition switch of left eye.
- left_eyebrow: Recognition switch of left eyebrow.
- right_eye: Recognition switch of right eye.
- right_eyebrow: Recognition switch of right eyebrow.
- lips: Recognition switch of lips.
- tooth: Recognition switch of teeth.
MaskByColor
Generate a mask based on the selected color. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866c39863248.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c44723249.png)
- image: Input image.
- mask: Optional input. If a mask is provided, only the colors inside the mask are included in the range.
- color: Color selector. Click the color block to pick a color; you can use the eyedropper on the color picker panel to sample colors from the screen. Note: when using the eyedropper, maximize the browser window.
- color_in_HEX 4 : Enter a color value. If this input is provided, it takes priority and the color selected by `color` is ignored.
- threshold: Mask range threshold; the larger the value, the larger the mask range.
- fix_gap: Repair the gaps in the mask. If there are obvious gaps in the mask, increase this value appropriately.
- fix_threshold: The threshold for repairing masks.
- invert_mask: Whether to invert the mask.
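Conceptually, a color-based mask selects every pixel within some distance of the target color. A minimal numpy sketch of that idea (the per-channel distance rule is an illustrative assumption, not the node's actual code):

```python
import numpy as np

def mask_by_color(image: np.ndarray, color, threshold: int) -> np.ndarray:
    # image: HxWx3 uint8 RGB. A pixel is selected when every channel is
    # within `threshold` of the target color; selected pixels become 255.
    diff = np.abs(image.astype(np.int16) - np.array(color, dtype=np.int16))
    return np.where(diff.max(axis=-1) <= threshold, 255, 0).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (250, 60, 130)                       # near the target color
mask = mask_by_color(img, color=(255, 61, 134), threshold=8)
```

Raising the threshold admits pixels further from the target color, which matches the node's "larger value, larger mask range" behavior.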
ImageToMask
Convert the image to a mask. Supports converting any channel of LAB, RGBA, YUV and HSV modes into a mask, with color scale adjustment. Supports an optional mask input to obtain a mask containing only the valid parts. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866c4c513250.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c56483251.png)
- image: Input image.
- mask: Optional input. If a mask is provided, only the colors inside the mask are included in the range.
- channel: Channel selection. You can choose any channel of LAB, RGBA, YUV, or HSV modes.
- black_point * : Black dot value for the mask. The value range is 0-255, with a default value of 0.
- white_point * : White dot value for the mask. The value range is 0-255, with a default value of 255.
- gray_point: Gray dot values for the mask. The value range is 0.01-9.99, with a default of 1.
- invert_output_mask: Whether to invert the output mask.
* If the black_point or output_black_point value is greater than white_point or output_white_point, the two values are swapped, with the larger value used as white_point and the smaller value used as black_point.
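The channel-to-mask conversion with black/white/gray points can be sketched in a few lines of numpy. This toy version handles only RGB channels and treats gray_point as a midtone gamma (both simplifying assumptions; the node also supports LAB, YUV and HSV):

```python
import numpy as np

def channel_to_mask(image: np.ndarray, channel: str,
                    black_point: int = 0, white_point: int = 255,
                    gray_point: float = 1.0) -> np.ndarray:
    # image: HxWx3 uint8 RGB. Extract one channel as a float mask in
    # 0..1, stretch it between black_point and white_point, then apply
    # gray_point as a midtone gamma.
    idx = {'R': 0, 'G': 1, 'B': 2}[channel]
    m = image[..., idx].astype(np.float32)
    m = np.clip((m - black_point) / max(white_point - black_point, 1e-6), 0.0, 1.0)
    return m ** (1.0 / gray_point)

img = np.zeros((1, 1, 3), dtype=np.uint8)
img[0, 0] = (128, 0, 255)
mask = channel_to_mask(img, 'R')
```

A gray_point above 1 lifts the midtones of the mask; below 1 darkens them, leaving the black and white endpoints fixed.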
Shadow & Highlight Mask
Generate masks for the dark and bright parts of the image. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866c5d133252.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c6de33253.png)
- image: The input image.
- mask: Optional input. If provided, only the colors within the mask range are adjusted.
- shadow_level_offset: Offset of values in the dark area; larger values pull more of the brighter areas into the dark range.
- shadow_range: The transition range of the dark area.
- highlight_level_offset: Offset of values in the highlight area; larger values pull more of the darker areas into the highlight range.
- highlight_range: The transition range of the highlight area.
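The shadow and highlight masks are essentially soft thresholds on luminance. A minimal numpy sketch of the idea (the cutoff values and the linear falloff are illustrative assumptions, not the node's actual formula):

```python
import numpy as np

def shadow_highlight_masks(luma, shadow_cut=0.35, shadow_range=0.1,
                           highlight_cut=0.65, highlight_range=0.1):
    # luma: HxW float image in 0..1. Each mask is 1 inside its region
    # and fades linearly to 0 across the given transition range.
    shadow = np.clip((shadow_cut + shadow_range - luma) / max(shadow_range, 1e-6), 0.0, 1.0)
    highlight = np.clip((luma - (highlight_cut - highlight_range)) / max(highlight_range, 1e-6), 0.0, 1.0)
    return shadow, highlight

luma = np.array([[0.1, 0.5, 0.9]])
shadow, highlight = shadow_highlight_masks(luma)
```

Shifting the cutoffs corresponds to the level_offset options, and widening the ranges softens the transition between masked and unmasked areas.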
Shadow Highlight Mask V2
A replica of the Shadow & Highlight Mask
node, with the "&" character removed from the node name to avoid ComfyUI workflow parsing errors.
PixelSpread
Pixel expansion preprocessing on the masked edge of an image can effectively improve the edges of image compositing. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866c76023254.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c87373255.png)
- invert_mask: Whether to invert the mask.
- mask_grow: Mask expansion amplitude.
MaskByDifferent
Calculate the differences between two images and output them as a mask. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866c8f233256.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866c9c7c3257.png)
- gain: Gain of the difference calculation. Higher values make small differences more visible.
- fix_gap: Repair internal gaps in the mask. Higher values repair larger gaps.
- fix_threshold: The threshold of fix_gap.
- main_subject_detect: Set to true to enable main subject detection, ignoring differences outside the subject.
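The core of a difference mask is just the per-pixel distance between the two images, amplified by the gain. A minimal numpy sketch of that idea (an illustrative assumption, not the node's actual code):

```python
import numpy as np

def mask_by_different(image_a: np.ndarray, image_b: np.ndarray,
                      gain: float = 1.0) -> np.ndarray:
    # Both images are HxWx3 floats in 0..1. The mask is the mean
    # per-channel absolute difference, amplified by `gain`.
    diff = np.abs(image_a - image_b).mean(axis=-1)
    return np.clip(diff * gain, 0.0, 1.0)

a = np.zeros((2, 2, 3))
b = a.copy()
b[0, 0] = 0.5          # a small change in one pixel
mask = mask_by_different(a, b, gain=2.0)
```

Because the gain multiplies the raw difference before clipping, higher gains make faint differences saturate toward white, matching the node's description.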
MaskGrow
Grow and shrink the mask edges and blur the mask. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866ca8363258.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866cb3173259.png)
- invert_mask: Whether to invert the mask.
- grow: Positive values expand outward, while negative values contract inward.
- blur: Blur the edge.
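Conceptually, grow/shrink can be modeled as repeated 3x3 max/min filtering (dilation and erosion). A minimal numpy sketch of the grow part (an illustrative assumption, not the node's actual implementation; the blur step is omitted):

```python
import numpy as np

def _filter3(mask, fn):
    # 3x3 max or min filter built from the 9 shifted views of the mask.
    p = np.pad(mask, 1, mode='edge')
    h, w = mask.shape
    return fn(np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)]), axis=0)

def mask_grow(mask: np.ndarray, grow: int) -> np.ndarray:
    # Positive grow expands the mask outward (dilation), negative
    # values contract it inward (erosion).
    fn = np.max if grow > 0 else np.min
    for _ in range(abs(grow)):
        mask = _filter3(mask, fn)
    return mask

m = np.zeros((5, 5), dtype=np.float32)
m[2, 2] = 1.0
grown = mask_grow(m, 1)   # the single pixel becomes a 3x3 block
```

Each iteration moves the mask edge by about one pixel, so the grow value maps directly to how far the edge shifts.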
MaskEdgeShrink
Smooth transition and shrink the mask edges while preserving edge details. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866cbcf93260.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866ccae63261.png)
- invert_mask: Whether to invert the mask.
- shrink_level: Shrink the smoothness level.
- soft: Smooth amplitude.
- edge_shrink: Edge shrinkage amplitude.
- edge_reserve: Preserve the amplitude of edge details, 100 represents complete preservation, and 0 represents no preservation at all.
Comparison of MaskGrow and MaskEdgeShrink: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866cd3283262.png)
MaskMotionBlur
Create motion blur on the mask. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866cdff83263.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866ceaf83264.png)
- invert_mask: Whether to invert the mask.
- blur: The size of blur.
- angle: The angle of blur.
MaskGradient
Create a gradient for the mask from one side. Please note the difference between this node and the CreateGradientMask node. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866cf3d43265.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d06473266.png)
- invert_mask: Whether to invert the mask.
- gradient_side: Which edge the gradient is generated from. There are four directions: top, bottom, left and right.
- gradient_scale: Gradient distance. The default value of 100 means one side of the gradient is fully transparent and the other side fully opaque; the smaller the value, the shorter the distance from transparent to opaque.
- gradient_offset: Gradient position offset.
- opacity: Opacity of the gradient.
CreateGradientMask
Create a gradient mask. Please note the difference between this node and the MaskGradient node. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d0f3d3267.png)
![image](https://images.downcodes.com/uploads/20250202/img_679f5866d21473268.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866d2e7a3269.png)
- size_as * : The input image or mask here will generate the output image and mask according to their size. this input takes priority over the width and height below.
- width: Width of the image. If there is a size_as input, this setting will be ignored.
- height: Height of the image. If there is a size_as input, this setting will be ignored.
- gradient_side: Which edge the gradient is generated from. There are five directions: top, bottom, left, right and center.
- gradient_scale: Gradient distance. The default value of 100 means one side of the gradient is fully transparent and the other side fully opaque; the smaller the value, the shorter the distance from transparent to opaque.
- gradient_offset: Gradient position offset. When gradient_side is center, the size of the gradient area is adjusted here; positive values shrink it and negative values enlarge it.
- opacity: Opacity of the gradient.
* Limited to image and mask inputs only; forcing other input types will result in node errors.
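A linear gradient mask is just a clipped ramp tiled across the image. A rough numpy sketch under simple assumptions (the orientation convention, the scale-to-span mapping, and the omission of the center mode are all illustrative choices, not the node's actual behavior):

```python
import numpy as np

def gradient_mask(width: int, height: int, side: str = 'top',
                  scale: float = 100.0, offset: int = 0) -> np.ndarray:
    # Linear ramp starting at the named side: 0 at that edge, rising
    # toward 1. scale=100 spreads the gradient over the full dimension;
    # smaller values shorten it. offset shifts where the ramp begins.
    length = height if side in ('top', 'bottom') else width
    span = max(length * scale / 100.0, 1.0)
    ramp = np.clip((np.arange(length) - offset) / span, 0.0, 1.0)
    if side in ('bottom', 'right'):
        ramp = ramp[::-1]
    if side in ('top', 'bottom'):
        return np.tile(ramp[:, None], (1, width))
    return np.tile(ramp[None, :], (height, 1))

g = gradient_mask(4, 4, side='top')
```

Shrinking `scale` shortens the span of the ramp, which is the "shorter distance from transparent to opaque" described above.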
MaskStroke
Generate mask contour strokes. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d380f3270.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866d42b13271.png)
- invert_mask: Whether to invert the mask.
- stroke_grow: Stroke expansion/contraction amplitude; positive values expand and negative values contract.
- stroke_width: Stroke width.
- blur: Blur of the stroke.
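A contour stroke can be computed as the band between a dilated and an eroded copy of the mask. A minimal numpy sketch of that idea (an illustrative assumption, not the node's actual code; blur is omitted):

```python
import numpy as np

def _filter3(mask, fn):
    # 3x3 max or min filter built from the 9 shifted views of the mask.
    p = np.pad(mask, 1, mode='edge')
    h, w = mask.shape
    return fn(np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)]), axis=0)

def mask_stroke(mask: np.ndarray, stroke_width: int = 1) -> np.ndarray:
    # Dilate and erode the mask by stroke_width steps, then keep the
    # band between the two, which traces the contour.
    grown, shrunk = mask.copy(), mask.copy()
    for _ in range(stroke_width):
        grown = _filter3(grown, np.max)
        shrunk = _filter3(shrunk, np.min)
    return grown - shrunk

m = np.zeros((7, 7), dtype=np.float32)
m[2:5, 2:5] = 1.0          # a 3x3 solid square
stroke = mask_stroke(m, 1)  # a one-pixel ring around its outline
```

Growing only the dilated copy (or only the eroded one) biases the stroke outward or inward, which is what stroke_grow controls.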
MaskGrain
Generate noise for the mask. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d4bf03272.png)
Node options:
![image](https://images.downcodes.com/uploads/20250202/img_679f5866d54cb3273.png)
- grain: Noise intensity.
- invert_mask: Whether to invert the mask.
MaskPreview
Preview the input mask. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d5af23274.png)
MaskInvert
Invert the mask. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d65ad3275.png)
LayerFilter
![image](https://images.downcodes.com/uploads/20250202/img_679f5866d6bf43276.png)
Sharp & Soft
Enhance or smooth the detail of the image. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d76be3277.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d87323278.png)
- enhance: Provides 4 presets: very sharp, sharp, soft and very soft. Choosing None applies no processing.
SkinBeauty
Make the skin look smoother. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d90b43279.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866d9cd93280.png)
- smooth: Skin smoothness.
- threshold: Smoothing range; the smaller the value, the larger the range.
- opacity: Opacity of the smoothing.
WaterColor
Watercolor painting effect. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866da53e3281.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866daffb3282.png)
- line_density: The black line density.
- opacity: The opacity of watercolor effects.
SoftLight
Soft light effect; bright highlights on the screen appear blurry. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866db7d13283.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866dc4333284.png)
- soft: Size of the soft light.
- threshold: Soft light range. The light appears from the brightest part of the picture; lower values give a larger range, higher values a smaller range.
- opacity: Opacity of the soft light.
ChannelShake
Channel misalignment, similar to the effect of the TikTok logo. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866dccd33285.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866dd9653286.png)
- distance: Distance of channel separation.
- angle: Angle of channel separation.
- mode: Channel shift arrangement order.
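Channel misalignment amounts to shifting the color channels in opposite directions along a chosen angle. A minimal numpy sketch of the idea (the choice of which channels move, and the use of wrap-around `np.roll`, are illustrative assumptions, not the node's actual behavior):

```python
import numpy as np

def channel_shake(image: np.ndarray, distance: int = 2, angle: float = 0.0) -> np.ndarray:
    # Shift the R channel by +distance and the B channel by -distance
    # along the given angle (degrees), leaving G in place.
    rad = np.deg2rad(angle)
    dx = int(round(np.cos(rad) * distance))
    dy = int(round(np.sin(rad) * distance))
    out = image.copy()
    out[..., 0] = np.roll(image[..., 0], (dy, dx), axis=(0, 1))
    out[..., 2] = np.roll(image[..., 2], (-dy, -dx), axis=(0, 1))
    return out

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[0, 0] = (255, 255, 255)                 # a single white pixel
shaken = channel_shake(img, distance=1, angle=0.0)
```

The white pixel splits into red, green and blue fringes, which is the characteristic look of the effect.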
HDR Effects
Enhances the dynamic range and visual appeal of input images. This node is a reorganization and encapsulation of HDR Effects (SuperBeasts.AI); thanks to the original author. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866de2903287.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866df1433288.png)
- hdr_intensity: Range 0.0 to 5.0. Controls the overall intensity of the HDR effect; higher values produce a more pronounced effect.
- shadow_intensity: Range 0.0 to 1.0. Adjusts the intensity of shadows in the image; higher values darken the shadows and increase contrast.
- highlight_intensity: Range 0.0 to 1.0. Adjusts the intensity of highlights in the image; higher values brighten the highlights and increase contrast.
- gamma_intensity: Range 0.0 to 1.0. Controls the gamma correction applied to the image; higher values increase overall brightness and contrast.
- contrast: Range 0.0 to 1.0. Enhances the contrast of the image; higher values produce more pronounced contrast.
- enhance_color: Range 0.0 to 1.0. Enhances the color saturation of the image; higher values produce more vibrant colors.
Film
Simulate film grain, dark edges, and blurred edges; supports an input depth map to simulate defocus.
This node is a reorganization and encapsulation of digitaljohn/comfyui-propost; thanks to the original author. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866df99c3289.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866e14d93290.png)
- image: The input image.
- depth_map: Input depth map to simulate the defocus effect. Optional; if absent, radial blur at the edges of the image is simulated instead.
- center_x: Horizontal position of the center point of the dark edge and radial blur, where 0 is the leftmost side, 1 the rightmost, and 0.5 the center.
- center_y: Vertical position of the center point of the dark edge and radial blur, where 0 is the top, 1 the bottom, and 0.5 the center.
- saturation: Color saturation, 1 is the original value.
- grain_power: Grain intensity; the larger the value, the more pronounced the noise.
- grain_scale: Grain size.
- grain_sat: The color saturation of grain. 0 represents mono noise, and the larger the value, the more prominent the color.
- grain_shadows: Grain intensity of dark part.
- grain_highs: Grain intensity of light part.
- blur_strength: The strength of the blur; the larger the value, the blurrier the image.
- blur_focus_spread: Focus diffusion range; the larger the value, the larger the area in focus.
- focal_depth: Simulate the focal distance of the defocus. 0 means the focus is farthest, 1 means closest. This setting is only valid when a depth_map is input.
FilmV2
The upgraded version of the Film node adds the fastgrain method, generating noise 10 times faster. The code for fastgrain comes from the github.com/spacepxl/ComfyUI-Image-Filters BetterFilmGrain node; thanks to the original author. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866e1ea03291.png)
LightLeak
Simulate the light leakage effect of film. Please download the model file `light_leak.pkl` from Baidu Netdisk or [Google Drive](https://drive.google.com/file/d/1DcH2Zkyj7W3OiAeeGpJk1eaZpdJwdCL-/view?usp=sharing) and copy it to the `ComfyUI/models/layerstyle` folder. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866e291b3292.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866e36cf3293.png)
- light: 32 types of light spots are provided. random is a random selection.
- corner: There are four options for the corner where the light appears: top left, top right, bottom left, and bottom right.
- hue: The hue of the light.
- saturation: The color saturation of the light.
- opacity: The opacity of the light.
ColorMap
Pseudo-color heat map effect. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866e67683294.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866e853c3295.png)
- color_map: Effect type. There are 22 types of effects in total, as shown in the figure above.
- opacity: The opacity of the color map effect.
MotionBlur
Apply motion blur to the image. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866e90483296.png)
Node options:
- angle: The angle of the blur.
- blur: The size of the blur.
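Motion blur is typically implemented by convolving the image with a line-shaped kernel oriented at the blur angle. A minimal numpy sketch of just the kernel construction (an illustrative assumption, not the node's actual code):

```python
import numpy as np

def motion_blur_kernel(size: int = 5, angle: float = 0.0) -> np.ndarray:
    # Draw a line of the given angle (degrees) through the kernel
    # center; convolving an image with this kernel smears pixels along
    # that direction, producing motion blur.
    k = np.zeros((size, size), dtype=np.float32)
    c = size // 2
    rad = np.deg2rad(angle)
    for t in np.linspace(-c, c, size * 4):
        x = int(round(c + np.cos(rad) * t))
        y = int(round(c + np.sin(rad) * t))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] = 1.0
    return k / k.sum()

k = motion_blur_kernel(5, angle=0.0)   # horizontal 5-tap line kernel
```

A larger `size` corresponds to a larger blur value, and `angle` rotates the line, matching the two options above.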
GaussianBlur
Apply Gaussian blur to the image. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866e9a443297.png)
Node options:
- blur: The size of the blur; integer, range 1-999.
GaussianBlurV2
Gaussian blur with the parameter precision changed to floating point, with a precision of 0.01.
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866ea51b3298.png)
- blur: The size of the blur; float, range 0-1000.
AddGrain
Add noise to the picture. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866eaee83299.png)
Node options: ![image](https://images.downcodes.com/uploads/20250202/img_679f5866ebda93300.png)
- grain_power: Noise intensity.
- grain_scale: Noise size.
- grain_sat: Color saturation of noise.
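The grain_sat option can be pictured as blending between a single monochrome noise field (applied equally to all channels) and independent per-channel color noise. A rough numpy sketch under that assumption (not the node's actual implementation; grain_scale is omitted):

```python
import numpy as np

def add_grain(image: np.ndarray, grain_power: float = 0.1,
              grain_sat: float = 1.0, seed: int = 0) -> np.ndarray:
    # image: HxWx3 floats in 0..1. grain_sat blends between monochrome
    # noise (0, same value in every channel) and per-channel color
    # noise (1).
    rng = np.random.default_rng(seed)
    color = rng.normal(0.0, grain_power, image.shape)
    mono = rng.normal(0.0, grain_power, image.shape[:2])[..., None]
    noise = grain_sat * color + (1.0 - grain_sat) * mono
    return np.clip(image + noise, 0.0, 1.0)

img = np.full((8, 8, 3), 0.5)
grained = add_grain(img, grain_power=0.1, grain_sat=0.0, seed=1)
```

With grain_sat at 0 every channel receives the same offset, so the noise stays gray; raising it toward 1 lets the channels diverge into colored speckle.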
Notes
1 The layer_image, layer_mask and background_image (if input) must all be the same size.
2 The mask is not a mandatory input; by default the alpha channel of the image is used. If the input image has no alpha channel, one covering the entire image is created automatically. If a mask is also input, the mask overrides the alpha channel.
3 The blend modes include normal, multiply, screen, add, subtract, difference, color_burn, color_dodge, linear_burn, linear_dodge, overlay, soft_light, hard_light, vivid_light, pin_light, linear_light and hard_mix, 19 blend modes in total.
![image](https://images.downcodes.com/uploads/20250202/img_679f5866ec5993301.png)
*Preview of the blend modes
3 The BlendModeV2 modes include normal, dissolve, darken, multiply, color burn, linear burn, darker color, lighten, screen, color dodge, linear dodge(add), lighter color, dodge, overlay, soft light, hard light, vivid light, linear light, pin light, hard mix, difference, exclusion, subtract, divide, hue, saturation, color, luminosity, grain extract and grain merge, 30 blend modes in total.
Part of the BlendMode V2 code comes from Virtuoso Nodes for ComfyUI; thanks to the original author. ![image](https://images.downcodes.com/uploads/20250202/img_679f5866ee1293302.png)
*Preview of the Blend Mode V2
4 RGB colors are described in hexadecimal RGB format, like '#FA3D86'.
5 The layer_image and layer_mask must be of the same size.
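The hexadecimal format from note 4 can be parsed in a couple of lines (a trivial sketch; the function name is illustrative):

```python
def hex_to_rgb(value: str) -> tuple:
    # Parse a '#RRGGBB' hexadecimal color string into an (R, G, B) tuple.
    s = value.lstrip('#')
    return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))

rgb = hex_to_rgb('#FA3D86')
```

Each pair of hex digits maps to one 0-255 channel value, so '#FA3D86' becomes (250, 61, 134).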
Stars
Statement
The LayerStyle nodes follow the MIT license, and some of their function code comes from other open source projects. Thanks to the original authors. If used for commercial purposes, please refer to the original project licenses for commercial-use authorization.