7/23/2024

Download all files in a specific folder from a Hugging Face model

Refer to the code below:


.

# Download all files from the IP-Adapter/sdxl_models folder
from huggingface_hub import snapshot_download

# Download the sdxl_models folder and its contents
snapshot_download(
    repo_id="h94/IP-Adapter",
    repo_type="model",
    local_dir="./IP-Adapter_sdxl_models",
    allow_patterns=["sdxl_models/*"]
)

..


It downloads all files under the sdxl_models folder.
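
As a quick check (a minimal sketch, assuming the download above has finished): the repository's folder structure is preserved under local_dir, so the files can be listed like this.

import os

# List what was actually downloaded under the local_dir used above
for root, _, files in os.walk("./IP-Adapter_sdxl_models/sdxl_models"):
    for name in files:
        print(os.path.join(root, name))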


Thank you.

7/19/2024

Download specific files from a Hugging Face model

Refer to the code below:

.

import os
import shutil
from huggingface_hub import hf_hub_download

# Repository name
repo_id = "h94/IP-Adapter"
# Directory to save the downloaded files
local_directory = "./models/image_encoder"

# Ensure the local directory exists
os.makedirs(local_directory, exist_ok=True)

# List of files to download
files_to_download = [
    "models/image_encoder/config.json",
    "models/image_encoder/model.safetensors",
    "models/image_encoder/pytorch_model.bin"
]

# Download each file and move it to the desired directory
for file in files_to_download:
    file_path = hf_hub_download(repo_id=repo_id, filename=file)
    # Construct the destination path
    dest_path = os.path.join(local_directory, os.path.basename(file))
    # Move the file to the destination path
    shutil.move(file_path, dest_path)
    print(f"Downloaded and moved to {dest_path}")

..


Thank you.


Another option is:

file_path = hf_hub_download(repo_id=repo_id, filename=file, cache_dir=local_directory, force_download=True)
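
Note that cache_dir keeps the Hub cache layout (models--h94--IP-Adapter/snapshots/...) inside that directory rather than placing the files there directly. A minimal sketch of another route, assuming a recent huggingface_hub version that supports the local_dir parameter of hf_hub_download:

from huggingface_hub import hf_hub_download

# Download straight into a target directory, keeping the repo-relative path
file_path = hf_hub_download(
    repo_id="h94/IP-Adapter",
    filename="models/image_encoder/config.json",
    local_dir="./models",
)
print(file_path)  # e.g. ./models/models/image_encoder/config.json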


Draw images as a grid in a notebook

Simply check the code below and the example result.

.


from PIL import Image

def image_grid(imgs, rows, cols):
    assert len(imgs) == rows * cols
    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid

# read image prompt
image = Image.open("assets/images/statue.png")
depth_map = Image.open("assets/structure_controls/depth.png")
image_grid([image.resize((256, 256)), depth_map.resize((256, 256))], 1, 2)

..
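
The same helper works for larger layouts. A quick sketch with hypothetical file paths, laying four images out as a 2x2 grid:

# Hypothetical example: four placeholder images arranged in a 2x2 grid
paths = ["img_0.png", "img_1.png", "img_2.png", "img_3.png"]
imgs = [Image.open(p).resize((256, 256)) for p in paths]
image_grid(imgs, rows=2, cols=2)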


Thank you.

7/08/2024

Unknown parameter in retrievalConfiguration.vectorSearchConfiguration: "overrideSearchType", must be one of: numberOfResults

Error in AWS Bedrock like:

Unknown parameter in retrievalConfiguration.vectorSearchConfiguration: "overrideSearchType", must be one of: numberOfResults


Solution

Update the boto3 SDK to the latest version.

The parameters changed on 2024-03-27.

Refer to: https://awsapichanges.com/archive/changes/cd42c1-bedrock-agent-runtime.html
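
After upgrading (pip install -U boto3), the parameter should be accepted. A minimal sketch, assuming an existing knowledge base ID and region:

import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # assumed region

response = client.retrieve(
    knowledgeBaseId="YOUR_KB_ID",              # hypothetical knowledge base ID
    retrievalQuery={"text": "your question"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            "overrideSearchType": "HYBRID",    # accepted after the 2024-03-27 API update
        }
    },
)
print(response["retrievalResults"])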


Thank you!

6/07/2024

Stop the Windows Update process by force

Open PowerShell as administrator

and run this in the terminal:


> net stop wuauserv      # stop the Windows Update service
> net stop bits          # stop BITS (Background Intelligent Transfer Service)
> Remove-Item -Path "C:\Windows\SoftwareDistribution\Download\*" -Recurse -Force
> net start wuauserv     # restart the Windows Update service
> net start bits         # restart BITS


You can do whatever you need after stopping the BITS and Windows Update services.

Thank you.




6/04/2024

Embedding an invisible watermark in an image

First, install OpenCV and NumPy:

pip install opencv-python numpy


Code for invisible watermark embedding (least-significant-bit based):

import cv2
import numpy as np

def embed_watermark(image_path, watermark, output_path):
    # Load the image
    img = cv2.imread(image_path)
    # Ensure the image has 3 channels (OpenCV loads color images as BGR)
    if img is None or img.shape[2] != 3:
        print("Image needs to be a 3-channel color image")
        return
    # Prepare the watermark: convert the text to bytes and repeat it
    # until it matches the number of bytes in the image
    wm_bytes = watermark.encode("utf-8")
    wm_bytes = (wm_bytes * (img.size // len(wm_bytes) + 1))[:img.size]
    wm = np.frombuffer(wm_bytes, dtype=np.uint8).reshape(img.shape)
    # Embed the watermark by replacing the least significant bit of each byte
    img_encoded = (img & 0xFE) | (wm & 1)
    # Save the watermarked image as a lossless format such as PNG;
    # JPEG compression would destroy the least significant bits
    cv2.imwrite(output_path, img_encoded)
    print("Watermarked image saved to", output_path)

# Usage (the output must be lossless, e.g. PNG)
embed_watermark('path_to_your_image.jpg', 'your_watermark_text', 'watermarked_image.png')


Retrieval code:

def extract_watermark(watermarked_image_path, original_image_path, output_path):
    # Load the watermarked and the original image
    img_encoded = cv2.imread(watermarked_image_path)
    img_original = cv2.imread(original_image_path)

    # Extract the watermark by comparing the least significant bits
    watermark = (img_encoded & 1) ^ (img_original & 1)
    watermark = (watermark * 255).astype(np.uint8)  # scale to 0-255 for visibility

    # Save or display the extracted watermark
    cv2.imwrite(output_path, watermark)
    print("Extracted watermark saved to", output_path)

# Usage (the watermarked file is the lossless one saved above)
extract_watermark('watermarked_image.png', 'path_to_your_image.jpg', 'extracted_watermark.png')
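
A quick sanity check (a minimal sketch, assuming the paths above): the extracted image marks the positions where the embedding flipped a least significant bit, so it should not be completely black.

import cv2

wm = cv2.imread('extracted_watermark.png')
print("non-zero LSB positions:", int((wm > 0).sum()), "of", wm.size)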

5/22/2024

Error: "No data found. 0 train images with repeating" in LoRA Training with kohya-ss/sd-scripts.

This is my command for training a LoRA with kohya-ss/sd-scripts:


python -m accelerate.commands.launch --num_cpu_threads_per_process=8 \
  ...
  --enable_bucket \
  ...
  --train_data_dir="/images/" \
  --output_dir="models/loras" \
  --logging_dir="./logs" \
  --log_prefix=silostyle \
  --resolution=512,512 \
  --network_module=networks.lora \
  ...





And I hit this error:

> No data found.
> ...
> 0 train images with repeating
> ...



One thing to check is the folder name.

Your images and caption .txt files must sit in a subfolder, and the subfolder name must follow the "number_title" pattern.

Correct: /images/1_files, /images/2_files

Wrong: /images/files, /images/0_files

It seems the leading number is the repeat count kohya uses (how many times each image is used per epoch), so a missing or zero prefix ends up with 0 train images.
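
A small sketch of fixing a flat layout (hypothetical repeat count 10 and folder name "mystyle"; adjust both to your dataset):

import os
import shutil

src = "/images"                        # flat folder holding images + caption .txt files
dst = os.path.join(src, "10_mystyle")  # hypothetical: 10 repeats, trigger name "mystyle"
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    if name.lower().endswith((".png", ".jpg", ".jpeg", ".webp", ".txt")):
        shutil.move(os.path.join(src, name), os.path.join(dst, name))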

Thank you.