12/11/2024

Pedestrian and human attribute datasets.

 

For Pedestrian Detection:

  1. CityPersons - High-quality pedestrian detection dataset with diverse urban scenes from multiple European cities
  2. Caltech Pedestrian Dataset - Contains approximately 250,000 frames with 350,000 bounding boxes and 2,300 unique pedestrians
  3. INRIA Person Dataset - Includes full-body pedestrians in various poses and backgrounds
  4. MOT (Multiple Object Tracking) Dataset - Contains pedestrians in crowded scenes

For Human Attribute Analysis:

  1. RAP (Richly Annotated Pedestrian) Dataset - Over 40 attributes including clothing types, colors, and accessories
  2. PETA Dataset - Large-scale surveillance person attribute dataset with 19,000 images
  3. Market-1501 Attribute Dataset - Contains 27 attributes for clothing and personal items
  4. DeepFashion Dataset - Focuses on clothing items with detailed annotations

Some considerations when choosing a dataset:

  • Make sure to check the license terms for each dataset
  • Consider the image quality and diversity needed for your specific use case
  • Check whether the annotations match your requirements (bounding boxes, attributes, etc.); a quick annotation check is sketched after this list
  • Verify that the dataset size is sufficient for your model training needs
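
For the annotation check in particular, a small summary script can save time before committing to a dataset. The sketch below assumes a COCO-style JSON annotation file; the path and helper name are placeholders, since each dataset above ships its own annotation format.

import json
from collections import Counter

# Hypothetical path: point this at your dataset's annotation file.
ANNOTATION_FILE = "annotations/instances_train.json"

def summarize_coco_annotations(path):
    """Print image count, box count, and per-category box distribution
    for a COCO-style annotation file."""
    with open(path, "r") as f:
        data = json.load(f)

    images = data.get("images", [])
    annotations = data.get("annotations", [])
    categories = {c["id"]: c["name"] for c in data.get("categories", [])}

    # Count boxes per category so class imbalance shows up early.
    per_category = Counter(
        categories.get(a["category_id"], "unknown") for a in annotations
    )

    print(f"images: {len(images)}")
    print(f"bounding boxes: {len(annotations)}")
    for name, count in per_category.most_common():
        print(f"  {name}: {count}")

if __name__ == "__main__":
    summarize_coco_annotations(ANNOTATION_FILE)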

11/19/2024

Print detailed model structure

Refer to the code below.

..

def print_model_structure(model, indent=0):
    for name, child in model.named_children():
        print(' ' * indent + f'└─ {name}: {child.__class__.__name__}')
        if list(child.children()):
            print_model_structure(child, indent + 2)

print_model_structure(composer_model)

..
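
A small variation of the same idea, sketched below, also prints the parameter count of each child module. It works on any torch.nn.Module, including the composer_model wrapper above; the function name is just illustrative.

import torch.nn as nn

def print_model_structure_with_params(model: nn.Module, indent: int = 0):
    """Like print_model_structure, but also shows how many parameters each child holds."""
    for name, child in model.named_children():
        n_params = sum(p.numel() for p in child.parameters())
        print(' ' * indent + f'└─ {name}: {child.__class__.__name__} ({n_params:,} params)')
        if list(child.children()):
            print_model_structure_with_params(child, indent + 2)

# Usage (assuming composer_model is the wrapped model from the snippet above):
# print_model_structure_with_params(composer_model)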


This is the Llama 3.1 8B model structure (indented by module depth):

└─ model: LlamaForCausalLM
  └─ model: LlamaModel
    └─ embed_tokens: Embedding
    └─ layers: ModuleList
      └─ 0: LlamaDecoderLayer
        └─ self_attn: LlamaFlashAttention2
          └─ q_proj: Linear
          └─ k_proj: Linear
          └─ v_proj: Linear
          └─ o_proj: Linear
          └─ rotary_emb: LlamaRotaryEmbedding
        └─ mlp: LlamaMLP
          └─ gate_proj: Linear
          └─ up_proj: Linear
          └─ down_proj: Linear
          └─ act_fn: SiLU
        └─ input_layernorm: LlamaRMSNorm
        └─ post_attention_layernorm: LlamaRMSNorm
      └─ 1 ... 31: LlamaDecoderLayer (each repeats the structure of layer 0)
    └─ norm: LlamaRMSNorm
    └─ rotary_emb: LlamaRotaryEmbedding
  └─ lm_head: Linear

11/17/2024

Hook Llama 3.1 8B layers and print their dimensions

Refer to the code below.


.

def register_dimension_hooks(model, rank):
    if rank != 0:
        return
    print('\n------------------- Model Structure -------------------')
    print("Model type:", type(model))

    # Get the actual model through the wrapper layers
    if hasattr(model, 'model'):
        model = model.model
    if hasattr(model, 'model'):
        model = model.model
    print("Base model type:", type(model))

    def make_hook(name, rank):
        def hook(module, input, output):
            print(f"\n--------------- Hook: {name} ---------------")
            if hasattr(module, 'weight'):
                weight = module.weight
                print(f"GPU {rank} - {name}:")
                print(f"Input shape: {input[0].shape}")
                if hasattr(weight, '_local_tensor'):
                    local_weight = weight._local_tensor
                    print(f"Local weight shape: {local_weight.shape}")
                print(f"Global weight shape: {weight.shape}")
                if hasattr(weight, 'device_mesh'):
                    print(f"Device mesh: {weight.device_mesh}")
                    print(f"Placement: {weight.placements}")
            print(f"Output shape: {output.shape}")
            print("-" * 50)
        return hook

    # Register hooks for embedding layer
    if hasattr(model, 'embed_tokens'):
        print("Found embed_tokens")
        model.embed_tokens.register_forward_hook(make_hook('embed_tokens', rank))

    # Register hooks for all transformer layers
    if hasattr(model, 'layers'):
        for i, layer in enumerate(model.layers):
            # Attention blocks
            layer.self_attn.q_proj.register_forward_hook(
                make_hook(f'layer_{i}_q_proj', rank))
            layer.self_attn.k_proj.register_forward_hook(
                make_hook(f'layer_{i}_k_proj', rank))
            layer.self_attn.v_proj.register_forward_hook(
                make_hook(f'layer_{i}_v_proj', rank))
            layer.self_attn.o_proj.register_forward_hook(
                make_hook(f'layer_{i}_o_proj', rank))
            # MLP blocks
            layer.mlp.gate_proj.register_forward_hook(
                make_hook(f'layer_{i}_mlp_gate_proj', rank))
            layer.mlp.up_proj.register_forward_hook(
                make_hook(f'layer_{i}_mlp_up_proj', rank))
            layer.mlp.down_proj.register_forward_hook(
                make_hook(f'layer_{i}_mlp_down_proj', rank))
            # Layer norms
            layer.input_layernorm.register_forward_hook(
                make_hook(f'layer_{i}_input_layernorm', rank))
            layer.post_attention_layernorm.register_forward_hook(
                make_hook(f'layer_{i}_post_attention_layernorm', rank))

    # Register hook for final layer norm
    if hasattr(model, 'norm'):
        model.norm.register_forward_hook(make_hook('final_layernorm', rank))

    # Register hook for LM head
    if hasattr(model, 'lm_head'):
        print("Found lm_head")
        model.lm_head.register_forward_hook(make_hook('lm_head', rank))

    # Print model structure to debug
    print("\nModel attributes:", dir(model))

..
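
For reference, a single forward pass is enough to trigger each registered hook once. The sketch below assumes a single-process run (rank 0) and a Hugging Face transformers checkpoint; the checkpoint name is a placeholder, and the loading calls are standard transformers API rather than the Composer setup used above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any Llama-style causal LM you have access to will do.
MODEL_NAME = "meta-llama/Llama-3.1-8B"

# Assumes a GPU with enough memory for the checkpoint in bfloat16.
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# rank=0 because this sketch is single-process; under FSDP/TP, pass the real rank.
register_dimension_hooks(model, rank=0)

# One forward pass fires every registered hook once.
inputs = tokenizer("Hello, world!", return_tensors="pt").to("cuda")
with torch.no_grad():
    model(**inputs)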


Thank you.


11/03/2024

Automatic Number Plate Recognition (ANPR) SDK source code



 # install 

pip install marearts-anpr


# code

# pip install marearts-anpr
import cv2
from PIL import Image
from marearts_anpr import ma_anpr_detector
from marearts_anpr import ma_anpr_ocr
from marearts_anpr import marearts_anpr_from_pil
from marearts_anpr import marearts_anpr_from_image_file
from marearts_anpr import marearts_anpr_from_cv2

if __name__ == '__main__':
    #################################
    ## Initiate MareArts ANPR
    print("EU ANPR")
    user_name = "your_email"
    serial_key = "your_serial_key"
    detector_model_version = "middle"  # Options: refer to detector model table
    ocr_model_version = "eu"  # Options: refer to ocr model table

    # MareArts ANPR Detector Inference
    anpr_d = ma_anpr_detector(detector_model_version, user_name, serial_key, conf_thres=0.3, iou_thres=0.5)
    # MareArts ANPR OCR Inference
    anpr_r = ma_anpr_ocr(ocr_model_version, user_name, serial_key)
    #################################

    #################################
    # Routine Task 1 - Predict from File
    image_path = './sample_images/eu_test1.jpg'
    output = marearts_anpr_from_image_file(anpr_d, anpr_r, image_path)
    print(output)

    # Routine Task 2 - Predict from cv2
    img = cv2.imread(image_path)
    output = marearts_anpr_from_cv2(anpr_d, anpr_r, img)
    print(output)

    # Routine Task 3 - Predict from Pillow
    pil_img = Image.open(image_path)
    output = marearts_anpr_from_pil(anpr_d, anpr_r, pil_img)
    print(output)
    #################################

    #################################
    ## Initiate MareArts ANPR for Korea
    print("ANPR Korean")
    # user_name, serial_key are already defined
    # anpr_d is also already initiated before
    ocr_model_version = "kr"
    # MareArts ANPR OCR Inference
    anpr_r = ma_anpr_ocr(ocr_model_version, user_name, serial_key)
    #################################

    # Routine Task 1 - Predict from File
    image_path = './sample_images/kr_test2.jpg'
    output = marearts_anpr_from_image_file(anpr_d, anpr_r, image_path)
    print(output)

    # Routine Task 2 - Predict from cv2
    img = cv2.imread(image_path)
    output = marearts_anpr_from_cv2(anpr_d, anpr_r, img)
    print(output)

    # Routine Task 3 - Predict from Pillow
    pil_img = Image.open(image_path)
    output = marearts_anpr_from_pil(anpr_d, anpr_r, pil_img)
    print(output)
    #################################

..
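
To run the same detector/OCR pair over a whole folder, a plain loop around marearts_anpr_from_image_file (already imported above) is enough; the folder and glob pattern below are placeholders.

import glob

# Placeholder folder; point this at your own image directory.
for image_path in sorted(glob.glob('./sample_images/*.jpg')):
    # anpr_d and anpr_r are the detector/OCR objects initialized in the snippet above.
    result = marearts_anpr_from_image_file(anpr_d, anpr_r, image_path)
    print(image_path, result)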


# Ask about a license here: https://study.marearts.com/p/anpr-lpr-solution.html

# Live Test is here: https://live.marearts.com


10/30/2024

A brief explanation of "Audio → Spectrogram → Mel-spectrogram → MFCC"

Audio → Spectrogram → Mel-spectrogram → MFCC

As an analogy:

  • Spectrogram = raw photo
  • Mel-spectrogram = photo adjusted for human vision
  • MFCC = compressed, essential features extracted from that photo

In more detail:

  1. Spectrogram
    • Raw time-frequency representation
    • Shows energy at each frequency over time
    • Doesn't account for human perception
  2. Mel-spectrogram
    • Spectrogram mapped to the mel scale
    • Mimics human frequency perception
    • Still maintains all frequency band information
  3. MFCC
    • Derived from the mel-spectrogram
    • Additional step: DCT (Discrete Cosine Transform) is applied
    • Keeps only the lower coefficients (dimensionality reduction)
    • Decorrelates features
    .

1. Audio → Spectrogram
  • Start with the raw audio waveform
  • Apply pre-emphasis to boost higher frequencies
  • Frame the signal into short segments (typically 20-40 ms with overlap)
  • Apply a window function (usually Hamming) to reduce edge effects
  • Perform an FFT on each frame
  • Calculate the power spectrum (|FFT|²)
2. Spectrogram → Mel-spectrogram
  • Create mel filter banks (triangular overlapping windows)
  • Convert frequencies to the mel scale using the formula: mel = 2595 * log10(1 + f/700) (e.g., f = 1000 Hz maps to roughly 1000 mel)
  • Apply the mel filter banks to the power spectrum
  • Sum up the energy in each mel band
3. Mel-spectrogram → MFCC
  • Take the logarithm of the mel filter bank energies (to match human perception)
  • Apply the Discrete Cosine Transform (DCT)
  • Keep the first N coefficients (typically 13-39)
  • Optionally:
    • Calculate delta (velocity) features
    • Calculate delta-delta (acceleration) features
    • Apply cepstral mean normalization (CMN)

A librosa sketch of this full pipeline follows below.

    ..
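
As a concrete illustration of the three steps above, here is a minimal sketch using librosa; it is not part of the original notes, the audio path is a placeholder, and the frame/filter parameters are just common choices.

import librosa
import numpy as np

# Placeholder path; any audio file librosa can read will do.
y, sr = librosa.load("speech.wav", sr=16000)

# 1. Audio -> Spectrogram: frame, window, FFT, power spectrum
stft = librosa.stft(y, n_fft=512, hop_length=160, win_length=400, window="hamming")
power_spec = np.abs(stft) ** 2

# 2. Spectrogram -> Mel-spectrogram: apply triangular mel filter banks
mel_spec = librosa.feature.melspectrogram(S=power_spec, sr=sr, n_mels=40)

# 3. Mel-spectrogram -> MFCC: log of mel energies, then DCT, keep the first 13 coefficients
log_mel = librosa.power_to_db(mel_spec)
mfcc = librosa.feature.mfcc(S=log_mel, sr=sr, n_mfcc=13)

# Optional dynamic features (delta and delta-delta)
delta = librosa.feature.delta(mfcc)
delta2 = librosa.feature.delta(mfcc, order=2)

print(power_spec.shape, mel_spec.shape, mfcc.shape)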

10/26/2024

Download YouTube videos in the best quality

code..

import yt_dlp
import os
from typing import Optional


def format_size(bytes):
    """Convert bytes to human readable format"""
    for unit in ['B', 'KB', 'MB', 'GB']:
        if bytes < 1024:
            return f"{bytes:.2f} {unit}"
        bytes /= 1024
    return f"{bytes:.2f} TB"


def download_video(url: str, output_path: Optional[str] = None) -> str:
    """
    Download a YouTube video in the best quality using yt-dlp.
    Args:
        url (str): The URL of the YouTube video
        output_path (str, optional): Directory to save the video
    """
    try:
        if not output_path:
            output_path = os.getcwd()
        os.makedirs(output_path, exist_ok=True)

        # Configure yt-dlp options for best quality
        ydl_opts = {
            'format': 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best',  # Best video + audio quality
            'outtmpl': os.path.join(output_path, '%(title)s.%(ext)s'),
            'merge_output_format': 'mp4',  # Merge to MP4
            'progress_hooks': [lambda d: print(f"\rDownloading: {d['_percent_str']} of {d['_total_bytes_str']}", end="") if d['status'] == 'downloading' else None],
            'postprocessor_hooks': [lambda d: print("\nMerging video and audio...") if d['status'] == 'started' else None],
            'quiet': False,
            'no_warnings': False,
            # Additional options for best quality
            'format_sort': ['res:2160', 'res:1440', 'res:1080', 'res:720'],
            'video_multistreams': True,
            'audio_multistreams': True,
            'prefer_free_formats': True,
            'postprocessors': [{
                'key': 'FFmpegVideoConvertor',
                'preferedformat': 'mp4',
            }],
        }

        print(f"Fetching video information...")
        # Create yt-dlp object and download the video
        with yt_dlp.YoutubeDL(ydl_opts) as ydl:
            # Get video info first
            info = ydl.extract_info(url, download=False)
            video_title = info.get('title', 'video')
            duration = info.get('duration')
            formats = info.get('formats', [])

            # Find best quality format
            best_video = max(
                (f for f in formats if f.get('vcodec') != 'none'),
                key=lambda f: (
                    f.get('height', 0),
                    f.get('filesize', 0)
                ),
                default=None
            )

            # Print video details
            print(f"\nVideo details:")
            print(f"Title: {video_title}")
            print(f"Duration: {duration//60}:{duration%60:02d}")
            if best_video:
                print(f"Best quality available: {best_video.get('height', 'N/A')}p")
                if best_video.get('filesize'):
                    print(f"Approximate size: {format_size(best_video['filesize'])}")

            print("\nStarting download in best quality...")
            # Download the video
            ydl.download([url])

            # Get the output filename
            output_file = os.path.join(output_path, f"{video_title}.mp4")
            print(f"\nDownload completed successfully!")
            print(f"Saved to: {output_file}")
            return output_file

    except Exception as e:
        print(f"\nError: {str(e)}")
        print("\nTroubleshooting steps:")
        print("1. Check if the video URL is correct")
        print("2. Check your internet connection")
        print("3. Make sure yt-dlp is up to date: pip install -U yt-dlp")
        print("4. Install or update ffmpeg (required for best quality):")
        print("   - On macOS: brew install ffmpeg")
        print("   - On Ubuntu/Debian: sudo apt-get install ffmpeg")
        print("   - On Windows: download from https://ffmpeg.org/download.html")
        return ""


def main():
    """
    Main function to handle user input for video download.
    """
    print("YouTube Video Downloader (Best Quality)")
    print("-------------------------------------")
    print("This will download videos in the highest available quality")
    print("Note: Higher quality downloads may take longer and use more disk space")

    while True:
        url = input("\nEnter the YouTube video URL (or 'q' to quit): ").strip()
        if url.lower() == 'q':
            print("Goodbye!")
            break
        if not url:
            print("Please enter a valid URL")
            continue

        download_video(url)

        choice = input("\nWould you like to download another video? (y/n): ").strip().lower()
        if choice != 'y':
            print("Goodbye!")
            break


if __name__ == "__main__":
    main()

    ..
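
If you prefer a non-interactive call instead of the prompt loop in main(), the download_video function defined above can be used directly; the URL and output directory below are placeholders.

# Non-interactive usage of the download_video function defined above.
video_url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL
saved_path = download_video(video_url, output_path="./downloads")
print("Saved file:", saved_path)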


That's it. But install yt-dlp first:

pip install yt-dlp


Thank you!!!



10/18/2024

Sequence Parallel (SP)

Toy model:

class ToyModel(nn.Module):
    """MLP based model"""
    def __init__(self):
        super().__init__()
        self.in_proj = nn.Linear(10, 32)
        self.relu = nn.ReLU()
        self.out_proj = nn.Linear(32, 5)

    def forward(self, x):
        return self.out_proj(self.relu(self.in_proj(x)))

.

Configuration:

sp_model = parallelize_module(
    module=model,
    device_mesh=device_mesh,
    parallelize_plan={
        "in_proj": ColwiseParallel(input_layouts=Shard(0)),
        "out_proj": RowwiseParallel(output_layouts=Shard(0)),
    },
)

    ..






1. Input Sharding:
  • The input sequence (shape [4 x 12 x 10]) is initially split along the sequence length dimension across 3 GPUs.
  • Each GPU receives a [4 x 4 x 10] shard of the input.
2. All-Gather Operation:
  • An all-gather operation is performed to reconstruct the full input on each GPU.
  • After this, each GPU has the full [4 x 12 x 10] input.
3. First Layer - in_proj (ColwiseParallel):
  • The weight matrix [10 x 32] is split column-wise across GPUs: [10 x 11], [10 x 11], [10 x 10].
  • Each GPU processes the full input [4 x 12 x 10] with its portion of the weight matrix.
  • The output on each GPU is [4 x 12 x 11], [4 x 12 x 11], and [4 x 12 x 10] respectively.
4. ReLU Activation:
  • Applied element-wise to the output of the first layer on each GPU.
  • Shapes remain [4 x 12 x 11], [4 x 12 x 11], and [4 x 12 x 10] on the respective GPUs.
5. Second Layer - out_proj (RowwiseParallel):
  • The weight matrix [32 x 5] is split row-wise across GPUs: [11 x 5], [11 x 5], [10 x 5].
  • Each GPU processes its input ([4 x 12 x 11], [4 x 12 x 11], [4 x 12 x 10]) with its portion of the weight matrix.
  • The output on each GPU is [4 x 12 x 5], representing partial sums for the full sequence.
6. Reduce-Scatter Operation:
  • A reduce-scatter operation is performed to sum the partial results and distribute them across GPUs.
  • This results in each GPU having a portion of the final output, sharded along the sequence dimension.

Key points:

• There are two collective operations: an all-gather at the beginning and a reduce-scatter at the end.
• The GPUs do not all receive the same amount of tensor in the first-layer output, because the weight matrix splits unevenly across three GPUs.
• The sequence dimension (12 in this example) is not sharded during the middle layers; it is reconstructed by the all-gather and then re-sharded at the end.

In short, the input is gathered, processed in parallel, and then reduce-scattered, which lets the full sequence be processed efficiently across GPUs.



Full source code
    .
import os
import sys
import torch
import torch.nn as nn
from torch.distributed._tensor import Shard
from torch.distributed.tensor.parallel import (
    parallelize_module,
    ColwiseParallel,
    RowwiseParallel,
)
from log_utils import rank_log, get_logger, verify_min_gpu_count
import torch.profiler

# ---- GPU check ------------
_min_gpu_count = 2
if not verify_min_gpu_count(min_gpus=_min_gpu_count):
    print(f"Unable to locate sufficient {_min_gpu_count} gpus to run this example. Exiting.")
    sys.exit()
# ---------------------------
from torch.distributed._tensor.device_mesh import init_device_mesh


"""
This is the script to test Sequence Parallel (SP) on a toy model in a
Megatron-LM SPMD style. We show an E2E working flow of forward,
backward and optimization.

We use the example of two `nn.Linear` layers with an element-wise `nn.ReLU`
in between to show an example of sequence parallel, which was proposed in this paper:

https://arxiv.org/pdf/2205.05198.pdf

Like tensor parallel, we parallelize the first linear layer by column
and the second linear layer by row. But the input on each rank is now
different, so we need one all-gather for the input and one reduce-scatter
at the end of the second linear layer.
"""


class ToyModel(nn.Module):
    """MLP based model"""
    def __init__(self):
        super().__init__()
        self.in_proj = nn.Linear(10, 32)
        self.relu = nn.ReLU()
        self.out_proj = nn.Linear(32, 5)

    def forward(self, x):
        return self.out_proj(self.relu(self.in_proj(x)))


def main():
    logger = get_logger()

    # Create a device mesh based on the given world_size.
    device_mesh = init_device_mesh(
        device_type="cuda", mesh_shape=(int(os.environ["WORLD_SIZE"]),)
    )
    _rank = device_mesh.get_rank()
    print(f"Starting PyTorch Sequence Parallel example on rank {_rank}.")
    rank_log(_rank, logger, f"Device Mesh created: {device_mesh=}")

    # Create the model and move it to GPU. init_device_mesh has already assigned gpu ids...
    model = ToyModel().to("cuda")

    # Custom parallelization plan for the model
    sp_model = parallelize_module(
        module=model,
        device_mesh=device_mesh,
        parallelize_plan={
            "in_proj": ColwiseParallel(input_layouts=Shard(0)),
            "out_proj": RowwiseParallel(output_layouts=Shard(0)),
        },
    )

    # Create an optimizer for the parallelized module.
    lr = 0.25
    optimizer = torch.optim.AdamW(sp_model.parameters(), lr=lr, foreach=True)

    # Perform a number of iterations of forward/backward
    # and optimization for the sharded module.
    num_iters = 10
    rank_log(_rank, logger, "Sequence Parallel training starting...")

    with torch.profiler.profile(
        activities=[
            torch.profiler.ProfilerActivity.CPU,
            torch.profiler.ProfilerActivity.CUDA,
        ],
        schedule=torch.profiler.schedule(wait=1, warmup=1, active=3, repeat=2),
        on_trace_ready=torch.profiler.tensorboard_trace_handler(f'./log/tensorboard/rank_{_rank}'),
        record_shapes=True,
        profile_memory=True,
        with_stack=True
    ) as prof:
        for i in range(num_iters):
            # For SP, input can be different across all ranks.
            inp = torch.rand(20, 10, device="cuda")
            output = sp_model(inp)
            output.sum().backward()
            optimizer.step()
            rank_log(_rank, logger, f"Sequence Parallel iter {i} completed")
            prof.step()

    rank_log(_rank, logger, "Sequence Parallel training completed!")

    # Print profiler results
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))


if __name__ == "__main__":
    main()
    ..
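
To launch the script, use torchrun so that WORLD_SIZE is set for init_device_mesh; the filename below is a placeholder, and the shape walkthrough above assumed 3 GPUs.

torchrun --nproc_per_node=3 sequence_parallel_example.py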

    Thank you!