3/13/2024

Run multiple ComfyUI instances

Give each instance a different port number.


Windows

>.\python_embeded\python.exe -s ComfyUI\main.py --port 8189 --windows-standalone-build


Linux or Mac
> python ./main.py --port 8189

(The --windows-standalone-build flag is only for the Windows standalone package, so drop it here.)
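Each additional instance just needs its own unused port, e.g. (illustrative port number):

> python ./main.py --port 8190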

Thank you.
πŸ™‡πŸ»‍♂️

3/12/2024

Find a process by port number on Ubuntu and kill it.

 

>sudo lsof -i :8188

python  231134 mare   45u  IPv4 2074469      0t0  TCP localhost:8188->localhost:42132 (CLOSE_WAIT)

python  231134 mare   46u  IPv4 2105633      0t0  TCP localhost:8188->localhost:57870 (CLOSE_WAIT)

python  231134 mare   47u  IPv4 2074473      0t0  TCP localhost:8188->localhost:42160 (CLOSE_WAIT)

python  231134 mare   48u  IPv4 2103693      0t0  TCP localhost:8188->localhost:57886 (CLOSE_WAIT)


> kill -9 231134
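The second column of the lsof output is the PID (231134 here). As a shortcut, lsof's -t flag prints only the PID, so the lookup and kill can be combined into one line (assuming the same port):

> sudo kill -9 $(sudo lsof -t -i :8188)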


πŸ™‡πŸ»‍♂️

3/07/2024

Retrieve existing SSH keys and generate a new pair

 

Navigate to your SSH directory:

cd ~/.ssh

List the RSA public key files:

ls -l *.pub

View the contents of your RSA public key:
cat id_rsa.pub

To generate a new SSH RSA key pair, use the following command:
ssh-keygen -t rsa -b 4096
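Optionally, you can label the key and choose its output path; the comment string and filename below are just example values:

ssh-keygen -t rsa -b 4096 -C "you@example.com" -f ~/.ssh/id_rsa_new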


Thank you. marearts.com
πŸ™‡πŸ»‍♂️

Create a new swap file

 


Create a new swap file:

sudo fallocate -l 2G /swapfile2

sudo chmod 600 /swapfile2

sudo mkswap /swapfile2

sudo swapon /swapfile2


Make the swap file permanent:

echo '/swapfile2 none swap sw 0 0' | sudo tee -a /etc/fstab

Check the current swap status:

sudo swapon --show
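You can also confirm the new total in the Swap row of free:

free -h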


Thank you 
πŸ™‡πŸ»‍♂️

3/06/2024

How to stop docker under Linux

Docker can be stopped by running the following two commands.

> sudo systemctl stop docker

[sudo] password for mare: 

Warning: Stopping docker.service, but it can still be activated by:

  docker.socket

> sudo systemctl stop docker.socket
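Optionally, to keep Docker from starting again at boot (assuming systemd manages it), disable both units:

> sudo systemctl disable docker.service docker.socket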


I hope it's helpful to you.

Thank you.

πŸ™‡πŸ»‍♂️

2/26/2024

Dominant frequency extraction.

 



Let's say we have channel x length signal data, e.g. EEG (electroencephalogram) or other time-series data.

We might wonder which dominant frequencies (Hz) are present.

The code below analyzes this question and returns the top 5 dominant frequencies.

.

import numpy as np
from collections import Counter
from scipy.signal import welch

def identify_dominant_frequencies(signal, fs, top_n=5):
    # Estimate the power spectral density with Welch's method
    freqs, psd = welch(signal, fs)
    # Indices of the top_n largest PSD values
    peak_indices = np.argsort(psd)[-top_n:]
    dominant_freqs = freqs[peak_indices]
    return dominant_freqs

..
# Inside a per-channel loop (elided): collect each channel's dominant frequencies
dominant_freqs = identify_dominant_frequencies(signal, fs, top_n)
dominant_freqs_summary[channel].extend(dominant_freqs)  # Append the frequencies
..
# Median dominant frequency per channel (None if nothing was collected)
median_dominant_freqs = {channel: np.median(freqs) if freqs else None
                         for channel, freqs in dominant_freqs_summary.items()}
..

def get_top_n_frequencies(freq_list, top_n=5, bin_width=1.0):
    # Bin frequencies into discrete intervals
    binned_freqs = np.round(np.array(freq_list) / bin_width) * bin_width
    # Count the occurrences of each binned frequency
    freq_counter = Counter(binned_freqs)
    # Find the top N most common binned frequencies
    top_freqs = freq_counter.most_common(top_n)
    # Extract just the frequencies from the (freq, count) tuples
    top_freqs = [freq for freq, count in top_freqs]
    return top_freqs

# Initialize a dictionary to store the top 5 frequencies for each channel
top_5_freqs_all_channels = {}
bin_width = 1.0

# Calculate the top 5 frequencies for each channel
for channel, freqs in dominant_freqs_summary.items():
    top_5_freqs = get_top_n_frequencies(freqs, top_n=5, bin_width=bin_width)
    top_5_freqs_all_channels[channel] = top_5_freqs
    print(f"{channel}: Top 5 Frequencies = {top_5_freqs}")

..
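For a quick sanity check, here is a minimal usage sketch with a synthetic signal; the sampling rate and tone frequencies are illustrative assumptions, not values from the original data.

.

import numpy as np

# Assumed example: 10 s of a 10 Hz + 25 Hz mixture sampled at fs = 250 Hz
fs = 250
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)

# The returned peaks should cluster near 10 Hz and 25 Hz
print(identify_dominant_frequencies(signal, fs))

..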


2/18/2024

GroupShuffleSplit, sklearn

 

The data contains repeated eeg_id values, but with GroupShuffleSplit we can split it into train and validation sets so that all rows sharing an eeg_id stay in the same subset.

Refer to code:

.



import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Load your dataset
train = pd.read_csv('./train.csv')

# Display the shape of the dataset
print("Dataset shape:", train.shape)

# Count unique eeg_id values
unique_eeg_id_count = train['eeg_id'].nunique()
print("Unique eeg_id count:", unique_eeg_id_count)

# Initialize the GroupShuffleSplit
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)

# Split the dataset based on the 'eeg_id' to ensure group cohesion
for train_idx, val_idx in gss.split(train, groups=train['eeg_id']):
    train_set = train.iloc[train_idx]
    val_set = train.iloc[val_idx]

# Now, train_set and val_set are split according to unique eeg_ids,
# ensuring that all records of a single eeg_id are in the same subset
print("Training set shape:", train_set.shape)
print("Validation set shape:", val_set.shape)

..
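As a quick sanity check (a small sketch using the variables above), verify that no eeg_id leaks across the two subsets:

.

# Expect an empty intersection: each eeg_id belongs to exactly one subset
overlap = set(train_set['eeg_id']) & set(val_set['eeg_id'])
print("Overlapping eeg_ids:", len(overlap))  # should print 0

..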

Thank you.

πŸ™‡πŸ»‍♂️