2/27/2023

Useful Docker Commands: List, Stop, Remove, Logs, Build, Push, Pull, and Delete Images

 

refer to the commands below:


.

# List all running containers
docker ps

# List all containers, including stopped ones
docker ps -a

# Stop a running container
docker stop <container-id>

# Remove a stopped container
docker rm <container-id>

# View logs from a container
docker logs <container-id>

# Execute a command in a running container
docker exec <container-id> <command>

# Build an image from a Dockerfile
docker build -t <image-name> <path-to-dockerfile>

# Push an image to a Docker registry
docker push <registry>/<image-name>

# Pull an image from a Docker registry
docker pull <registry>/<image-name>

# Remove all Docker images using prune option
docker image prune -a -f

..
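
If you prefer to script these operations from Python, the Docker SDK for Python (pip install docker) offers equivalents for most of the commands above. A minimal sketch, assuming the Docker daemon is running locally:

.

import docker

# Connect using the same environment settings the docker CLI uses
client = docker.from_env()

# List all containers, including stopped ones (like `docker ps -a`)
for container in client.containers.list(all=True):
    print(container.short_id, container.status, container.name)

# Remove unused images (like `docker image prune -a -f`)
client.images.prune(filters={'dangling': False})

..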


Thank you.

🙇🏻‍♂️

www.marearts.com

Uploading Multiple Files to a Flask API using cURL example code

refer to code:


curl cmd.

.

curl -X POST -F 'data=@data.json;type=application/json' -F 'image=@image.jpg;type=image/jpeg' http://your-api-url.com/your-endpoint

..


flask code

.

from flask import Flask, request
import json

app = Flask(__name__)

@app.route('/your-endpoint', methods=['POST'])
def handle_upload():
    if 'data' not in request.files:
        return 'No data file uploaded', 400
    if 'image' not in request.files:
        return 'No image file uploaded', 400

    data_file = request.files['data']
    image_file = request.files['image']

    if data_file.content_type != 'application/json':
        return 'Invalid data file format', 400
    if not is_valid_image_type(image_file.content_type):
        return 'Invalid image file format', 400

    # Read the JSON data from the file
    data = json.load(data_file)
    # Read the image data from the file
    image_data = image_file.read()
    # Do something with the file data
    return 'File upload successful'

def is_valid_image_type(content_type):
    valid_image_types = ['image/jpeg', 'image/png', 'image/gif']
    return content_type in valid_image_types

..


In this cURL command, curl sets the request's Content-Type header to multipart/form-data (the required encoding for file uploads) automatically when the -F flag is used, so it does not need to be set by hand. The content type of each uploaded part is specified with the ;type= suffix on its -F flag: application/json for the JSON file and image/jpeg for the image file. Note that you should adjust the content types as needed for your specific file formats.

With these changes, you should be able to upload a JSON file and an image file to your Flask API using cURL. The uploaded files will be checked for errors in the API code and then processed as needed.
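
If you prefer Python over cURL, here is a minimal client using the requests library (the URL and file names are the same placeholders as above):

.

import requests

url = 'http://your-api-url.com/your-endpoint'

# Each entry is (filename, file object, content type)
files = {
    'data': ('data.json', open('data.json', 'rb'), 'application/json'),
    'image': ('image.jpg', open('image.jpg', 'rb'), 'image/jpeg'),
}

# requests builds the multipart/form-data body and boundary automatically
response = requests.post(url, files=files)
print(response.status_code, response.text)

..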


Thank you.

🙇🏻‍♂️

www.marearts.com

Save tokenizer and processor from AutoTokenizer, AutoProcessor

 refer to code:

.

from transformers import AutoTokenizer, AutoProcessor

model_name_or_path = 'nielsr/layoutlmv3-finetuned-funsd'
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
processor = AutoProcessor.from_pretrained(model_name_or_path)

# Save tokenizer
tokenizer.save_pretrained('tokenizer_dir')

# Save processor
processor.save_pretrained('processor_dir')

..
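
To load them back later, point from_pretrained at the saved directories (a quick sketch using the directory names from above):

.

from transformers import AutoTokenizer, AutoProcessor

tokenizer = AutoTokenizer.from_pretrained('tokenizer_dir')
processor = AutoProcessor.from_pretrained('processor_dir')

..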


🙇🏻‍♂️

www.marearts.com

2/26/2023

split filename and dir using os.path.split

 refer to code


.

import os

path = '/Users/source_code/final_data/test/X51006647933.jpg'

# Split the path into directory name and file name
dirname, filename = os.path.split(path)

# Print the file name
print(filename) # Output: X51006647933.jpg

..
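
If you also need the extension separated from the file name, os.path.splitext does that; a small follow-up sketch using the same path:

.

import os

path = '/Users/source_code/final_data/test/X51006647933.jpg'
dirname, filename = os.path.split(path)

# Split the file name into root and extension
root, ext = os.path.splitext(filename)
print(root) # Output: X51006647933
print(ext) # Output: .jpg

..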


Thank you.

🙇🏻‍♂️

www.marearts.com

Drawing rectangle and text on image using cv2, pillow (python code)

 Draw text & rectangle on image





Ver.1 pillow

.

from PIL import Image, ImageDraw, ImageFont
import json
# Display the image in Jupyter Notebook
from IPython.display import display


def draw_rect_text(base_dir, json_fn):
    # Load JSON file
    with open(base_dir + json_fn, 'r') as f:
        data = json.load(f)

    # Load image
    img = Image.open(base_dir + data['image_file_name'] + '.jpg')

    # Create a draw object
    draw = ImageDraw.Draw(img)

    # Load the font once, outside the loop
    font = ImageFont.truetype('arial.ttf', size=14)

    # Loop through OCR data and draw rectangles and text
    for ocr in data['ocr']:
        left = ocr['left']
        top = ocr['top']
        right = ocr['right']
        bottom = ocr['bottom']
        text = ocr['text']

        # Draw rectangle
        draw.rectangle((left, top, right, bottom), outline=(255, 0, 0), width=2)

        # Draw text just above the rectangle
        # (on Pillow >= 10, use draw.textbbox instead of the removed draw.textsize)
        text_size = draw.textsize(text, font=font)
        text_x = left
        text_y = top - text_size[1]
        draw.text((text_x, text_y), text, font=font, fill=(255, 0, 0))

    # Display image
    # img.show() # new window
    display(img) # display in notebook

..


Ver .2 cv2

.

import cv2
import json

def draw_rect_text(base_dir, json_fn):
    # Load JSON file
    with open(base_dir + json_fn, 'r') as f:
        data = json.load(f)

    print(base_dir + data['image_file_name'] + '.jpg')
    # Load image
    img = cv2.imread(base_dir + data['image_file_name'] + '.jpg')

    # Loop through OCR data and draw rectangles and text
    for ocr in data['ocr']:
        left = ocr['left']
        top = ocr['top']
        right = ocr['right']
        bottom = ocr['bottom']
        text = ocr['text']

        # Draw rectangle
        cv2.rectangle(img, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw text just above the rectangle
        font = cv2.FONT_HERSHEY_SIMPLEX
        font_scale = 0.5
        thickness = 1
        text_size = cv2.getTextSize(text, font, font_scale, thickness)[0]
        text_x = left
        text_y = top - text_size[1]
        cv2.putText(img, text, (text_x, text_y), font, font_scale, (0, 0, 255), thickness)

    # Display image
    cv2.imshow('Image', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

..
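
Both versions assume a JSON file shaped like the one below. The field names come from the code above; the values are made up for illustration:

.

example = {
    "image_file_name": "X51006647933",
    "ocr": [
        {"left": 10, "top": 20, "right": 110, "bottom": 40, "text": "TOTAL"},
        {"left": 120, "top": 20, "right": 180, "bottom": 40, "text": "9.90"}
    ]
}

..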


Thank you.

🙇🏻‍♂️

www.marearts.com

2/25/2023

The AttributeError: 'Series' object has no attribute 'to_list'


The error occurs when you try to call the to_list() method on a pandas Series object, but the method is not available in the version of pandas you are using. This method was added in pandas version 0.24.0, so if you are using an earlier version of pandas, you will get this error.

To fix this error, you can either upgrade your pandas version to 0.24.0 or later, or you can use an alternative method to convert the Series object to a list. Here are some examples:

  1. Using the tolist() method: tolist() has been available in pandas far longer than to_list(), so on older versions you can simply use it instead. For example:

.

import pandas as pd

# Create a pandas Series object
s = pd.Series([1, 2, 3, 4, 5])

# Convert the Series object to a list
lst = s.tolist()

print(lst)
# Output: [1, 2, 3, 4, 5]

..


  2. Using the values attribute: You can also use the values attribute to get a NumPy array, and then convert the array to a list with its tolist() method. This works on older pandas versions as well. For example:
.
import pandas as pd

# Create a pandas Series object
s = pd.Series([1, 2, 3, 4, 5])

# Convert the Series object to a list
lst = s.values.tolist()

print(lst)
# Output: [1, 2, 3, 4, 5]
..


  3. Using the list() function: You can also pass the Series object directly to the built-in list() function, which works on any pandas version. For example:
.
import pandas as pd

# Create a pandas Series object
s = pd.Series([1, 2, 3, 4, 5])

# Convert the Series object to a list
lst = list(s)

print(lst)
# Output: [1, 2, 3, 4, 5]
..
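
On pandas 0.24.0 and later there is also Series.to_numpy(), which newer pandas documentation recommends over the values attribute; combined with tolist() it gives the same result:

.

import pandas as pd

# Create a pandas Series object
s = pd.Series([1, 2, 3, 4, 5])

# to_numpy() returns a NumPy array; tolist() turns it into a plain list
lst = s.to_numpy().tolist()

print(lst)
# Output: [1, 2, 3, 4, 5]

..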


Thank you.
www.marearts.com
🙇🏻‍♂️

2/22/2023

How to Save and Load Python Lists using Pickle

 refer to code:


.

import pickle

# Example list
my_list = [(40, 1.646054384000001), (233, 3.0769193350000013), (221, 2.6460548819999996),
(214, 2.3542021680000005), (322, 2.726835301999998), (94, 1.201160183999999),
(193, 2.501363478000002), (171, 1.3009034040000031), (595, 5.574669749999998)]

# Save list to a file using pickle
with open("my_list.pickle", "wb") as f:
    pickle.dump(my_list, f)

# Load list from the saved file
with open("my_list.pickle", "rb") as f:
    loaded_list = pickle.load(f)

# Verify that the loaded list matches the original list
print(loaded_list == my_list) # True

..
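
The example above uses pickle's default protocol. For large objects you can pass an explicit protocol; pickle.HIGHEST_PROTOCOL selects the newest protocol the running interpreter supports:

.

import pickle

my_list = list(range(1000))

# Use the newest protocol available to this Python version
with open("my_list.pickle", "wb") as f:
    pickle.dump(my_list, f, protocol=pickle.HIGHEST_PROTOCOL)

..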


Thank you.

🙇🏻‍♂️

www.marearts.com

Histogram drawing using matplotlib in Python.

 Refer to code:

The first example draws a histogram of the processing times; the second draws a simple bar graph of word length against processing time.


drawing histogram

.

import matplotlib.pyplot as plt

# Data to plot
data = [(40, 1.646054384000001), (233, 3.0769193350000013), (221, 2.6460548819999996),
(214, 2.3542021680000005), (322, 2.726835301999998), (94, 1.201160183999999),
(193, 2.501363478000002), (171, 1.3009034040000031), (595, 5.574669749999998),
(248, 2.455411452)]

# Separate the data into two lists for word length and processing time
word_lengths = [d[0] for d in data]
processing_times = [d[1] for d in data]

# Plot the histogram
plt.hist(processing_times, bins=5)

# Add labels and title
plt.xlabel("Processing Time")
plt.ylabel("Frequency")
plt.title("Histogram of Processing Time")

# Show the plot
plt.show()

..


drawing bar graph

.

import matplotlib.pyplot as plt

# Data to plot
data = [(40, 1.646054384000001), (233, 3.0769193350000013)] # ... fill in the remaining (length, time) pairs

word_lengths = [d[0] for d in data]
processing_times = [d[1] for d in data]

plt.bar(word_lengths, processing_times)
plt.xlabel("Word Length")
plt.ylabel("Processing Time")
plt.title("Histogram of Word Lengths vs. Processing Times")
plt.show()

..



Thank you. 🙇🏻‍♂️

www.marearts.com

2/21/2023

Python: ignore warning messages

1. Ignore all warnings:

.

import warnings

# Ignore all warnings
warnings.filterwarnings("ignore")

..


2. To ignore specific types of warnings, you can use a context manager:

.

import warnings

# Ignore DeprecationWarning inside this block only
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    # Code that produces DeprecationWarning
    ...

..



3. You can also build a small decorator to ignore specific types of warnings (note that warnings.filterwarnings is not a decorator itself, so it has to be wrapped):

.

import functools
import warnings

def ignore_deprecation_warnings(func):
    # Suppress DeprecationWarning only while the wrapped function runs
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with warnings.catch_warnings():
            warnings.filterwarnings("ignore", category=DeprecationWarning)
            return func(*args, **kwargs)
    return wrapper

@ignore_deprecation_warnings
def my_function():
    # Code that produces DeprecationWarning
    ...

..

Thank you.
🙇🏻‍♂️
www.marearts.com

Convert ckpt -> bin -> onnx, plus quantization (this example code is based on LayoutLMv3)

 refer to code.


.

# https://github.com/huggingface/optimum
# python -m pip install optimum[onnxruntime]
from optimum.onnxruntime import ORTModelForTokenClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from layoutlmv3_model import layoutlmv3_ner_model
import pickle

def layoutlmv3_vvv3():
    base_path = 'layoutLM_research/tb_logs/model/M_microsoft-layoutlmv3-large_T_stride_V_v8/checkpoints/'
    ckpt_path = base_path + 'vvv0.99061.ckpt'
    bin_path = base_path + "bin"
    onnx_path = base_path + "onnx"
    quantizer_onnx_directory = base_path + "onnx_q"

    # Load ckpt model
    base_model = layoutlmv3_ner_model.load_from_checkpoint(ckpt_path)

    # Save model params
    with open(base_path + 'model_cfg.pkl', 'wb') as fout:
        pickle.dump(base_model.cfg, fout, protocol=2)

    # Save transformer model to bin
    base_model.model.save_pretrained(bin_path)

    # Load the model from transformers and export it to ONNX
    ort_model = ORTModelForTokenClassification.from_pretrained(bin_path, from_transformers=True)
    # Save the ONNX model
    ort_model.save_pretrained(onnx_path)

    # Define the quantization methodology
    qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
    quantizer = ORTQuantizer.from_pretrained(ort_model)
    # Apply dynamic quantization on the model
    quantizer.quantize(save_dir=quantizer_onnx_directory, quantization_config=qconfig)


if __name__ == "__main__":
    layoutlmv3_vvv3()

..
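
To run inference with the quantized model later, it can be loaded back through optimum. A sketch, assuming the quantizer wrote its default model_quantized.onnx file into the onnx_q directory:

.

from optimum.onnxruntime import ORTModelForTokenClassification

quantized_onnx_directory = 'layoutLM_research/tb_logs/model/M_microsoft-layoutlmv3-large_T_stride_V_v8/checkpoints/onnx_q'

# file_name is an assumption: ORTQuantizer names its output model_quantized.onnx by default
quantized_model = ORTModelForTokenClassification.from_pretrained(
    quantized_onnx_directory, file_name='model_quantized.onnx')

..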



Some of the specific code may not suit your case, but the general concept is the same.

Thank you.

🙇🏻‍♂️


www.marearts.com

2/19/2023

How to stop a bash shell script when it has an error

set -e

refer to code:

.

#!/bin/bash

set -e

echo "Starting script..."
ls /path/that/does/not/exist
echo "This line will not be executed."

..



For even stricter behavior, set -euo pipefail additionally aborts on unset variables and on failures inside pipelines.

Thank you.

www.marearts.com


How to Install OpenCV 4.7 with CUDA, cuDNN, TBB, CUDA Video Codec, and Extra Modules in Linux

 


refer to bash code


.

#!/bin/bash

# Install dependencies
sudo apt-get update
sudo apt-get install -y build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install -y libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev
sudo apt-get install -y libcanberra-gtk-module libcanberra-gtk3-module

# Install CUDA 11
wget https://developer.download.nvidia.com/compute/cuda/11.4.0/local_installers/cuda_11.4.0_470.57.02_linux.run
sudo sh cuda_11.4.0_470.57.02_linux.run --silent --toolkit --override
echo 'export PATH=/usr/local/cuda-11.4/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

# Download and extract TBB (copy the libraries and the headers)
wget https://github.com/oneapi-src/oneTBB/releases/download/v2022.0.0/oneapi-tbb-2022.0.0-lin.tgz
tar -xf oneapi-tbb-2022.0.0-lin.tgz
sudo cp -r oneapi-tbb-2022.0.0/lib/* /usr/local/lib/
sudo cp -r oneapi-tbb-2022.0.0/include/* /usr/local/include/
echo 'export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

# Download and extract OpenCV 4.7 and the extra modules side by side,
# so the OPENCV_EXTRA_MODULES_PATH used below resolves correctly
wget -O opencv-4.7.0.zip https://github.com/opencv/opencv/archive/4.7.0.zip
unzip opencv-4.7.0.zip
wget -O opencv_contrib-4.7.0.zip https://github.com/opencv/opencv_contrib/archive/4.7.0.zip
unzip opencv_contrib-4.7.0.zip
cd opencv-4.7.0

# Build and install OpenCV 4.7 with CUDA, cuDNN, TBB, CUDA video codec, and the extra modules
# (adjust CUDA_ARCH_BIN to match your GPU)
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.7.0/modules -D WITH_CUDA=ON -D WITH_TBB=ON -D WITH_NVCUVID=ON -D WITH_GSTREAMER=ON -D WITH_GSTREAMER_0_10=OFF -D WITH_LIBV4L=ON -D WITH_CUDNN=ON -D CUDA_ARCH_BIN=7.5 ..
make -j$(nproc)
sudo make install
echo 'export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH' >> ~/.bashrc
source ~/.bashrc

# Compile and run the sample code
cd ../../
wget https://raw.githubusercontent.com/spmallick/learnopencv/master/Averaging4kVideo/Averaging4kVideo.cpp
g++ Averaging4kVideo.cpp -o Averaging4kVideo `pkg-config --cflags --libs opencv4`
./Averaging4kVideo

..
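
After the build finishes, a quick sanity check from Python confirms whether CUDA support made it into the build (this assumes the Python bindings were built, which requires python3-dev and numpy to be present before running cmake):

.

import cv2

# Print the compiled-in configuration; look for "NVIDIA CUDA: YES"
print(cv2.getBuildInformation())

# Number of CUDA devices OpenCV can see (0 means no CUDA support or no GPU)
print(cv2.cuda.getCudaEnabledDeviceCount())

..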



Thank you.

🙇🏻‍♂️

www.marearts.com

2/18/2023

How to Vertically Stack Multiple Arrays Using numpy.vstack in Python

 refer to code:


.

import numpy as np

# Example arrays
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
c = np.array([7, 8, 9])

# Create a list of arrays
array_list = [a, b, c]

# Vertically stack all arrays in the list
# (np.vstack(array_list) would produce the same result in a single call)
result = np.empty((0, a.shape[0]))
for arr in array_list:
    result = np.vstack((result, arr))

# Print the vertically stacked array
# (values print as floats because np.empty starts the result as a float64 array)
print(result)

..



Thank you.

🙇🏻‍♂️

www.marearts.com

2/17/2023

How to Get the Shape of a List of Lists in Python

refer to code:


.

def get_shape(l):
    if isinstance(l, list):
        return [len(l)] + get_shape(l[0])
    else:
        return []

l = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(get_shape(l)) # Output: [3, 3] (only the first element at each level is inspected)

l = [[1, [2, 3]], [4, [5, [6, [7]]]]]
print(get_shape(l)) # Output: [2, 2] (recursion follows l[0], whose first element 1 is not a list)

..
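
For regular (non-ragged) nested lists, converting to a NumPy array gives the shape directly; a short sketch assuming numpy is installed:

.

import numpy as np

l = [[1, 2, 3], [4, 5, 6]]

# Works only when all sublists have the same length
print(np.array(l).shape) # Output: (2, 3)

..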


Thank you.

🙇🏻‍♂️

www.marearts.com

How to print the full contents of a PyTorch tensor

 refer to code:


.

import torch

# create a tensor
x = torch.randn(3, 4)

# set print options to display the full tensor
# (passing None leaves an option unchanged, so use an explicit threshold;
# torch.set_printoptions(profile="full") is an equivalent shortcut)
torch.set_printoptions(precision=10, threshold=float('inf'))

# print the full tensor
print(x)

..

In this example, we set the precision to 10 to display up to 10 decimal places, and set the threshold to infinity so that every element is printed instead of being summarized with "...". Note that passing None to set_printoptions leaves an option at its current value rather than disabling it; torch.set_printoptions(profile="full") is a convenient preset for printing full tensors. You can adjust these settings to your preference, depending on the size and precision of your tensor.



Thank you. 
🙇🏻‍♂️
www.marearts.com

C++ code example for fast display of 8K video using VDPAU/DXVA

 refer to code:


.

// NOTE: This is a schematic sketch, not production code. In a real VDPAU
// program the API entry points (vdp_video_surface_create, vdp_decoder_decode,
// and so on) are resolved at runtime through the VdpGetProcAddress callback
// returned by vdp_device_create_x11, and the bitstream handling is more
// involved than the simplified loop shown here.
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <unistd.h>
#include <X11/Xlib.h>
#include <vdpau/vdpau.h>
#include <vdpau/vdpau_x11.h>

using namespace std;

int main(int argc, char **argv) {
    if (argc < 2) {
        cerr << "Usage: " << argv[0] << " <video file>" << endl;
        exit(1);
    }

    // Open the X display once and reuse the handle below
    Display *display = XOpenDisplay(nullptr);
    if (!display) {
        cerr << "XOpenDisplay failed" << endl;
        exit(1);
    }
    int screen = DefaultScreen(display);

    VdpDevice device;
    VdpGetProcAddress *get_proc_address = nullptr;
    VdpStatus status;
    status = vdp_device_create_x11(display, screen, &device, &get_proc_address);
    if (status != VDP_STATUS_OK) {
        cerr << "vdp_device_create_x11 failed: " << status << endl;
        exit(1);
    }

    VdpVideoSurface surface;
    status = vdp_video_surface_create(device, VDP_CHROMA_TYPE_420, 8192, 4320, &surface);
    if (status != VDP_STATUS_OK) {
        cerr << "vdp_video_surface_create failed: " << status << endl;
        exit(1);
    }

    VdpDecoder decoder;
    status = vdp_decoder_create(device, VDP_DECODER_PROFILE_H264_HIGH, 8192, 4320, 16, &decoder);
    if (status != VDP_STATUS_OK) {
        cerr << "vdp_decoder_create failed: " << status << endl;
        exit(1);
    }

    FILE *file = fopen(argv[1], "rb");
    if (!file) {
        cerr << "Failed to open video file: " << argv[1] << endl;
        exit(1);
    }

    // Feed the raw bitstream to the decoder in 64 KB chunks
    // (simplified: real code must split the stream into decodable units)
    uint8_t *buf = new uint8_t[65536];
    while (!feof(file)) {
        size_t n = fread(buf, 1, 65536, file);
        VdpBitstream bitstream = { buf, n };
        status = vdp_decoder_decode(decoder, surface, &bitstream, 0, 0);
        if (status != VDP_STATUS_OK) {
            cerr << "vdp_decoder_decode failed: " << status << endl;
            exit(1);
        }
    }

    fclose(file);

    VdpPresentationQueue presentation_queue;
    status = vdp_presentation_queue_create(device, DefaultVisual(display, screen), &presentation_queue);
    if (status != VDP_STATUS_OK) {
        cerr << "vdp_presentation_queue_create failed: " << status << endl;
        exit(1);
    }

    VdpOutputSurface output_surface;
    status = vdp_output_surface_create(device, VDP_RGBA_FORMAT_B8G8R8A8, 8192, 4320, &output_surface);
    if (status != VDP_STATUS_OK) {
        cerr << "vdp_output_surface_create failed: " << status << endl;
        exit(1);
    }

    VdpPresentationQueueTarget presentation_queue_target;
    status = vdp_presentation_queue_target_create_x11(presentation_queue, screen, DefaultVisual(display, screen), &presentation_queue_target);
    if (status != VDP_STATUS_OK) {
        cerr << "vdp_presentation_queue_target_create_x11 failed: " << status << endl;
        exit(1);
    }

    VdpTimebase timebase;
    status = vdp_timebase_create(device, 1, &timebase);
    if (status != VDP_STATUS_OK) {
        cerr << "vdp_timebase_create failed: " << status << endl;
        exit(1);
    }

    // Scale the 8K video surface down to a 1080p output rectangle
    VdpOutputSurfaceRenderBlendState blend_state = { VDP_OUTPUT_SURFACE_RENDER_BLEND_STATE_OPAQUE };
    VdpOutputSurfaceRenderBlendState blend_state_array[1] = { blend_state };
    VdpOutputSurfaceRenderBlendState *blend_states = blend_state_array;
    VdpColor color = { 0, 0, 0, 0 };
    VdpRect src_rect = { 0, 0, 8192, 4320 };
    VdpRect dst_rect = { 0, 0, 1920, 1080 };
    VdpTime presentation_time = 0;

    while (true) {
        status = vdp_presentation_queue_block_until_surface_idle(presentation_queue, output_surface, &presentation_time);
        if (status != VDP_STATUS_OK) {
            cerr << "vdp_presentation_queue_block_until_surface_idle failed: " << status << endl;
            exit(1);
        }

        status = vdp_output_surface_render_output_surface(output_surface, &src_rect, surface, &dst_rect, blend_states, 1, &color);
        if (status != VDP_STATUS_OK) {
            cerr << "vdp_output_surface_render_output_surface failed: " << status << endl;
            exit(1);
        }

        status = vdp_presentation_queue_display(presentation_queue, presentation_queue_target, presentation_time, timebase);
        if (status != VDP_STATUS_OK) {
            cerr << "vdp_presentation_queue_display failed: " << status << endl;
            exit(1);
        }
    }

    return 0;
}

..


This code creates the presentation queue and target, sets up the output surface rendering, and then enters a loop that continuously displays the video frames using the vdp_presentation_queue_display function.

Please note that this code is just an example and may need to be modified to fit your specific needs.


Thank you.

🙇🏻‍♂️

www.marearts.com


Overview of Image Retrieval Applications for Finding Images by Visual and Text Features

 Here are a few examples of image retrieval applications:

  1. Google Images: A popular image search engine that allows you to search for images using keywords and filters, such as color, size, and type. Google Images uses a combination of text and visual features to match images to search queries.

  2. TinEye: A reverse image search engine that allows you to find where an image appears online or to search for similar images based on visual features. TinEye uses image recognition technology to analyze the content of images and identify matches.

  3. Clarifai: An image and video recognition platform that allows you to search for images based on visual features such as color, texture, and object category, as well as text features such as captions and tags. Clarifai uses deep learning models to extract and analyze visual and textual features from images.

  4. Microsoft Bing Visual Search: A search engine that allows you to search for images using visual and text features, such as color, object category, and image similarity. Bing Visual Search uses deep learning models to analyze visual features and search algorithms to find similar images.

  5. Amazon Rekognition: An image and video analysis service that allows you to search for images based on visual features such as faces, objects, and scenes, as well as text features such as captions and tags. Amazon Rekognition uses deep learning models to extract and analyze visual and textual features from images.



Thank you.

www.marearts.com

🙇🏻‍♂️