3/30/2023

The list_objects_v2 function returns at most 1,000 objects per call by default. To read all of the contents of a bucket, you can use pagination.

refer to code:

You can modify the '.json' suffix to match your case.

.

import boto3

def get_origin_fn_list(ORIGIN_DATA_S3, ORIGIN_DATA_S3_prefix):
    s3 = boto3.client('s3')
    paginator = s3.get_paginator('list_objects_v2')
    origin_path = {}
    total_objects = 0

    # Each page contains up to 1,000 objects
    for response in paginator.paginate(Bucket=ORIGIN_DATA_S3, Prefix=ORIGIN_DATA_S3_prefix):
        for obj in response.get('Contents', []):
            total_objects += 1
            if obj['Key'].endswith('.json'):
                path = obj['Key']
                uid = path.split('/')[-2]  # use the parent "folder" name as the dict key
                origin_path[uid] = path

    print(f"get kv.json list: {len(origin_path)}/{total_objects}")
    return origin_path

..
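
For example, assuming a bucket named 'my-bucket' and a prefix 'origin-data/' (both hypothetical names), you could call it like this:

.

bucket = 'my-bucket'      # hypothetical bucket name
prefix = 'origin-data/'   # hypothetical prefix

origin_path = get_origin_fn_list(bucket, prefix)

# Peek at a few of the collected keys
for uid, key in list(origin_path.items())[:5]:
    print(uid, key)

..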


Thank you.

🙇🏻‍♂️

www.marearts.com


3/16/2023

Sync a local directory with an S3 bucket (code for Jupyter)

 refer to code:


.

local_directory = "/path/to/your/local/directory"
s3_bucket = "your-s3-bucket-name"
s3_folder = "your-s3-folder-name"

# Sync S3 bucket folder to local directory
!aws s3 sync s3://$s3_bucket/$s3_folder $local_directory

# Sync local directory to S3 bucket folder
!aws s3 sync $local_directory s3://$s3_bucket/$s3_folder

..

Replace /path/to/your/local/directory, your-s3-bucket-name, and your-s3-folder-name with your specific values. The first aws s3 sync command downloads the S3 folder's contents to the local directory, and the second one uploads the local directory's contents to the S3 folder. You can use either of these commands as needed.

Note that, like rsync, the aws s3 sync command supports excluding and including specific files via --exclude and --include filters, and by default it only copies new and updated files.
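
For example, to sync only JSON files from the bucket folder, a minimal sketch (using the placeholder variables defined above) would be:

.

# Exclude everything, then include only *.json files
!aws s3 sync s3://$s3_bucket/$s3_folder $local_directory --exclude "*" --include "*.json"

..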


Thank you.

🙇🏻‍♂️

www.marearts.com



3/15/2023

To save a JSON object (stored in a Python variable) to an Amazon S3 bucket

 

refer to code:

.

import boto3
import json

# Initialize the S3 client
s3 = boto3.client('s3')

# Specify the S3 bucket and JSON object key
bucket_name = 'your-bucket-name'
object_key = 'path/to/your/object.json'

# Your JSON data
json_data = {
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
}

# Convert the JSON data to a string
json_content = json.dumps(json_data)

# Save the JSON content to the S3 bucket
s3.put_object(Bucket=bucket_name, Key=object_key, Body=json_content)

print(f"Saved JSON data to '{bucket_name}/{object_key}'")

..

This code will convert the JSON data to a string, and then save it to the specified S3 bucket and key.
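
If you also want the object stored with a JSON content type (for example so it previews correctly in a browser or the S3 console), put_object accepts a ContentType argument; a minimal variant of the call above:

.

s3.put_object(
    Bucket=bucket_name,
    Key=object_key,
    Body=json_content,
    ContentType='application/json'  # optional, but useful metadata
)

..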


Thank you.

🙇🏻‍♂️

www.marearts.com



copy s3 object to another bucket

refer to code: 


.

import boto3

# Initialize the S3 client
s3 = boto3.client('s3')

# Specify the source and destination S3 buckets and object keys
source_bucket = 'source-bucket-name'
source_key = 'path/to/source/object'

destination_bucket = 'destination-bucket-name'
destination_key = 'path/to/destination/object'

# Copy the object from the source bucket to the destination bucket
s3.copy_object(
    CopySource={'Bucket': source_bucket, 'Key': source_key},
    Bucket=destination_bucket,
    Key=destination_key
)

print(f"Copied object from '{source_bucket}/{source_key}' to '{destination_bucket}/{destination_key}'")

..

Replace the placeholder values for source_bucket, source_key, destination_bucket, and destination_key with your actual bucket names and object keys. This code will copy the specified object from the source bucket to the destination bucket.
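
If you need to copy everything under a prefix rather than a single object, one approach is to combine the list_objects_v2 paginator from the earlier tip with copy_object. A minimal sketch (source_prefix and the bucket names are placeholders):

.

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

source_bucket = 'source-bucket-name'
destination_bucket = 'destination-bucket-name'
source_prefix = 'path/to/source/'   # hypothetical prefix

for page in paginator.paginate(Bucket=source_bucket, Prefix=source_prefix):
    for obj in page.get('Contents', []):
        key = obj['Key']
        s3.copy_object(
            CopySource={'Bucket': source_bucket, 'Key': key},
            Bucket=destination_bucket,
            Key=key  # keep the same key in the destination bucket
        )

..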


Thank you

🙇🏻‍♂️

www.marearts.com

Load a JSON file into memory from an S3 bucket object (Python example code)

 refer to code:

.

import boto3
import json

# Initialize the S3 client
s3 = boto3.client('s3')

# Specify the S3 bucket and JSON file key
bucket_name = 'your-bucket-name'
file_key = 'path/to/your/file.json'

# Download the JSON file from the S3 bucket
response = s3.get_object(Bucket=bucket_name, Key=file_key)
content = response['Body'].read()

# Parse the JSON content
data = json.loads(content)

# Print the JSON data
print(data)

..
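
If the key might not exist, you can catch the error raised by get_object; a minimal sketch using botocore's ClientError:

.

from botocore.exceptions import ClientError

try:
    response = s3.get_object(Bucket=bucket_name, Key=file_key)
    data = json.loads(response['Body'].read())
except ClientError as e:
    if e.response['Error']['Code'] == 'NoSuchKey':
        print(f"'{file_key}' does not exist in '{bucket_name}'")
    else:
        raise

..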


Thank you.

🙇🏻‍♂️

www.marearts.com

Get the first item in a dict (Python sample code)

 refer to code:


.

my_dict = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}

# Get the first key-value pair from the dictionary
first_key, first_value = next(iter(my_dict.items()))

print("First key:", first_key)
print("First value:", first_value)

..


Thank you.

🙇🏻‍♂️

www.marearts.com

3/14/2023

Split dict items by ratio

Separate the items of a dict according to a given ratio (e.g., 90% train / 10% test).

refer to code:



.

import random

def split_dict(d, train_ratio=0.9):
    # Convert the dictionary to a list of tuples and shuffle it
    items = list(d.items())
    random.shuffle(items)

    # Calculate the index for the split
    split_idx = int(train_ratio * len(items))

    # Split the list into two lists containing train_ratio and (1 - train_ratio) of the items
    train_items = items[:split_idx]
    test_items = items[split_idx:]

    # Convert the two lists back to dictionaries
    train_dict = {k: v for k, v in train_items}
    test_dict = {k: v for k, v in test_items}

    return train_dict, test_dict




my_dict = {'apple': 2, 'banana': 3, 'orange': 1, 'kiwi': 4, 'pineapple': 5}
train_dict, test_dict = split_dict(my_dict, train_ratio=0.9)
print(train_dict)
print(test_dict)


..
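
If you need a reproducible split, you can seed the random module before calling split_dict; a minimal sketch:

.

import random

random.seed(42)  # a fixed seed gives the same shuffle, and therefore the same split, every run
train_dict, test_dict = split_dict(my_dict, train_ratio=0.9)

..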


Thank you.


python dict shuffle

Shuffle the order of the items in a dict.


.

import random

my_dict = {'apple': 2, 'banana': 3, 'orange': 1, 'kiwi': 4}

# Convert the dictionary to a list of tuples and shuffle it
items = list(my_dict.items())
random.shuffle(items)

# Convert the shuffled list back to a dictionary
shuffled_dict = {k: v for k, v in items}

print(shuffled_dict)

..


Thank you.


3/13/2023

Remove a node that is running Docker Swarm service tasks without failing any requests.

 

.

When you want to shut down a node in a Docker Swarm, you can reduce the number of nodes while keeping the services available by following these steps:

  1. Drain the node so that its running tasks are rescheduled onto other nodes, using the command docker node update --availability drain <node-name>.

  2. After all tasks have been moved off the node, remove it from the swarm with the command docker node rm <node-name> (if the node is still reachable, run docker swarm leave on it first).

  3. If necessary, rebalance the service tasks across the remaining nodes with the command docker service update --force <service-name>.

By following these steps, you can reduce the number of nodes while keeping the services available. However, depending on the state of the services, availability may be temporarily reduced during rescheduling, so be careful. It is also recommended to make a backup before performing the above steps.

..
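
Putting it together, the command sequence might look like this (a minimal sketch; worker-2 and my-service are hypothetical names):

.

# 1. Stop scheduling new tasks on the node and move its current tasks elsewhere
docker node update --availability drain worker-2

# Check that no tasks remain on the node before removing it
docker node ps worker-2

# 2. Remove the node from the swarm (run on a manager; the node itself can run `docker swarm leave`)
docker node rm worker-2

# 3. Optionally force a rebalance of the service across the remaining nodes
docker service update --force my-service

..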


Thank you.

www.marearts.com

🙇🏻‍♂️

3/08/2023

download ttf font file


Download the TTF font files here: https://github.com/MareArts/font_ttf


Font Name
Arial
Arial Bold
Arial Bold Italic
Arial Italic
Courier New
Verdana
Verdana Bold
Verdana Bold Italic
Verdana Italic
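
If you want to use one of these fonts from Python, for example to draw text on an image with Pillow, it might look like this (a minimal sketch; it assumes the repo has been cloned and Arial.ttf is in the current directory):

.

from PIL import Image, ImageDraw, ImageFont

# Load the downloaded TTF file (path is an assumption)
font = ImageFont.truetype('Arial.ttf', size=32)

# Draw some text on a blank image using the font
img = Image.new('RGB', (400, 100), color='white')
draw = ImageDraw.Draw(img)
draw.text((10, 30), 'Hello, MareArts!', font=font, fill='black')
img.save('hello.png')

..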


www.marearts.com

Thank you.

🙇🏻‍♂️

3/07/2023

t3 vs. c5 EC2 instance comparison: spec & price table

Note: The prices mentioned are for the US East (N. Virginia) region and are subject to change. Also, keep in mind that the optimal EC2 instance type for inference may vary depending on the specific use case and workload.


EC2 Instance Type   vCPUs   Memory (GiB)   Network Bandwidth (Gbps)   Hourly Price ($)   Monthly Price ($)
c5.large            2       4              Up to 10                   0.085              62.72
c5.xlarge           4       8              Up to 10                   0.17               125.44
c5.2xlarge          8       16             Up to 10                   0.34               250.88
c5.4xlarge          16      32             Up to 10                   0.68               501.76
c5.9xlarge          36      72             10                         1.53               1,127.92
c5.18xlarge         72      144            25                         3.06               2,255.84
t3.large            2       8              Up to 5                    0.0832             61.28
t3.xlarge           4       16             Up to 5                    0.1664             122.56
t3.2xlarge          8       32             Up to 5                    0.3328             245.12
t3a.large           2       8              Up to 5                    0.075              55.20
t3a.xlarge          4       16             Up to 5                    0.15               110.40
t3a.2xlarge         8       32             Up to 5                    0.30               220.80
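
To look up the vCPU and memory specs yourself (prices come from the EC2 pricing pages or the Pricing API), the AWS CLI can describe instance types; a minimal sketch:

.

# Note: memory is reported in MiB, not GiB
aws ec2 describe-instance-types \
    --instance-types c5.large t3.large t3a.large \
    --query "InstanceTypes[].[InstanceType, VCpuInfo.DefaultVCpus, MemoryInfo.SizeInMiB]" \
    --output table

..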