8/24/2019
python string encryption, decryption - example code

from cryptography.fernet import Fernet

def encrypt(message: bytes, key: bytes):
    return Fernet(key).encrypt(message)

def decrypt(token: bytes, key: bytes):
    return Fernet(key).decrypt(token)

key = Fernet.generate_key() # store in a secure location
#ex) key is 'Fn1dPza4Gchl7KpPE4kz2oJEMFXYG39ykpSLcsT1icU='

message = 'This is secret string'

#encryption
enstr = encrypt(message.encode(), key)

#decryption
destr = decrypt(enstr, key).decode()

print('input:', message)
print('encryption:', enstr)
print('decryption:', destr)
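As a side note on "store in a secure location", here is a minimal sketch of one way to keep the key out of the source code by saving it to a file; the file name 'fernet.key' is just a placeholder:

from cryptography.fernet import Fernet

#generate the key once and write it to disk (restrict file permissions in real use)
key = Fernet.generate_key()
with open('fernet.key', 'wb') as f:
    f.write(key)

#later: load the same key back for encryption/decryption
with open('fernet.key', 'rb') as f:
    key = f.read()

print(Fernet(key).decrypt(Fernet(key).encrypt(b'hello')))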
8/21/2019
get similarity between two graphs
This example uses the networkX python library.
I made two very simple graphs, G1 and G2.
Let's see here:
The nx.graph_edit_distance function calculates how many edit operations are needed to turn one graph into a graph isomorphic to the other; that count is the function's return value.
Check the example code.
#https://stackoverflow.com/questions/11804730/networkx-add-node-with-specific-position
#https://stackoverflow.com/questions/23975773/how-to-compare-directed-graphs-in-networkx
import matplotlib.pyplot as plt
import networkx as nx
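#G1: simple undirected graph with 3 nodes and 2 edges (the pos attribute is only used for drawing)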
G1=nx.Graph()
G1.add_node(1,pos=(1,1))
G1.add_node(2,pos=(2,2))
G1.add_node(3,pos=(3,1))
G1.add_edge(1,2)
G1.add_edge(1,3)
pos=nx.get_node_attributes(G1,'pos')
plt.figure('graph1')
nx.draw(G1,pos, with_labels=True)
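#G2: simple undirected graph with 4 nodes and 3 edges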
G2=nx.Graph()
G2.add_node(1,pos=(10,10))
G2.add_node(2,pos=(20,20))
G2.add_node(3,pos=(30,10))
G2.add_node(4,pos=(40,30))
G2.add_edge(1,2)
G2.add_edge(1,3)
G2.add_edge(1,4)
pos2=nx.get_node_attributes(G2,'pos')
plt.figure('b')
nx.draw(G2,pos2, with_labels=True)
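#graph edit distance = minimum number of node/edge edit operations that make G1 and G2 isomorphic
#for these two graphs it should be 2.0 (add one node and one edge to G1)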
dist = nx.graph_edit_distance(G1, G2)
print(dist)
plt.show()
Labels: graph compare, graph similarity, networkX, Python
8/20/2019
compare text using fuzzy wuzzy in python
Just refer to this example; it's simple and very useful.
#pip install fuzzywuzzy
from fuzzywuzzy import process
candidate = ["Atlanta Falcons", "New York Jetss", "New York Giants", "Dallas Cowboys"]

search = "new york jets"
r1 = process.extract(search, candidate)
#r1 = process.extract(search, candidate, limit=3)

search = "cowboys"
r2 = process.extractOne(search, candidate)

search = "new york jets"
r3 = process.extractBests(search, candidate, score_cutoff=70)
print(r1)#[('New York Jetss', 96), ('New York Giants', 79), ('Atlanta Falcons', 29), ('Dallas Cowboys', 22)]
print(r2)#('Dallas Cowboys', 90)
print(r3)#[('Dallas Cowboys', 90)]
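The fuzz module can also score a single pair of strings directly; a minimal sketch (scores are integers from 0 to 100):

from fuzzywuzzy import fuzz

a = "new york jets"
b = "New York Jetss"

print(fuzz.ratio(a, b))            #plain similarity of the full strings
print(fuzz.partial_ratio(a, b))    #best matching substring
print(fuzz.token_sort_ratio(a, b)) #ignores word order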
Labels: compare string, compare work, fuzzy, fuzzy wuzzy, Python, string compare
8/08/2019
PIL to string, string to PIL (python)
Here is a simple example of source code for that:
PIL to string(base64)
- PIL open image
- image to byte
- byte to string (base64)
string(base64) to PIL
- string to byte
- PIL load byte
--
import base64
import io
from PIL import Image

#open file using PIL
pil_img = Image.open('IMG_0510.jpg')
width, height = pil_img.size
print(width, height)

#get image data as byte
buffer = io.BytesIO()
pil_img.save(buffer, format=pil_img.format)
buffer_value = buffer.getvalue()

#byte to string
base64_str = base64.b64encode(buffer_value)

#read string to image buffer
buffer2 = base64.b64decode(base64_str)
pil_img2 = Image.open(io.BytesIO(buffer2))
width2, height2 = pil_img2.size
print(width2, height2)

#check first & second image
pil_img.show()
pil_img2.show()
--
Here is another source code example for:
OpenCV -> PIL -> resize -> OpenCV
http://study.marearts.com/2019/06/opencv-pil-resize-opencv.html
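For reference, a minimal sketch of that round trip, assuming opencv-python, numpy, and Pillow are installed (the file name and resize target are just placeholders; see the linked post for the original code):

import cv2
import numpy as np
from PIL import Image

#OpenCV loads images as BGR numpy arrays
cv_img = cv2.imread('IMG_0510.jpg')

#OpenCV -> PIL: convert BGR to RGB, then wrap the array
pil_img = Image.fromarray(cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB))

#resize with PIL
pil_resized = pil_img.resize((320, 240))

#PIL -> OpenCV: back to a BGR numpy array
cv_img2 = cv2.cvtColor(np.array(pil_resized), cv2.COLOR_RGB2BGR)
print(cv_img2.shape)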
Labels: base64, base64image, PIL, pil image, pil to string, pillow, Python