8/24/2019

python string encryption, decryption - example code


from cryptography.fernet import Fernet

def encrypt(message: bytes, key: bytes):
    return Fernet(key).encrypt(message)

def decrypt(token: bytes, key: bytes):
    return Fernet(key).decrypt(token)

key = Fernet.generate_key()  # store in a secure location
#ex) key is 'Fn1dPza4Gchl7KpPE4kz2oJEMFXYG39ykpSLcsT1icU='

message = 'This is a secret string'
#encryption
enstr = encrypt(message.encode(), key)
#decryption
destr = decrypt(enstr, key).decode()

print('input:',  message)
print('encryption:', enstr)
print('decryption:', destr)
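
The generated key must be available again at decryption time. A minimal sketch of persisting it to a file (the file name 'fernet.key' is my own choice; in production prefer an environment variable or a dedicated secret store):

#save the key once
with open('fernet.key', 'wb') as f:
    f.write(key)

#load it back later
with open('fernet.key', 'rb') as f:
    key = f.read()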



8/21/2019

get similarity between two graphs

Basically, this example uses the networkX python library.
I made two very simple graphs, G1 and G2.

Let's see here:



The nx.graph_edit_distance function calculates how many edit operations are needed to make one graph isomorphic to the other; that count is the function's return value.

Check the example code.

..
#https://stackoverflow.com/questions/11804730/networkx-add-node-with-specific-position
#https://stackoverflow.com/questions/23975773/how-to-compare-directed-graphs-in-networkx

import matplotlib.pyplot as plt
import networkx as nx
G1=nx.Graph()
G1.add_node(1,pos=(1,1))
G1.add_node(2,pos=(2,2))
G1.add_node(3,pos=(3,1))
G1.add_edge(1,2)
G1.add_edge(1,3)

pos=nx.get_node_attributes(G1,'pos')
plt.figure('graph1')
nx.draw(G1,pos, with_labels=True)

G2=nx.Graph()
G2.add_node(1,pos=(10,10))
G2.add_node(2,pos=(20,20))
G2.add_node(3,pos=(30,10))
G2.add_node(4,pos=(40,30))
G2.add_edge(1,2)
G2.add_edge(1,3)
G2.add_edge(1,4)
pos2=nx.get_node_attributes(G2,'pos')
plt.figure('b')
nx.draw(G2,pos2, with_labels=True)

dist = nx.graph_edit_distance(G1, G2)
print(dist)

plt.show()
..

8/20/2019

compare text using fuzzy wuzzy in python

Just refer to this example; it's simple and very useful.

#pip install fuzzywuzzy
from fuzzywuzzy import process
candidate = ["Atlanta Falcons", "New York Jetss", "New York Giants", "Dallas Cowboys"]
search = "new york jets"
r1 = process.extract(search, candidate)
#r1 = process.extract(search, candidate, limit=3)
search = "cowboys"
r2 = process.extractOne(search, candidate)
search = "new york jets"
r3 = process.extractBests(search, candidate, score_cutoff=70)
print(r1)
#[('New York Jetss', 96), ('New York Giants', 79), ('Atlanta Falcons', 29), ('Dallas Cowboys', 22)]
print(r2)
#('Dallas Cowboys', 90)
print(r3)
#[('Dallas Cowboys', 90)]


8/08/2019

PIL to string, string to PIL (python)

Here is simple example source code for it:

PIL to string (base64)
- open image with PIL
- image to bytes
- bytes to string (base64)

string (base64) to PIL
- string to bytes
- PIL loads bytes

--
import base64
import io
from PIL import Image

#open file using PIL
pil_img = Image.open('IMG_0510.jpg')
width, height = pil_img.size
print(width, height)


#get image data as byte
buffer = io.BytesIO()
pil_img.save(buffer, format=pil_img.format)
buffer_value = buffer.getvalue()

#byte to string
base64_str = base64.b64encode(buffer_value)

#read string to image buffer
buffer2 = base64.b64decode(base64_str)
pil_img2 = Image.open(io.BytesIO(buffer2))
width2, height2 = pil_img2.size
print(width2, height2)


#check first & second image
pil_img.show()
pil_img2.show()
--
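
Note that base64.b64encode returns bytes, not str. If an actual str is needed (for example to put into JSON), decode it first:

#bytes -> str, and back; b64decode accepts either
base64_txt = base64_str.decode('ascii')
buffer3 = base64.b64decode(base64_txt)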

Here is another source code post for:
OpenCV -> PIL -> resize -> OpenCV

http://study.marearts.com/2019/06/opencv-pil-resize-opencv.html

7/30/2019

Simple example for CNN + MNIST + Keras, Tensorboard, save model, load model

Training Code CNN + MNIST

..

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D

"""Build CNN Model"""
num_classes = 10
input_shape = (28, 28, 1) #mnist, channels last format

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.summary()

"""Download MNIST Data"""
from keras.datasets import mnist
import numpy as np

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

#(60000, 28, 28) -> (60000, 28, 28, 1)
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))  # adapt this if using `channels_first` image data format

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

"""Show some images"""
import matplotlib.pyplot as plt

row = 10
col = 10
n = row * col
plt.figure(figsize=(4, 4))
for i in range(n):
    # display original
    #https://jakevdp.github.io/PythonDataScienceHandbook/04.08-multiple-subplots.html
    ax = plt.subplot(row, col, i+1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

"""set up tensorboard"""
from datetime import datetime
import os

logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
os.makedirs(logdir, exist_ok=True)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

"""Train model"""
from keras.callbacks import TensorBoard

batch_size = 128
epochs = 1
model.fit(x_train, y_train,
          epochs=epochs,
          batch_size=batch_size,
          shuffle=True,
          validation_data=(x_test, y_test),
          callbacks=[TensorBoard(log_dir=logdir)])

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

"""test one image data"""
x_test[0].shape
one_image = x_test[0].reshape(1, 28, 28, 1)
y_pred_all = model.predict(one_image)
y_pred_it = model.predict_classes(one_image)
print(y_pred_all, y_pred_it)
plt.imshow(x_test[0].reshape(28, 28))
plt.show()

"""save model to drive"""
model.save('my_cnn_mnist_model.h5')


..
CNN network Layout


Dataset


Run Tensorboard

>cd ./logs/scalars/20190730-105257
>tensorboard --logdir=./


almost 99% accuracy



Load Model and test one mnist image
...
"""load model from drive"""
from keras.models import load_model
new_model = load_model('my_cnn_mnist_model.h5')
"""load 1 image from drive"""
from PIL import Image
import numpy as np
"""test prediction"""
img_path = './mnist_7_450.jpg'
img = Image.open(img_path) #.convert("L")
#note: np.resize repeats/truncates data rather than resampling; it works here because the mnist jpg is already 28x28 grayscale
img = np.resize(img, (28, 28, 1))
im2arr = np.array(img)
im2arr = im2arr.reshape(1, 28, 28, 1)
y_pred = new_model.predict_classes(im2arr)
print(y_pred)

...

Test image


output
[7]


Download mnist jpeg files here: http://study.marearts.com/2015/09/mnist-image-data-jpg-files.html



7/01/2019

Check if string matches with regular expression pattern in python

Simple code for checking whether a string matches a certain pattern.

import re

file3 = 'keyvalue_reference_1.json'
pattern = re.compile(r"keyvalue_reference_[0-9]+\.json")
m = pattern.match(file3)  #don't name this 're'; that would shadow the re module

if m:
    print('matched')
else:
    print('non matched')


Thank you.

AWS S3, Get object list in Subfolder by python code using s3_client.list_objects function

This is my s3 folder structure





This is the code to get the file list in a certain subfolder.

import boto3

#get boto3 client instance
s3_client = boto3.client(
        's3',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
    )

#get object list
contents = s3_client.list_objects(Bucket='test-can-delete-anyone', Prefix='folder1/subfolder1')['Contents']
for obj in contents:
    print(obj['Key'])


result
folder1/subfolder1/
folder1/subfolder1/4_kitchen.jpg
folder1/subfolder1/5_bathroom.jpg
folder1/subfolder1/5_bedroom.jpg
folder1/subfolder1/5_frontal.jpg
folder1/subfolder1/5_kitchen.jpg
folder1/subfolder1/6_bathroom.jpg

another example
#get object list
contents = s3_client.list_objects(Bucket='test-can-delete-anyone', Prefix='folder1/')['Contents']
for obj in contents:
    print(obj['Key'])

result
folder1/
folder1/1_kitchen.jpg
folder1/2_bathroom.jpg
folder1/2_bedroom.jpg
folder1/2_frontal.jpg
folder1/2_kitchen.jpg
folder1/subfolder1/
folder1/subfolder1/4_kitchen.jpg
folder1/subfolder1/5_bathroom.jpg
folder1/subfolder1/5_bedroom.jpg
folder1/subfolder1/5_frontal.jpg
folder1/subfolder1/5_kitchen.jpg
folder1/subfolder1/6_bathroom.jpg
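
Note that list_objects returns at most 1,000 keys per call. For prefixes with more objects, a paginator handles the continuation for you; a minimal sketch reusing the bucket and prefix above:

#list every key under the prefix, regardless of how many there are
paginator = s3_client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='test-can-delete-anyone', Prefix='folder1/'):
    for obj in page.get('Contents', []):
        print(obj['Key'])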


AWS s3 bucket - check folder exist or not by python code

Check whether a certain folder exists in an s3 bucket with python code.


-
import boto3
import botocore

#create boto3 client instance
s3_client = boto3.client(
        's3',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
    )

#check folder exist
try:
    s3_client.get_object(Bucket='s3-bucket-name', Key='folder-name/')
    print('folder exist')
except botocore.exceptions.ClientError as e:
    print('no folder exist')
-

Thank you.


As a function, it looks like this:
def check_folder_exist(s3_client, bucket_name, folder_name):
    
    try:
        s3_client.get_object(Bucket=bucket_name, Key=folder_name)
        return True
    except botocore.exceptions.ClientError as e:
        return False
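
get_object only succeeds when a zero-byte "folder" key was explicitly created (as in the next post). If the folder exists only implicitly because objects live under that prefix, checking the prefix itself is more robust. A minimal sketch (check_prefix_exist is my own helper name):

def check_prefix_exist(s3_client, bucket_name, prefix):
    #True if at least one key starts with the prefix
    resp = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=prefix, MaxKeys=1)
    return resp.get('KeyCount', 0) > 0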

6/30/2019

AWS S3 bucket, folder creation in python code

Basically, an s3 bucket doesn't have a real folder concept.
But this code creates a "folder" by putting a key that ends with '/' and holds no data.


--
import boto3

#create boto3 client instance
s3_client = boto3.client(
        's3',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
    )

#create folder by key
s3_client.put_object(Bucket='s3-bucket-name', Key=('folder-name'+'/'))
--

Thank you.

6/10/2019

OpenCV -> PIL -> resize -> OpenCV

Some simple code for this processing:

1. Read the image with OpenCV
2. Convert from OpenCV to a PIL image
3. Do some processing using PIL, ex) resize
4. Convert from PIL back to OpenCV

Check this code.


import cv2
from PIL import Image
import numpy

#target resize
r_x = 100
r_y = 100


#read image using opencv
cv_img_o = cv2.imread('A.png')

#convert mat to pil
cv_img = cv2.cvtColor(cv_img_o, cv2.COLOR_BGR2RGB)
im_pil = Image.fromarray(cv_img)

#resize pil
im_pil = im_pil.resize((r_x,r_y), Image.ANTIALIAS)

#convert pil to mat
cv_img_r = numpy.array(im_pil)

# Convert RGB to BGR
cv_img_r = cv2.cvtColor(cv_img_r, cv2.COLOR_RGB2BGR)
#cv_img_r = cv_img_r[:, :, ::-1].copy()

cv2.namedWindow('origin',0)
cv2.imshow('origin', cv_img_o)

cv2.namedWindow('resize',0)
cv2.imshow('resize', cv_img_r)

cv2.waitKey(0)
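
For a plain resize, the PIL round trip isn't strictly necessary; OpenCV can resize directly. A minimal sketch (INTER_AREA is a reasonable choice when shrinking):

#resize directly in OpenCV
cv_img_r2 = cv2.resize(cv_img_o, (r_x, r_y), interpolation=cv2.INTER_AREA)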

4/22/2019

OpenCV Simple Background Subtraction Example code


Step #1 
Simple background subtraction




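A minimal sketch of this step, assuming OpenCV's MOG2 background subtractor and a video file named 'input.mp4' (both are assumptions on my side, not the original snippet):

import cv2

#create the background subtractor (MOG2 is one common choice)
backSub = cv2.createBackgroundSubtractorMOG2()

cap = cv2.VideoCapture('input.mp4')  #hypothetical input video
while True:
    ret, frame = cap.read()
    if not ret:
        break

    #foreground mask: moving pixels become bright, background stays dark
    fgMask = backSub.apply(frame)

    cv2.imshow('frame', frame)
    cv2.imshow('fg mask', fgMask)
    if cv2.waitKey(30) == 27:  #ESC to quit
        break

cap.release()
cv2.destroyAllWindows()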



Step #2
remove noise + binary

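A minimal sketch of the noise removal and binarization, continuing from the fgMask in the sketch above (threshold and kernel size are my own guesses):

import numpy as np

#threshold the mask to a clean binary image (MOG2 marks shadows with value 127)
_, binary = cv2.threshold(fgMask, 200, 255, cv2.THRESH_BINARY)

#morphological opening removes small speckle noise
kernel = np.ones((3, 3), np.uint8)
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)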

Step #3 & #4
Draw contour

Remove small area and draw rect



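A minimal sketch of finding contours, dropping small areas, and drawing rectangles, continuing the sketches above (the area threshold 500 is arbitrary):

#find external contours on the binary mask
#OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x returns an extra image first
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if cv2.contourArea(cnt) < 500:  #skip small blobs
        continue
    cv2.drawContours(frame, [cnt], -1, (0, 255, 0), 2)
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)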



Thank you
☕️

3/19/2019

Python hstack, vstack example code

import numpy as np

#hstack #1
a = np.array((1,2,3))
b = np.array((2,3,4))
print(np.hstack((a,b)))
>
 [1 2 3 2 3 4]

#hstack #2
a = np.array([[1],[2],[3]])
b = np.array([[2],[3],[4]])
print(np.hstack((a,b)))
>
 [[1 2]
 [2 3]
 [3 4]]

#vstack #1
a = np.array([1, 2, 3])
b = np.array([2, 3, 4])
print(np.vstack((a,b)))
>
 [[1 2 3]
 [2 3 4]]


#vstack #2
a = np.array([[1], [2], [3]])
b = np.array([[2], [3], [4]])
print(np.vstack((a,b)))
>
[[1]
 [2]
 [3]
 [2]
 [3]
 [4]]

3/07/2019

python number list, remove duplication and sort

coordiX = [0, 5016, 40, 5012, 40, 5012, 40, 5012, 40, 5012, 3169, 4970, 3169, 4970, 3169, 4970, 3169, 3537, 3586, 4355, 4395, 4970, 2632, 4616, 2632, 4616, 2632, 4616, 2632, 4616, 3651, 3659, 3651, 3659, 3651, 3659, 3651, 3659, 2630, 4614, 2630, 4614, 2630, 4614, 2630, 4614, 2632, 4616, 2632, 4616, 2632, 4616, 2632, 4616, 2630, 4614, 2630, 4614, 2630, 4614, 2630, 4614, 2632, 4614, 2632, 4614, 2632, 4614, 2632, 4614, 2630, 4614, 2630, 4614, 2630, 4614, 2630, 4614, 2630, 2640, 2630, 2640, 2630, 2640, 2630, 2640, 3652, 3660, 3652, 3660, 3652, 3660, 3652, 3660, 328, 4670, 328, 4670, 328, 4670, 328, 4670, 328, 4668, 328, 4668, 328, 4668, 328, 4668, 330, 4614, 330, 4614, 330, 2962, 330, 784, 808, 1206, 2694, 2962, 2692, 3868, 2692, 3404, 3428, 3868, 332, 866]

print("origin")
print(coordiX)
print("length:", len(coordiX))

print("remove duplication")
coordiX = list(set(coordiX))
print(coordiX)
print("length:", len(coordiX))

print("sort")
coordiX.sort()
print(coordiX)

output
origin
[0, 5016, 40, 5012, 40, 5012, 40, 5012, 40, 5012, 3169, 4970, 3169, 4970, 3169, 4970, 3169, 3537, 3586, 4355, 4395, 4970, 2632, 4616, 2632, 4616, 2632, 4616, 2632, 4616, 3651, 3659, 3651, 3659, 3651, 3659, 3651, 3659, 2630, 4614, 2630, 4614, 2630, 4614, 2630, 4614, 2632, 4616, 2632, 4616, 2632, 4616, 2632, 4616, 2630, 4614, 2630, 4614, 2630, 4614, 2630, 4614, 2632, 4614, 2632, 4614, 2632, 4614, 2632, 4614, 2630, 4614, 2630, 4614, 2630, 4614, 2630, 4614, 2630, 2640, 2630, 2640, 2630, 2640, 2630, 2640, 3652, 3660, 3652, 3660, 3652, 3660, 3652, 3660, 328, 4670, 328, 4670, 328, 4670, 328, 4670, 328, 4668, 328, 4668, 328, 4668, 328, 4668, 330, 4614, 330, 4614, 330, 2962, 330, 784, 808, 1206, 2694, 2962, 2692, 3868, 2692, 3404, 3428, 3868, 332, 866]
length: 130

remove duplication
[0, 3586, 4355, 2692, 4614, 2694, 4616, 784, 2962, 5012, 5016, 3868, 40, 808, 4395, 1206, 4668, 4670, 3651, 3652, 2630, 2632, 328, 330, 3659, 3660, 3404, 332, 2640, 3537, 3169, 866, 3428, 4970]
length: 34

sort
[0, 40, 328, 330, 332, 784, 808, 866, 1206, 2630, 2632, 2640, 2692, 2694, 2962, 3169, 3404, 3428, 3537, 3586, 3651, 3652, 3659, 3660, 3868, 4355, 4395, 4614, 4616, 4668, 4670, 4970, 5012, 5016]
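
The same result in one line, since sorted() accepts any iterable:

coordiX = sorted(set(coordiX))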

python 2d array and rows and cols

#make array
W = 2
H = 3


list2d = []
for i in range(0, H):
    w_list = []
    for j in range(0, W):
        w_list.append((i, j))
    list2d.append(w_list)


#print 2d array
print(list2d)


#get row and col
Height = Rows = len(list2d)
Width = Cols = len(list2d[0])


#check values
print( Rows, Cols)
print( Height, Width)

#print all elements
for i in range(0, Rows):
    for j in range(0, Cols):
        print(list2d[i][j])


output
[[(0, 0), (0, 1)], [(1, 0), (1, 1)], [(2, 0), (2, 1)]]
3 2
3 2
(0, 0)
(0, 1)
(1, 0)
(1, 1)
(2, 0)
(2, 1)
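
The same 2d list can also be built with a nested list comprehension:

list2d = [[(i, j) for j in range(W)] for i in range(H)]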

3/06/2019

python dic to json, json to txt file (example source code)

This article is about how to convert a dict to json.

The source code flow is like this:
Dict -> Json -> txt file 1
txt file 1 -> Json -> Dict -> Json -> txt file 2

So, consequently, txt file 1 and txt file 2 should be identical.

Then check source code.

#dictionary type
dic_type = {'dic_type': 'yes', 'json': 10}

#dic to json
import json
str_type = json.dumps(dic_type) #dic to json

#write json to file
f= open("./json.txt","w+")
f.write(str_type)
f.close

#dic from json
dic_type_from_json = json.loads(str_type)

#dic from file
f = open("./json.txt","r")
str_type_from_file = f.read()
f.close
dic_type_from_file = json.loads(str_type_from_file)

#check data type
print( type(str_type) )#Output str
print( type(dic_type_from_json) )#Output dict
print( type(dic_type_from_file) )#Output dict


#write json to file
f= open("./json2.txt","w+")
r = json.dumps(dic_type_from_file)
f.write(r)
f.close

#json.txt and json2.txt are the same
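
json can also write to and read from a file object directly, which skips the intermediate string (json3.txt is a hypothetical third file):

with open("./json3.txt", "w") as f:
    json.dump(dic_type, f)

with open("./json3.txt", "r") as f:
    dic_from_file = json.load(f)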

2/23/2019

base64 image to pil image and upload to s3 bucket on aws lambda

This article is about how to convert a base64 string image to bytes and load it into a PIL image.
Then we can handle the image however we want, ex) image resize ..

The base64 image comes from img_base64 = event['base64Image'],
which is then converted to byte data with imgdata = base64.b64decode(img_base64).

Then it can be saved to an image file and loaded with Image.open(filename),
or
loaded into a pil image directly: img2 = Image.open(io.BytesIO(imgdata))

The source code also includes an example of how to upload the image to an s3 bucket.
Refer to the code below.
//
import sys
sys.path.append("/opt")

import json
import boto3
import os
import io
from PIL import Image
import base64



ACCESS_KEY = os.environ.get('ACCESS_KEY')
SECRET_KEY = os.environ.get('SECRET_KEY')

def uploadToS3(bucket, s3_path, local_path):
    s3_client = boto3.client(
        's3',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
    )
    s3_client.upload_file(local_path, bucket, s3_path)
    
    
    
def lambda_handler(event, context):
    
    #base64 image string
    img_base64 = event['base64Image']
    #string to byte
    imgdata = base64.b64decode(img_base64)
    
    #save some image file
    with open("/tmp/imageToSave.png", "wb") as fh:
        fh.write(imgdata)
    
    #open("/tmp/imageToSave.png",'rb')
    uploadToS3("input-image1", "imageToSave.png", "/tmp/imageToSave.png")
    
    #load image file to pil
    img = Image.open("/tmp/imageToSave.png")
    width, height = img.size
    
    #load 
    img2 = Image.open(io.BytesIO(imgdata))
    width2, height2 = img2.size
    
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello leon it is from Lambda!'),
        'width file': width,
        'height file': height,
        'width pil': width2,
        'height pil': height2
    }

//

These methods are referenced from:
https://stackoverflow.com/questions/16214190/how-to-convert-base64-string-to-image
https://stackoverflow.com/questions/11727598/pil-image-open-working-for-some-images-but-not-others
https://stackoverflow.com/questions/6444548/how-do-i-get-the-picture-size-with-pil

Thanks for effort.




2/12/2019

python zip test sample code


numberList = [1, 2, 3, 4, 5]
strList = ['one', 'two', 'three', 'five']

# No iterables are passed
result = zip()
print('first zip: {}'.format(result))

# Two iterables are passed; zip stops at the shorter input
result = zip(numberList, strList)
print('input value to zip: {}'.format(result))

# Converting iterator to list (this consumes the iterator)
print('zip to list: {}'.format(list(result)))

# Converting iterator to set (re-create the zip first, since it was consumed above)
result = zip(numberList, strList)
resultSet = set(result)
print('zip to set: {}'.format(resultSet))



result


2/05/2019

How to install Node.js and npm on Amazon Linux (CentOS 7) for lambda


npm & node version check
> node --version
> npm --version


install or upgrade npm & node
Change setup_8.x, in case you want to install version 8.x
>curl -sL https://rpm.nodesource.com/setup_10.x | bash -
>yum install nodejs


If you already installed an old version of npm & node, remove it first.
>yum remove -y nodejs npm


I have installed node as 8.x.

Enjoy!


2/04/2019

Install python 3.6 on AmazonLinux and make Docker image


Run Amazon Linux container (mounting the current directory as /outputs)
> docker run -v $(pwd):/outputs --name lambdapack -d amazonlinux:latest tail -f /dev/null

Run container bash shell
> docker exec -i -t lambdapack /bin/bash
Install python 3.6
bash# yum -y update
bash# yum -y upgrade
bash# yum install -y \
 wget \
 gcc \
 gcc-c++ \
 findutils \
 zlib-devel \
 zip

bash# yum install -y yum-utils
bash# yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
bash# yum install -y python36.x86_64
bash# yum install -y python36-devel.x86_64
bash# curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
bash# python36 get-pip.py
bash# pip3 install virtualenv
bash# exit

Make docker Image
> docker commit lambdapack marearts/amazon_linux_py36:v1.0.0

Push Docker Image to Docker Hub
>docker login
..
>docker push marearts/amazon_linux_py36:v1.0.0


Done!
Take Care!




1/29/2019

How to install wget on macOS?

install by brew
>ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
>brew install wget --with-libressl

install by port
> sudo port install wget


Enjoy!





1/28/2019

tar (child): xz: Cannot exec: No such file or directory on Amazon Linux

yum install -y xz

docker command summary


stop all containers:
docker kill $(docker ps -q)

remove all containers
docker rm $(docker ps -a -q)

remove all docker images
docker rmi $(docker images -q)

access(enter) docker container
docker exec -it docker_container_name sh

Exit from docker container
>exit

Commit container and build image
docker commit container_name image_name
ex) docker commit lambdapack lambda_image
ex) docker commit lambdapack marearts/lambda_image:v1.0.0

Docker build
>docker build --tag hello:0.1 .
>docker build --tag marearts/hello:0.1 .

Docker run a detached container (mounting the current directory as /outputs)
>docker run -v $(pwd):/outputs --name nickname -d docker_image tail -f /dev/null
ex)
>docker run -v $(pwd):/outputs --name lambdapackgen2 -d marearts/awspy:0.1 tail -f /dev/null

Docker run shell

> docker run -it marearts/amazon_linux_py36:v1.0.0 /bin/bash

Docker run sh file in container
>docker exec -i -t container_name /bin/bash /outputs/shfile.sh
ex)
>docker exec -i -t lambdapackgen2 /bin/bash /outputs/buildPack_py3.sh

Restart Docker container
>docker restart <container id or name>
ex)
docker restart 59



it will be updated..


1/27/2019

Summary of Azure function docker image deployment



*make python virtualenv
*activate python virtualenv !!
*install azure util cli
https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image#run-the-build-command

make project (app)
func init DocApp100 --docker

change directory
cd DocApp100

new function
func new --name MyHttpTrigger --template "HttpTrigger"

build docker image
docker build --tag marearts/image:v1 .

make new resource group
az group create --name DocGroup --location westeurope

storage account
az storage account create --name docstorage100 --location westeurope --resource-group DocGroup --sku Standard_LRS

create linux service plan
az appservice plan create --name docserviceplan --resource-group DocGroup --sku B1 --is-linux

create app and deploy
az functionapp create --name docapp100 --storage-account docstorage100 --resource-group DocGroup --plan docserviceplan --deployment-container-image-name marearts/image:v1

Configure the function app
storageConnectionString=$(az storage account show-connection-string --resource-group DocGroup --name docstorage100 --query connectionString --output tsv)
az functionapp config appsettings set --name docapp100 --resource-group DocGroup --settings AzureWebJobsDashboard=$storageConnectionString AzureWebJobsStorage=$storageConnectionString

Test
curl https://docapp100.azurewebsites.net/api/MyHttpTrigger?name=myname