
4/29/2022

Simple example for EDA (Exploratory Data Analysis) using TensorFlow Data Validation

 

Refer to this page for more detail:
https://www.tensorflow.org/tfx/data_validation/get_started

..

!pip install tensorflow_data_validation

import tensorflow_data_validation as tfdv

# compute descriptive statistics over the TFRecord dataset at `path`
stats = tfdv.generate_statistics_from_tfrecord(data_location=path)

# render an interactive visualization of the statistics
tfdv.visualize_statistics(stats)

..
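
Beyond visualizing statistics, TFDV can also infer a schema and flag anomalies. Here is a minimal sketch, assuming `stats` was generated as above and `eval_stats` is a hypothetical second statistics object computed the same way from evaluation data:

# infer a schema (expected feature types / domains) from the statistics
schema = tfdv.infer_schema(statistics=stats)
tfdv.display_schema(schema)

# validate another dataset's statistics against that schema and show anomalies
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)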



Thank you.


7/24/2021

Check whether PyTorch and TensorFlow can use the GPU

 

Test whether TensorFlow can use the GPU

#method 1
import tensorflow as tf
tf.test.is_built_with_cuda()
> True

#method 2
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
> ..
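
In TensorFlow 2.x you can also check with tf.config; the output shown is an example, assuming one visible GPU:

# TF 2.x alternative
import tensorflow as tf
tf.config.list_physical_devices('GPU')
> [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]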

Test whether PyTorch can use the GPU

#method 3
import torch
torch.cuda.is_available()
>>> True

torch.cuda.current_device()
>>> 0

torch.cuda.device(0)
>>> <torch.cuda.device at 0x7efce0b03be0>

torch.cuda.device_count()
>>> 1

torch.cuda.get_device_name(0)
>>> 'GeForce GTX 950M'
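
Once you know a GPU is available, a common pattern is to select the device once and move tensors (or models) onto it. A minimal sketch:

import torch

# fall back to CPU automatically when no GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(2, 3).to(device)  # move a tensor to the selected device
print(x.device)  # e.g. cuda:0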



Thank you.
www.MareArts.com

5/28/2021

In Tensorflow, get the names of all the Tensors in a graph


To get all nodes in the graph: (type tensorflow.core.framework.node_def_pb2.NodeDef)

all_nodes = [n for n in tf.get_default_graph().as_graph_def().node]

To get all ops in the graph: (type tensorflow.python.framework.ops.Operation)

all_ops = tf.get_default_graph().get_operations()

To get all variables in the graph: (type tensorflow.python.ops.resource_variable_ops.ResourceVariable)

all_vars = tf.global_variables()

To get all tensors in the graph: (type tensorflow.python.framework.ops.Tensor)

all_tensors = [tensor for op in tf.get_default_graph().get_operations() for tensor in op.values()]

To get all placeholders in the graph: (type tensorflow.python.framework.ops.Tensor)

all_placeholders = [placeholder for op in tf.get_default_graph().get_operations() if op.type=='Placeholder' for placeholder in op.values()]

Tensorflow 2

To get the graph in Tensorflow 2, instead of tf.get_default_graph() you need to instantiate a tf.function first and access the graph attribute, for example:

graph = func.get_concrete_function().graph

where func is a tf.function
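
For example, a small sketch in TensorFlow 2; the function `add` below is just an illustration:

import tensorflow as tf

@tf.function
def add(a, b):
    return a + b

# trace the function to get a concrete function and its graph
graph = add.get_concrete_function(
    tf.TensorSpec(shape=None, dtype=tf.float32),
    tf.TensorSpec(shape=None, dtype=tf.float32)).graph

# list operation and tensor names in the traced graph
print([op.name for op in graph.get_operations()])
print([t.name for op in graph.get_operations() for t in op.outputs])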

5/15/2021

Managing CUDA versions for different TensorFlow versions

TensorFlow 1.15.0 or 1.14.0 may require CUDA 10.0 or 10.1,

while the latest TensorFlow 2.x versions use CUDA 11.0 or 11.1.

In this situation, you need to set the paths properly.

Refer to the commands below.




>sudo nano ~/.profile

# set PATH for cuda 10.0 installation
if [ -d "/usr/local/cuda-10.0/bin/" ]; then
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
fi

# set PATH for cuda 10.1 installation
if [ -d "/usr/local/cuda-10.1/bin/" ]; then
export PATH=/usr/local/cuda-10.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
fi

# set PATH for cuda 11.2 installation
if [ -d "/usr/local/cuda-11.2/bin/" ]; then
export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
fi


Reboot (or log out and back in) after saving.
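
To confirm which CUDA toolkit is actually being picked up afterwards, a quick sanity check (output depends on your installation):

>which nvcc
>nvcc --version
>echo $LD_LIBRARY_PATH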


Thank you.



7/30/2019

Simple example for CNN + MNIST + Keras, Tensorboard, save model, load model

Training Code CNN + MNIST

..

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D

"""Build CNN Model"""
num_classes = 10
input_shape = (28, 28, 1)  # mnist, channels-last format

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.summary()

"""Download MNIST Data"""
from keras.datasets import mnist
import numpy as np

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# (60000, 28, 28) -> (60000, 28, 28, 1)
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))  # adapt this if using `channels_first` image data format

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

"""Show some images"""
import matplotlib.pyplot as plt

row = 10
col = 10
n = row * col
plt.figure(figsize=(4, 4))
for i in range(n):
    # display original
    # https://jakevdp.github.io/PythonDataScienceHandbook/04.08-multiple-subplots.html
    ax = plt.subplot(row, col, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

"""set up tensorboard"""
from datetime import datetime
import os

logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
os.makedirs(logdir, exist_ok=True)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

"""Train model"""
from keras.callbacks import TensorBoard

batch_size = 128
epochs = 1
model.fit(x_train, y_train,
          epochs=epochs,
          batch_size=batch_size,
          shuffle=True,
          validation_data=(x_test, y_test),
          callbacks=[TensorBoard(log_dir=logdir)])

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

"""test one image data"""
x_test[0].shape
one_image = x_test[0].reshape(1, 28, 28, 1)
y_pred_all = model.predict(one_image)
y_pred_it = model.predict_classes(one_image)
print(y_pred_all, y_pred_it)
plt.imshow(x_test[0].reshape(28, 28))
plt.show()

"""save model to drive"""
model.save('my_cnn_mnist_model.h5')


..
CNN network Layout


Dataset


Run Tensorboard

>cd ./logs/scalars/20190730-105257
>tensorboard --logdir=./


almost 99% accuracy



Load Model and test one mnist image
...
"""load model from drive"""
from keras.models import load_model
new_model = load_model('my_cnn_mnist_model.h5')
"""load 1 image from drive"""
from PIL import Image
import numpy as np
"""test prediction"""
img_path = './mnist_7_450.jpg'
img = Image.open(img_path)  #.convert("L")
img = np.resize(img, (28, 28, 1))
im2arr = np.array(img)
im2arr = im2arr.reshape(1, 28, 28, 1)
y_pred = new_model.predict_classes(im2arr)
print(y_pred)

...

Test image


output
[7]


download minist jpeg file on here: http://study.marearts.com/2015/09/mnist-image-data-jpg-files.html



10/03/2018

has type str, but expected one of: bytes (tf.train.Example)

Convert str to bytes.

For example:
#String to bytes
my_str = "file name"
my_str_as_bytes = str.encode(my_str)
type(my_str_as_bytes) # ensure it is byte representation
#byte to string
my_decoded_str = my_str_as_bytes.decode()
type(my_decoded_str) # ensure it is string representation
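
In the tf.train.Example context, this usually comes up when building a BytesList feature. A minimal sketch; the feature name 'filename' is just for illustration:

import tensorflow as tf

my_str = "file name"

example = tf.train.Example(features=tf.train.Features(feature={
    # BytesList expects bytes, so encode the Python str first
    'filename': tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[my_str.encode('utf-8')]))
}))
print(example)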

3/02/2018

Tensorflow RNN LSTM weight save and restore example code

I struggled for a long time with saving and restoring LSTM parameters.
Today I finally succeeded, and I hope this example code helps someone.


The code below is an example that learns
input: hihell -> output: ihello

gist code start
This code is based on this example (https://github.com/MareArts/DeepLearningZeroToAll/blob/master/lab-12-1-hello-rnn.py).

gist code end
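
Since the gist is embedded externally, here is a minimal sketch of the general save/restore pattern it follows, using tf.train.Saver (TF 1.x); the checkpoint path './hello_rnn.ckpt' is just for illustration:

import tensorflow as tf

# ... build the LSTM graph here (cell, dynamic_rnn, fully_connected, loss, train op) ...

saver = tf.train.Saver()  # by default saves all variables in the graph

# training + saving
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run training steps ...
    save_path = saver.save(sess, './hello_rnn.ckpt')

# restoring + prediction (e.g. in a new process with the same graph built)
with tf.Session() as sess:
    saver.restore(sess, './hello_rnn.ckpt')  # variables now hold the trained values
    # ... run the prediction op; restored variables need no initializer call ...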

There are 4 trainable variables:
name rnn/basic_lstm_cell/weights:0, shape (10, 20)
name rnn/basic_lstm_cell/biases:0, shape (20,1)
name fully_connected/weights:0, shape (5, 5)
name fully_connected/biases:0, shape (5,1)

I have checked the variable values after "global_variables_initializer" and restore: they are the same, and the prediction results are also the same.

OK, then let's move on to a more complicated RNN design.
This example code is for a 2-layer LSTM with a batch of 2.
gist code start

gist code end



12/25/2017

tensorflow gpu install Windows error: self_check.py... ImportError: Could not find 'cudnn64_6.dll'...


Hmm... it took me about 5 hours to solve this problem.

The error looks like this:

Traceback (most recent call last):
  File "C:\Users\mare\Anaconda3\envs\mare4\lib\site-packages\tensorflow\python\platform\self_check.py", line 87, in preload_check
    ctypes.WinDLL(build_info.cudnn_dll_name)
  File "C:\Users\mare\Anaconda3\envs\mare4\lib\ctypes\__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\mare\Anaconda3\envs\mare4\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "C:\Users\mare\Anaconda3\envs\mare4\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Users\mare\Anaconda3\envs\mare4\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 30, in <module>
    self_check.preload_check()
  File "C:\Users\mare\Anaconda3\envs\mare4\lib\site-packages\tensorflow\python\platform\self_check.py", line 97, in preload_check
    % (build_info.cudnn_dll_name, build_info.cudnn_version_number))
ImportError: Could not find 'cudnn64_6.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Note that installing cuDNN is a separate step from installing CUDA, and this DLL is often found in a different directory from the CUDA DLLs. You may install the necessary DLL by downloading cuDNN 6 from this URL: https://developer.nvidia.com/cudnn

In conclusion, just use cuDNN 6.0, i.e. "Download cuDNN v6.0 (April 27, 2017), for CUDA 8.0".

I tried many combinations:
cuda 9.1 + cudnn 7.x
cuda 8.0 + cudnn 7.x
...
but I never succeeded.

The TensorFlow official site recommends
cuda 8.0 + cudnn 6.1
https://www.tensorflow.org/install/install_windows
I had just ignored this recommendation because I thought the document was outdated.



This is a simple tutorial for installing tensorflow-gpu on Windows.

1. install anaconda.

conda create -n tensorflow python=3.5 

2. install cuda 8.0 and cudnn 6.x

Move the cuDNN header, lib, and dll into the CUDA 8.0 folder:
http://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installwindows

3. install tensorflow-gpu

pip install --ignore-installed --upgrade tensorflow-gpu 

4. check tensorflow-gpu

>>> import tensorflow as tf
>>> tf.test.is_built_with_cuda()
True
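
If that returns True, you can also list the devices TensorFlow actually sees to confirm the GPU shows up; the exact output depends on your machine:

>>> from tensorflow.python.client import device_lib
>>> device_lib.list_local_devices()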