Monday, 26 November 2018

random_tensorFlow_blogPost

### Own - Conda venv --- dc_info_venv
### main Source --- https://www.tensorflow.org/guide/

# 
import tensorflow as tf
#from tf.keras import layers ### Fails -- `tf` is only a runtime alias, not an importable package (and we are on TF 1.5.0)

import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
from tensorflow.python.framework import ops
#from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

%matplotlib inline
np.random.seed(1)
#
print(tf.VERSION)
print(tf.keras.__version__)
import keras
print('Keras: {}'.format(keras.__version__))
1.5.0
2.1.2-tf
Keras: 2.2.4
Using TensorFlow backend.
In [2]:
# Earlier cells created constants;
# now we create placeholders.

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = tf.sqrt(tf.add(tf.square(a), tf.square(b)))

print(a, b, c)

sess = tf.Session()
print(*sess.run([a, b, c], feed_dict={a: 4., b: 3.}))
Tensor("Placeholder:0", dtype=float32) Tensor("Placeholder_1:0", dtype=float32) Tensor("Sqrt:0", dtype=float32)
4.0 3.0 5.0
In [4]:
print(*sess.run([a, b, c], feed_dict={a: 18., b: 4.}))
18.0 4.0 18.439089
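A quick aside (not in the original post): because the placeholders above were created without a fixed shape, the same graph also accepts lists or numpy arrays, and the whole computation runs elementwise. A minimal sketch, reusing the session from above:

In [ ]:
## Sketch: feed whole vectors into the shapeless placeholders;
## sqrt(a^2 + b^2) is then computed elementwise.
print(sess.run(c, feed_dict={a: [3., 5.], b: [4., 12.]}))
## expected: [ 5. 13.]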
In [3]:
## Create a VARIABLE == count_variable

count_variable = tf.get_variable("count", [])
zero_node = tf.constant(0.)
assign_node = tf.assign(count_variable, zero_node)
sess = tf.Session()
sess.run(assign_node)
print(sess.run(count_variable))
#
0.0
In [ ]:
"""
When a variable node is first created, it basically stores “null”, and any attempt to evaluate it
results in an exception (see the sketch below).

We can only evaluate a variable after first putting a value into it.
There are two main ways to put a value into a variable: initializers and tf.assign().
"""
In [ ]:
"""
tf.assign(target, value) is a node that has some unique properties compared to nodes we’ve seen so far:

    Identity operation. tf.assign(target, value) does no interesting computation of its own;
    its output is always just equal to value.
    
    Side effects. When computation “flows” through assign_node, side effects happen to other 
    things in the graph. 
    In this case, the side effect is to replace the value of count_variable with the value stored in zero_node.

"""

"""
Non-dependent edges. Even though the count_variable node and the assign_node are connected in the graph, 
neither is dependent on the other. This means computation will not flow back through that edge when
evaluating either node. 
However, assign_node is dependent on zero_node; it needs to know what to assign.

When we call sess.run(assign_node), the computation path goes through assign_node and zero_node.

"""

"""
As computation flows through any node in the graph, it also enacts any side effects controlled by
that node (the diagrams in the original post show these in green).

Due to the particular side effects of tf.assign, the memory associated with count_variable 
(which was previously “null”) is now permanently set to equal 0. 
This means that when we next call sess.run(count_variable), 
we don’t throw any exceptions. Instead, we get a value of 0. Success!
"""
In [8]:
### Initializers ---

const_init_node = tf.constant_initializer(0.)
count_variable = tf.get_variable("count", [], initializer=const_init_node) #
## above: initializer is an argument to tf.get_variable, and here it is set to const_init_node.
## This creates a connection in the graph between the two nodes,
## but the session knows nothing about it yet -- the session and the graph are separate,
## and a graph-side connection alone makes nothing happen at run time.
## We tell the session what to run via the line: init = tf.global_variables_initializer()
sess = tf.Session()
print(sess.run([count_variable]))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-a8f4e21b064e> in <module>
      2 
      3 const_init_node = tf.constant_initializer(0.)
----> 4 count_variable = tf.get_variable("count", [], initializer=const_init_node)
      5 sess = tf.Session()
      6 print(sess.run([count_variable]))

~/anaconda2/envs/dc_info_venv/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter, constraint)
   1260       partitioner=partitioner, validate_shape=validate_shape,
   1261       use_resource=use_resource, custom_getter=custom_getter,
-> 1262       constraint=constraint)
   1263 get_variable_or_local_docstring = (
   1264     """%s

~/anaconda2/envs/dc_info_venv/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter, constraint)
   1095           partitioner=partitioner, validate_shape=validate_shape,
   1096           use_resource=use_resource, custom_getter=custom_getter,
-> 1097           constraint=constraint)
   1098 
   1099   def _get_partitioned_variable(self,

~/anaconda2/envs/dc_info_venv/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter, constraint)
    433           caching_device=caching_device, partitioner=partitioner,
    434           validate_shape=validate_shape, use_resource=use_resource,
--> 435           constraint=constraint)
    436 
    437   def _get_partitioned_variable(

~/anaconda2/envs/dc_info_venv/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, constraint)
    402           trainable=trainable, collections=collections,
    403           caching_device=caching_device, validate_shape=validate_shape,
--> 404           use_resource=use_resource, constraint=constraint)
    405 
    406     if custom_getter is not None:

~/anaconda2/envs/dc_info_venv/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource, constraint)
    741                          "reuse=tf.AUTO_REUSE in VarScope? "
    742                          "Originally defined at:\n\n%s" % (
--> 743                              name, "".join(traceback.format_list(tb))))
    744       found_var = self._vars[name]
    745       if not shape.is_compatible_with(found_var.get_shape()):

ValueError: Variable count already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:

  File "<ipython-input-7-6122986dbb8d>", line 3, in <module>
    count_variable = tf.get_variable("count", [])
  File "/home/dhankar/anaconda2/envs/dc_info_venv/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "/home/dhankar/anaconda2/envs/dc_info_venv/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3185, in run_ast_nodes
    if (yield from self.run_code(code, result)):
In [3]:
const_init_node = tf.constant_initializer(0.)
count_variable = tf.get_variable("count", [], initializer=const_init_node)
#count_variable = tf.get_variable("count", [], initializer=const_init_node, reuse=True)
## reuse=True won't work here.
## After restarting the notebook, the code below works: the name "count", already taken
## by the cell above, is free again in the new, empty graph.

init = tf.global_variables_initializer() # Another node with side-effects...
## https://www.tensorflow.org/api_docs/python/tf/initializers/global_variables
## RETURNS -- an Op that initializes the global variables in the graph.
## global_variables_initializer inspects the global graph and adds a dependency on every
## variable initializer it finds; running this one node therefore initializes all the variables.

sess = tf.Session()
sess.run(init)
print(sess.run([count_variable]))
[0.0]
In [ ]:
## Variable Sharing --- Source -- https://jacobbuckman.com/post/tensorflow-the-confusing-parts-1/#fnref:1
#
"""
You may encounter Tensorflow code with variable sharing, which involves creating a scope 
and setting “reuse=True”. 

I strongly recommend that you don’t use this in your own code.
If you want to use a single variable in multiple places, simply keep track of your pointer to that
variable’s node programmatically, and re-use it when you need to (see the sketch below).
In other words, you should have only a single call to tf.get_variable() for each parameter you
intend to store in memory.
"""
In [ ]:
## Optimizers -- Source -- https://jacobbuckman.com/post/tensorflow-the-confusing-parts-1/#fnref:1

"""
At last: on to the actual deep learning! If you’re still with me, the remaining concepts should be extremely straightforward.

In deep learning, the typical “inner loop” of training is as follows:

    Get an input and true_output
    Compute a “guess” based on the input and your parameters
    Compute a “loss” based on the difference between your guess and the true_output
    Update the parameters according to the gradient of the loss

"""
In [4]:
### build the graph
## first set up the parameters
m = tf.get_variable("m", [], initializer=tf.constant_initializer(0.))
b = tf.get_variable("b", [], initializer=tf.constant_initializer(0.))
init = tf.global_variables_initializer()

## then set up the computations
input_placeholder = tf.placeholder(tf.float32)
output_placeholder = tf.placeholder(tf.float32)

x = input_placeholder
y = output_placeholder
y_guess = m * x + b

loss = tf.square(y - y_guess)
### NOTE --- Don't re-run this cell in the same session; restart the notebook (or reset the graph) first, or you'll get:
"""
ValueError: Variable m already exists, disallowed. 
Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:
"""
Out[4]:
'\nValueError: Variable m already exists, disallowed. \nDid you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:\n'
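An aside, assuming nothing else in the default graph needs to survive: instead of restarting the whole notebook, the default graph can be wiped so the names "m" and "b" are free again. Note this discards every node defined so far, so the graph-building cell must then be re-run before continuing.

In [ ]:
## Sketch: clear the default graph to avoid the "Variable m already exists"
## ValueError on a re-run; everything defined earlier must be re-created afterwards.
tf.reset_default_graph()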
In [5]:
## finally, set up the optimizer and minimization node
optimizer = tf.train.GradientDescentOptimizer(1e-3)
train_op = optimizer.minimize(loss)

### start the session
sess = tf.Session()
sess.run(init)

### perform the training loop
import random

## set up problem
true_m = random.random()
true_b = random.random()
In [ ]:
for update_i in range(100000):
    #
    
    ## (1) get the input and output
    input_data = random.random()
    output_data = true_m * input_data + true_b

    ## (2), (3), and (4) all take place within a single call to sess.run()!
    _loss, _ = sess.run([loss, train_op], feed_dict={input_placeholder: input_data, output_placeholder: output_data})
    #print(update_i, _loss) ## commented out: printing 100000 lines floods the notebook (the log below is from a run with this print enabled)

### finally, print out the values we learned for our two variables
## (the earlier commented-out versions had a stray comma before the % operator -- a SyntaxError -- fixed here)
print("True parameters:     m=%.4f, b=%.4f" % (true_m, true_b))
print("Learned parameters:  m=%.4f, b=%.4f" % tuple(sess.run([m, b])))
#
0 0.8164941
1 1.1643778
2 0.8676618
3 1.1011628
4 1.3437326
...
298 0.27012524
299 0.20788158
300 0.26020604
(per-step loss log truncated; over these first 300 updates the loss drifts down from roughly 0.8-1.3 to roughly 0.2-0.3)