# TF2.0 Warm-up exercises (forked from @chipHuyen Repo)

I first heard of Ms Huyen Chip for her notable, if controversial, travel books back in the day. I enjoy reading, but I am not really into travel memoirs. Nevertheless, she surprised everyone with what she went on to achieve: getting into Stanford, teaching TensorFlow, and becoming a computer/data scientist. Her story is definitely inspiring.

For those who don’t know Ms Huyen, I have added an interview video at the end of this post. Now, let’s have some fun with TF 2.x exercises extracted from her public repo (originally written for TF 1.x). See this article for a quick comparison.

### The TensorFlow exercises

#### Ex 0:

Enable TF 2.x (the `%tensorflow_version` magic applies in Google Colab) and print the version.

```python
from __future__ import absolute_import, division, print_function, unicode_literals
%tensorflow_version 2.x
import os
import tensorflow as tf
print(tf.version.VERSION)
```

#### Ex 1a:

Create two random 0-D tensors x and y of any distribution. Create a TensorFlow object that returns `x + y` if `x > y`, and `x - y` otherwise.
Hint: lookup `tf.cond()`

```python
x = tf.random.uniform([])
y = tf.random.uniform([])
out = tf.cond(tf.greater(x, y), lambda: x + y, lambda: x - y)
print(out)
# tf.Tensor(1.0241972, shape=(), dtype=float32)
```
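As a sanity check, the same `tf.cond()` logic can be wrapped in a `tf.function` (the helper name `add_or_sub` is my own) and exercised with known inputs, where the expected branch is obvious:

```python
import tensorflow as tf

@tf.function  # traced into a graph; tf.cond expresses the data-dependent branch
def add_or_sub(x, y):
    return tf.cond(tf.greater(x, y), lambda: x + y, lambda: x - y)

print(add_or_sub(tf.constant(2.0), tf.constant(1.0)).numpy())  # 3.0 (x > y, so x + y)
print(add_or_sub(tf.constant(1.0), tf.constant(2.0)).numpy())  # -1.0 (x <= y, so x - y)
```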

#### Ex 1b:

Create two 0-D tensors x and y randomly selected from the range [-1, 1).
Return `x + y` if `x < y`, `x - y` if `x > y`, and 0 otherwise.
Hint: Look up `tf.case()`.

```python
x = tf.random.uniform([], minval=-1, maxval=1)
y = tf.random.uniform([], minval=-1, maxval=1)
# Every branch of tf.case(), including the default, must be a callable.
out = tf.case([(tf.less(x, y), lambda: x + y),
               (tf.greater(x, y), lambda: x - y)],
              default=lambda: tf.constant(0.))
print(out)
# tf.Tensor(-0.26057887, shape=(), dtype=float32)
```

#### Ex 1c:

Create the tensor x of the value `[[0, -2, -1], [0, 1, 2]]` and y as a tensor of zeros with the same shape as x. Return a boolean tensor that is True wherever x equals y element-wise.
Hint: Look up `tf.equal()`.

```python
x = tf.constant([[0, -2, -1], [0, 1, 2]])
y = tf.zeros(x.shape, dtype='int32')
out = tf.equal(x, y)
print(out.numpy())  # .numpy is a method, so it must be called
# [[ True False False]
#  [ True False False]]
```
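If you want to collapse the element-wise comparison into a single scalar boolean ("are the tensors identical everywhere?"), `tf.reduce_all()` can be applied on top. A minimal sketch:

```python
import tensorflow as tf

x = tf.constant([[0, -2, -1], [0, 1, 2]])
y = tf.zeros_like(x)  # zeros with x's shape and dtype

# Element-wise comparison, then reduce to one scalar boolean.
all_equal = tf.reduce_all(tf.equal(x, y))
print(all_equal.numpy())  # False: x and y differ in four positions
```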

#### Ex 1d:

Create the tensor x of value

```
[29.05088806, 27.61298943, 31.19073486, 29.35532951,
 30.97266006, 26.67541885, 38.08450317, 20.74983215,
 34.94445419, 34.45999146, 29.06485367, 36.01657104,
 27.88236427, 20.56035233, 30.20379066, 29.51215172,
 33.71149445, 28.59134293, 36.05556488, 28.66994858]
```

Get the indices of elements in x whose values are greater than 30.
Hint: Use tf.where().
Then extract elements whose values are greater than 30.
Hint: Use tf.gather(): Gather slices from params axis according to indices.

```python
x = tf.constant([29.05088806, 27.61298943, 31.19073486, 29.35532951,
                 30.97266006, 26.67541885, 38.08450317, 20.74983215,
                 34.94445419, 34.45999146, 29.06485367, 36.01657104,
                 27.88236427, 20.56035233, 30.20379066, 29.51215172,
                 33.71149445, 28.59134293, 36.05556488, 28.66994858])
condition = tf.greater(x, 30.)
indices = tf.where(condition)
print(tf.reshape(indices, [-1]))
# tf.Tensor([ 2  4  6  8  9 11 14 16 18], shape=(9,), dtype=int64)
print(tf.gather(x, indices))
# tf.Tensor(
# [[31.190735]
#  [30.97266 ]
#  [38.084503]
#  [34.944454]
#  [34.45999 ]
#  [36.01657 ]
#  [30.20379 ]
#  [33.711494]
#  [36.055565]], shape=(9, 1), dtype=float32)
```
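Note that `tf.gather()` with the 2-D indices from `tf.where()` returns a column vector of shape (9, 1). If you prefer a flat 1-D result, `tf.boolean_mask()` applies the condition directly; a minimal sketch on a shortened version of the data:

```python
import tensorflow as tf

x = tf.constant([29.05088806, 27.61298943, 31.19073486, 29.35532951,
                 30.97266006, 26.67541885, 38.08450317, 20.74983215])
# Keep only the elements where the condition holds; result is 1-D.
flat = tf.boolean_mask(x, tf.greater(x, 30.))
print(flat)  # the three values above 30, as a shape-(3,) tensor
```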

#### Ex 1e:

Create a diagonal `2-d` tensor of size `6 x 6` with the diagonal values of `1, 2, ..., 6`
Hint: Use `tf.range()` and `tf.linalg.diag()`.

```python
a = tf.linalg.diag(tf.range(1, 7))
print(a)
# tf.Tensor(
# [[1 0 0 0 0 0]
#  [0 2 0 0 0 0]
#  [0 0 3 0 0 0]
#  [0 0 0 4 0 0]
#  [0 0 0 0 5 0]
#  [0 0 0 0 0 6]], shape=(6, 6), dtype=int32)
```

#### Ex 1f:

Create a random 2-D tensor of size 10 x 10 from any distribution and calculate its determinant.
Hint: Look at `tf.linalg.det()`, which computes the determinant of one or more square matrices.

```python
a = tf.random.normal([10, 10])
detA = tf.linalg.det(a)
print(detA)
# tf.Tensor(1669.2573, shape=(), dtype=float32)
```
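For larger or ill-conditioned matrices the determinant can overflow or underflow in float32; `tf.linalg.slogdet()` returns the sign and the log of the absolute determinant instead, which is a more stable representation. A minimal sketch (the seed is arbitrary):

```python
import tensorflow as tf

tf.random.set_seed(0)
a = tf.random.normal([10, 10])

# sign in {-1, +1}; logabsdet = log|det(a)|
sign, logabsdet = tf.linalg.slogdet(a)

# sign * exp(logabsdet) reconstructs det(a) when it does not overflow.
recovered = sign * tf.exp(logabsdet)
print(recovered.numpy(), tf.linalg.det(a).numpy())
```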

#### Ex 1g:

Create tensor x with value `[5, 2, 3, 5, 10, 6, 2, 3, 4, 2, 1, 1, 0, 9]`.
Return the unique elements in x
Hint: use `tf.unique()`. Keep in mind that `tf.unique()` returns a tuple.

```python
x = tf.constant([5, 2, 3, 5, 10, 6, 2, 3, 4, 2, 1, 1, 0, 9])
uniques, idx = tf.unique(x)
print(uniques)
# tf.Tensor([ 5  2  3 10  6  4  1  0  9], shape=(9,), dtype=int32)
```
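The second element of the tuple returned by `tf.unique()` maps each original position back into `uniques`, so the input can be reconstructed with `tf.gather()`. A minimal sketch:

```python
import tensorflow as tf

x = tf.constant([5, 2, 3, 5, 10, 6, 2, 3, 4, 2, 1, 1, 0, 9])
uniques, idx = tf.unique(x)

# idx[i] is the position of x[i] inside uniques, so gathering inverts tf.unique.
reconstructed = tf.gather(uniques, idx)
print(tf.reduce_all(tf.equal(x, reconstructed)).numpy())  # True
```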

#### Ex 1h:

Create two tensors x and y of shape 300 from any normal distribution, as long as they are from the same distribution. Use `tf.cond()` to return:

• The mean squared error of (x – y) if the average of all elements in (x – y) is negative, or
• The sum of the absolute value of all elements in the tensor (x – y) otherwise.

Hint: One application is the Huber loss function

```python
x = tf.random.normal([300], seed=101)
y = tf.random.normal([300], seed=102)
diff = x - y
# tf.cond() branches on a scalar boolean: here, whether the mean of (x - y) is negative.
out = tf.cond(tf.reduce_mean(diff) < 0,
              lambda: tf.reduce_mean(tf.square(diff)),  # mean squared error
              lambda: tf.reduce_sum(tf.abs(diff)))      # sum of absolute values
print(out)
```
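Since the hint points at the Huber loss, here is a minimal sketch of it (the threshold `delta` is an assumed parameter, set to 1.0 here). Unlike the exercise, which picks one branch globally with `tf.cond()`, the Huber loss blends the quadratic and linear branches element-wise with `tf.where()`:

```python
import tensorflow as tf

def huber(residual, delta=1.0):
    # Quadratic penalty for small residuals, linear penalty for large ones.
    abs_r = tf.abs(residual)
    quadratic = 0.5 * tf.square(residual)
    linear = delta * abs_r - 0.5 * delta ** 2
    return tf.where(abs_r <= delta, quadratic, linear)

r = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
print(huber(r).numpy())  # values [1.5, 0.125, 0., 0.125, 1.5]
```

The two pieces meet with matching value and slope at `|residual| = delta`, which is what makes the loss robust to outliers yet smooth near zero.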

### The interview

*(The interview video was embedded here in the original post.)*