TensorFlow Study Notes

Table of Contents

1 Get Started

1.1 Basic Concepts

  • Tensor: the data, an n-dimensional array
  • Graph: the computation model
  • ops (operations): nodes in a Graph; an op takes one or more Tensors as input
  • Session: the context in which a Graph is executed
  • Variable: a variable, used to maintain state or values across runs
  • Feed/Fetch: feeding data into and fetching data out of a graph (see the sketch below)
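A minimal sketch tying these concepts together, assuming the TensorFlow 1.x API used throughout these notes; the names (a, counter, update) are only illustrative:

import tensorflow as tf

# Build the Graph: two ops and a Variable.
a = tf.placeholder(tf.float32, name="a")      # value is fed in at run time (Feed)
counter = tf.Variable(0.0, name="counter")    # maintains state across runs
update = tf.assign_add(counter, a)            # an op that takes two Tensors

init = tf.global_variables_initializer()

# A Session is the context in which the Graph is executed.
with tf.Session() as sess:
    sess.run(init)
    # Feed data in via feed_dict, fetch the result back from sess.run.
    print(sess.run(update, feed_dict={a: 3.0}))   # 3.0
    print(sess.run(update, feed_dict={a: 2.0}))   # 5.0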

1.2 Notes

  • Within a Session, TensorFlow decides which ops run on which devices, which is how distributed execution is achieved
  • TensorFlow lets you pin ops to a specific CPU/GPU device
  • Every Tensor has a Type, a Rank and a Shape. Type is the element type (float, int, etc.). Rank and Shape are illustrated in the tables below (a sketch of device pinning and of inspecting rank/shape follows this list):

    Rank  Math entity                       Python example
    0     Scalar (magnitude only)           s = 483
    1     Vector (magnitude and direction)  v = [1.1, 2.2, 3.3]
    2     Matrix (table of numbers)         m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    3     3-Tensor (cube of numbers)        t = [[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]]
    n     n-Tensor (you get the idea)       ...

    Rank  Shape               Dimension number  Example
    0     []                  0-D               A 0-D tensor. A scalar.
    1     [D0]                1-D               A 1-D tensor with shape [10].
    2     [D0, D1]            2-D               A 2-D tensor with shape [3, 4].
    3     [D0, D1, D2]        3-D               A 3-D tensor with shape [1, 4, 3].
    n     [D0, D1, ... Dn-1]  n-D               A tensor with shape [D0, D1, ... Dn-1].
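A small sketch of both points, assuming the TF 1.x API; the device string "/cpu:0" is just an example (use "/gpu:0" if a GPU is available):

import tensorflow as tf

# Pin ops to a specific device.
with tf.device("/cpu:0"):
    m = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])                    # rank 2
    t = tf.constant([[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]])  # rank 3

with tf.Session() as sess:
    print(sess.run(tf.rank(m)), m.get_shape())   # 2, (3, 3)
    print(sess.run(tf.rank(t)), t.get_shape())   # 3, (3, 3, 1)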

1.3 Supplementary Material

1.3.1 The Rank of a Matrix and How to Compute It

Reference: https://wenku.baidu.com/view/7936452ced630b1c59eeb5da.html Note: the rank of a matrix in linear algebra is a different concept from a Tensor's rank. A matrix is a 2nd-order tensor, i.e. its Tensor rank is always 2, regardless of its matrix rank.
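A quick sketch of the distinction, assuming NumPy and the TF 1.x API:

import numpy as np
import tensorflow as tf

m = [[1.0, 2.0], [2.0, 4.0]]           # the two rows are linearly dependent

print(np.linalg.matrix_rank(m))         # 1 -> matrix rank (linear algebra)

with tf.Session() as sess:
    print(sess.run(tf.rank(tf.constant(m))))   # 2 -> tensor rank (number of dimensions)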

2 Tutorials

3 Examples

3.1 Linear Regression

import tensorflow as tf

# train: training set, a pd.DataFrame whose last column is the target
# tests: test set, a pd.DataFrame with the same layout
def build_lr(train, tests):
    # 960 is the number of feature columns in this particular dataset
    x = tf.placeholder(tf.float32, shape=[None, 960])
    yr = tf.placeholder(tf.float32, shape=[None, 1])

    xtrain = train.iloc[:, :-1].values.tolist()
    ytrain = train.iloc[:, -1].values.reshape(-1, 1).tolist()

    # Build a simple linear regression model: y = xW + b
    W = tf.Variable(tf.zeros([960, 1]))
    b = tf.Variable(tf.zeros([1]))
    y = tf.matmul(x, W) + b

    # Use MSE as the loss function
    loss = tf.reduce_mean(tf.square(y - yr))
    optimizer = tf.train.AdamOptimizer(0.001)
    ops = optimizer.minimize(loss)

    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        # Train the model
        sess.run(init)

        for step in range(200):
            sess.run(ops, feed_dict={x: xtrain, yr: ytrain})

            if step % 20 == 0:
                print(step, sess.run(loss, feed_dict={x: xtrain, yr: ytrain}))

        # Predict on the test set
        final = sess.run(y, feed_dict={x: tests.iloc[:, :-1].values})

    return final
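A hypothetical call, assuming two CSV files with 960 feature columns plus one target column; the file names are made up for illustration:

import pandas as pd

train = pd.read_csv("train.csv")
tests = pd.read_csv("tests.csv")

predictions = build_lr(train, tests)
print(predictions[:5])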

3.2 Choosing an Optimizer
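This section is still a stub. As a starting point, a hedged sketch on a toy problem, assuming the TF 1.x optimizer API; the learning rates are illustrative only, not a recommendation:

import tensorflow as tf

# A toy quadratic loss to compare optimizers on; the minimum is at w = 3.
w = tf.Variable(0.0)
loss = tf.square(w - 3.0)

# Swap one line to change the optimizer.
train_op = tf.train.AdamOptimizer(0.1).minimize(loss)
# train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        sess.run(train_op)
    print(sess.run(w))   # close to 3.0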

4 API

4.1 tf.nn.conv2d

tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Purpose: computes a 2-D convolution.

Parameters (a usage sketch follows the list):

  • input: a rank-4 tensor, shape: [batch, in_height, in_width, in_channels] (for the default NHWC data_format)
  • filter: a rank-4 tensor, shape: [filter_height, filter_width, in_channels, out_channels]
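A minimal usage sketch, assuming the TF 1.x API; the batch, image and filter sizes are arbitrary:

import numpy as np
import tensorflow as tf

# A batch of 2 images, 28x28, 1 channel (NHWC layout).
images = tf.constant(np.random.rand(2, 28, 28, 1), dtype=tf.float32)

# 16 filters of size 3x3 over 1 input channel.
filters = tf.Variable(tf.truncated_normal([3, 3, 1, 16], stddev=0.1))

# Stride 1 in every dimension; "SAME" zero-padding keeps the spatial size.
conv = tf.nn.conv2d(images, filters, strides=[1, 1, 1, 1], padding="SAME")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(conv).shape)   # (2, 28, 28, 16)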


Author: Marcnuth

Last Updated: 2017-05-24 Wed 22:15