Operations
See the official TensorFlow documentation for a complete description of these operations.
Basic operations
placeholder
constant
concat
stack
split
expand_dims
argmin
argmax
add_n
one_hot
random_uniform
random_normal
Variables
Variable
global_variables_initializer
variable_scope
get_variable
ConstantInitializer
assign
assign_add
assign_sub
scatter_update
Reductions
reduce_sum
reduce_prod
reduce_min
reduce_max
reduce_all
reduce_any
reduce_mean
Comparisons
equal
not_equal
less
less_equal
greater
greater_equal
select
where
Images
image.decode_jpeg
image.encode_jpeg
image.decode_png
image.encode_png
image.resize_images
image.central_crop
Neural networks
Convolutions
TensorFlow.Ops.conv2d — Function. conv2d(input, filter; use_cudnn_on_gpu=true, data_format=NHWC, dilations=[1, 1, 1, 1])
TensorFlow.Ops.max_pool — Function. max_pool(input; data_format=NHWC)
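A minimal usage sketch, assuming the generated ops also accept strides, padding, and ksize keyword arguments (standard attributes of the underlying TensorFlow ops, not shown in the abbreviated signatures above):

```julia
using TensorFlow

sess = Session(Graph())

# One 28x28 grayscale image in NHWC layout.
x = placeholder(Float32, shape=[1, 28, 28, 1])
# 5x5 filters mapping 1 input channel to 32 output channels (random values for illustration).
w = constant(randn(Float32, 5, 5, 1, 32))

# strides/padding/ksize are assumptions based on the underlying TensorFlow op attributes.
conv = TensorFlow.Ops.conv2d(x, w; strides=[1, 1, 1, 1], padding="SAME")
pool = TensorFlow.Ops.max_pool(conv; ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

# Feed a random image and evaluate the pooled feature map.
run(sess, pool, Dict(x => randn(Float32, 1, 28, 28, 1)))
```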
Embeddings
TensorFlow.nn.embedding_lookup — Function. embedding_lookup(params, ids; partition_strategy="mod", name="", validate_indices=true)
Looks up values of ids in params. Currently only supports one Tensor in params.
Args:
params: A list of Tensors of the same type which can be concatenated along their first dimension.
ids: A Tensor of Int32 or Int64 ids to be looked up in params.
partition_strategy: If "mod" (default), assign each id to partition p = id % len(params). If "div", assign each id contiguously.
name: An optional name for the operation.
validate_indices: If true (default), make sure the indices are valid.
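A minimal sketch of a lookup, assuming ids index rows of params as in the underlying TensorFlow op (the embedding matrix and id values below are illustrative):

```julia
using TensorFlow

sess = Session(Graph())

# A 100-row embedding matrix with 16-dimensional rows.
embeddings = Variable(randn(Float32, 100, 16))
# Hypothetical ids to look up.
ids = constant(Int64[1, 5, 7])

lookup = nn.embedding_lookup(embeddings, ids)

run(sess, global_variables_initializer())
vectors = run(sess, lookup)   # a 3x16 matrix, one embedding row per id
```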
Recurrent neural nets
nn.rnn
nn.dynamic_rnn
nn.zero_state
nn.output_size
Nonlinearities
nn.relu
nn.relu6
nn.elu
nn.softplus
nn.softsign
nn.softmax
nn.sigmoid
nn.tanh
Losses
TensorFlow.nn.l2_loss — Function. l2_loss(t)
Computes half the L2-norm of a Tensor t, without taking the square root.
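For example, since l2_loss(t) is sum(t.^2) / 2:

```julia
using TensorFlow

sess = Session(Graph())
t = constant([3.0, 4.0])
run(sess, nn.l2_loss(t))   # (3.0^2 + 4.0^2) / 2 = 12.5
```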
Regularizations
TensorFlow.nn.dropout — Function. dropout(x, keep_prob; noise_shape=nothing, seed=0)
Keeps each element of x with probability keep_prob, scaling the kept elements by 1/keep_prob, and outputs 0 otherwise. This computes dropout.
Args:
x: A Tensor.
keep_prob: Probability that each element is kept.
noise_shape: Shape for randomly generated keep/drop flags.
seed: Integer used to seed the RNG. Defaults to 0.
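A minimal sketch: with keep_prob = 0.5 each element is either zeroed or scaled by 1/0.5 = 2, so the expected value of the input is preserved:

```julia
using TensorFlow

sess = Session(Graph())
x = constant(ones(4, 4))
dropped = nn.dropout(x, 0.5)
run(sess, dropped)   # each entry is either 0.0 or 2.0
```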
Evaluations
TensorFlow.nn.top_k — Function. top_k(input, k=1; sorted=true)
Finds values and indices of the top k largest entries of input in its last dimension.
Args:
input: One or more dimensional Tensor with last dimension at least size k.
k: Number of largest elements of input to look for. Defaults to 1.
sorted: If true (default), the returned values will be sorted in descending order.
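A minimal sketch, assuming the function returns a (values, indices) pair as in the underlying TensorFlow op (the index base of the returned indices is not specified here):

```julia
using TensorFlow

sess = Session(Graph())
scores = constant([0.1, 0.7, 0.2, 0.9])
values, indices = nn.top_k(scores, 2)
run(sess, values)    # [0.9, 0.7]
run(sess, indices)   # positions of the two largest entries
```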
TensorFlow.nn.in_top_k — Function. in_top_k(predictions, targets, k)
Determines whether the targets are in the top k predictions. Outputs a batch_size (first dimension of predictions) array of boolean values.
Args:
predictions: Two dimensional Tensor.
targets: A Tensor.
k: Number of elements to look at for comparison.
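A minimal sketch; the target ids below are illustrative and the index base used for targets is an assumption:

```julia
using TensorFlow

sess = Session(Graph())
predictions = constant(Float32[0.1 0.7 0.2;
                               0.8 0.1 0.1])
targets = constant(Int32[2, 1])   # hypothetical target class per row
run(sess, nn.in_top_k(predictions, targets, 1))   # one Bool per row of predictions
```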
Logic
TensorFlow.Ops.logical_and — Function. logical_and(x, y)
TensorFlow.Ops.logical_not — Function. logical_not(x)
TensorFlow.Ops.logical_or — Function. logical_or(x, y)
TensorFlow.logical_xor — Function. logical_xor(x, y; name="LogicalXor")
Returns the truth value of x XOR y element-wise.
NOTE: logical_xor supports broadcasting; see the TensorFlow documentation for more about broadcasting.
Args:
x: A Tensor of type bool.
y: A Tensor of type bool.
name: A name for the operation (optional).
Returns: A Tensor of type bool.
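For example:

```julia
using TensorFlow

sess = Session(Graph())
x = constant([true, true, false, false])
y = constant([true, false, true, false])
run(sess, TensorFlow.logical_xor(x, y))   # [false, true, true, false]
```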
Control flow
identity
TensorFlow.make_tuple
TensorFlow.group
TensorFlow.no_op
cond