class Num::Grad::Variable(T)
inherits Reference
A variable is an abstraction of a Tensor that tracks the operations done to the Tensor. It also keeps track of the gradient of the operation if a Variable needs to backpropagate.
This is the fundamental object used in automatic differentiation, as well as the neural network aspects of Num.cr.
Constructors
.new(context : Num::Grad::Context(T), value : T, requires_grad : Bool = false)
Initialization method for a Variable.
This method should only be called by a context, as it creates a Variable. Context provides a helper method that adds a Variable to the computational graph and handles ownership of the context and other related instance variables.
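Examples
A minimal sketch of the intended construction path, using the context's variable helper rather than calling this initializer directly:
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])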
Methods
#*(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T)
Multiplies a variable by another variable and stores the derivative of the operation in the computational graph.
Arguments
- other : Num::Grad::Variable - right hand side of the operation
Examples
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
b = ctx.variable([3.0])
f = a * b # => [6.0]
f.backprop
#**(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T)
Raises a variable to the power of another variable and stores the derivative of the operation in the computational graph.
Arguments
- other : Num::Grad::Variable - right hand side of the operation
Examples
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
b = ctx.variable([3.0])
f = a ** b # => [8.0]
f.backprop
#+(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T)
Adds a variable to another variable and stores the derivative of the operation in the computational graph.
Arguments
- other : Num::Grad::Variable - right hand side of the operation
Examples
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
b = ctx.variable([3.0])
f = a + b # => [5.0]
f.backprop
#-(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T)
Subtracts a variable from another variable and stores the derivative of the operation in the computational graph.
Arguments
- other : Num::Grad::Variable - right hand side of the operation
Examples
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
b = ctx.variable([3.0])
f = a - b # => [-1.0]
f.backprop
#-
Negates the variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0, 2.0])
-x # => [-1.0, -2.0]
#/(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T)
Divides a variable by another variable and stores the derivative of the operation in the computational graph.
Arguments
- other : Num::Grad::Variable - right hand side of the operation
Examples
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
b = ctx.variable([3.0])
f = a / b # => [0.66667]
f.backprop
#[](*args)
Slices a variable. The gradient of the variable is sliced using the same arguments.
Arguments
- args - Slicing arguments; slicing behavior is the same as it is for a standard Tensor
Examples
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([[2.0], [3.0]])
b = a[1]
b # => [3]
#acos : Num::Grad::Variable(T)
Computes the arccosine of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.acos # => [0]
#asin : Num::Grad::Variable(T)
Computes the arcsine of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.asin # => [1.5708]
#atan : Num::Grad::Variable(T)
Computes the arctangent of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.atan # => [0.785398]
#backprop(debug : Bool = false)
Backpropagates an operation along a computational graph. This operation will destroy the computational graph, populating the gradients for all variables that are predecessors of the Variable it is called on.
Even if this is called on the first node in a graph, it will destroy all descendants of this variable stored by the Context.
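Examples
A minimal sketch of backpropagating through a product; the gradient values follow from the standard product rule and are illustrative:
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
b = ctx.variable([3.0])
f = a * b
f.backprop
a.grad # => [3.0]
b.grad # => [2.0]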
#context : Num::Grad::Context(T)
The graph the variable is associated with. This is a reference, as a variable does not own its context.
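Examples
A small illustrative check that a variable holds a reference to the context that created it (assuming default reference equality):
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
a.context == ctx # => true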
#cos : Num::Grad::Variable(T)
Computes the cosine of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.cos # => [0.540302]
#elu(alpha = 0.01)
Exponential Linear Unit activation function
Arguments
- alpha : Float - Scale for the negative factor
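Examples
A minimal sketch assuming the standard ELU definition (x for x > 0, alpha * (exp(x) - 1) otherwise); output values are approximate:
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([-1.0, 1.0])
x.elu # => [-0.00632, 1.0]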
#exp : Num::Grad::Variable(T)
Computes the exp of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.exp # => [2.71828]
#grad : T
The gradient of the Variable. This is set as a reference to the value of a Variable unless backprop has been called, in which case all related Variables will have their gradient updated correctly.
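Examples
A minimal sketch of reading a gradient after backprop; the gradient of a sum with respect to each operand is assumed to be one:
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
b = ctx.variable([3.0])
f = a + b
f.backprop
a.grad # => [1.0]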
#grad=(grad : T)
The gradient of the Variable. This is set as a reference to the value of a Variable unless backprop has been called, in which case all related Variables will have their gradient updated correctly.
#log : Num::Grad::Variable(T)
Computes the natural logarithm of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([2.7182818285])
x.log # => [1.0]
#matmul(b : Num::Grad::Variable(T)) : Num::Grad::Variable(T)
Matrix multiply operator for two variables. Computes the dot product of two matrices and stores the result in the computational graph.
Arguments
- b : Num::Grad::Variable - right hand side of the operation
Examples
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([[2.0], [2.0]])
b = ctx.variable([[3.0, 3.0]])
f = a.matmul(b)
# [[6, 6],
# [6, 6]]
f.backprop
#mean(axis : Int) : Num::Grad::Variable(T)
Reduces a Tensor along an axis, finding the average of each view into the Tensor
Arguments
- axis : Int - Axis of reduction
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([[1.0, 2.0], [3.0, 4.0]])
x.mean(0) # => [[2.0, 3.0]]
x.mean(1) # => [[1.5], [3.5]]
#requires_grad : Bool
If set to true, this variable will track its operations; otherwise it will act similarly to a Tensor, only calculating forward operations.
#requires_grad=(requires_grad : Bool)
If set to true, this variable will track its operations; otherwise it will act similarly to a Tensor, only calculating forward operations.
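Examples
A small illustrative sketch of toggling gradient tracking on an existing variable:
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
a.requires_grad = true
a.requires_grad # => true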
#sin : Num::Grad::Variable(T)
Computes the sine of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.sin # => [0.841471]
#sum(axis : Int) : Num::Grad::Variable(T)
Reduces a Tensor along an axis, summing each view into the variable
Arguments
- axis : Int - Axis of summation
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([[1.0, 2.0], [3.0, 4.0]])
x.sum(0) # => [[4.0, 6.0]]
x.sum(1) # => [[3.0], [7.0]]
#tan : Num::Grad::Variable(T)
Computes the tangent of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.tan # => [1.55741]
#tanh : Num::Grad::Variable(T)
Computes the tanh of a variable
Examples
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.tanh # => [0.761594156]
#value : T
The value of the Variable. This should not be edited outside of Variable operations, as other edits will not be tracked and will lead to incorrect results.
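Examples
A minimal sketch of reading the underlying Tensor of a variable:
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
a.value # => [2.0]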