class Num::Grad::Variable(T) inherits Reference #

A variable is an abstraction of a Tensor that tracks the operations done to the Tensor. It also keeps track of the gradient of those operations if a Variable needs to backpropagate.

This is the fundamental object used in automatic differentiation, as well as in the neural network components of Num.cr.

Constructors#

.new(context : Num::Grad::Context(T), value : T, requires_grad : Bool = false) #

Initialization method for a Variable.

This method should only be called by a Context, as it creates a Variable. Context provides a helper method, variable, that adds a Variable to the computational graph and handles ownership of the context and other related instance variables.

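Examples#
A minimal sketch of typical usage; Variables are created through the Context's variable helper rather than by calling .new directly:
ctx = Num::Grad::Context(Tensor(Float64)).new

# The helper registers the Variable in the context's computational graph
a = ctx.variable([1.0, 2.0])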

Methods#

#*(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T) #

Multiplies a variable by another variable and stores the derivative of the operation in the computational graph.

Arguments#
  • other : Num::Grad::Variable(T) - The variable to multiply by
Examples#
ctx = Num::Grad::Context(Tensor(Float64)).new

a = ctx.variable([2.0])
b = ctx.variable([3.0])

f = a * b # => [6.0]
f.backprop

#**(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T) #

Raises a variable to the power of another variable and stores the derivative of the operation in the computational graph.

Arguments#
  • other : Num::Grad::Variable(T) - The exponent
Examples#
ctx = Num::Grad::Context(Tensor(Float64)).new

a = ctx.variable([2.0])
b = ctx.variable([3.0])

f = a ** b # => [8.0]
f.backprop

#+(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T) #

Adds a variable to another variable and stores the derivative of the operation in the computational graph.

Arguments#
  • other : Num::Grad::Variable(T) - The variable to add
Examples#
ctx = Num::Grad::Context(Tensor(Float64)).new

a = ctx.variable([2.0])
b = ctx.variable([3.0])

f = a + b # => [5.0]
f.backprop

#-(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T) #

Subtracts a variable from another variable and stores the derivative of the operation in the computational graph.

Arguments#
  • other : Num::Grad::Variable(T) - The variable to subtract
Examples#
ctx = Num::Grad::Context(Tensor(Float64)).new

a = ctx.variable([2.0])
b = ctx.variable([3.0])

f = a - b # => [-1.0]
f.backprop

#- #

Negates the variable.

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0, 2.0])
-x # => [-1.0, -2.0]

#/(other : Num::Grad::Variable(T)) : Num::Grad::Variable(T) #

Divides a variable by another variable and stores the derivative of the operation in the computational graph.

Arguments#
  • other : Num::Grad::Variable(T) - The divisor
Examples#
ctx = Num::Grad::Context(Tensor(Float64)).new

a = ctx.variable([2.0])
b = ctx.variable([3.0])

f = a / b # => [0.66667]
f.backprop

#[](*args) #

Slices a variable, slicing the gradient of the variable using the same arguments.

Arguments#
  • args - Slicing arguments, slicing behavior is the same as it is for a standard Tensor
Examples#
ctx = Num::Grad::Context(Tensor(Float64)).new

a = ctx.variable([[2.0], [3.0]])
b = a[1]
b # => [3]

#acos : Num::Grad::Variable(T) #

Computes the arccosine of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.acos # => [0]

#asin : Num::Grad::Variable(T) #

Computes the arcsine of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.asin # => [1.5708]

#atan : Num::Grad::Variable(T) #

Computes the arctangent of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.atan # => [0.785398]

#backprop(debug : Bool = false) #

Backpropagates an operation along a computational graph. This operation will destroy the computational graph, populating the gradients for all variables that are predecessors of the Variable this is called on.

Even if this is called on the first node in a graph, it will destroy all descendants of this variable stored by the Context.

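Examples#
A minimal sketch: for f = a * b the product rule gives df/da = b and df/db = a, so the gradients shown below are the expected result of backprop.
ctx = Num::Grad::Context(Tensor(Float64)).new

a = ctx.variable([2.0])
b = ctx.variable([3.0])

f = a * b
f.backprop

a.grad # => [3.0]
b.grad # => [2.0]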

#context : Num::Grad::Context(T) #

The graph the variable is associated with. This is a reference, as a variable does not own its context.

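Examples#
A minimal sketch: a variable's context is the same object, by reference, as the Context that created it.
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([1.0])
a.context.same?(ctx) # => true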

#cos : Num::Grad::Variable(T) #

Computes the cosine of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.cos # => [0.540302]

#elu(alpha = 0.01) #

Exponential Linear Unit activation function.

Arguments#
  • alpha : Float - Scale for the negative factor
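Examples#
A minimal sketch, assuming the standard ELU definition f(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise; the printed values follow from that definition with the default alpha of 0.01.
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([-1.0, 2.0])
x.elu # => [-0.00632121, 2.0]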

#exp : Num::Grad::Variable(T) #

Computes the exp of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.exp # => [2.71828]

#grad : T #

The gradient of the Variable. This is set as a reference to the value of a Variable unless backprop has been called, in which case all related Variables will have their gradient updated correctly.

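Examples#
A minimal sketch: for f = a * a the derivative is 2a, so a.grad holds that value once backprop has run.
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([3.0])
f = a * a
f.backprop
a.grad # => [6.0]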

#grad=(grad : T) #

The gradient of the Variable. This is set as a reference to the value of a Variable unless backprop has been called, in which case all related Variables will have their gradient updated correctly.

#leaky_relu #

Leaky Rectified Linear Unit activation function.
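Examples#
A minimal sketch, assuming the conventional Leaky ReLU definition f(x) = x for x > 0 and slope * x otherwise; the negative output below assumes the common slope of 0.01.
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([-1.0, 2.0])
x.leaky_relu # => [-0.01, 2.0], assuming a 0.01 negative slope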

#log : Num::Grad::Variable(T) #

Computes the log of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([2.7182818285])
x.log # => [1.0]

#matmul(b : Num::Grad::Variable(T)) : Num::Grad::Variable(T) #

Matrix multiplication operator for two variables. Computes the matrix product of the two operands and stores the result in the computational graph.

Arguments#
  • b : Num::Grad::Variable(T) - The right-hand operand of the matrix product
Examples#
ctx = Num::Grad::Context(Tensor(Float64)).new

a = ctx.variable([[2.0], [2.0]])
b = ctx.variable([[3.0, 3.0]])

f = a.matmul(b)

# [[6, 6],
#  [6, 6]]

f.backprop

#mean(axis : Int) : Num::Grad::Variable(T) #

Reduces a Tensor along an axis, finding the average of each view into the Tensor.

Arguments#
  • axis : Int - Axis of reduction
Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([[1.0, 2.0], [3.0, 4.0]])
x.mean(0) # => [[2.0, 3.0]]
x.mean(1) # => [[1.5], [3.5]]

#relu #

Rectified Linear Unit activation function.
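Examples#
A minimal sketch: ReLU clamps negative entries to zero and passes positive entries through unchanged.
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([-1.0, 2.0])
x.relu # => [0.0, 2.0]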

#requires_grad : Bool #

If set to true, this variable will track its operations; otherwise it will act like a plain Tensor, only calculating forward operations.

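Examples#
A minimal sketch using the setter documented below to opt a variable into gradient tracking:
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
a.requires_grad = true # subsequent operations on a are now tracked for backprop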

#requires_grad=(requires_grad : Bool) #

If set to true, this variable will track its operations; otherwise it will act like a plain Tensor, only calculating forward operations.

#sin : Num::Grad::Variable(T) #

Computes the sine of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.sin # => [0.841471]

#sum(axis : Int) : Num::Grad::Variable(T) #

Reduces a Tensor along an axis, summing each view into the variable.

Arguments#
  • axis : Int - Axis of summation
Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([[1.0, 2.0], [3.0, 4.0]])
x.sum(0) # => [[4.0, 6.0]]
x.sum(1) # => [[3.0], [7.0]]

#tan : Num::Grad::Variable(T) #

Computes the tangent of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.tan # => [1.55741]

#tanh : Num::Grad::Variable(T) #

Computes the tanh of a variable

Examples#
ctx = Num::Grad::Context(Tensor(Float64, CPU(Float64))).new
x = ctx.variable([1.0])
x.tanh # => [0.761594156]

#value : T #

The value of the Variable. This should not be edited outside of Variable operations, as other edits will not be tracked and will lead to incorrect results.

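Examples#
A minimal sketch: value exposes the underlying Tensor for reading.
ctx = Num::Grad::Context(Tensor(Float64)).new
a = ctx.variable([2.0])
a.value # => [2.0]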