rau.tools.torch

- class rau.tools.torch.BasicComposable
  Bases: Module
  Base class for composable modules.
  - __init__(main, tags)
- class rau.tools.torch.Composable
  Bases: BasicComposable
  A class that can be used to wrap any Module so that it can be used in a pipeline of BasicComposables.
  - __init__(module, main=False, tags=None, kwargs=None)
    - Parameters:
      - module (Module) – The module to wrap. The new module will have the same inputs and outputs as this module.
      - main (bool) – Whether this should be considered the main module, i.e., whether it should receive extra arguments from Composed.forward() of a Composed that contains it.
      - tags (Iterable[str]) – Tags to assign to this module for argument routing.
      - kwargs (dict[str, Any] | None) – Optional keyword arguments that will be bound to the forward() method.
  - forward(*args, **kwargs)
    Same as the wrapped module. Automatically applies any bound keyword arguments.
  - kwargs(**kwargs)
    Bind keyword arguments to be passed to forward() of the wrapped module. See the sketch below.
    - Returns: Self.
  - main()
    Mark this module as main.
    - Returns: Self.
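A minimal usage sketch of kwargs(), based only on the signatures above. ScaledLinear is a hypothetical module invented here to show a forward() that accepts an extra keyword argument:

```python
import torch
from torch import nn

from rau.tools.torch import Composable

class ScaledLinear(nn.Module):
    """Hypothetical module whose forward() takes an extra keyword argument."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)

    def forward(self, x, scale=1.0):
        return self.linear(x) * scale

# Bind scale=0.5 so that every call to forward() receives it automatically.
layer = Composable(ScaledLinear(16, 8)).kwargs(scale=0.5)
y = layer(torch.randn(4, 16))  # same as calling the wrapped module with scale=0.5
```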
- class rau.tools.torch.Composed
  Bases: BasicComposable
  A composition of two modules. See the sketch below.
  - __init__(first, second)
  - forward(x, *args, tag_kwargs=None, **kwargs)
    Feed the input x to the first module, feed its outputs as inputs to the second module, and return the output of the second module.
    - Parameters:
      - x (Any) – The input to the first module.
      - args (Any) – Extra arguments that will be passed to the main module.
      - tag_kwargs (dict[str, Any] | None) – A dict mapping tag names to dicts of keyword arguments. These keyword arguments will be passed only to modules with the corresponding tags.
      - kwargs (Any) – Extra keyword arguments that will be passed to the main module.
  - first: BasicComposable
  - second: BasicComposable
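A minimal sketch of composing two wrapped modules into a pipeline, assuming only the API documented above; the module choices and sizes are arbitrary:

```python
import torch
from torch import nn

from rau.tools.torch import Composable, Composed

# Wrap two ordinary modules. Marking the second as main means that extra
# arguments to Composed.forward() are routed to it; the tag would allow
# per-module keyword routing via tag_kwargs.
embedding = Composable(nn.Embedding(100, 32), tags=['embedding'])
projection = Composable(nn.Linear(32, 10)).main()
model = Composed(embedding, projection)

x = torch.randint(0, 100, (8, 5))
y = model(x)  # embedding output is fed to the linear layer; y has shape (8, 5, 10)
```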
- class rau.tools.torch.EmbeddingLayer
  Bases: Module
  - __init__(vocabulary_size, output_size, use_padding, shared_embeddings=None)
  - forward(x)
- class rau.tools.torch.Layer
  Bases: Module
  A fully-connected layer consisting of connection weights and an activation function. Treating these as a unit is useful because the activation function can be used to initialize the weights properly with Xavier initialization. See the sketch below.
  - __init__(input_size, output_size, activation=Identity(), bias=True)
  - forward(x)
    Let \(B\) be batch size, \(X\) be input_size, and \(Y\) be output_size. The input has size \(B \times X\), and the output has size \(B \times Y\).
  - get_gain()
    Get the correct gain value for initialization based on the activation function.
    - Return type: float
  - get_nonlinearity_name()
    Get the name of the activation function as a string that can be used with calculate_gain().
    - Return type: str
  - xavier_uniform_init(generator=None)
    Initialize the parameters of the layer using Xavier initialization. The correct gain is used based on the activation function. The bias term, if it exists, will be initialized to 0.
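A minimal sketch of constructing and initializing a Layer, using only the entries above; the sizes are arbitrary:

```python
import torch

from rau.tools.torch import Layer

# A 16 -> 8 fully-connected layer followed by tanh.
layer = Layer(input_size=16, output_size=8, activation=torch.nn.Tanh())

# Xavier initialization with the gain appropriate for tanh;
# the bias term is set to 0.
layer.xavier_uniform_init(torch.Generator().manual_seed(0))

y = layer(torch.randn(4, 16))  # shape (4, 8)
```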
- class rau.tools.torch.FeedForward
  Bases: Sequential
  Multiple Layers in series, forming a feed-forward neural network. See the sketch below.
  - __init__(input_size, layer_sizes, activation, bias=True)
    - Parameters:
      - input_size (int) – The number of units in the input to the first layer.
      - layer_sizes (Iterable[int]) – The sizes of the outputs of each layer, including the last.
      - activation (Module) – The activation function applied to the output of each layer. This should be a non-linear function, since a composition of multiple linear transformations is equivalent to a single linear transformation anyway.
      - bias (bool) – Whether to use a bias term in each layer.
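A minimal sketch, with arbitrary sizes: a network of three layers, 16 → 32 → 32 → 10, each followed by ReLU:

```python
import torch
from torch import nn

from rau.tools.torch import FeedForward

ffnn = FeedForward(input_size=16, layer_sizes=[32, 32, 10], activation=nn.ReLU())
y = ffnn(torch.randn(4, 16))  # shape (4, 10)
```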
- class rau.tools.torch.MultiLayer
  Bases: Layer
  A module representing num_layers fully-connected layers, all with the same input and activation function. The layer outputs will be computed in parallel. See the sketch below.
  - __init__(input_size, output_size, num_layers, activation=Identity(), bias=True)
  - forward(x)
    Let \(B\) be batch size, \(X\) be input_size, \(Y\) be output_size, and \(n\) be the number of layers.
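A minimal sketch with arbitrary sizes; the exact layout of the stacked outputs is whatever the library defines, so the comment below only names the intent:

```python
import torch
from torch import nn

from rau.tools.torch import MultiLayer

# Four parallel 16 -> 8 fully-connected layers sharing the same input.
heads = MultiLayer(input_size=16, output_size=8, num_layers=4, activation=nn.Tanh())
y = heads(torch.randn(4, 16))  # one output per layer, computed in parallel
```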
- class rau.tools.torch.TiedLinear
  Bases: Module
- rau.tools.torch.get_linear(input_size, output_size, shared_embeddings=None, bias=True)
- class rau.tools.torch.ModelInterface
  Bases: object
  - __init__(use_load=True, use_init=True, use_output=True, require_output=True)
  - add_arguments(parser)
  - add_device_arguments(group)
  - add_forward_arguments(parser)
  - add_init_arguments(group)
  - add_load_arguments(group)
  - add_more_init_arguments(group)
  - construct_model(**kwargs)
  - construct_saver(args, *_args, **_kwargs)
  - fail_argument_check(msg)
  - get_device(args)
  - get_kwargs(args, *_args, **kwargs)
  - initialize(args, model, generator)
  - on_saver_constructed(args, saver)
- rau.tools.torch.parse_device(s)
- class rau.tools.torch.ProfileResult
  Bases: object
  ProfileResult(duration: float, memory_allocated: int, memory_reserved: int, initial_memory_stats: dict, memory_stats: dict)
  - __init__(duration, memory_allocated, memory_reserved, initial_memory_stats, memory_stats)
- rau.tools.torch.profile(func, device, warmup=True)
  - Return type: ProfileResult
- rau.tools.torch.get_current_memory(device)
- rau.tools.torch.reset_memory_profiler(device)
- rau.tools.torch.get_peak_memory(device)
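A minimal sketch of the profiling utilities, assuming that profile() runs the given callable on the given device and returns a ProfileResult as documented above, and that parse_device() accepts a standard device string; the field units are not documented here:

```python
import torch

from rau.tools.torch import profile, parse_device

device = parse_device('cuda' if torch.cuda.is_available() else 'cpu')

def run():
    # Arbitrary workload to profile.
    x = torch.randn(256, 256, device=device)
    return x @ x

result = profile(run, device, warmup=True)
print(result.duration)          # wall-clock duration of the call
print(result.memory_allocated)  # memory allocated during the call
```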