klampt.math.autodiff.pytorch module
- class klampt.math.autodiff.pytorch.TorchModuleFunction(module)[source]
Bases:
ADFunctionInterface
Converts a PyTorch function to a Klamp’t autodiff function class.
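A minimal usage sketch (the Linear module, sizes, and variable names below are hypothetical, and it is assumed here that the wrapper handles the numpy/tensor conversion for a single unbatched 1-D input):

```python
import numpy as np
import torch

from klampt.math.autodiff.pytorch import TorchModuleFunction

# Hypothetical module mapping R^3 -> R^2; any single-input torch.nn.Module would do.
net = torch.nn.Linear(3, 2)
f = TorchModuleFunction(net)   # exposes the ADFunctionInterface methods documented below

x = np.array([0.1, 0.2, 0.3])
y = f.eval(x)                  # forward pass; a numpy array with f.n_out() entries
```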
- n_in(arg)[source]
Returns the number of entries in argument #arg. If 1, this can either be a 1-D vector or a scalar. If -1, the function can accept a variable sized argument.
- n_out()[source]
Returns the number of entries in the output of the function. If -1, this can output a variable sized argument.
- eval(*args)[source]
Evaluates the application of the function to the given (instantiated) arguments.
- Parameters:
args (list) – a list of arguments, which are either ndarrays or scalars.
- derivative(arg, *args)[source]
Returns the Jacobian of the function w.r.t. argument #arg.
- Parameters:
arg (int) – A value from 0,…,self.n_args()-1 indicating that we wish to take \(df/dx_{arg}\).
args (list of ndarrays) – arguments for the function.
- Returns:
A numpy array of shape (self.n_out(), self.n_in(arg)). Keep in mind that even if the argument or result is a scalar, this needs to be a 2D array. If the derivative is not implemented, raise a NotImplementedError. If the derivative is zero, you can just return 0 (the integer) regardless of the size of the result.
- Return type:
ndarray
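Continuing the hypothetical Linear(3, 2) sketch above, the shape contract reads as follows (whether this particular wrapper supplies full Jacobians or only jvp is not assumed here):

```python
# Jacobian of the output w.r.t. argument 0.  Per the contract it must be 2D,
# here of shape (2, 3) == (f.n_out(), f.n_in(0)); a function that does not
# implement it raises NotImplementedError instead.
J = f.derivative(0, x)
```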
- jvp(arg, darg, *args)[source]
Performs a Jacobian-vector product for argument #arg.
- Parameters:
arg (int) – A value from 0,…,self.n_args()-1 indicating that we wish to calculate df/dx_arg * darg.
darg (ndarray) – the derivative of x_arg w.r.t. some other parameter. Must have darg.shape = (self.n_in(arg),).
args (list of ndarrays) – arguments for the function.
- Returns:
A numpy array of length self.n_out(). If the derivative is not implemented, raise a NotImplementedError.
- Return type:
ndarray
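Again continuing the sketch above, jvp returns the directional derivative without forming the full Jacobian:

```python
dx = np.array([1.0, 0.0, 0.0])   # perturbation of argument 0, shape (f.n_in(0),)
dy = f.jvp(0, dx, x)             # numpy array of length f.n_out(); equals J @ dx
```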
- class klampt.math.autodiff.pytorch.ADModule(*args, **kwargs)[source]
Bases:
Function
Converts a Klamp’t autodiff function call or function instance to a PyTorch Function. The class must be created with the terminal symbols corresponding to the PyTorch arguments with which it will be called.
- static forward(ctx, func, terminals, *args)[source]
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See the PyTorch documentation on combining forward() and setup_context() for more details.
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass
@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, and inputs are a tuple of inputs to the forward.
See the PyTorch documentation on extending autograd for more details.
The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp), or with ctx.save_for_forward() if they are intended to be used in jvp.
- static backward(ctx, grad)[source]
Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
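The forward/backward contract above is inherited from the standard torch.autograd.Function interface; a generic illustration (not specific to ADModule) using Usage 1 might look like:

```python
import torch

class Square(torch.autograd.Function):
    """y = x**2 written as a custom autograd Function (Usage 1)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)        # x is needed again in backward
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        if not ctx.needs_input_grad[0]: # skip the work if no gradient is required
            return None
        return 2.0 * x * grad_out       # one gradient per input of forward()

x = torch.randn(4, requires_grad=True)
y = Square.apply(x)
y.sum().backward()                      # x.grad == 2 * x
```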
- klampt.math.autodiff.pytorch.torch_to_ad(module, args)[source]
Converts a PyTorch function applied to args (list of scalars or numpy arrays) to a Klamp’t autodiff function call on those arguments.
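A hedged sketch of this conversion (the module, sizes, and the expr.eval() usage mentioned in the comment are illustrative assumptions):

```python
import numpy as np
import torch

from klampt.math.autodiff.pytorch import torch_to_ad

net = torch.nn.Linear(3, 2)        # hypothetical torch module
x0 = np.zeros(3)
expr = torch_to_ad(net, [x0])      # Klamp't autodiff function call applied to x0
# expr can now be composed with other Klamp't autodiff expressions and, since its
# arguments are concrete values, evaluated directly, e.g. expr.eval().
```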
- klampt.math.autodiff.pytorch.ad_to_torch(func, terminals=None)[source]
Converts a Klamp’t autodiff function call or function instance to a PyTorch Function. If terminals is provided, this is the list of arguments that PyTorch will expect. Otherwise, the variables in the expression will be automatically determined by the forward traversal order.
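A hedged sketch of the reverse direction (building the expression with ad.var and passing it as a terminal are assumptions; see ADModule above for how the resulting PyTorch Function computes its forward and backward passes):

```python
import torch

from klampt.math.autodiff import ad
from klampt.math.autodiff.pytorch import ad_to_torch

x = ad.var('x')                   # Klamp't autodiff variable (terminal symbol)
expr = 3.0 * x + 1.0              # a simple autodiff function call in x

# torch_fn wraps expr as a PyTorch Function (an ADModule); applying it to a tensor
# bound to the terminal x lets gradients flow back through the Klamp't expression.
torch_fn = ad_to_torch(expr, terminals=[x])
```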