Working with Prebuilt Backbones
Backbones perform feature extraction on the input before passing the result through the core prediction algorithm.
Velora has two prebuilt options for this: an `MLP` and a `BasicCNN`.
MLP
The `MLP` is a dynamic class for building Multi-layer Perceptron networks - the traditional fully-connected architecture.
Even though our algorithms focus on LNNs, we've deliberately added it to the framework to make it easy to compare the two in your own experiments.
The main component is the `n_hidden` parameter, a `List[int]` (or a single `int` for one layer). This creates the `nn.Linear` hidden layers for you automatically.
It also comes with a few optional arguments, such as `activation` and `dropout_p`:

- `activation` - defines the activation function between the layers; the default is `relu`.
- `dropout_p` - the dropout probability applied between each layer; the default is `0.0`, meaning no dropout layers are applied.
Here's a code example:
- 3 hidden layers
- Activation function used between the layers (optional)
- Dropout used between layers, 20% probability (optional)
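As a sketch of the pattern in plain PyTorch - the `build_mlp` helper below is illustrative, not Velora's actual API:

```python
from typing import List, Union

import torch
import torch.nn as nn


def build_mlp(
    in_features: int,
    n_hidden: Union[int, List[int]],
    out_features: int,
    activation: str = "relu",
    dropout_p: float = 0.0,
) -> nn.Sequential:
    """Illustrative stand-in for an MLP backbone (not Velora's API)."""
    if isinstance(n_hidden, int):
        n_hidden = [n_hidden]  # a single int means one hidden layer

    act = {"relu": nn.ReLU, "tanh": nn.Tanh}[activation]

    layers: List[nn.Module] = []
    sizes = [in_features, *n_hidden]
    for size_in, size_out in zip(sizes[:-1], sizes[1:]):
        layers.append(nn.Linear(size_in, size_out))
        layers.append(act())
        if dropout_p > 0.0:
            layers.append(nn.Dropout(dropout_p))

    # output layer has no activation or dropout after it
    layers.append(nn.Linear(sizes[-1], out_features))
    return nn.Sequential(*layers)


# 3 hidden layers, ReLU between them, 20% dropout between layers
mlp = build_mlp(4, [256, 128, 64], 2, activation="relu", dropout_p=0.2)
```

Passing `n_hidden=64` instead of a list would create a single hidden layer.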
BasicCNN
The `BasicCNN` uses a static architecture from the DQN Nature paper: Human-level control through deep reinforcement learning.
The paper used it for Atari games, but it has been adopted by other libraries such as Stable-Baselines3 as a go-to CNN architecture, so we thought we'd use the same one! 😊
As an added bonus, it makes it easier to compare SB3 baselines against our algorithms 😉.
Backbones with Velora agents
Currently, Velora doesn't directly use backbones in its agents - they are strictly LNN or NCP architectures - so you need to apply them manually (we'll show you how to do this shortly).
Typically, cyber environments don't use images as inputs, so we have no intention of changing this.
To use the `BasicCNN` architecture, we pass in the number of `in_channels` and can then call the `forward()` or `out_size()` methods:
- Number of `in_features` to an NCP or MLP
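The underlying architecture is well documented in the Nature paper (three convolutional layers with ReLU activations, then a flatten); here's a plain-PyTorch sketch of it - the class name and `out_size` helper below are illustrative, not Velora's exact API:

```python
import torch
import torch.nn as nn


class NatureCNNSketch(nn.Module):
    """Illustrative sketch of the DQN Nature paper CNN (not Velora's class)."""

    def __init__(self, in_channels: int) -> None:
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.Flatten(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

    def out_size(self, img_shape: tuple) -> int:
        # measure the flattened feature size with a dummy forward pass
        with torch.no_grad():
            return self.forward(torch.zeros(1, *img_shape)).shape[1]


cnn = NatureCNNSketch(in_channels=4)
n_features = cnn.out_size((4, 84, 84))  # number of in_features to an NCP or MLP
```

For 4 stacked 84x84 frames this gives 3136 features (64 x 7 x 7).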
Usage With a Custom LNN
To use the `BasicCNN` with a custom LNN, we can combine it with the `LiquidNCPNetwork` module.
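`LiquidNCPNetwork`'s exact signature isn't reproduced here, so the sketch below wires a Nature-style CNN into an `nn.GRUCell` as a stand-in - the point is the wiring pattern (flattened CNN features feed the recurrent network), not the specific cell:

```python
import torch
import torch.nn as nn

# Nature-style CNN backbone (stand-in for Velora's BasicCNN)
cnn = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=8, stride=4),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1),
    nn.ReLU(),
    nn.Flatten(),
)

obs = torch.zeros(1, 4, 84, 84)  # batch of 4 stacked 84x84 frames
features = cnn(obs)              # flattened CNN features: (1, 3136)

# GRUCell stands in for LiquidNCPNetwork: features in, hidden state out
lnn = nn.GRUCell(input_size=features.shape[1], hidden_size=64)
hidden = lnn(features)           # (1, 64)
```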
Or, wrapped as a module.
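The same wiring wrapped into a single `nn.Module` might look like this - again with `nn.GRUCell` as a hypothetical stand-in for `LiquidNCPNetwork`:

```python
from typing import Optional, Tuple

import torch
import torch.nn as nn


class CNNRecurrentSketch(nn.Module):
    """CNN backbone feeding a recurrent head (GRUCell stands in for the LNN)."""

    def __init__(
        self, in_channels: int, img_shape: tuple, n_neurons: int, n_actions: int
    ) -> None:
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        # measure the backbone's output size with a dummy forward pass
        with torch.no_grad():
            n_features = self.backbone(torch.zeros(1, *img_shape)).shape[1]
        self.cell = nn.GRUCell(n_features, n_neurons)  # stand-in for the LNN
        self.head = nn.Linear(n_neurons, n_actions)

    def forward(
        self, x: torch.Tensor, hidden: Optional[torch.Tensor] = None
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        features = self.backbone(x)
        hidden = self.cell(features, hidden)
        return self.head(hidden), hidden


model = CNNRecurrentSketch(4, (4, 84, 84), n_neurons=64, n_actions=6)
q_values, hidden = model(torch.zeros(1, 4, 84, 84))
```

The hidden state is returned so it can be passed back in on the next timestep.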
And that completes the customization tutorials! Well done for getting this far! 👏
Still eager to learn more? Try one of the options 👇:
- User Guide - Learn how to use Velora's core algorithms and utility methods.
- Theory - Read the theory behind the RL algorithms Velora uses.