Creating Sparse Neurons¶
A major part of the NCP algorithm is its sparse connections. These are handled by the Wiring class, which is then used by our SparseLinear module.
Wiring and Layer Masks¶
The Wiring class's main purpose is to create sparsity masks for the three NCP layers and store the information in three dataclasses: NeuronCounts, SynapseCounts, and LayerMasks.
Basic Usage¶
To use it, we create an instance of the Wiring class and then call the data() method to retrieve the NeuronCounts and LayerMasks:
SynapseCounts
SynapseCounts is used strictly internally by the Wiring class. Typically, you won't need to access it or apply it elsewhere. However, you can access it through the n_connections attribute if you need to.
The rest is all done for you automatically behind the scenes.
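The call pattern can be sketched with a self-contained toy stand-in. Note that the constructor arguments, the inter/command neuron split, and the mask sampling below are all assumptions for illustration, not the library's actual implementation:

```python
import numpy as np
from dataclasses import dataclass

# Toy stand-in for the library's Wiring class, showing the call
# pattern only. Constructor arguments and layer split are assumptions.
@dataclass
class NeuronCounts:
    sensory: int  # input nodes
    inter: int    # first hidden layer
    command: int  # second hidden layer
    motor: int    # output nodes

@dataclass
class LayerMasks:
    inter: np.ndarray
    command: np.ndarray
    motor: np.ndarray

class Wiring:
    def __init__(self, in_features: int, n_neurons: int, out_features: int,
                 sparsity_level: float = 0.5, seed: int = 64) -> None:
        rng = np.random.default_rng(seed)
        n_command = max(n_neurons // 2, 1)  # assumed split
        n_inter = n_neurons - n_command
        self.counts = NeuronCounts(in_features, n_inter, n_command, out_features)

        def mask(rows: int, cols: int) -> np.ndarray:
            # 1 = connection kept, 0 = connection pruned
            return (rng.random((rows, cols)) >= sparsity_level).astype(int)

        self.masks = LayerMasks(
            inter=mask(in_features, n_inter),
            command=mask(n_inter, n_command),
            motor=mask(n_command, out_features),
        )

    def data(self):
        return self.counts, self.masks

wiring = Wiring(in_features=8, n_neurons=10, out_features=1)
counts, masks = wiring.data()
print(counts)             # neuron counts per layer
print(masks.inter.shape)  # (8, 5)
```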
Creating an NCP Network¶
Then, to create your own NCP network, you use them like this:
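As a rough, self-contained sketch of the idea, the three layer masks can gate the weights of three stacked layers. The mask construction and the tanh layer arithmetic below are simplified stand-ins, not the library's actual NCPLiquidCell implementation:

```python
import numpy as np

rng = np.random.default_rng(64)

def sparse_mask(rows, cols, sparsity=0.5):
    # 1 = connection kept, 0 = pruned (stand-in for Wiring's masks)
    return (rng.random((rows, cols)) >= sparsity).astype(float)

def sparse_layer(x, weight, mask):
    # Dense matmul with pruned weights; a stand-in for a liquid cell
    return np.tanh(x @ (weight * mask))

in_features, n_inter, n_command, out_features = 8, 6, 4, 1

masks = {
    "inter": sparse_mask(in_features, n_inter),
    "command": sparse_mask(n_inter, n_command),
    "motor": sparse_mask(n_command, out_features),
}
weights = {name: rng.standard_normal(m.shape) * 0.5 for name, m in masks.items()}

x = rng.standard_normal((1, in_features))
h = sparse_layer(x, weights["inter"], masks["inter"])      # inter neurons
h = sparse_layer(h, weights["command"], masks["command"])  # command neurons
y = sparse_layer(h, weights["motor"], masks["motor"])      # motor neurons
print(y.shape)  # (1, 1)
```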
There is also an optional sparsity_level parameter that controls the connection sparsity between neurons:
- When 0.1, neurons are very dense, close to a traditional Neural Network.
- When 0.9, neurons are extremely sparse.
- Optional
Experimenting with this could be interesting for your own use cases.
We've found 0.5 (the default) to be optimal for most cases, providing a decent balance between training speed and performance. So, we recommend you start with this first! 😊
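What sparsity_level controls can be demonstrated with a quick stand-in (the exact sampling scheme the library uses is an assumption here): the parameter sets the fraction of connections that get pruned.

```python
import numpy as np

def connection_density(sparsity_level, rows=100, cols=100, seed=64):
    # Fraction of surviving connections in a random mask
    rng = np.random.default_rng(seed)
    mask = rng.random((rows, cols)) >= sparsity_level
    return mask.mean()

dense = connection_density(0.1)   # ~0.9 survive -> near fully-connected
sparse = connection_density(0.9)  # ~0.1 survive -> very few connections
print(round(dense, 2), round(sparse, 2))
```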
Dataclasses¶
When calling wiring.data() we receive two dataclasses: NeuronCounts and LayerMasks.
Both are designed to be simple and intuitive.
NeuronCounts¶
API Docs
NeuronCounts holds the counts for each type of node:
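A minimal sketch of what such a dataclass might look like. Note that the field names below are assumptions based on the NCP layer names used on this page:

```python
from dataclasses import dataclass

# Assumed field names: one count per NCP node type.
@dataclass
class NeuronCounts:
    sensory: int  # input nodes
    inter: int    # first hidden layer
    command: int  # second hidden layer
    motor: int    # output nodes

counts = NeuronCounts(sensory=8, inter=6, command=4, motor=1)
print(counts.inter)  # 6
```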
LayerMasks¶
API Docs
LayerMasks holds the created sparsity masks for each NCP layer:
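A minimal sketch of the shape of this dataclass (the field names and mask shapes are assumptions: one mask per NCP layer, with binary entries):

```python
import numpy as np
from dataclasses import dataclass

# Assumed fields: one sparsity mask per NCP layer.
@dataclass
class LayerMasks:
    inter: np.ndarray    # shape: (sensory, inter)
    command: np.ndarray  # shape: (inter, command)
    motor: np.ndarray    # shape: (command, motor)

rng = np.random.default_rng(64)

def make(rows, cols, sparsity=0.5):
    return (rng.random((rows, cols)) >= sparsity).astype(int)

masks = LayerMasks(inter=make(8, 6), command=make(6, 4), motor=make(4, 1))
print(masks.motor.shape)  # (4, 1)
```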
The masks will vary depending on the network size and the seed you set. They are random connections, after all!
Want to know how they work?
You can read more about them in the Theory - Liquid Neural Networks page.
We highly recommend you set a seed using the set_seed utility method before creating a Wiring instance. This will help you maintain reproducibility between experiments:
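The effect of seeding can be sketched as follows. Note that the library's set_seed utility is stood in for here by seeding NumPy's generator directly:

```python
import numpy as np

def make_mask(seed, shape=(8, 8), sparsity=0.5):
    # Stand-in for seeded mask creation inside Wiring
    rng = np.random.default_rng(seed)
    return (rng.random(shape) >= sparsity).astype(int)

a = make_mask(seed=64)
b = make_mask(seed=64)  # same seed -> identical wiring
c = make_mask(seed=7)   # different seed -> different wiring
print(np.array_equal(a, b))  # True
```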
SynapseCounts¶
API Docs
SynapseCounts holds the synapse connection counts for each node type:
As we've discussed, you likely won't ever need to use this. It's used strictly internally by the Wiring class.
To access it through a created Wiring instance, we use the n_connections attribute:
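A self-contained sketch of the access pattern. Note that the SynapseCounts field names are assumptions mirroring NeuronCounts, and the fixed numbers below stand in for counts the real class derives from its masks:

```python
from dataclasses import dataclass

# Assumed field names: one connection count per node type.
@dataclass
class SynapseCounts:
    sensory: int
    inter: int
    command: int
    motor: int

class Wiring:
    # Toy stand-in: the real class computes these from its masks.
    def __init__(self):
        self.n_connections = SynapseCounts(sensory=4, inter=3, command=2, motor=1)

wiring = Wiring()
print(wiring.n_connections.inter)  # 3
```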
Importing Dataclasses¶
If you need to work with the dataclasses directly, you can manually import them from the wiring module:
Sparse Linear Layers¶
We've seen how to use the Wiring class in NCPLiquidCells, but what about in Linear layers?
We've created our own implementation for this, called a SparseLinear layer, which applies the sparsity mask to the weights automatically.
You can implement one like this:
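As a self-contained sketch of the idea in plain NumPy (the real SparseLinear is a torch module; this stand-in only illustrates the masking, transpose, and absolute-value steps):

```python
import numpy as np

class SparseLinear:
    # Illustrative re-implementation (not the library's actual code):
    # a linear layer whose weights are element-wise gated by a fixed mask.
    def __init__(self, in_features, out_features, mask, seed=64):
        rng = np.random.default_rng(seed)
        self.weight = rng.standard_normal((out_features, in_features)) * 0.1
        self.bias = np.zeros(out_features)
        # Transpose the (in, out) mask to the (out, in) weight layout,
        # then take the absolute value so -1 entries become 1.
        self.mask = np.abs(mask.T)

    def __call__(self, x):
        return x @ (self.weight * self.mask).T + self.bias

rng = np.random.default_rng(0)
mask = rng.choice([-1, 0, 1], size=(8, 4))  # (in_features, out_features)
layer = SparseLinear(8, 4, mask)
y = layer(np.ones((1, 8)))
print(y.shape)  # (1, 4)
```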
Notice how we transpose (T) the mask and then take its absolute value.
The transpose is required to ensure the mask fits the weights correctly, and the absolute value ensures gradient stability (turning -1 -> 1).
We didn't have to do this in the NCPLiquidCell because it has its own separate mask processing! 😉
Up next we will look at the methods available for building Liquid Networks. See you there! 👋