Network models of V1¶
This project will be used to test implementations in PyNN (and eventually NeuroML) of published models of primary visual cortex (V1) based on spiking point neurons.
An initial focus will be on Lauritzen and Miller, 2003, but other models investigated will include Ozeki et al., 2009 and Sadagopan and Ferster, 2012.
This project is part of the INCF participation in the Google Summer of Code 2014.
Troyer Model¶
Here I will briefly describe the implementation of pubmed:9671678.
In order to run this model it is first necessary to install git, PyNN and the appropriate simulator.
After that you can clone the repository directly using:
git clone https://github.com/OpenSourceBrain/V1NetworkModels.git
The model runs in NEST and NEURON with the following versions:
PyNN 0.8beta1
NEST 2.2.2
NEURON 7.3
Overview of the model¶
As the project stands at the moment, the workflow can be briefly described in two steps. First, two scripts implement the spatio-temporal filter in the retina, produce the spike trains for each cell in the Lateral Geniculate Nucleus (LGN) and store them for further use. Second, another script loads those spike trains and uses them to run the simulation of the cortical network in PyNN. The first task is carried out by the scripts produce_lgn_spikes_on_cells.py and produce_lgn_spikes_off_cells.py, which generate pickled files in the folder './data' containing the spike trains and positions for a given contrast, selected in the parameters of each script. After the spikes have been produced for a given contrast (which can be adjusted in the scripts mentioned above), we can run the main script full_model.py with the same contrast in order to run the complete model.
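For illustration, here is a minimal sketch of what the second step starts from, assuming the pickled file simply contains a tuple of spike trains and positions (the file name below is hypothetical; the real names and layout are whatever the production scripts write):

import cPickle

import pyNN.nest as simulator  # or pyNN.neuron
from pyNN.parameters import Sequence

# Hypothetical file name; the real files are written by
# produce_lgn_spikes_on_cells.py / produce_lgn_spikes_off_cells.py for a given contrast.
with open('./data/example_on_spikes.pickle', 'rb') as f:
    spike_trains, positions = cPickle.load(f)

simulator.setup(timestep=0.1)

# One SpikeSourceArray cell per LGN cell, replaying the stored spike times
spike_times = [Sequence(st) for st in spike_trains]
lgn_on = simulator.Population(len(spike_trains),
                              simulator.SpikeSourceArray(spike_times=spike_times))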
In order to describe the model in more detail we will start with full_model.py. That is, we will assume that we already have the spike data from the LGN that is going to be fed into the other layers. We will therefore begin with the general structure of the model, which is shown in the following diagram.
The model consists of three qualitatively different types of layers: the LGN, with center-surround receptive fields, and the inhibitory and excitatory cortical layers, which are connected to the LGN with a Gabor filter profile and to each other with a correlation-based connectivity. At the beginning of the full_model.py script we have the following parameters, which control the general structure of the model and the connections between the layers. First come the parameters that control the number of cells in each layer, set according to the values given in the Troyer paper. We have also included a reduction factor to decrease the overall size of the model, and the user can choose how many LGN population layers to include in the simulation:
factor = 1.0  # Reduction factor
Nside_exc = int(factor * Nside_exc)
Nside_inh = int(factor * Nside_inh)
Ncell_lgn = Nside_lgn * Nside_lgn
Ncell_exc = Nside_exc ** 2
Ncell_inh = Nside_inh ** 2
N_lgn_layers = 1
Next we include a series of boolean parameters that let the user choose whether to include certain connections and layers in the simulation. This is very useful for testing the effect of a particular connection or layer on the overall behavior of the model.
## Main connections
thalamo_cortical_connections = True  # If True create connections from the thalamus to the cortex
feed_forward_inhibition = True  # If True add feed-forward inhibition (i -> e)
cortical_excitatory_feedback = True  # If True add cortical excitatory feedback (e -> e) and (e -> i)
background_noise = True  # If True add cortical noise
correlated_noise = False  # Makes the noise correlated
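As a rough illustration of how these flags are used (the population sizes, connectors and synaptic parameters below are placeholders, not the actual values in full_model.py), each flag simply guards the creation of the corresponding PyNN projection:

import pyNN.nest as simulator

simulator.setup(timestep=0.1)

# Placeholder populations standing in for the ones built in full_model.py
lgn_on = simulator.Population(100, simulator.SpikeSourcePoisson(rate=20.0))
excitatory = simulator.Population(100, simulator.IF_cond_exp())
inhibitory = simulator.Population(25, simulator.IF_cond_exp())

thalamo_cortical_connections = True
feed_forward_inhibition = True

# Each boolean flag guards the creation of one projection, so a pathway can be
# switched off without touching the rest of the script.
if thalamo_cortical_connections:
    simulator.Projection(lgn_on, excitatory,
                         simulator.FixedProbabilityConnector(0.1),
                         synapse_type=simulator.StaticSynapse(weight=0.001, delay=1.0),
                         receptor_type='excitatory')

if feed_forward_inhibition:
    simulator.Projection(inhibitory, excitatory,
                         simulator.FixedProbabilityConnector(0.1),
                         synapse_type=simulator.StaticSynapse(weight=0.001, delay=1.0),
                         receptor_type='inhibitory')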
This is all regarding the general structure of the model. The remaining part of full_model.py is composed of two main sections. The first one sets the parameters of the neurons and of the connections, which follow the paper. The second part is the building of the model in PyNN; this is detailed in the companion blog of this project. In order to allow the user to interact with the model immediately, and to give a clearer understanding of how the different parts of the Troyer model can be reproduced with our code (and of its limitations), we provide a series of scripts that qualitatively reproduce a substantial number of the figures in Troyer's original paper.
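The figure-reproduction scripts below all rely on roughly the same PyNN pattern of recording, running the simulation and retrieving the data. A minimal sketch of that pattern (the population and the recorded variables are illustrative placeholders, not the real code, which is covered in the companion blog) is:

import pyNN.nest as simulator

simulator.setup(timestep=0.1)

# Placeholder population standing in for the excitatory layer of full_model.py
excitatory = simulator.Population(100, simulator.IF_cond_exp())

# Record membrane potential and the excitatory / inhibitory synaptic conductances
excitatory.record(['v', 'gsyn_exc', 'gsyn_inh'])

simulator.run(1000.0)  # simulation time in ms

data = excitatory.get_data()  # a Neo Block holding the recorded signals
simulator.end()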
Scripts to Reproduce the Figures¶
First we have the LGN response. In order to obtain the results in figure 1a we have to run the file troyer_plot_1a.py. We obtain something like the following.
Then we have the mechanism that samples connections from a Gabor function, shown in figure 2. In order to obtain the connectivity pattern and to see how the parameters affect the final outcome, the script troyer_plot2.py can be used. If run it will produce a figure similar to the following one:
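Independently of that figure, the idea behind the sampling can be sketched as follows (the Gabor parameters, the grid of LGN positions and the use of the magnitude as a probability are illustrative assumptions, not the exact scheme in the script): the Gabor function is evaluated at each LGN cell position, its positive lobes pick ON inputs and its negative lobes pick OFF inputs.

import numpy as np

def gabor(x, y, w=0.8, phi=0.0, sigma=1.0, gamma=0.5, theta=0.0):
    """Two-dimensional Gabor function with illustrative default parameters."""
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xp ** 2 + (gamma * yp) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * w * xp + phi)

rng = np.random.RandomState(42)

# Grid of LGN cell positions (degrees of visual field)
positions = [(x, y) for x in np.linspace(-3, 3, 10) for y in np.linspace(-3, 3, 10)]

# Sample thalamo-cortical inputs: ON cells where the Gabor is positive,
# OFF cells where it is negative, with probability proportional to its magnitude.
on_inputs, off_inputs = [], []
for index, (x, y) in enumerate(positions):
    g = gabor(x, y)
    if rng.rand() < abs(g):
        (on_inputs if g > 0 else off_inputs).append(index)

print('%d ON and %d OFF inputs sampled' % (len(on_inputs), len(off_inputs)))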
We also have a script that plots the total conductance contribution from the LGN to the excitatory layer for the preferred and null orientations, as shown in the paper's figure 3a. In order to play with how the parameters change the profile of this contribution, the script troyer_plot3a.py can be explored. If run with a particular simulator (run troyer_plot3a.py nest) it will produce an output like this:
- In order to compare the excitatory effects that come from the LGN with the inhibitory input that comes from the inhibitory layer, we plot the excitatory and inhibitory conductances as Troyer did for the currents in figure 7a (the conductance here being a proxy for the current, which in the Troyer paper is calculated as if the voltage were clamped at threshold). In order to explore the dynamics of these effects we can run troyer_plot7a.py with nest or neuron as an argument. This should produce a figure like the following:
- In order to explore the connections between the cortical layers, we created a script that reproduces the general pattern seen in figure 7b of Troyer's paper. To run it we use troyer_plot8b.py
- Finally, if we want to see how the parameters and options of the model affect the voltage traces of a particular set of neurons, we can run the script troyer_plot9.py with nest or neuron. This will produce a figure in the spirit of figure 9 in the paper.
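All of these figure scripts take the simulator name (nest or neuron) on the command line. A generic sketch of how such an argument can be turned into a PyNN backend (not necessarily the exact code used in the scripts) is:

import sys
from importlib import import_module

# e.g. "python troyer_plot3a.py nest" or "python troyer_plot3a.py neuron"
simulator_name = sys.argv[1] if len(sys.argv) > 1 else 'nest'
simulator = import_module('pyNN.' + simulator_name)

simulator.setup(timestep=0.1)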
Caveats, Missing Features and Further Work¶
As the code stands, the model is able to reproduce qualitatively most of the behaviours of the Troyer paper. There is, however, a need for tuning in order to achieve behaviour that is also quantitatively consistent with the paper. We believe this boils down to the fact that some features of the original model are still missing from our implementation, and their absence destroys the fine tuning. Among them we find:
- The Troyer paper uses a conductance that not only falls exponentially but also rises, as opposed to the one that we use, which is limited to the falling part (a sketch of the difference is given at the end of this section).
- The Troyer paper uses a variable delay after each synaptic event.
- The Troyer paper uses a correlation-based connectivity algorithm for the cortical connections that relies on the correlation between the LGN receptive fields, instead of using the Gabor filters directly as we did.
Further work will be to add to PyNN the capability to handle such situations, in order to implement a Troyer model that is more faithful to the original intentions of the paper.
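To make the first caveat concrete, here is a small sketch comparing the single-exponential conductance we currently use with a difference-of-exponentials conductance that also has a rising phase (the time constants are illustrative, not the values from the paper):

import numpy as np

t = np.arange(0.0, 20.0, 0.1)  # ms after a synaptic event
tau_decay = 5.0  # ms, illustrative
tau_rise = 1.0   # ms, illustrative

# Single-exponential conductance (what our implementation uses):
# it jumps instantaneously and then decays.
g_single = np.exp(-t / tau_decay)

# Difference-of-exponentials conductance (closer to the Troyer paper):
# it rises smoothly before decaying.  Normalised so that its peak is 1.
g_rise_fall = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
g_rise_fall /= g_rise_fall.max()

print('rising-and-falling conductance peaks at t = %.1f ms' % t[np.argmax(g_rise_fall)])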
Details¶
LGN - spikes¶
In brief, the retina and thalamus part of the model can be represented by a spatio-temporal filter that, when convolved with the stimulus, produces the firing rate of a given LGN cell. After that, we can use a non-homogeneous Poisson process to produce the corresponding spikes for each cell. We describe this in detail below.
Spatio-Temporal Receptive Field (STRF)¶
The file kernel_functions.py contains the code for creating the STRF. The spatial part of the kernel has a center-surround architecture which is modelled as a difference of Gaussians. The temporal part of the receptive field has a biphasic structure; we use the implementation described in Cai et al. (1998). The details of the implementation are described in the companion blog of this project (see the post Retinal Filter I). Below we present a kernel produced with these classes. Time runs from left to right and from top to bottom, as in ordinary text, so we can see how the spatial component of the filter changes in time across this series of two-dimensional maps.
We also include a small script center_surround_plot.py that can be used to visualize the spatial component of the STRF and receive immediate feedback on how the overall pattern changes when the parameters and resolutions are changed.
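A minimal sketch of the two ingredients follows; the parameter values, normalisations and the exact biphasic form are placeholders (the real ones live in kernel_functions.py and follow Cai et al., 1998):

import numpy as np

def difference_of_gaussians(x, y, sigma_center=0.3, sigma_surround=1.0, k_surround=0.8):
    """Center-surround spatial kernel built as a difference of Gaussians."""
    r2 = x ** 2 + y ** 2
    center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
    surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
    return center - k_surround * surround

def biphasic_temporal(t, tau_fast=0.02, tau_slow=0.06, beta=0.7):
    """Illustrative biphasic temporal kernel (a positive lobe followed by a negative one).
    The actual functional form used in kernel_functions.py follows Cai et al. (1998)."""
    fast = (t / tau_fast) * np.exp(-t / tau_fast)
    slow = (t / tau_slow) * np.exp(-t / tau_slow)
    return fast - beta * slow

# Sample a separable space-time kernel on a small grid
x = y = np.linspace(-2.0, 2.0, 21)   # degrees of visual field
X, Y = np.meshgrid(x, y)
t = np.linspace(0.0, 0.3, 30)        # seconds

strf = biphasic_temporal(t)[:, None, None] * difference_of_gaussians(X, Y)[None, :, :]
print(strf.shape)  # (time, y, x)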
Stimuli¶
The file stimuli_functions.py contains the code for creating the stimuli. In particular we use the implementation of a full-field sinusoidal grating with the parameters described in the paper. We also include a small script sine_grating_plot.py to visualize how the sine grating looks at a particular point in time.
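A sketch of such a stimulus (the spatial frequency, temporal frequency, orientation and contrast below are placeholders, not the parameters from the paper):

import numpy as np

def sine_grating(x, y, t, K=0.8, theta=0.0, w=2.0, A=1.0, phi=0.0):
    """Full-field drifting sinusoidal grating.

    K: spatial frequency (cycles/degree), theta: orientation (radians),
    w: temporal frequency (Hz), A: amplitude/contrast, phi: spatial phase."""
    return A * np.cos(2 * np.pi * K * (x * np.cos(theta) + y * np.sin(theta))
                      - 2 * np.pi * w * t + phi)

# Snapshot of the grating at t = 0 on a small grid of visual-field positions
x = y = np.linspace(-3.0, 3.0, 50)
X, Y = np.meshgrid(x, y)
frame = sine_grating(X, Y, t=0.0)
print(frame.shape)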
Convolution¶
Once we have the stimulus and the STRF we can use the convolution function defined in the file analysis_functions.py to calculate the response of the LGN neurons. The details of how the convolution is implemented are described in the corresponding entry of the blog (Retinal Filter II).
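The gist of that step can be sketched as follows (the real function in analysis_functions.py handles units, resolutions and the background rate more carefully than this toy version): the stimulus is multiplied by the STRF centred on the cell, summed over space and over past time, and the result is added to a background rate and rectified to give a firing rate.

import numpy as np

def firing_rate(stimulus, strf, background_rate=10.0, dt=0.001, dx=0.1):
    """Toy spatio-temporal convolution for a single LGN cell.

    stimulus: array of shape (time, y, x); strf: array of shape (lag, y, x),
    both sampled on the same spatial grid with time step dt (s) and pixel size dx (deg).
    Returns the firing rate in Hz at every time step."""
    n_lags = strf.shape[0]
    rates = np.zeros(stimulus.shape[0])
    for ti in range(stimulus.shape[0]):
        drive = 0.0
        for lag in range(min(n_lags, ti + 1)):
            # stimulus in the past weighted by the kernel at that lag, summed over space
            drive += np.sum(stimulus[ti - lag] * strf[lag]) * dx * dx * dt
        rates[ti] = max(background_rate + drive, 0.0)  # rates cannot be negative
    return rates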
Producing Spikes¶
Once we have the firing rate of a neuron we can use the produce_spikes function in the file analysis_functions.py. This function takes the firing rate and, using a non-homogeneous Poisson process, outputs an array with the spike times. We provide in the repository the script produce_lgn_spikes_one.py for testing variations of the parameters and as an example showcase.
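One simple way to realise such a process is the per-bin approximation sketched below (the actual produce_spikes function may use a different scheme): in each small time bin the probability of a spike is approximately rate * dt.

import numpy as np

def poisson_spikes_from_rate(rate, dt=0.001, rng=np.random):
    """Draw spike times (s) from a time-varying rate (Hz) sampled every dt seconds."""
    spike_times = []
    for i, r in enumerate(rate):
        if rng.rand() < r * dt:  # probability of a spike in this bin
            spike_times.append(i * dt)
    return np.array(spike_times)

# Example: a rate that oscillates around 40 Hz for one second
t = np.arange(0.0, 1.0, 0.001)
rate = 40.0 * (1.0 + np.sin(2 * np.pi * 2.0 * t))
spikes = poisson_spikes_from_rate(rate)
print('%d spikes in 1 s' % len(spikes))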
Storing Spikes¶
Now that we have the complete spike-creation mechanism, we can use the files produce_lgn_on_spikes.py and produce_lgn_off_spikes.py to create the spikes for the ON and OFF LGN cells. Each file creates a grid of positions (this should correspond to the grid of LGN cells that we are going to use in PyNN), produces the list of spikes associated with them, and stores the spikes and positions using cPickle.
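As an illustration of the idea (the file name and the exact layout of what the real scripts store are assumptions), spike trains and positions can be pickled together and read back like this:

import cPickle

import numpy as np

# Hypothetical example data: one array of spike times per LGN cell plus its position
positions = [(x, y) for x in np.arange(-3.0, 3.0, 1.0) for y in np.arange(-3.0, 3.0, 1.0)]
spike_trains = [np.sort(np.random.uniform(0.0, 1000.0, 50)) for _ in positions]

# Store for later use by full_model.py (the file name here is just an example)
with open('./data/example_on_spikes.pickle', 'wb') as f:
    cPickle.dump((spike_trains, positions), f, protocol=2)

# ...and load them again
with open('./data/example_on_spikes.pickle', 'rb') as f:
    spike_trains, positions = cPickle.load(f)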