NeuroCorrelation: Plasticity in a Recurrent Spiking Network Simulation
NeuroCorrelation was a 2016/2017 project centered on a 3D spiking neural network simulator built in C++ and OpenGL to study one idea in particular: spike-timing-dependent plasticity (STDP).
In short, a synapse gets stronger when one neuron tends to fire just before another, and weaker when the order is reversed.
The rule is local. Changes happen at each synapse based on spike timing, without labels or a global training loop.
This project is better understood as a biological simulation than as an ML system. It tracks membrane voltage, spike arrival, refractory behavior, and synaptic delay in order to observe network dynamics and self-organization in a recurrent circuit.
The code is on GitHub, and you can download it, build it, and run the simulation for yourself. It is an inspectable little lab for timing-based learning.
Demo
Browser brain simulation with a 3D neuron network, spike-timing-dependent plasticity, synaptic learning, and interactive WebAssembly rendering.
Controls
- Drag: Move the camera
- Scroll: Zoom in or out
- C: Switch camera mode
- M: Switch render mode
- Space: Pause or resume
- Click: Select a neuron or toggle an input
What STDP actually is
Most neural-network explanations start with activations and weights. STDP starts with events in time. Neurons spike. Those spikes arrive earlier or later. The update depends on that offset. It is only one of many plasticity rules in the brain, alongside rate-based Hebbian learning, homeostatic plasticity, short-term facilitation and depression, and inhibitory plasticity.
That is also why standard deep learning frameworks such as PyTorch feel awkward here. They are built around dense tensor operations and iterative layer-by-layer updates, while this kind of spiking model is recurrent, event-driven, and full of delayed feedback. A spike changes future state, which can trigger more spikes, which then feed back into the same circuit at different times. The question is not only how much a unit responds, but exactly when a membrane changes state.
If the presynaptic neuron tends to fire slightly before the postsynaptic neuron, the synapse is rewarded. If it tends to fire after, the synapse is penalized. In plain English: useful predictive connections survive, and useless or misleading ones fade.
This is more informative than simple co-activation. A rate-based model can tell you that two units are often active together. A timing-based model can tell you which one tends to come first. That is much closer to learning sequence, propagation, and direction.
Over time, that tends to bias the network toward directed pathways and repeatable firing sequences. Synapses that consistently help drive downstream spikes are reinforced, while others weaken, so activity begins to propagate along more reliable temporal routes. In that sense, STDP is a local rule for extracting causally useful structure from raw spike timing.
Typical STDP window
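The canonical pair-based window can be sketched as a pair of exponentials: potentiation when the presynaptic spike leads, depression when it lags. The function name and the amplitude/time-constant values below are illustrative, not the project's parameters.

```cpp
#include <cassert>
#include <cmath>

// Canonical double-exponential STDP window (illustrative parameters only).
// dt_ms = t_post - t_pre: positive when the presynaptic spike came first.
double stdp_window(double dt_ms,
                   double a_plus = 0.10, double tau_plus = 20.0,
                   double a_minus = 0.12, double tau_minus = 20.0) {
    if (dt_ms > 0.0)
        return a_plus * std::exp(-dt_ms / tau_plus);    // pre before post: potentiate
    if (dt_ms < 0.0)
        return -a_minus * std::exp(dt_ms / tau_minus);  // post before pre: depress
    return 0.0;
}
```

The exponential falloff means only near-coincident spikes matter: a 2 ms lead produces a much larger update than a 30 ms lead.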
Project rule
Δw = η (a_pre · trace_pre - a_post · trace_post)
In NeuroCorrelation, recent spikes leave behind exponentially decaying traces. Synapses keep a presynaptic trace, neurons keep a postsynaptic trace, and the difference between them determines the direction of the weight update.
That matters in a recurrent network, because the rule does not search the whole graph for a matching presynaptic and postsynaptic pair. It stays local to each synapse. For any directed edge, `pre` means the signal that most recently arrived through that synapse, while `post` means the most recent spike of the target neuron. The update is therefore attached to a connection, not to a global pairing procedure.
The continuous part comes from the traces. Instead of asking whether two discrete spikes should be explicitly matched, the simulator asks how much recent presynaptic evidence is still present at the synapse and how much recent postsynaptic activity is still present in the target neuron. Both traces decay exponentially from their last event times, so the weight update reflects a local memory of recent timing rather than a hard-coded spike pair lookup.
Delays fit naturally into that picture. The presynaptic trace is tied to spike arrival at the target, not merely to the source neuron having fired somewhere else in the graph. In practice that means the rule is sensitive to whether signals reach the synapse at the right moment, which is exactly the quantity a delayed recurrent network ought to care about.
The defaults are intentionally asymmetric: presynaptic factor `0.13`, postsynaptic factor `0.30`, with traces decaying at `0.75` and `0.65` per millisecond. That gives the network some room to reinforce stable timing patterns without letting recurrent noise run away forever.
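Put together, the rule above can be sketched in a few lines. The struct and function names here are illustrative, but the coefficients (`0.13`, `0.30`) and the per-millisecond decay factors come from the defaults just described; each trace is assumed to decay multiplicatively per elapsed millisecond.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the project's trace-based rule (names are guesses).
struct Traces {
    double pre  = 0.0;  // refreshed when a spike arrives through the synapse
    double post = 0.0;  // refreshed when the target neuron fires
};

// Decay a trace multiplicatively per millisecond since its last event.
double decay(double trace, double factor_per_ms, double elapsed_ms) {
    return trace * std::pow(factor_per_ms, elapsed_ms);
}

// dw = eta * (a_pre * trace_pre - a_post * trace_post), with the stated
// defaults a_pre = 0.13 and a_post = 0.30.
double weight_update(const Traces& t, double eta = 1.0,
                     double a_pre = 0.13, double a_post = 0.30) {
    return eta * (a_pre * t.pre - a_post * t.post);
}
```

With positive coefficients, a fresh presynaptic trace and a stale postsynaptic trace push the weight up, and the reverse pushes it down, which is the sign behavior the text describes.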
An event-driven brain, not a tensor pipeline
The simulation is driven on the CPU as an event system rather than as a dense layer stack. It keeps track of neurons, external inputs, voltage measurements, and a priority queue of future events.
When the brain advances, it schedules only what needs to happen. Inputs queue future firings over the next time window. Synapses queue delayed arrivals based on connection length and propagation speed. Neurons queue themselves around firing and refractory boundaries. It is a sparse asynchronous simulation rather than a frame-by-frame update of every unit.
The code also separates simulation ownership from render-oriented buffers. Positions and potential/activity values live in flat vectors for GPU upload, while neurons themselves are stored in a stable container so references remain valid.
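A minimal version of that event-driven core might look like the sketch below. All names are hypothetical; the point is the shape: a priority queue ordered by time, where each handler may schedule further events, so nothing is simulated until something is due.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Hypothetical sketch of an event-driven simulation core. Events fire in
// time order, and each action may schedule new future events, so the
// simulator only touches state when something actually happens.
struct Event {
    double time_ms;
    std::function<void()> action;
    bool operator>(const Event& other) const { return time_ms > other.time_ms; }
};

class EventQueue {
public:
    void schedule(double time_ms, std::function<void()> action) {
        queue_.push({time_ms, std::move(action)});
    }
    // Advance simulated time to `until_ms`, firing due events in order.
    void run_until(double until_ms) {
        while (!queue_.empty() && queue_.top().time_ms <= until_ms) {
            Event e = queue_.top();
            queue_.pop();
            now_ms_ = e.time_ms;
            e.action();  // may call schedule() and enqueue future events
        }
        now_ms_ = until_ms;
    }
    double now() const { return now_ms_; }
private:
    double now_ms_ = 0.0;
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue_;
};
```

In this picture, an input would schedule its next firings, and a firing neuron would schedule delayed synaptic arrivals at roughly `now + length / propagation_speed`, matching the description above.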
Key implementation choices
- Neurons are randomly placed inside a sphere and connect to nearby neighbors.
- Synapses start with randomized strengths, and a minority are inhibitory.
- The simulator can force full per-frame updates, but the default model tries to stay sparse.
- Background firing is injected stochastically so silent boundary regions can still be recruited into a learned pathway.
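The first two choices in that list can be sketched directly. The functions below are illustrative, not the project's code: rejection-sample positions uniformly inside a sphere, then add a directed edge between every pair closer than some cutoff.

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };

double dist(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Rejection-sample uniform positions inside a sphere of the given radius.
std::vector<Vec3> place_in_sphere(int n, double radius, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(-radius, radius);
    std::vector<Vec3> pts;
    while ((int)pts.size() < n) {
        Vec3 p{u(rng), u(rng), u(rng)};
        if (p.x * p.x + p.y * p.y + p.z * p.z <= radius * radius)
            pts.push_back(p);
    }
    return pts;
}

// Connect each neuron to every neighbor closer than max_dist (directed edges,
// so nearby pairs get synapses in both directions).
std::vector<std::pair<int, int>> connect_neighbors(const std::vector<Vec3>& pts,
                                                   double max_dist) {
    std::vector<std::pair<int, int>> edges;
    for (int i = 0; i < (int)pts.size(); ++i)
        for (int j = 0; j < (int)pts.size(); ++j)
            if (i != j && dist(pts[i], pts[j]) <= max_dist)
                edges.push_back({i, j});
    return edges;
}
```

Because edges come in both directions between nearby neurons, the later observation about STDP "orienting" edges has something to act on.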
How the model turns timing into structure
Each neuron has a resting potential around `-70 mV`, a threshold around `-55 mV`, a refractory cutoff of `2 ms`, and an action-potential shape assembled from a difference of Gaussian-like terms. In plain English, that means the spike waveform is being faked with two simple bell-shaped curves subtracted from each other instead of simulating full ion-channel physics. It is not a full Hodgkin-Huxley model, but it is also not just an abstract threshold gate.
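A difference-of-Gaussians waveform of that kind might look like the sketch below. The resting potential of `-70 mV` comes from the text; every other constant is a guess chosen only to produce the qualitative shape (a sharp depolarizing peak followed by a dip below rest), not the project's actual parameters.

```cpp
#include <cassert>
#include <cmath>

double gauss(double t, double amp, double center, double width) {
    double z = (t - center) / width;
    return amp * std::exp(-0.5 * z * z);
}

// Illustrative action-potential shape: a narrow depolarizing bump minus a
// wider, later bump, on top of the resting potential. All amplitudes,
// centers, and widths here are illustrative guesses.
double ap_voltage(double t_ms) {
    const double rest = -70.0;                      // resting potential (mV)
    double spike = gauss(t_ms, 100.0, 1.0, 0.4);    // fast upstroke/downstroke
    double after = gauss(t_ms, 15.0, 2.0, 1.2);     // slower recovery term
    return rest + spike - after;
}
```

Subtracting the second, wider Gaussian is what produces the brief after-hyperpolarization below rest, which a single bump could not give you.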
When a neuron fires, its outgoing synapses schedule deliveries into the future. When a synaptic event arrives, the target neuron integrates that effect, updates its potential, and the synapse immediately runs its plasticity rule. That event-driven structure has an important computational benefit: the simulator can skip forward from event to event instead of updating every neuron at every frame. If nothing relevant is happening for a neuron right now, it does not need to be fully simulated until the next scheduled arrival, threshold crossing, or refractory boundary. Useful paths get reinforced because they repeatedly line up in time. The tradeoff is that this kind of event-driven recurrence is harder to parallelize cleanly, because each spike can change what needs to happen next elsewhere in the network.
That has a deep consequence: the network begins to orient edges. Even if two nearby neurons have synapses in both directions, STDP tends to make one direction win. Over time the graph stops acting like an undirected cloud and starts acting more like a directed flow field.
Why this matters
The value here is not that the model solves a useful task. It is that it makes a timing-based learning rule visible.
By watching the network run, you can see how local spike timing biases connectivity, how directed pathways emerge in a recurrent circuit, and how delays and refractoriness shape the patterns that survive.
What emerges from the experiments
The smallest test is the clearest. In the three-neuron experiment, one source neuron projects to two targets, and the input spikes are offset by `+2 ms` and `-2 ms`. One synapse climbs toward `1.0` while the other decays toward `0.0`. That is STDP in its cleanest possible form, and it matched the basic behavior reported in the biological timing experiments the project was modeled on.
The larger 750-neuron tests are more interesting architecturally. With one input source, the network initially erupts into broad recurrent activity, then gradually weakens many positive loops until the input starts propagating in a more stable directional way. That was one of the clearest findings in the report: the network could partially counteract its own runaway feedback, and the activity stopped looking like a uniform burst and started looking more like outward flow.
Another important finding was that background firing mattered much more than expected. It was what let activity spread into neurons at the edge of an already active region, which in turn allowed the learned propagation range to grow over time. The report also notes small self-supporting loops of activity, which appeared and disappeared inside the larger network.
In the three-input experiment, two inputs always shared the same firing rate while the third did not. There the network did separate correlated from uncorrelated structure: the linked inputs recruited overlapping regions, while the unrelated one separated out. That is probably the strongest result in the report. It suggests the system was not only reacting to activity levels, but organizing around temporal relationships between inputs.
This is also where the link to polychronization starts to matter. The report does not claim to have identified full polychronous groups, but once delays, directed pathways, and repeatable firing routes begin to appear, the network is moving in that direction: away from simple co-activation and toward reproducible time-locked patterns.
What the renderer contributes
The OpenGL/ImGui renderer is not decorative. It is the measurement instrument. The project exposes:
- Voltage plots for selected neurons.
- Weight histories for outgoing synapses.
- Activity and weight histograms over time.
- A raster plot for spike timing.
- Multiple rendering modes for voltage, plasticity, activity, and signal spread.
That makes the project feel less like a toy demo and more like a small exploratory lab. You are not only watching neurons light up. You are inspecting the mechanisms that make them reorganize.
Raster Plot
This view shows spike times across the network over a five-second simulation window. Each dot is a firing event, and the repeated vertical bands come from the launcher input pulsing at `35 Hz`.
The denser horizontal traces are more interesting: they point to small recurrent subcircuits whose neurons keep each other active even between launcher bursts. In other words, the plot is not only showing externally driven rhythm, but also pockets of internally sustained activity.
What the project shows
NeuroCorrelation is not a polished model of the brain, and it is not a competitive ML system. What it does show, quite clearly, is that timing alone can reorganize a recurrent network in visible ways.
Once spikes have delay, threshold, and local plasticity, the network stops looking like a uniform mass of connections. Some routes stabilize, others fade, and activity begins to follow more specific temporal paths. That is a modest result, but it is a real one.
Full report
The full write-up is available in both English and Swedish.