
Layers_Processor Class Reference

The main processing class of a neural network. Performs calculations for all layers. More...

#include <Layers_Processor.h>

[Collaboration diagram for Layers_Processor omitted]


Classes

struct  Thread_Data
 Data that should be passed to each processing thread. Contains unique per-thread data and some parent data, including a pointer to the owning Layers_Processor. More...
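
A hedged sketch of what Thread_Data might carry, based only on the description above and the cage_ / Thread_Data::lock cross-reference further down this page; the member names other than lock are assumptions, not taken from the actual header:

    class Layers_Processor; // from Layers_Processor.h
    class Self_Lock;        // lock type used by this library

    // Illustrative only: the exact members are not documented here.
    struct Thread_Data
    {
        int                thread_index; // this thread's offset in [0, N)
        Layers_Processor * parent;       // pointer back to the owner
        Self_Lock *        lock;         // pauses the thread (see cage_)
    };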

Public Member Functions

int Propagate ()
 Propagates the network input through all layers and calculates the network output. Used in both training and operation modes.
int Back_Propagate ()
 Propagates the error backward through all layers, from the last to the first, and calculates the next weight vectors for all neurons. Used in training mode.
 Layers_Processor (int threads_count, Neural_Net *net)
 Constructor; takes the number of threads to run in parallel. For optimal performance, use a value equal to or slightly higher than the number of physical CPUs/cores.
 ~Layers_Processor ()
 Destructor; frees all resources and gracefully closes all running threads (this can take a few additional milliseconds because it waits for the threads to complete).
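
A minimal usage sketch, assuming a Neural_Net object has already been constructed and loaded; Neural_Net's own API is not documented on this page, and constructing the processor inside the function is for brevity only:

    #include <Layers_Processor.h>

    void train_one_step(Neural_Net & net)
    {
        // Roughly one worker thread per physical core.
        Layers_Processor processor(4, &net);

        processor.Propagate();      // forward pass: input -> output
        processor.Back_Propagate(); // training: compute next weight vectors

        // ~Layers_Processor() gracefully stops the worker threads when
        // `processor` goes out of scope.
    }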

Protected Member Functions

void Process (Thread_Data *data)
 Method executed by each separate processing thread inside the Layers_Processor class. Takes the thread's setup data.
void Stop_Threads (Self_Lock &cage)
 Stops all running threads, waiting indefinitely for each one to close. Takes a Self_Lock object that is used to pause the calling thread until all threads have finished.

Protected Attributes

int threads_count_
 Shows how many threads are used for processing the current set of layers.
std::vector< Thread_Data > thread_data_
 Holds Thread_Data information on all threads that are under Layers_Processor control.
Neural_Net * net_
 Pointer to parent Neural_Net object.
Self_Lock cage_
 Described in Thread_Data::lock.

Detailed Description

The main processing class of a neural network. Performs calculations for all layers.

Its role is to parallelize the computations performed on neural network layers: it distributes tasks equally between multiple processors (cores) by using multiple processing threads.

The current algorithm is as follows: if the object was constructed with a thread count of N, then N neurons of each layer are processed in parallel. When the first thread finishes calculating neuron 1, it chooses the next neuron to calculate simply by computing 1 + N (current neuron index plus an increment of N). The Nth thread does the same: it processes the Nth neuron, then increments its index by N, choosing the neuron with index N + N next.

All threads start processing each layer at different neuron indexes (1, 2, ..., N), and because the increment is N there is no overlap: all threads calculate different neurons, and no two threads ever calculate the same one. Note that the indexes in this example are 1-based.
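
A short sketch of this strided assignment (0-based indexes here, unlike the 1-based example above; process_neuron is a hypothetical stand-in for the real per-neuron computation, which this page does not show):

    void process_neuron(int index); // hypothetical per-neuron work

    // Each of the N worker threads runs this loop for every layer.
    // Threads start at distinct offsets (0, 1, ..., N-1) and advance in
    // steps of N, so no two threads ever visit the same neuron.
    void process_layer(int thread_index, int threads_count, int neurons_in_layer)
    {
        for (int n = thread_index; n < neurons_in_layer; n += threads_count)
            process_neuron(n);
    }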

kr0st © 2010