Class List

Here are the classes, structs, unions and interfaces with brief descriptions:
Activation_Functions - Activation functions applied to the neurons of a whole layer. Per-neuron activation function settings are not supported
Binary_Stream_To_Text - Provides methods to operate on a configuration stream containing textual data as if it were a binary data stream
Configuration_Stream_Converter - Converts streams containing a network configuration from text to binary and vice versa
Data_Set_Interpreter - Turns a vector<double> into a collection of training samples. This is only a base class with no concrete implementation
Epoch_Based_Stop_Strategy - Training stops after N epochs (specified at construction) have passed
File_Stream< T > - A template class for working with files through the Simple_Stream interface. Note that the class works in two modes: read and append (in the latter case the file is created if it does not exist). The mode of operation is determined by the first method called: if Read() is called first, the mode is set to read; if Write() is called first, append mode is selected
Activation_Functions::Function - Holds an enumeration of all available functions
Neural_Net::Layer_Configuration_Item - Structure used to initialize a network with random weights, providing the general configuration of each network layer
Neural_Net::Layer_Descriptor - Defines a layer of neurons in the network for operating mode
Layers_Processor - The main processing class of a neural network. Performs calculations for all layers
Memory_Stream< T > - Provides access to a memory buffer through the Simple_Stream interface. Note that both Read and Write operations increment an internal pointer (in the same way as file operations). If the end of the buffer is reached, operations do not fail but return fewer bytes (or zero) than requested by Read or Write
MSE_Based_Stop_Strategy - Training stops when the Mean Squared Error stops decreasing
Mutex - A synchronization primitive for multithreaded applications. Multiple Get() method calls from one process do not lead to deadlock; in other words, the mutex is reentrant
Neural_Net - The core class of the MCPN library: the neural network itself
Activation_Functions::Pair - Structure holding pointers to a function and its derivative for convenient use
Range - Structure representing a range of numbers
Raw_Data_Reader - Reads binary data from a stream into a std::vector<double>, stopping when there is no more data in the stream
Data_Set_Interpreter::Sample - A training sample, consisting of an input and desired output pair
Spin_Lock::Scoped_Lock - The Spin_Lock object is acquired on Scoped_Lock construction and released on destruction
Spin_Mutex::Scoped_Lock - The Spin_Mutex object is acquired on Scoped_Lock construction and released on destruction
Mutex::Scoped_Lock - The Mutex is acquired on Scoped_Lock construction and released on destruction
Semaphore::Scoped_Lock - The Semaphore is acquired on Scoped_Lock construction and released on destruction
Self_Lock - An artificial thread deadlock at the user's disposal. By calling the Seal(int key_count) method, the caller causes the calling thread to deadlock and wait until the situation is resolved by other process(es) calling the Release() method key_count times
Semaphore - A synchronization primitive for multithreaded applications. Multiple Get() method calls from one process will deadlock the calling process; this is the main difference from Mutex
Sequential_Diffs_Interpreter - Interprets the dataset as a continuous sequence of result values of some function y = f(x,..). Each sample is created by a sliding window of size = number of inputs + number of outputs. The window shifts one value towards the end of the set at a time. Example: a network with 3 inputs and 2 outputs and a set of 7 values yields 3 available samples
Sequential_Diffs_No_Repeat_Interpreter - Interprets the dataset as a continuous sequence of result values of some function y = f(x,..). Each sample is created by a sliding window of size = number of inputs + number of outputs. The window shifts "size" values towards the end of the set at a time. Example: a network with 3 inputs and 2 outputs and a set of 10 values yields 2 available samples
Simple_Stream< T > - Template-based stream interface providing only the most necessary methods
Spin_Lock - A synchronization primitive for multithreaded applications. Multiple Get() method calls from one process lead to deadlock; in other words, the Spin_Lock is non-reentrant. The thread does not suspend while waiting for the resource but "spins"
Spin_Mutex - A synchronization primitive for multithreaded applications. Multiple Get() method calls from one process do not lead to deadlock; in other words, the Spin_Mutex is reentrant. The thread does not suspend while waiting for the resource but "spins"
Data_Set_Interpreter::Stats - Container for statistics gathered over the training data set
Trainer::Stats - Basic statistics provided by the neural network trainer
Stop_Training_Strategy - The sole purpose of this class is to determine when training should stop. This is only a base class with a pure virtual method should_stop to be implemented in child classes. Stop strategies are friends of the Trainer class
Layers_Processor::Thread_Data - Data passed to each processing thread. Contains unique per-thread data and some parent data, including a pointer to the Layers_Processor
Trainer - Class that makes training a network easier. It uses a stop strategy object to decide when training is complete. The Trainer class produces a network configuration that is optimal according to the stop training strategy used
Weights_Initializer - Implements the algorithm from "Improving the Learning Speed of 2-Layer Neural Networks by Choosing Initial Values of the Adaptive Weights" by Derrick Nguyen and Bernard Widrow
kr0st © 2010