MLKit/Classes/ANN/Learning/NNOperations.swift
+1 line changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@
 import Foundation
+
 /// The NNOperations (Neural Network Operations) class computes activation function values as well as the derivatives of those activation functions.
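To illustrate the two responsibilities the comment describes, here is a minimal sketch of an activation function and its derivative in Swift. The function names are illustrative assumptions, not MLKit's actual API.

```swift
import Foundation

// Illustrative sketch only; NNOperations' real method names may differ.
// Sigmoid activation: squashes any input into the range (0, 1).
func sigmoid(_ x: Float) -> Float {
    return 1.0 / (1.0 + exp(-x))
}

// The sigmoid's derivative can be written in terms of the activation
// value itself: f'(x) = f(x) * (1 - f(x)), which is convenient during
// the weight-update step of training.
func sigmoidDerivative(_ x: Float) -> Float {
    let s = sigmoid(x)
    return s * (1.0 - s)
}
```

Expressing the derivative in terms of the already-computed activation value avoids recomputing the exponential, which is why activation classes typically pair the two functions.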
MLKit/Classes/ANN/Learning/Training.swift
+1 line changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ import Foundation
 import Upsurge
+
 /// The Training protocol defines the methods used for training a NeuralNet object. Note that the `train` method in this protocol's extension is used only for Neural Network architectures such as Adaline and Perceptron; there is no backpropagation method within the Training protocol. The Backpropagation class adopts the Training protocol in order to implement methods that pertain to printing/debugging values. The Backpropagation algorithm has its own `train` method. The Adaline and Perceptron architectures perform weight updates and training completely differently from the techniques found in Backpropagation, which is why I have separated them.
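The split the comment describes, where simpler architectures share a default `train` while Backpropagation only borrows the debugging helpers, maps naturally onto a Swift protocol extension. The following is a rough sketch under assumed names, not MLKit's actual declarations.

```swift
// Sketch of the described design; member names are assumptions.
protocol Training {
    // Default implementation used only by Adaline/Perceptron-style nets.
    func train(network: NeuralNet) -> NeuralNet
}

extension Training {
    // Printing/debugging helpers like this are what a Backpropagation
    // class would reuse from the protocol, while providing its own
    // `train` method with a different weight-update scheme.
    func printWeights(_ weights: [Float]) {
        for (index, weight) in weights.enumerated() {
            print("w[\(index)] = \(weight)")
        }
    }
}
```

A conforming Backpropagation type can shadow `train` with its own implementation while still inheriting the shared debugging utilities, which matches the separation the author describes.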
MLKit/Classes/ANN/NeuralNet.swift
+10 -1 Lines changed: 10 additions & 1 deletion
@@ -355,6 +355,15 @@ open class NeuralNet {
         return newNeuralNetwork
     }

+    /**
+     The forward method allows a NeuralNet object to take a list of inputs (corresponding to the number of input-layer neurons in your NeuralNet object) and receive a list of output values (whose length depends on the number of output-layer neurons available).
+
+     - parameter input: An array of Float values. NOTE: Don't forget to make the first input value a '1' (this is your bias value).
+
+     - returns: A list of Float values corresponding to the output of your NeuralNet object.
+     */
     public func forward(input: [Float]) -> [Float] {

         return forwardProcess(network: self, input: input)
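Given the documented signature, a call site might look like the sketch below. The initializer shown is a placeholder assumption; only `forward(input:)` comes from the diff above.

```swift
// Hypothetical usage; the NeuralNet initializer here is an assumption.
let net = NeuralNet()

// Per the doc comment, the FIRST element must be 1.0 (the bias value);
// the remaining elements are the actual feature inputs.
let output: [Float] = net.forward(input: [1.0, 0.25, 0.75])

// One Float per output-layer neuron.
print(output)
```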
@@ -454,7 +463,7 @@ open class NeuralNet {

     /**
-     The trainNet method trains the Neural Network with the methods available (PERCEPTRON, ADALINE, and BACKPROPAGATION).
+     The trainNet method trains the Neural Network with the methods available (PERCEPTRON, ADALINE, and BACKPROPAGATION). It is advised that you use this method for supervised learning.
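Since the comment names three training methods, selection presumably happens via a training-type value passed to `trainNet`. The parameter label and enum below are assumptions for illustration only.

```swift
// Hypothetical call; parameter label and case names are assumptions,
// based only on the three methods named in the doc comment.
let trainedNet = net.trainNet(trainingType: .BACKPROPAGATION)
```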
MLKit/Classes/Genetic Algorithms/Genome.swift
+1 -1 Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@

 import Foundation

-/// Blueprint for a Genome. It is encouraged that you create your own `generateFitness` method as there are several ways to assess fitness. You are required, on the other hand, to have a genotype representation and a fitness for every Genome.
+/// Protocol for a Genome. It is encouraged that you create your own `generateFitness` method, as there are several ways to assess fitness. You are required, however, to have a genotype representation and a fitness value for every Genome.
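A type conforming to such a protocol might look like the sketch below. The property names and the bit-counting fitness function are assumptions chosen to show the two requirements the comment states: a genotype representation and a fitness value.

```swift
// Illustrative conformance; member names beyond the doc comment are assumed.
struct BitStringGenome {
    var genotypeRepresentation: [Int]   // required: a genotype representation
    var fitness: Float = 0.0            // required: a fitness value

    // A user-supplied fitness assessment, as the comment encourages;
    // here we simply count the 1-bits in the genotype.
    mutating func generateFitness() {
        fitness = Float(genotypeRepresentation.reduce(0, +))
    }
}
```

Leaving `generateFitness` to the conforming type keeps the protocol agnostic about how fitness is assessed, which is the flexibility the comment argues for.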