
Commit b707a6c ("init")

1 parent: 701c772

File tree

2 files changed: +73, -0 lines

Fall20/NeuralNetworks1/README.md

Lines changed: 22 additions & 0 deletions
# 1. Introduction to AI and Neural Networks

Welcome to the repo for ACM AI's first workshop on AI and Neural Networks, hosted in Fall 2020.

Here's a quick rundown of what was in the workshop!

1. What is AI?
2. Stats and history of AI
3. What is AI all about? Is it just Neural Networks? (the answer is no)
4. Applications
5. What is a Neural Network?
6. What is a Neuron, biologically and computationally?
7. Weights and biases of a network
8. Activation functions
9. Linear vs. nonlinear activations
10. Vectorized inputs and implementations
11. Resources

This folder also contains the code we demoed in the workshop. The demos use the `numpy` package; to install it, run

```
pip install numpy
```
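
As a small taste of items 5-10 above, here's a minimal sketch of a single artificial neuron in `numpy` (our illustration, not from the workshop itself; the weight, bias, and input values are made up):

```
import numpy as np

w = np.array([0.5, -0.2, 0.1])  # one weight per input (made-up values)
b = 0.3                         # bias term
x = np.array([1.0, 2.0, 3.0])   # a 3-element input vector

z = np.dot(w, x) + b            # weighted input: w . x + b
a = max(z, 0.0)                 # ReLU activation keeps only the positive part
print(a)
```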
Lines changed: 51 additions & 0 deletions
import numpy as np

# identity activation: returns the weighted input unchanged (a linear activation)
def activation(z):
    return z

# Simple 2-layer neural network that returns the average of 3 numbers given as a 3x1 column vector.
# This function does a "forward pass" of the input x through the 2-layer network and returns the result.
def average_nn(x):
    weights = np.array(
        [1/3, 1/3, 1/3]  # a weight vector of length 3, acting as a 1 x 3 matrix
    )
    bias = 0
    weighted_input = np.matmul(weights, x) + bias
    y = activation(weighted_input)
    return y

x = np.array([2, 3, 4])  # a length-3 vector, acting as a 3 x 1 column
print("The average of {} is {}".format(x, average_nn(x)))

# seed our random number generator for reproducibility
np.random.seed(0)

w_2 = np.random.rand(10, 3) - 0.5  # 10 x 3 matrix
b_2 = np.random.rand() - 0.5
w_3 = np.random.rand(2, 10) - 0.5  # 2 x 10 matrix
b_3 = np.random.rand() - 0.5

def random_nn_activation(z):
    # Try defining different activations here given the weighted input z; they will change the results of our neural network.
    # The one given here is ReLU (Rectified Linear Unit). See if you can work out what res[res < 0] = 0 means!
    res = z.copy()
    res[res < 0] = 0
    return res
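
# An alternative nonlinearity to experiment with (our sketch, not part of the
# original workshop file): the sigmoid squashes each weighted input into (0, 1).
def sigmoid_activation(z):
    return 1 / (1 + np.exp(-z))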

# This is a random neural network with slightly more layers that does the same kind of thing: it performs a forward pass of x through the network.
# Architecturally, the network has 3 layers of sizes 3, 10, and 2.
# See if you can understand what's happening with the matrix multiplications and why our architecture is 3, 10, 2!
def random_nn(x):
    # typically, z represents the "weighted input" to the next layer and a is its activation
    a_1 = x

    z_2 = np.matmul(w_2, a_1) + b_2
    a_2 = random_nn_activation(z_2)

    z_3 = np.matmul(w_3, a_2) + b_3
    a_3 = random_nn_activation(z_3)

    return a_3

print("On the 3-layer network, input {} fed forward gives {}".format(x, random_nn(x)))
