Deep Learning 2018
Sheet 1: Logistic and Softmax Regression
Sheet 2: Resampled t Statistics
Sheet 4: MLP Autoencoder
Sheet 6: Convolutional Networks
Sheet 7: Graph Convolutional Networks
Sheet 8: Triplet Loss on CBF
Sheet 9: Adversarial Training
Sheet 10: Bayesian Inference
Task 1: Logistic Regression
Task 2: Softmax Regression
Task 1: Logistic Regression
Assignment
Scaffold Head
import numpy as np
import csv

def get_titanic(split_ratio=0.66, seed=42, affine_embedding=False):
    X, Y = [], []
    # parse the data, omit the name column
    with open('titanic.csv', 'r') as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        for row in reader:
            try:
                label = row[0]
                feats = row[1:2] + [row[3] == "female"] + row[4:]
                if affine_embedding:
                    feats += [1]
                X.append([float(x) for x in feats])
                Y.append([float(label)])
            except (ValueError, IndexError):
                # skip the header and malformed rows
                print("# Skipped row:", row)
    X, Y = np.array(X), np.array(Y)
    # split the data according to the train/test ratio
    np.random.seed(seed)
    iota = np.random.choice(len(X), len(X), replace=False)
    offs = int(split_ratio * len(X))
    return (X[iota[:offs]], Y[iota[:offs]]), \
           (X[iota[offs:]], Y[iota[offs:]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_evaluate(X, B):
    return sigmoid(X.dot(B))

def loss_and_accuracy(X, Y, B):
    # binary cross-entropy loss and classification accuracy
    F = forward_evaluate(X, B)
    return -np.mean(Y * np.log(F) + (1 - Y) * np.log(1 - F)), \
           np.mean(np.round(F) == Y)
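For reference, the loss computed by loss_and_accuracy is the mean binary cross-entropy. Its gradient with respect to the weight vector B, which any gradient-based training code for this task needs, has the standard closed form (n is the number of samples, sigma the sigmoid):

```latex
L(B) = -\frac{1}{n}\sum_{i=1}^{n}\Bigl[y_i \log \sigma(x_i^\top B)
       + (1-y_i)\log\bigl(1-\sigma(x_i^\top B)\bigr)\Bigr],
\qquad
\nabla_B L(B) = \frac{1}{n}\, X^\top\bigl(\sigma(XB) - Y\bigr).
```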
Scaffold Foot
loss, acc = loss_and_accuracy(X_test, Y_test, B)
print("Deep Learning is fun!" if acc > 0.70 else "Deep Learning sucks!")
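The gap between scaffold head and foot is where the training code goes. A minimal self-contained sketch of the expected approach, batch gradient descent on the cross-entropy loss: the synthetic data, learning rate eta, and iteration count below are illustrative assumptions, not part of the assignment (the real task uses get_titanic instead).

```python
import numpy as np

# Hypothetical stand-in for the Titanic data: a synthetic, linearly
# separable binary classification problem with an affine (bias) column.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
X = np.hstack([X, np.ones((200, 1))])           # affine embedding
true_B = np.array([[1.5], [-2.0], [0.5], [0.3]])
Y = (1.0 / (1.0 + np.exp(-X @ true_B)) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_evaluate(X, B):
    return sigmoid(X @ B)

# Batch gradient descent; the gradient of the mean cross-entropy
# loss is X^T (sigmoid(XB) - Y) / n.
B = np.zeros((X.shape[1], 1))
eta = 0.5                                       # learning rate (assumption)
for _ in range(2000):
    grad = X.T @ (forward_evaluate(X, B) - Y) / len(X)
    B -= eta * grad

acc = np.mean(np.round(forward_evaluate(X, B)) == Y)
print("accuracy:", acc)
```

On the real data the same loop applies to the training split returned by get_titanic(affine_embedding=True); the scaffold foot then evaluates the learned B on the test split.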
Start time:
Mon 22 Oct 2018 10:51:00
End time:
Wed 14 Nov 2018 12:00:00
General test timeout:
10.0 seconds
Tests
Comment prefix:
#
Given input:
Expected output:
Deep Learning is fun!