Data science

Introduction to statistical modeling

Linear models serve as a "toy" prerequisite to machine learning: although machine learning is far more general, working through linear problems first lets us see the objectives and capabilities of machine learning more clearly.
  1. Statistical models; causal models
  2. Normal linear models
  3. Hierarchical models
  4. SVM, k-means
  5. Kernels and GLMs [1]
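As a minimal concrete example of a normal linear model from the list above, the following sketch fits ordinary least squares to simulated data with NumPy (the design matrix, coefficients, and noise level are all illustrative, not from any particular dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the normal linear model y = X @ beta + Gaussian noise
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Ordinary least squares estimate: argmin_beta ||X beta - y||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # should be close to beta_true
```

With 200 observations and small noise, the estimate recovers the true coefficients to within a few hundredths.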
Non-linear models
  1. Hierarchical clustering, bootstrap [1], kNN
  2. ACE, backfitting, etc.
  3. Game dynamics, MCMC, etc.
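The bootstrap mentioned above is easy to sketch: resample the data with replacement many times, recompute the statistic of interest on each resample, and read a confidence interval off the resulting distribution. A minimal nonparametric-bootstrap example (the data here are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=100)  # illustrative sample

# Nonparametric bootstrap: resample with replacement, recompute the mean
B = 2000
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(B)
])

# Percentile 95% confidence interval for the population mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, hi)
```

The percentile interval is the simplest of several bootstrap intervals; bias-corrected variants refine it.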

Neural networks

  1. The universal approximation theorem (+ Turing completeness of RNNs)
  2. Gradient descent and modifications [1][2]
  3. Backpropagation and the chain rule
  4. Overview of basic neural network architectures [1]
  5. Bayesian prior of neural networks [1] [2]
  6. Gaussian processes and infinite-width networks [1] [2]
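Gradient descent and its modifications, and the role of the chain rule, can both be illustrated on a toy quadratic objective (the matrix, target, and hyperparameters below are arbitrary choices for demonstration):

```python
import numpy as np

# Minimize f(w) = ||A w - b||^2 by gradient descent with momentum
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([4.0, 1.0])

def grad(w):
    # Chain rule: d/dw ||A w - b||^2 = 2 A^T (A w - b)
    return 2 * A.T @ (A @ w - b)

w = np.zeros(2)
v = np.zeros(2)          # momentum buffer
lr, momentum = 0.05, 0.9
for _ in range(500):
    v = momentum * v - lr * grad(w)
    w = w + v

print(w)  # converges toward the minimizer [2, 1]
```

Backpropagation is this same chain-rule computation applied layer by layer through a network's composition of functions.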
  1. A deeper look into convolutional networks [1]
  2. Playing with frequency representations [1]
  3. Neural networks to simulate an entropy source
  4. This Chinese character does not exist (Trained version on Github)
  5. This Letter does not exist
  6. One-shot generative neural networks [1][2]
  7. Random walk of character shapes; this font does not exist
  8. Peeking into GANs [1]
  9. Progressive GANs [1]
  10. Various GAN projects: deepfakes, adversarial images, differentiable physics
  11. AI colorization, super-resolution, frame interpolation, etc.
  12. Neural style transfer on tunes
  13. Predict Stackoverflow answer time
  14. NLP WTW/TOMT ("what's the word" / "tip of my tongue"; e.g. "disrecommend" --> "dissuade", "arbitraryhood" --> "arbitrariness")
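Several of the projects above build on the convolution operation at the heart of convolutional networks. A minimal sketch of the core op, written from scratch in NumPy on an illustrative step image (real frameworks use highly optimized batched versions of this):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the basic op of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal edge-detecting kernel responds only at the step boundary
image = np.zeros((5, 5))
image[:, 3:] = 1.0                 # left half dark, right half bright
kernel = np.array([[-1.0, 1.0]])   # discrete horizontal derivative
edges = conv2d(image, kernel)
print(edges)  # nonzero only in the column where the step occurs
```

Stacking such filters, learned rather than hand-picked, and interleaving nonlinearities is what gives convolutional networks their feature hierarchies.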
