Artificial intelligence

Neural networks and learning
  1. Machine learning as function approximation; statistical and cognitive motivations 
  2. Two ways to understand overfitting (plus notes on hyperparameter optimization)
  3. Backpropagation and the chain rule
  4. Overview of basic neural network architectures
  + universal approximation; Turing completeness of recurrent neural networks
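Item 3 above (backpropagation and the chain rule) can be sketched in a few lines. Everything here — the one-hidden-layer shape, the tanh activation, the squared-error loss — is an illustrative assumption, not something fixed by the outline; the analytic gradient is checked against a numerical one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3,))        # input
y = rng.normal(size=(2,))        # target
W1 = rng.normal(size=(4, 3))     # hidden-layer weights
W2 = rng.normal(size=(2, 4))     # output-layer weights

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)          # hidden activations
    return W2 @ h, h

def loss(W1, W2):
    out, _ = forward(W1, W2, x)
    return 0.5 * np.sum((out - y) ** 2)

# Backward pass: the chain rule applied layer by layer.
out, h = forward(W1, W2, x)
d_out = out - y                  # dL/d(out)
gW2 = np.outer(d_out, h)         # dL/dW2
d_h = W2.T @ d_out               # dL/dh
d_pre = d_h * (1 - h ** 2)       # through tanh' = 1 - tanh^2
gW1 = np.outer(d_pre, x)         # dL/dW1

# Numerical sanity check on one entry of W1.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (loss(W1p, W2) - loss(W1, W2)) / eps
assert abs(num - gW1[0, 0]) < 1e-4
```

The same pattern (multiply the upstream gradient by each local Jacobian) is what autodiff frameworks mechanize for arbitrary depth.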
    Encoding and feature abstraction
    1. Natural Language Processing
    2. Knowledge abstraction and representation 
    1. Ways to impose a better prior (lasso, ridge, batch normalization, dropout)
    2. Other optimization algorithms (Nesterov momentum, Adam, etc.)
    3. "Technical problems" (vanishing gradients, catastrophic forgetting, etc.)
    4. Bayesian networks
    1. A deeper look into convolutional networks
    2. Neural networks to simulate an entropy source
    3. This Chinese character does not exist (Trained version on Github)
    4. This Letter does not exist
    5. One-shot generative neural networks
    6. Random walk of character shapes
    7. This font does not exist
    8. Predict Stackoverflow answer time
    9. Nonlinear eigenfaces
    10. Peeking into GANs
    11. Various GAN projects: deepfakes, adversarial images, differentiable physics
    12. AI colorization, super-resolution, frame interpolation, etc.
    13. NLP "what's the word" / "tip of my tongue" lookup (e.g. "disrecommend" --> "dissuade", "arbitraryhood" --> "arbitrariness")
    14. Neural style transfer on tunes
    Kaggle etc.
    Some better visualization of CNN features
    Progressive GANs
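The first item of the regularization list ("ways to impose a better prior") can be illustrated with ridge regression, where the L2 penalty corresponds to a Gaussian prior on the weights. The data, the sparse ground truth, and the penalty strength below are made-up assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.0, 0.5]                  # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=n)      # noisy observations

def ridge(X, y, lam):
    # Closed form: w = (X^T X + lam * I)^{-1} X^T y.
    # lam = 0 recovers ordinary least squares; lam > 0 shrinks
    # weights toward zero (a zero-mean Gaussian prior).
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

w_ols = ridge(X, y, 0.0)
w_reg = ridge(X, y, 10.0)

# The regularized solution has a smaller norm than plain least squares.
assert np.linalg.norm(w_reg) < np.linalg.norm(w_ols)
```

Lasso swaps the L2 penalty for L1 (a Laplace prior, driving some weights exactly to zero), while dropout and batch normalization impose their priors implicitly through the training procedure rather than the loss.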
