@MachineLearningStreetTalk

What did you like about this video? What can we improve?!

@samgreydanus6148

I'm the author of the MNIST-1D dataset (discussed at 1h15). Thanks for the positive words! You do an excellent job of explaining what the dataset is and why it's useful.

Running exercises in Colab while working through the textbook is an amazing feature.

@Friemelkubus

Currently going through it and it's one of the best textbooks I've read. Period. Not just DL books, books.
I love it.

@chazzman4553

The best channel I've seen for AI. Cutting edge, no amateur overhyped BS. Down to earth.

@navidutube

I like the fact that he clearly rejected the popular idea that neural nets are modelled after the human brain.

@dariopotenza3962

Simon taught the first semester of my second-year "Machine Learning" module at university! Really nice man; we used this book as the module notes. He was greatly missed when he left in the second semester, and the rest of the module never lived up to his teaching.

@amesoeurs

I've read most of this book already and it's fantastic. It feels like a spiritual sequel to Goodfellow's original DL book.

@chodnejabko3553

The overparametrization conundrum may be related to the fact that we look at what NNs are in the wrong way. To me a NN is not a "processor" type of object; it's a novel type of memory object, a memory which stores and retrieves data by overlaying them on top of one another & also recording the hidden relations that exist in the dataset. This is what gets stored in the "in between" places even if the input resolution is low - the logic of the coexistence of different images (arrays), which is something not visible on the surface.
    I'm a philologist by training, and in 20th-century literature there was a big buzz around the concept of the "palimpsest". Originally palimpsests were texts written on reused parchment from which the previous text had been scraped off with a razor. Despite the scraping, the old text still remained under the new one, which led to two texts occupying the same space on the page. In literature this became a conceptual fashion of merging two different narratives into one, usually with a very surreal effect. One of the authors that comes to mind is William S. Burroughs.
    In the same way that merged narratives evoke a novel situation through novel logical interactions between the inputs, the empty space in an overparametrized NN gets filled with the logic of the world from which the input data comes, & this logic exists between the data points even when the resolution is low.
    Maybe a NN is a Platonic space. Many images of trees somehow hold in them the "logic of the tree", which is something deeper and non-obvious to the eye, since in their form alone converge the principles of molecular biology & atmospheric fluid dynamics, ecosystemic interactions, up to the very astronomical effects of the sun, the moon, the earth's rotation, etc. All of it contributes to this form in one way or another, so the form reflects those contributions & therefore holds partial logic of those interactions within it.
    Information is a relation between an object and its context (in linguistics we say its dictionary). A dataset not only introduces objects; as a whole it also becomes a context (dictionary) through which each object is read.
    In that sense, maybe upscaling input datasets prior to learning is detrimental to the "truth" of those relations. I would be inclined to assume we'd be better off letting the NN fill in those spaces based on the logic of the dataset, unless we want the logic of the transformations to somehow influence the output data (say, we are specifically designing an upscaling engine).

@oncedidactic

Brilliant, clear, direct conversation. Thank you!!

@BlackShardStudio

I really appreciate being used as an example near the end of the discussion. Version 2.0 is coming along slowly, but I am confident I'll get there.

@aprohith1

What a gripping conversation. Thank you!

@makhalid1999

Love the studio, would love to see more face-to-face podcasts here

@JuergenAschenbrenner

Here you have a guy on the hook. I love how you throw these common buzzwords, like emergent agency and such phenomena, at him and let him sort it out, which he does in a way that gives me the feeling of actually understanding. Really nice stuff.

@lestermcpherson2487

I just got into deep learning on my own. Every time I watch this channel it's like thinking you know something, and then learning that what you think you know is only a grain of sand on a beach, with each grain having its own universe, and infinite beaches... Subscribed.

@splaytrees

1:13:45 Three-Dimensional Orange (Volume): For a regular three-dimensional orange, which we can approximate as a sphere, the volume is calculated using the formula (4/3) * pi * r^3.

Four-Dimensional Orange (Hypervolume): In four dimensions, the object analogous to a sphere is called a "hypersphere." The formula for the hypervolume of a 4D hypersphere is (1/2) * pi^2 * r^4.
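Both values are special cases of the general n-ball volume V_n(r) = pi^(n/2) / Gamma(n/2 + 1) * r^n. A minimal Python sketch (using only the standard-library math module; the helper name ball_volume is just illustrative) that reproduces both numbers and the point from the discussion that, as the dimension grows, nearly all of the orange's volume ends up in the peel:

    import math

    def ball_volume(r, n):
        # Volume of an n-dimensional ball of radius r: pi^(n/2) / Gamma(n/2 + 1) * r^n
        return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

    print(ball_volume(1.0, 3))  # 4.1888... = (4/3) * pi, the 3D orange
    print(ball_volume(1.0, 4))  # 4.9348... = (1/2) * pi^2, the 4D orange

    # Fraction of the volume lying in the outer 5% "peel" of a unit-radius orange:
    for n in (3, 10, 100):
        peel = 1 - ball_volume(0.95, n) / ball_volume(1.0, n)
        print(n, round(peel, 4))  # 3 -> 0.1426, 10 -> 0.4013, 100 -> 0.9941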

@jasonabc

Best source and community for ML on the internet by far. Love the work you guys do, MLST.

@beagle989

great conversation, appreciate the skepticism

@tvoyatrubaYKE

Thanks!

@arturturk5926

The most amazing thing about this video to me is that Simon's hair matches that microphone perfectly, nice work lads...