I think you mean things like Gaussian quadrature rules? As far as I know, the issue with them is that (1) they work best when the integrand has a lot of structure, for example when it is close to a polynomial, and (2) there are no sufficiently accurate variants for higher dimensions. However, if you have a one-dimensional integral and you think your function is close to a polynomial, a Gaussian quadrature rule might be the right tool.
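To make the one-dimensional case concrete, here is a small sketch using NumPy's Gauss-Legendre helper (the integrand below is my own illustration, not from the discussion): an n-point Gauss-Legendre rule integrates polynomials up to degree 2n-1 exactly, so a 3-point rule already handles a quartic exactly.

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1].
# An n-point rule is exact for polynomials of degree up to 2n - 1.
nodes, weights = np.polynomial.legendre.leggauss(3)

# Integrate f(x) = x^4 over [-1, 1]; the exact value is 2/5 = 0.4.
f = lambda x: x**4
approx = np.dot(weights, f(nodes))
print(approx)  # 0.4 up to floating-point error
```

The same three evaluation points would give poor results for a highly oscillatory or non-smooth integrand, which is the "needs a lot of structure" caveat above.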

Sasho

I have always wondered why the authors don't (or can't) explain the concepts to a lay audience that just knows basic math and solid coding.

If the authors could make it simpler, deep learning and ML would have an impact on the scale of the open-source movement. The current tutorials and articles are aimed only at academia.

By the way, is everyone using (steepest-descent) SGD? Can't we get away with just gradient-related directions?
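For what it's worth, "gradient-related" can be made concrete with a toy sketch (the quadratic objective and the preconditioner below are my own illustration): any direction d with d · ∇f > 0 gives descent for a suitable step size, with steepest descent (d = ∇f) as the special case.

```python
import numpy as np

def grad(w):
    # gradient of f(w) = 0.5 * ||w||^2, minimized at w = 0
    return w

w = np.array([1.0, -1.0])
D = np.diag([2.0, 0.5])   # any positive-definite matrix works
lr = 0.1

for _ in range(200):
    g = grad(w)
    d = D @ g             # d . g > 0, so d is a descent direction
    w = w - lr * d        # converges without being steepest descent

print(np.linalg.norm(w))  # close to 0
```

Replacing D with the identity recovers plain gradient descent; adaptive methods like AdaGrad or Adam can be read as choosing a (changing) preconditioner of this kind.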

Can you mention some of the alternative methods that deep learning is outperforming? For example, do you know the closest competitors to deep learning in its main application domains (speech recognition and vision, I suppose)?

It would also be interesting to know whether those alternatives are better understood theoretically.
