Julian McAuley and I are organizing the Southern California Machine Learning Symposium, on Friday November 18 at Caltech!

http://dolcit.cms.caltech.edu/scmls/

** CFP Deadline is October 4th!!

The SoCal ML Symposium brings together students and faculty to promote machine learning in the Southern California region. The workshop serves as a forum for researchers from a variety of fields working on machine learning to share and discuss their latest findings.

Topics to be covered at the symposium include, but are not limited to:

+ Machine learning with graphs, social networks, and structured data.

+ Active learning, reinforcement learning, crowdsourcing.

+ Learning with images and natural language.

+ Learning with high-dimensional data.

+ Neural networks, deep learning, and graphical models.

+ Learning from dynamic and streaming data.

+ Applications to interesting new domains.

+ Addressing each of these issues at scale.

The majority of the workshop will be focused on student contributions, in the form of contributed talks and posters.

We invite submissions in the form of 1-2 page extended abstracts, to be presented as posters and oral presentations at the symposium. Submissions may be made via our EasyChair page:

https://easychair.org/conferences/?conf=scmls16

A $500 first prize and a $250 runner-up prize, sponsored by Google Research, will be awarded for the best student presentations.

Timeline:

Oct 4: Abstract submission

Oct 14: Notification

Nov 11: Registration deadline

Nov 18: Symposium

For more details, including submission and registration instructions, visit our symposium webpage:

http://dolcit.cms.caltech.edu/scmls/

and please help distribute our flyer:

http://dolcit.cms.caltech.edu/scmls/scmls.pdf

# Random Ponderings

## Wednesday, September 28, 2016

## Friday, January 01, 2016

### Data Science Positions for Sports Analytics

I want to give a plug for STATS LLC, which is building a data science team and has several openings for data scientist positions. For those who don't know, STATS is a sports data company that provides the tracking data for the National Basketball Association, amongst other sports and leagues. STATS also recently acquired Prozone, which provides tracking data for many professional soccer leagues around the world. Sports analytics is definitely entering an exciting phase due to the rapid growth of new data sources that offer far greater granularity than was possible before. See, e.g., these papers that analyze tracking data provided by STATS and Prozone.

Patrick Lucey is the new Director of Data Science. I previously worked with Patrick at Disney Research, and I can vouch for him being a great collaborator with lots of fantastic ideas and unbounded enthusiasm for sports analytics research.


## Thursday, December 31, 2015

### Thoughts on NIPS 2015 and OpenAI

A few weeks ago, I attended NIPS 2015, which turned out to be (by far) the largest machine learning conference ever. With nearly 4000 attendees, the conference saw a roughly 50% increase from the previous year. Much of this growth seems fueled by industry interest, especially in topics such as deep learning and large scale learning. Deep learning, in particular, seems to be all the rage these days, at least in the public zeitgeist. I think this is great for the field, because this degree of interest will also percolate to the rest of machine learning more broadly.

There have been plenty of posts regarding NIPS already (see: Sebastien Bubeck, Neil Lawrence, John Langford, Paul Mineiro, and Hal Daume), with plenty of great pointers to interesting NIPS papers that I'll hopefully get around to reading soon. On my end, I didn't get a chance to see too many papers, in part because I was helping to present a poster during one poster session, and a demo during another. But I did very much enjoy many of the talks, especially during the workshops.

## OpenAI

Perhaps the biggest sensation at NIPS was the announcement of OpenAI, which is a non-profit artificial intelligence research company with a $1B endowment donated by people such as Sam Altman, Elon Musk, Peter Thiel, and others. The core ideal of OpenAI is to promote open research in Artificial Intelligence. For the most part, not much is known about how OpenAI will operate (and from what I've gathered, the people at OpenAI haven't fully decided on a strategy yet either). One thing that I do know on good authority is that OpenAI will NOT be patenting their research.

Nonetheless, there have already been many reactions to OpenAI, from the usual "robots will steal our jobs" trope, to nuanced concerns voiced by machine learning expert Neil Lawrence observing that open access to data is just as important as open access to research and systems. I do very much agree with Neil's point, and I think that one of the best things OpenAI can do for the research community is to generate interesting new datasets and testbeds. There have also been concerns that the founding team is overwhelmingly deep learning people. I don't think this is much of an issue at the moment, because representation learning has been the biggest practical leap forward, and giving broader access to learned representations is a great thing.

The announcement has even caught the attention of rationalists such as Scott Alexander, who voiced concerns about whether AI research should be open at all, for risk of losing control of the technology and potentially leading to catastrophic results. Scott's concern is a meta-concern about the current mentality of AI research being an arms race, and institutions such as OpenAI not focusing on "controlling" access to AI that could become dangerous. These meta-concerns are predicated on the assumptions that a hard takeoff of AGI is a legitimate existential threat to humanity (which I agree with), and that existing institutions such as OpenAI could directly lead to that happening (which I strongly disagree with). I realize that OpenAI ponders human-level intelligence in their opening blog post, but that's just a mission statement of sorts. For instance, Google, while awesome, has (thus far) fallen quite short of its mission to "organize the world's information and make it universally accessible and useful". Likewise, I don't expect OpenAI to succeed in their mission statement anytime soon.

Most machine learning experts probably do take an overly myopic view of machine learning progress, which is partly due to the aforementioned research arms race but also just due to how research works (i.e., it is REALLY hard to make tangible progress on something that you can't even begin to rigorously and precisely reason about). However, from what I've read, rationalist non-experts conversely tend to phrase things in such imprecise terms that it's hard to have a substantive discussion between the two communities. I imagine the "truth", such as it is, is somewhere in the middle. Perhaps one should gather both camps together for a common discussion.

What is definitely going to happen, in the near term, is that access to AI technologies will be an increasingly important competitive advantage. And it's great that institutions such as OpenAI will help promote open access to those technologies.

I am optimistic that the crew at OpenAI will explore alternatives to NSF-style funding of research and to how places like the Allen Institute engage in research. I think it'll be exciting to see what comes out of that process. Hopefully, OpenAI will also engage with places like the Future of Humanity Institute, and maybe even create forums that bring together people like Stuart Russell, Eric Horvitz, Scott Alexander, and Eliezer Yudkowsky.

## Cynthia Dwork on Universal Adaptive Data Analysis

Cynthia Dwork gave a great talk on using differential privacy to guard against overfitting when re-using a validation set multiple times. See this Science paper for more details. The basic idea is that, when you use your validation set to evaluate the performance of a model, do so in a differentially private way so that you don't overfit to the idiosyncrasies of the validation set. See, for instance, this paper describing an application to Kaggle-style competitions. This result demonstrates a great instance of (unexpected?) convergence between different areas of study: privacy-preserving computation and machine learning.

## Jerry Zhu on Machine Teaching

Jerry Zhu has been doing very interesting work on Machine Teaching, which he talked about at the NIPS workshop on adaptive machine learning. Roughly speaking, machine teaching is the computational and statistical problem of how to select training examples that teach a learner as quickly as possible. One can think of machine teaching as the converse of active learning: instead of the learner actively querying for training examples, a teacher actively provides them.

Machine teaching has a wide range of applications, but the one that I'm most interested in is when the learner is a human. As models become necessarily more complex in the quest for predictive accuracy, it is important that we devise methods to keep these models somehow interpretable to humans. One way is to use a machine teaching approach to quickly show the human what concepts the trained model has learned. For instance, this approach would have applications in debugging complicated machine learning models.

## Rich Caruana on Interpretable Machine Learning for Health Care

On the flip side, Rich Caruana talked about training models that are inherently interpretable by domain experts, such as medical professionals. Of course, these models are only applicable in restricted domains, such as when there is a "sufficient" set of hand-crafted features such that a generalized additive model can accurately capture the phenomenon of interest. The approach was applied to two settings: predicting the risk of pneumonia and 30-day re-admission.

One interesting consequence of this study was that these interpretable models could be used to tease out biases in the data collection process. For instance, the model predicted that patients with asthma are at lower risk of dying from pneumonia. Consulting with medical experts revealed that, historically, patients with asthma are more closely monitored for signs of pneumonia, and so the disease is detected much earlier than for the general populace. Nonetheless, it's clear that one wouldn't want a predictive model to predict a lower risk of dying from pneumonia for patients with asthma -- that was simply a consequence of how the historical data was collected. See this paper for details.

## Zoubin Ghahramani on Probabilistic Models

Zoubin Ghahramani gave a keynote talk on probabilistic models. During this deep learning craze, it's important to keep in mind that properly quantifying uncertainty is often a critical component as well. We are rarely given perfect information, and so we can rarely make perfect predictions. In order to make informed decisions, our models should produce calibrated probabilities so that we can properly weigh different tradeoffs. Recall that one of the critical aspects of the Jeopardy!-winning IBM Watson machine was being able to properly calibrate its own confidence in the right answer (or question). Another point that Zoubin touched on was rational allocation of computational resources under uncertainty. See also this great essay on the interplay between machine learning and statistics by Max Welling.

## Interesting Papers

As I mentioned earlier, I didn't get a chance to check out too many posters, but here are a few that I did see which I found quite interesting.

Generalization in Adaptive Data Analysis and Holdout Reuse

*by Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth*

This paper generalizes previous work on adaptive data analysis by: 1) allowing queries to the validation set to be adaptive to the results of previous queries, and 2) providing a more general definition of adaptive data analysis.

Logarithmic Time Online Multiclass Prediction

*by Anna Choromanska, John Langford*

This paper studies how to quickly construct multiclass classifiers whose running time is logarithmic in the number of classes. This approach is especially useful for settings where the number of classes is enormous, which is also known as Extreme Multiclass Classification.
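The core trick, as I understand it, is to route each example down a balanced tree of binary decisions, so prediction touches only O(log K) classifiers rather than all K. A minimal sketch of the routing idea (not the paper's learned tree; `route` stands in for hypothetical per-node binary classifiers you would train):

```python
def predict_class(x, route, num_classes):
    """Predict a class label by binary search over label intervals.

    route(x, lo, hi) -> True to descend into the upper half of [lo, hi).
    Returns (predicted class, number of classifier calls made).
    """
    lo, hi = 0, num_classes
    steps = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if route(x, lo, hi):
            lo = mid  # upper half of the label interval
        else:
            hi = mid  # lower half
        steps += 1
    return lo, steps
```

With 8 classes, any input is classified after only 3 routing decisions, versus 8 score evaluations for a flat one-vs-all classifier; the paper's contribution is in learning a tree whose splits are both balanced and accurate.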

Spatial Transformer Networks

*by Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu*

This paper studies how to incorporate more invariances into convolutional neural networks beyond just shift invariance. The most obvious cases are invariance to rotation and skew. See also this post.

Optimization as Estimation with Gaussian Processes in Bandit Settings

*by Zi Wang, Bolei Zhou, Stefanie Jegelka*

A preliminary version of this paper was presented at the Women in Machine Learning Workshop at NIPS, and it will be formally published at AISTATS 2016. This is a really wonderful paper that unifies, to some extent, two of the most popular views in Bayesian optimization: UCB-style bandit algorithms and probability of improvement (PI) algorithms. One obvious future direction is to unify with expected improvement (EI) algorithms as well.
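For readers unfamiliar with the two views being unified, here are the standard textbook acquisition rules (this is background, not the paper's estimator; `mu` and `sigma` would come from a fitted Gaussian process posterior at a candidate point):

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ucb(mu, sigma, beta=2.0):
    # Upper confidence bound: optimism in the face of uncertainty
    return mu + beta * sigma

def prob_improvement(mu, sigma, best, xi=0.01):
    # Probability that this point improves on the incumbent best by at least xi
    if sigma == 0.0:
        return 0.0
    return norm_cdf((mu - best - xi) / sigma)
```

UCB picks the point maximizing an optimistic bound, while PI picks the point most likely to beat the best value seen so far; the paper shows these seemingly different criteria can be viewed through a common estimation lens.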

Fast Convergence of Regularized Learning in Games

*by Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert E. Schapire*

This paper won a best paper award at NIPS, and analyzes the setting of learning in a repeated game. Previous results showed a regret convergence rate of O(T^{-1/2}), and this paper demonstrates an asymptotic improvement to O(T^{-3/4}) for individual regret and O(T^{-1}) for the sum of utilities.

Data Generation as Sequential Decision Making

*by Philip Bachman, Doina Precup*

This paper takes the view of sampling from sequential generative models as sequential decision making. For instance, can we view sequential sampling as a Markov decision process? In particular, this paper focuses on the problem of data imputation, or filling in missing values. This style of research has been piquing my interest recently, since it offers the potential to dramatically speed up computation when sampling or prediction is very computationally intensive.

Sampling from Probabilistic Submodular Models

*by Alkis Gotovos, S. Hamed Hassani, Andreas Krause*

Andreas's group has been working on a general class of probabilistic models called log-submodular and log-supermodular models, which generalize models such as determinantal point processes. This paper studies how to do inference on these models via MCMC sampling, and establishes conditions for fast mixing.

The Self-Normalized Estimator for Counterfactual Learning

*by Adith Swaminathan, Thorsten Joachims*

This paper addresses a significant limitation of previous work on counterfactual risk minimization: overfitting to hypotheses that match or avoid the logged (bandit) training data, which the authors call propensity overfitting. The authors propose a new risk estimator that deals with this issue.
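To make the fix concrete, here is a sketch contrasting the vanilla inverse-propensity-score (IPS) estimator with a self-normalized variant, assuming the logging policy's propensities were recorded (function names are mine, not the paper's):

```python
def ips(rewards, new_probs, logged_probs):
    # Vanilla IPS: average of importance-weighted rewards.
    # A hypothesis can game this by concentrating on actions with
    # large importance weights, i.e., propensity overfitting.
    n = len(rewards)
    return sum(r * (p / q) for r, p, q in zip(rewards, new_probs, logged_probs)) / n

def snips(rewards, new_probs, logged_probs):
    # Self-normalized IPS: divide by the total importance weight
    # instead of n, which removes the incentive to inflate weights.
    weights = [p / q for p, q in zip(new_probs, logged_probs)]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * r for w, r in zip(weights, rewards)) / total
```

When a hypothesis doubles every importance weight, the vanilla IPS estimate doubles too, while the self-normalized estimate is unchanged, which is the intuition behind why it resists propensity overfitting.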

## Monday, September 07, 2015

### Thoughts on KDD 2015

Last month I attended KDD 2015 in beautiful Sydney, Australia. For those who don't know, KDD is the premier international conference for applied machine learning & data mining, and is often the venue for some of the most interesting data analysis research projects. Despite concerns that KDD 2015 would be a letdown after KDD 2014 was such a great success in New York City, overall KDD 2015 was a fantastic conference, with an excellent lineup of invited speakers and plenty of interesting papers. Congratulations also to my PhD advisor Thorsten Joachims, who not only did a great job as PC Co-Chair, but also was the recipient of a Test of Time Award for his work on Optimizing Search Engines using Clickthrough Data.

## Sports Analytics Workshop

Within the more traditional sports regimes, it's clear that access to data remains a large bottleneck. Many professional leagues are hoarding their data like gold, but sadly do not have the expertise to leverage the data effectively. The situation actually seems better in Europe, where access to tracked soccer (sorry, futbol) games is relatively common. In the US, it seems the data is only available to a select few sports analytics companies such as Second Spectrum. I'm hopeful that this situation will change in the near future as the various stakeholders become more comfortable with the idea that it's not the raw data that has value, but the processed artifacts built on top of that data.

A Decision Tree Framework for Spatiotemporal Sequence Prediction

I'll start with a shameless piece of self-advertising. In collaboration with Disney Research, we trained a model to generate visual speech, i.e., animate the lower face in response to audio or phonetic inputs. See the demo video below:

More details here.

Inside Jokes: Identifying Humorous Cartoon Captions

Probably the most interesting application at KDD was on studying the anatomy of a joke. While the results may not seem too surprising in retrospect (e.g., the punchline should be at the end of the joke), what was really cool was that the model could quantify whether one joke was funnier than another (i.e., rank jokes).

Cinema Data Mining: The Smell of Fear

This was a cool paper that studied how exhaled organic particles vary in response to different emotions. The authors instrumented a movie theater's air circulation system with chemical sensors, and found that the chemicals you exhale are indicative of various emotions such as fear or amusement. The authors repeatedly lamented the fact that they didn't do this for any erotic films, and so they don't know what the cinematic chemical signature of arousal would look like.

Who supported Obama in 2012? Ecological inference through distribution regression

This paper presents a new solution to the ecological inference problem of inferring individual-level preferences from aggregate data. The primary data testbed was county-wise election outcomes and demographic data reported at a different granularity or overlay. The main issue is how to estimate, e.g., female preference for one presidential candidate using just these kinds of aggregate data.

Certifying and removing disparate impact

Many people assume that, because algorithms are "objective", they can't be biased or discriminatory. This assumption is invalid because the data or features themselves can be biased (cf. this interview with Cynthia Dwork). The authors of this paper propose a way to detect & remove bias in machine learning models that is tailored to the US legal definition of bias. The work is, of course, preliminary, but this paper was arguably the most thought-provoking of the entire conference.
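As a rough illustration of the legal notion involved, US employment law's "four-fifths rule" flags disparate impact when the protected group's positive-outcome rate falls below 80% of the unprotected group's rate. A minimal check of that ratio (my simplification for intuition, not the paper's certification or repair procedure):

```python
def disparate_impact_ratio(outcomes, groups, protected):
    """Ratio of the positive-outcome rate for the protected group to the
    rate for everyone else. Values below 0.8 suggest disparate impact
    under the four-fifths rule; returns None if either group is empty
    or the comparison rate is zero."""
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    rest = [o for o, g in zip(outcomes, groups) if g != protected]
    if not prot or not rest or sum(rest) == 0:
        return None
    rate_prot = sum(prot) / len(prot)
    rate_rest = sum(rest) / len(rest)
    return rate_prot / rate_rest
```

The paper's point is subtler than this check: even after removing the group attribute, correlated features can reproduce the disparity, so certification must be done on the model's predictions, not its inputs.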

Edge-Weighted Personalized PageRank: Breaking A Decade-Old Performance Barrier

This paper proposes a reduction approach to personalized PageRank that yields a computational boost by several orders of magnitude, thus allowing, for the first time, personalized PageRank to be computed at interactive speeds. This paper was also the recipient of the best paper award.
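The paper's speedup machinery is beyond a snippet, but for context, here is the textbook baseline it accelerates: personalized PageRank by power iteration, where the restart mass returns to a single source node instead of being spread uniformly (a naive sketch, not the paper's method):

```python
def personalized_pagerank(adj, source, alpha=0.15, iters=50):
    """Personalized PageRank scores for a graph given as an adjacency
    dict {node: [out-neighbors]}. With probability alpha the walk
    restarts at `source`; otherwise it follows a random out-edge."""
    nodes = list(adj)
    p = {u: (1.0 if u == source else 0.0) for u in nodes}
    for _ in range(iters):
        nxt = {u: (alpha if u == source else 0.0) for u in nodes}
        for u in nodes:
            out = adj[u]
            if not out:
                nxt[source] += (1 - alpha) * p[u]  # dangling mass restarts
                continue
            share = (1 - alpha) * p[u] / len(out)
            for v in out:
                nxt[v] += share
        p = nxt
    return p
```

Each iteration touches every edge, which is exactly why serving personalized scores per-user at interactive speeds was considered out of reach before this work.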

## Data Science for Science

One of the biggest themes at KDD 2015 was applying data science to support the sciences, which is something that's been on my mind a lot recently. Hugh Durrant-Whyte gave a great keynote on applying machine learning to discovery processes in geology and ecology. One thing that jumped out of his talk was how challenging it is to develop models that are interpretable to domain experts. This issue is ameliorated in his settings because he largely focused on spatial models, which are easier to visualize and interpret.

Susan Athey gave another keynote on the interplay between machine learning and causal inference in policy evaluation, which is an important issue for the sciences as well. I must admit, most of the talk went over my head, but there was some interesting debate after the talk about whether causality should be the goal or rather just more "robust" correlations (whatever that might mean).

I also really enjoyed the Data-Driven Science Panel, where the debate got quite heated at times. Two issues in particular stood out. First, what should be the role of machine learning and data mining experts in the ecosystem of data-driven science? On the one hand, computer scientists have historically had a large impact by developing systems and platforms that abstract away low-level complexity and empower the end user to be more productive. However, achieving such a solution in a data-rich world is a much messier (or at least different) type of endeavor. There are, of course, plenty of startups that address aspects of this problem, but a genuinely scalable solution for science remains elusive.

A second issue that was raised was whether computational researchers have made much of a direct impact on the sciences. The particular area raised by Tina Eliassi-Rad was the social sciences. Machine learning and data mining have taken great interest in computational social science by studying large social networks. However, it is not clear to what extent computational researchers have directly made an impact on traditional social science fields. Of course, this issue ties back to what the role of computational researchers should be. On the one hand, many social scientists do use tools made by computational people, so the indirect impact is quite clear. Does it really matter that there hasn't been much direct impact?

## Update on MOOCs

Daphne Koller gave a great keynote on the state of MOOCs, and Coursera in particular. It seems that MOOCs nowadays are much smarter about their consumer base, and have diversified the way they deliver content and measure success for a wide range of students. For example, people now understand much better the different needs of college aspirants (who use MOOCs to supplement high school & college education) versus young professionals (who use MOOCs to get ahead in their careers) versus those seeking vocational skills (which is very popular in less developed countries).

One striking omission that was pointed out during the Q&A was that MOOCs have mostly abandoned the pre-college demographic, especially before high school. In retrospect, this is not too surprising, in large part due to the very different requirements for primary and secondary education across different states and school districts. But it does put a damper on the current MOOC enthusiasm, since many problems with education start much earlier than college.

## Lessons Learned from Large-Scale A/B Testing

Ron Kohavi gave a keynote on lessons learned from online A/B testing. The most interesting aspect of his talk was just how well-tuned the existing systems are. One symptom of a highly tuned system is that it becomes very difficult to intuit whether certain modifications will increase or decrease the performance of the system (or have no effect). For example, he posed a number of questions to the audience, such as: "Does increasing the length of the description of sponsored advertisements lead to increased overall clicks on ads?" Basically, the audience could not guess better than random. So the main lesson is basically to follow the data and not be too (emotionally) tied to your own intuitions when it comes to optimizing large, complex industrial systems.

## Sports Analytics Workshop

I co-organized the 2nd workshop on Large-Scale Sports Analytics. I tried to get more eSports into the workshop this year, but alas fell a bit short. Thorsten did give an interesting talk that used eSports data, although the phenomenon he was studying was not specific to eSports. In many ways, eSports is an even better test bed for sports analytics than traditional sports, because game replays track literally everything.

Within the more traditional sports regimes, it's clear that access to data remains a large bottleneck. Many professional leagues are hoarding their data like gold, but sadly do not have the expertise to leverage the data effectively. The situation actually seems better in Europe, where access to tracking data for soccer (sorry, futbol) games is relatively common. In the US, it seems like the data is only available to a select few sports analytics companies such as Second Spectrum. I'm hopeful that this situation will change in the near future as the various stakeholders become more comfortable with the idea that it's not the raw data that has value, but the processed artifacts built on top of that data.

## Interesting Papers

There were plenty of interesting research papers at KDD, of which I'll just list a few that I particularly liked.

A Decision Tree Framework for Spatiotemporal Sequence Prediction

*by Taehwan Kim, Yisong Yue, Sarah Taylor, and Iain Matthews*

I'll start with a shameless piece of self-advertising. In collaboration with Disney Research, we trained a model to generate visual speech, i.e., animate the lower face in response to audio or phonetic inputs. See the demo video below:

More details here.

Inside Jokes: Identifying Humorous Cartoon Captions

*by Dafna Shahaf, Eric Horvitz, and Robert Mankoff*

Probably the most interesting application at KDD was on studying the anatomy of a joke. While the results may not seem too surprising in retrospect (e.g., the punchline should be at the end of the joke), what was really cool was that the model could quantify whether one joke was funnier than another joke (i.e., rank jokes).

Cinema Data Mining: The Smell of Fear

*by Jörg Wicker, Nicolas Krauter, Bettina Derstorff, Christof Stönner, Efstratios Bourtsoukidis, Thomas Klüpfel, Jonathan Williams, and Stefan Kramer*

This was a cool paper that studied how exhaled organic particles vary in response to different emotions. The authors instrumented a movie theater's air circulation system with chemical sensors, and found that the chemicals you exhale are indicative of various emotions such as fear or amusement. The authors repeatedly lamented the fact that they didn't do this for any erotic films, and so they don't know what the cinematic chemical signature of arousal would look like.

Who supported Obama in 2012? Ecological inference through distribution regression

*by Seth Flaxman, Yu-Xiang Wang, and Alex Smola*

This paper presents a new solution to the ecological inference problem of inferring individual-level preferences from aggregate data. The primary data testbed was county-wise election outcomes and demographic data reported at a different granularity or overlay. The main issue is how to estimate, e.g., female preference for one presidential candidate, using just these kinds of aggregate data.

Certifying and removing disparate impact

*by Michael Feldman, Sorelle Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian*

Many people assume that, because algorithms are "objective," they can't be biased or discriminatory. This assumption is invalid because the data or features themselves can be biased (cf. this interview with Cynthia Dwork). The authors of this paper propose a way to detect & remove bias in machine learning models that is tailored to the US legal definition of bias. The work is, of course, preliminary, but this paper was arguably the most thought-provoking of the entire conference.
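The US legal notion of disparate impact is often operationalized via the "four-fifths rule": a selection rate for one group below 80% of the highest group's rate is treated as evidence of disparate impact. A minimal sketch of that check on made-up hiring data (the toy groups and the `disparate_impact_ratio` helper are illustrative, not the paper's certification procedure):

```python
# Illustrative check of the "four-fifths rule" behind US disparate-impact law.
# This is a toy sketch, not the certification method from the paper.

def selection_rate(outcomes):
    """Fraction of positive (selected) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Toy hiring outcomes (1 = hired): group A hires 6/10, group B hires 3/10.
group_a = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
group_b = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 3))   # 0.5
print(ratio >= 0.8)      # False, i.e. potential disparate impact
```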

Edge-Weighted Personalized PageRank: Breaking A Decade-Old Performance Barrier

*by Wenlei Xie, David Bindel, Alan Demers, and Johannes Gehrke*

This paper proposes a reduction approach to personalized PageRank that yields a computational boost by several orders of magnitude, thus allowing, for the first time, personalized PageRank to be computed at interactive speeds. This paper was also the recipient of the best paper award.

## Thursday, April 09, 2015

### KDD 2015 Workshop on Large-Scale Sports Analytics

We are pleased to announce that the KDD Workshop on Large-Scale Sports Analytics will be taking place in Sydney this year on August the 10th at KDD 2015. Similar to last year, it will be a full day workshop consisting of invited speakers as well as poster sessions for submitted papers. A call for paper submissions is below.

=== Call for Submissions ===

When: August 10th, 2015

Where: Sydney, Australia

Website: http://large-scale-sports-analytics.org/

Description:

Virtually every aspect of sports analytics is now entering the “Big Data” phase, and the interest in effectively mining, modeling, and learning from such data has also been correspondingly growing. Relevant data sources include detailed play-by-play game logs, tracking data, physiological sensor data to monitor the health of players, social media and text-based content, and video recordings of games.

The objective of this workshop is to bring together researchers and analysts from academia and industry who work in sports analytics, data mining and machine learning. We hope to enable meaningful discussions about state-of-the-art in sports analytics research, and how it might be improved upon.

We seek poster submissions (which can be both preliminary research as well as recently published work) on topics including but not limited to:

* Spatiotemporal modeling

* Video, text and social media analysis

* Feature selection and dimensionality reduction

* Feature learning and latent factor models

* Computational rationality

* Real-time predictive modeling

* Interactive analysis & visualization tools

* Sensor technology and reliability

* Labeling and annotation of events/activities/tactics

* Real-time/deployed analytical systems

* Knowledge discovery of player/team/league behaviors

* Game Theory

* eSports

Submission Details:

Poster submissions should be extended abstracts no more than 4 pages in length (in KDD format; submissions do not need to be anonymous). Extended abstracts should be submitted by June 5th, 11:59 PM PDT. Details can be found at:

http://www.large-scale-sports-analytics.org/Large-Scale-Sports-Analytics/Submissions.html

Important Dates:

Submission - 5th June 2015 11:59 PM PDT

Notification - 30th June 2015

Workshop - 10th August 2015

Organizers:

Patrick Lucey (Disney Research) (patrick.lucey@disneyresearch.com)

Yisong Yue (Caltech) (yyue@caltech.edu)

Jenna Wiens (University of Michigan) (wiensj@umich.edu)

Stuart Morgan (Australian Institute of Sport) (stuart.morgan@ausport.gov.au)


## Tuesday, January 13, 2015

### A Brief Overview of Deep Learning

(This is a guest post by Ilya Sutskever on the intuition behind deep learning as well as some very useful practical advice. Many thanks to Ilya for such a heroic effort!)

Deep Learning is really popular these days. Big and small companies are getting into it and making money off it. It’s hot. There is some substance to the hype, too: large deep neural networks achieve the best results on speech recognition, visual object recognition, and several language related tasks, such as machine translation and language modeling.

But why? What's so special about deep learning? (From now on, we shall use the term **Large Deep Neural Networks --- LDNN ---** which is what the vaguer term "Deep Learning" mostly refers to.) Why does it work now, and how does it differ from the neural networks of old? Finally, suppose you want to train an LDNN. Rumor has it that it's very difficult to do so, that it is "black magic" that requires years of experience. And while it is true that experience helps quite a bit, the amount of "trickery" is surprisingly limited --- one needs to be on the lookout for only a small number of well-known pitfalls. Also, there are many open-source implementations of various state-of-the-art neural networks (cf. Caffe, cuda-convnet, Torch, Theano), which makes it much easier to learn all the details needed to make it work.

##### Why Does Deep Learning Work?

It is clear that, to solve hard problems, we must use powerful models. This statement is obvious. Indeed, if a model is not powerful, then there is absolutely no chance that it can succeed in solving a hard problem, no matter how good the learning algorithm is.

The other necessary condition for success is that our model is trainable. That too is obvious, for if we cannot train our model, then its power is useless --- it will never amount to anything, and great results will not be achieved. The model will forever remain in a state of unrealized potential.

Fortunately, LDNNs are both trainable and powerful.

##### Why Are LDNNs Powerful?

When I talk about LDNNs, I'm talking about 10-20 layer neural networks (because this is what can be trained with today's algorithms). I can provide a few ways of looking at LDNNs that will illuminate the reason they can do as well as they do.

- Conventional statistical models learn simple patterns or clusters. In contrast, LDNNs learn computation, albeit a massively parallel computation with a modest number of steps. Indeed, this is the key difference between LDNNs and other statistical models.
- To elaborate further: it is well known that any algorithm can be implemented by an appropriate very deep circuit (with a layer for each timestep of the algorithm’s execution -- one example). What’s more, the deeper the circuit, the more expensive are the algorithms that can be implemented by the circuit (in terms of runtime). And given that neural networks are circuits as well, deeper neural networks can implement algorithms with more steps ---- which is why depth = more power.

- N.B.: It is easy to see that a single neuron of a neural network can compute the conjunction of its inputs, or the disjunction of its inputs, by simply setting its connections to appropriate values.

- Surprisingly, neural networks are actually more efficient than boolean circuits. By more efficient, I mean that a fairly shallow DNN can solve problems that require many more layers of boolean circuits. For a specific example, consider the highly surprising fact that a DNN with 2 hidden layers and a modest number of units can sort N N-bit numbers! I found the result shocking when I heard about it, so I implemented a small neural network and trained it to sort 10 6-bit numbers, which was easy to do, to my surprise. It is impossible to sort N N-bit numbers with a boolean circuit that has two hidden layers and is not gigantic.

- The reason DNNs are more efficient than boolean circuits is because neurons perform a threshold operation, which cannot be done with a tiny boolean circuit.

- Finally, human neurons are slow yet humans can perform lots of complicated tasks in a fraction of a second. More specifically, it is well-known that a human neuron fires no more than 100 times per second. This means that, if a human can solve a problem in 0.1 seconds, then our neurons have enough time to fire only 10 times --- definitely not much more than that. It therefore follows that a large neural network with 10 layers can do anything a human can in 0.1 seconds.
- This is not scientific fact since it is conceivable that real neurons are much more powerful than artificial neurons, but real neurons may also turn out to be much less powerful than artificial neurons. In any event, the above is certainly a plausible hypothesis.
- This is interesting because humans can solve many complicated perception problems in 0.1 seconds --- for example, humans can recognize the identity of an object that’s in front of them, recognize a face, recognize an emotion, and understand speech in a fraction of a second. In fact, if there exists even just one person in the entire world who has achieved an uncanny expertise in performing a highly complex task of some sort in a fraction of a second, then this is highly convincing evidence that a large DNN could solve the same task --- if only its connections are set to the appropriate values.
- But won’t the neural network need to be huge? Maybe. But we definitely know that it won’t have to be exponentially large ---- simply because the brain isn’t exponentially large! And if human neurons turn out to be noisy (for example), which means that many human neurons are required to implement a single real-valued operation that can be done using just one artificial neuron, then the number of neurons required by our DNNs to match a human after 0.1 seconds is greatly diminished.
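The N.B. above about single neurons computing conjunctions and disjunctions is easy to make concrete. A minimal sketch with a hard-threshold unit (the helper names are mine, and the hard threshold is a simplification of the usual soft nonlinearity):

```python
import numpy as np

def threshold_neuron(x, w, b):
    """A single unit with a hard threshold: fires iff w . x + b > 0."""
    return int(np.dot(w, x) + b > 0)

def conjunction(x):
    """AND of n boolean inputs: unit weights; fire only when all n are on."""
    n = len(x)
    return threshold_neuron(x, np.ones(n), -(n - 0.5))

def disjunction(x):
    """OR of n boolean inputs: unit weights; fire when at least one is on."""
    return threshold_neuron(x, np.ones(len(x)), -0.5)

for bits in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    print(bits, conjunction(bits), disjunction(bits))
```

The only thing that changes between AND and OR is the bias: the threshold sits just below n for conjunction and just above zero for disjunction.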

These four arguments suggest (strongly, in my opinion) that for a very wide variety of problems, there exists a setting of the connections of an LDNN that basically solves the problem. Crucially, the number of units required to solve these problems is far from exponential --- on the contrary, the number of units required is often so "small" that it is even possible, using current hardware, to train a network that achieves super-high performance on the task of interest. It is this last point which is so important, and requires additional elaboration:

- We know that most machine learning algorithms are consistent: that is, they will solve the problem given enough data. But consistency generally requires an exponentially large amount of data. For example, the nearest neighbor algorithm can definitely solve any problem by memorizing the correct answer to every conceivable input. The same is true for a support vector machine --- we'd have a support vector for almost every possible training case for very hard problems. The same is also true for a neural network with a single hidden layer: if we have a neuron for every conceivable training case, so that the neuron fires for that training case but not for any other, then we could also learn and represent every conceivable function from inputs to outputs. Everything can be done given exponential resources, but it is never ever going to be relevant in our limited physical universe.
- And it is in this point that LDNNs differ from previous methods: we can be reasonably certain that a large but not huge LDNN will achieve good results on a surprising variety of problems that we may want to solve. If a problem can be solved by a human in a fraction of a second, then we have a very non-exponential super-pessimistic upper bound on the size of the smallest neural network that can achieve very good performance.
- But I must admit that it is impossible to predict whether a given problem will be solvable by a deep neural network ahead of time, although it is often possible to tell whenever we know that a similar problem can be solved by an LDNN of a manageable size.

So that's it, then. Given a problem, such as visual object recognition, all we need is to train a giant convolutional neural network with 50 layers. Clearly a giant convnet with 50 layers can be configured to achieve human-level performance on object recognition --- right? So we simply need to find these weights. Once we do, the problem is solved.

##### Learning.

What is learning? Learning is the problem of finding a setting of the neural network's weights that achieves the best possible results on our training data. In other words, we want to "push" the information from the labelled data into the parameters so that the resulting neural network will solve our problem.

The success of Deep Learning hinges on a very fortunate fact: that well-tuned and carefully-initialized stochastic gradient descent (SGD) can train LDNNs on problems that occur in practice. This is not a trivial fact, since the training error of a neural network as a function of its weights is highly non-convex. And when it comes to non-convex optimization, we were taught that all bets are off. Only convex is good, and non-convex is bad. And yet, somehow, SGD seems to be very good at training those large deep neural networks on the tasks that we care about. The problem of training neural networks is NP-hard; in fact, there exists a family of datasets such that the problem of finding the best neural network with three hidden units is NP-hard. And yet, SGD just solves it in practice. This is the main pillar of deep learning.
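As a concrete illustration of "plain SGD on a non-convex objective just works," here is a minimal numpy sketch of a one-hidden-layer network trained by minibatch SGD on a toy regression task (the task and all hyperparameters are illustrative, not a recipe from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = sin(3*x0) * x1 on 256 random points.
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = np.sin(3.0 * X[:, 0]) * X[:, 1]

# One hidden layer of 32 tanh units: the loss is non-convex in these weights.
H = 32
W1 = 0.5 * rng.standard_normal((2, H))
b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal(H)
b2 = 0.0

def forward(Xb):
    h = np.tanh(Xb @ W1 + b1)
    return h, h @ W2 + b2

loss_before = np.mean((forward(X)[1] - y) ** 2)

lr = 0.05
for step in range(2000):
    idx = rng.integers(0, len(X), size=32)    # sample a minibatch
    Xb, yb = X[idx], y[idx]
    h, pred = forward(Xb)
    g = 2.0 * (pred - yb) / len(yb)           # dLoss/dpred for mean squared error
    gW2 = h.T @ g                             # backprop: output layer
    gb2 = g.sum()
    dh = np.outer(g, W2) * (1.0 - h ** 2)     # backprop through tanh
    gW1 = Xb.T @ dh
    gb1 = dh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1            # plain SGD update
    W2 -= lr * gW2; b2 -= lr * gb2

loss_after = np.mean((forward(X)[1] - y) ** 2)
print(loss_before, loss_after)                # loss drops despite non-convexity
```

Nothing here is clever: random initialization, a fixed learning rate, and minibatch gradients are enough to drive the training loss down, which is exactly the empirical point being made.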

We can say fairly confidently that successful LDNN training relies on the “easy” correlation in the data, which allows learning to bootstrap itself towards the more “complicated” correlations in the data. I have done an experiment that seems to support this claim: I found that training a neural network to solve the parity problem is hard. I was able to train the network to solve parity for 25 bits, 29 bits, but never for 31 bits (by the way, I am not claiming that learning parity is impossible for over 30 bits --- only that I didn’t succeed in doing so). Now, we know that parity is a highly unstable problem that doesn’t have any linear correlations: every linear function of the inputs is completely uncorrelated with the output, which is a problem for neural networks since they are mostly linear at initialization time (so perhaps I should’ve used larger initial weights? I will discuss the topic of weight initialization later in the text). So my hypothesis (which is shared by many other scientists) is that neural networks start their learning process by noticing the most “blatant” correlations between the input and the output, and once they notice them they introduce several hidden units to detect them, which enables the neural network to see more complicated correlations. Etc. The process goes on. I imagine some sort of a “spectrum” of correlations --- both easy and hard, and the network jumps from a correlation to a more complicated correlation, much like an opportunistic mountain climber.
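The claim that parity has no linear correlations can be checked exhaustively for small bit-widths; a quick sketch:

```python
import numpy as np
from itertools import product

# Enumerate the full n-bit cube and check that every input bit (and hence
# every linear function of the bits) has zero covariance with parity.
n = 8
X = np.array(list(product([0, 1], repeat=n)), dtype=float)
y = X.sum(axis=1) % 2                  # parity label for each bitstring

Xc = X - X.mean(axis=0)                # center inputs
yc = y - y.mean()                      # center labels
cov = Xc.T @ yc / len(y)               # covariance of each bit with parity
print(np.abs(cov).max())               # 0.0: no linear signal at all
```

Fixing any one bit leaves the parity of the remaining bits uniform, so each bit is exactly uncorrelated with the output, and so is any linear combination of bits.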

##### Generalization.

While it is very difficult to say anything specific about the precise nature of the optimization of neural networks (except near a local minimum where everything becomes convex and uninteresting), we can say something nontrivial and specific about generalization.

And the thing we can say is the following: in his famous 1984 paper "A Theory of the Learnable", Valiant proved, roughly speaking, that if you have a finite number of functions, say N, then every training error will be close to its test error once you have more than log N training cases by a small constant factor. Clearly, if every training error is close to its test error, then overfitting is basically impossible (overfitting occurs when the gap between the training and the test error is large). (I am also told that this result was given in Vapnik's book as a small exercise.) This theorem is easy to prove, but I won't do it here.

But this very simple result has a genuine implication for any implementation of neural networks. Suppose I have a neural network with N parameters. Each parameter will be a float32. So a neural network is specified with 32N bits, which means that we have no more than 2^{32N} distinct neural networks, and probably much less. This means that we won't overfit much once we have more than 32N training cases. Which is nice. It means that it's theoretically OK to count parameters. What's more, if we are quite confident that each weight only requires 4 bits (say), and that everything else is just noise, then we can be fairly confident that the number of training cases will be a small constant factor of 4N rather than 32N.

##### The Conclusion:

If we want to solve a hard problem, we will probably need an LDNN, which has many parameters. So we need a large, high-quality labelled training set to make sure that it has enough information to specify all the network's connections. And once we get that training set, we should run SGD on it until the network solves the problem. And it probably will, if our neural network is large and deep.

##### What Changed Since the 80s?

In the old days, people believed that neural networks could "solve everything". Why couldn't they do it in the past? There are several reasons.

**Computers were slow.** So the neural networks of the past were tiny. And tiny neural networks cannot achieve very high performance on anything. In other words, small neural networks are not powerful.

**Datasets were small.** So even if it was somehow magically possible to train LDNNs, there were no large datasets that had enough information to constrain their numerous parameters. So failure was inevitable.

**Nobody knew how to train deep nets.** Deep networks are important. The current best object recognition networks have between 20 and 25 successive layers of convolutions. A 2-layer neural network cannot do anything good on object recognition. Yet back in the day everyone was very sure that deep nets could not be trained with SGD, since that would've been too good to be true!

It’s funny how science progresses, and how easy it is to train deep neural networks, especially in retrospect.

##### Practical Advice.

Ok. So you're sold. You're convinced that LDNNs are the present and the future, and you want to train one. But rumor has it that it's so hard, so difficult... or is it? The reality is that it used to be hard, but now the community has consolidated its knowledge and realized that training neural networks is easy as long as you keep the following in mind.

Here is a summary of the community's knowledge of what's important and what to look after:

**Get the data:** Make sure that you have a high-quality dataset of input-output examples that is large, representative, and has relatively clean labels. Learning is completely impossible without such a dataset.

**Preprocessing:** It is essential to center the data so that its mean is zero and so that the variance of each of its dimensions is one. Sometimes, when the input dimension varies by orders of magnitude, it is better to take the log(1 + x) of that dimension. Basically, it's important to find a faithful encoding of the input with zero mean and sensibly bounded dimensions. Doing so makes learning work much better. This is the case because the weights are updated by the formula: the change in w_{ij} is proportional to x_i * dL/dy_j (w denotes the weights from layer x to layer y, and L is the loss function). If the average value of the x's is large (say, 100), then the weight updates will be very large and correlated, which makes learning bad and slow. Keeping things zero-mean and with small variance simply makes everything work much better.

**Minibatches:** Use minibatches. Modern computers cannot be efficient if you process one training case at a time. It is vastly more efficient to train the network on minibatches of 128 examples, because doing so will result in massively greater throughput. It would actually be nice to use minibatches of size 1, and they would probably result in improved performance and lower overfitting; but the benefit of doing so is outweighed by the massive computational gains provided by minibatches. But don't use very large minibatches, because they tend to work less well and overfit more. So the practical recommendation is: use the smallest minibatch that runs efficiently on your machine.

**Gradient normalization:** Divide the gradient by the minibatch size. This is a good idea because of the following pleasant property: you won't need to change the learning rate (not too much, anyway) if you double the minibatch size (or halve it).

**Learning rate schedule:** Start with a normal-sized learning rate (LR) and reduce it towards the end.

- A typical value of the LR is **0.1**. Amazingly, 0.1 is a good value of the learning rate for a large number of neural network problems. Learning rates frequently tend to be smaller but rarely much larger.
- Use a **validation set** --- a subset of the training set on which we don’t train --- to decide when to lower the learning rate and when to stop training (e.g., when error on the validation set starts to increase).
- A practical suggestion for a learning rate schedule: if you see that you stopped making progress on the validation set, divide the LR by 2 (or by 5), and keep going. Eventually, the LR will become very small, at which point you will stop your training. Doing so helps ensure that you won’t be (over-)fitting the training data to the detriment of validation performance, which happens easily and often. Also, lowering the LR is important, and the above recipe provides a useful way of controlling it via the validation set.
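The validation-driven schedule described above can be sketched in a few lines. This is only an illustrative skeleton, not code from the post: the function and parameter names (`train_with_lr_schedule`, `train_step`, `validate`, `min_lr`) are invented for the example.

```python
def train_with_lr_schedule(train_step, validate, lr=0.1, min_lr=1e-6, max_epochs=100):
    """Divide the LR by 2 whenever validation error stops improving,
    and stop once the LR has become very small. (Names are illustrative.)"""
    best_val = float("inf")
    history = []
    for epoch in range(max_epochs):
        train_step(lr)                 # one epoch of SGD at the current LR
        val_err = validate()           # error on the held-out validation set
        history.append((epoch, lr, val_err))
        if val_err >= best_val:
            lr /= 2.0                  # no progress: divide the LR by 2 (or 5)
        else:
            best_val = val_err
        if lr < min_lr:                # LR is now very small: stop training
            break
    return history
```

The same structure works with a divisor of 5, or with a few epochs of patience before each reduction.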

- But most importantly, worry about the **learning rate**. One useful idea used by some researchers (e.g., Alex Krizhevsky) is to monitor the ratio between the update norm and the weight norm. This ratio should be around 10^{-3}. If it is much smaller, then learning will probably be too slow; if it is much larger, then learning will be unstable and will probably fail.

**Weight initialization:** Worry about the random initialization of the weights at the start of learning.

- If you are lazy, it is usually enough to do something like 0.02 * randn(num_params). A value at this scale tends to work surprisingly well over many different problems. Of course, smaller (or larger) values are also worth trying.
- If it doesn’t work well (say your neural network architecture is unusual and/or very deep), then you should initialize each weight matrix with init_scale / sqrt(layer_width) * randn, where init_scale should be set to 0.1 or 1, or something like that.
- Random initialization is super important for deep and recurrent nets. If you don’t get it right, then it’ll look like the network doesn’t learn anything at all, even though we know it can learn once the initialization is right.
- Fun story: for many years, researchers believed that SGD could not train deep neural networks from random initializations. Every time they tried it, it wouldn’t work. Embarrassingly, they did not succeed because they used the “small random weights” initialization, which works great for shallow nets but simply doesn’t work for deep nets at all. When the nets are deep, the many weight matrices all multiply each other, so the effect of a suboptimal scale is amplified.
- But if your net is shallow, you can afford to be less careful with the random initialization, since SGD will just find a way to fix it.
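The two initialization recipes above can be sketched with numpy. This is a hedged illustration; the function name `init_weights` and its parameters are invented for the example, not taken from the post.

```python
import numpy as np

def init_weights(fan_in, fan_out, init_scale=None, seed=0):
    """Two recipes from the text (names are illustrative):
    - init_scale=None: the lazy recipe, 0.02 * randn, fine for many nets.
    - otherwise: init_scale / sqrt(layer_width) * randn, with init_scale
      around 0.1 or 1, for deep or unusual architectures."""
    rng = np.random.default_rng(seed)
    if init_scale is None:
        return 0.02 * rng.standard_normal((fan_in, fan_out))
    return (init_scale / np.sqrt(fan_in)) * rng.standard_normal((fan_in, fan_out))
```

The sqrt(layer_width) scaling keeps the per-layer output variance roughly constant, which is what prevents the suboptimal-scale amplification described in the fun story above.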

**You’re now informed.** Worry and care about your initialization. Try many different kinds of initialization. This effort will pay off. If the net doesn’t work at all (i.e., never “gets off the ground”), keep applying pressure to the random initialization. It’s the right thing to do.

- If you are training RNNs or LSTMs, **use a hard constraint** over the norm of the gradient (remember that the gradient has been divided by the batch size). Something like 15 or 5 works well in practice in my own experiments. Take your gradient, divide it by the size of the minibatch, and check if its norm exceeds 15 (or 5). If it does, then shrink it until its norm is 15 (or 5). This one little trick makes a huge difference in the training of RNNs and LSTMs, where otherwise the exploding gradient can cause learning to fail and force you to use a puny learning rate like 1e-6, which is too small to be useful.

**Numerical gradient checking:** If you are not using Theano or Torch, you’ll probably be implementing your own gradients. It is easy to make a mistake when implementing a gradient, so it is absolutely critical to use numerical gradient checking. Doing so will give you complete peace of mind and confidence in your code. You will know that you can invest effort in tuning the hyperparameters (such as the learning rate and the initialization) and be sure that your efforts are channeled in the right direction.

- If you are using LSTMs and you want to train them on problems with very long-range dependencies, you should initialize the biases of the forget gates of the LSTMs to large values. By default, the forget gates are the sigmoids of their total input, and when the weights are small, the forget gate is set to 0.5, which is adequate for some but not all problems. This is the one non-obvious caveat about the initialization of the LSTM.
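The hard constraint on the gradient norm described above fits in a few lines. A minimal sketch, assuming the raw gradient is summed over the minibatch; `clip_gradient` and its parameters are illustrative names, not library code.

```python
import numpy as np

def clip_gradient(grad, batch_size, max_norm=15.0):
    """Divide the summed gradient by the minibatch size, then if its norm
    exceeds max_norm (15 or 5 work well in practice), shrink it so the
    norm equals max_norm. (An illustrative sketch.)"""
    g = grad / batch_size
    norm = np.linalg.norm(g)
    if norm > max_norm:
        g = g * (max_norm / norm)   # rescale: direction kept, norm capped
    return g
```

Gradients under the threshold pass through unchanged, so the constraint only kicks in during the occasional exploding-gradient step.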
**Data augmentation:** Be creative, and find ways to algorithmically increase the number of training cases at your disposal. If you have images, then you should translate and rotate them; if you have speech, you should combine clean speech with all types of random noise; and so on. Data augmentation is an art (unless you’re dealing with images), so use common sense.

**Dropout:** Dropout provides an easy way to improve performance. It’s trivial to implement and there’s little reason not to do it. Remember to tune the dropout probability, and do not forget to **turn off dropout and multiply the weights by (1 - dropout probability) at test time**. Also, be sure to train the network for longer. Unlike normal training, where the validation error often starts increasing after prolonged training, dropout nets keep getting better and better the longer you train them. So be patient.

**Ensembling:** Train 10 neural networks and average their predictions. It’s a fairly trivial technique that results in easy, sizeable performance improvements. One may be mystified as to why averaging helps so much, but there is a simple reason for its effectiveness. Suppose that two classifiers are each correct 70% of the time. Then, when they agree, they are usually right; and when they disagree, one of them is often right, so the averaged prediction will place more weight on the correct answer. The effect will be especially strong whenever the network is confident when it’s right and unconfident when it’s wrong.
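The dropout test-time correction above is where most implementation bugs hide, so here is a minimal sketch. The function name and arguments are illustrative; scaling the activations by (1 - p_drop), as done here, is equivalent to scaling the weights as stated in the text.

```python
import numpy as np

def dropout_forward(x, p_drop, train, rng=None):
    """At training time, zero each unit with probability p_drop.
    At test time, keep every unit but multiply by (1 - p_drop) so the
    expected activations match training. (An illustrative sketch.)"""
    if train:
        rng = rng or np.random.default_rng(0)
        mask = rng.random(x.shape) >= p_drop   # keep with prob 1 - p_drop
        return x * mask
    return x * (1.0 - p_drop)                  # test time: no units dropped
```

(Modern codebases usually prefer "inverted dropout", which divides by (1 - p_drop) at training time instead, so the test-time path needs no scaling at all; either variant satisfies the recipe above.)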

I am pretty sure that I haven’t forgotten anything. The above 13 points cover literally everything that’s needed in order to train LDNNs successfully.

##### So, to Summarize:

- LDNNs are powerful.

- LDNNs are trainable if we have a very fast computer.

- So if we have a very large high-quality dataset, we can find the best LDNN for the task.

- Which will solve the problem, or at least come close to solving it.

##### The End.

But what does the future hold? Predicting the future is obviously hard, but in general, models that do even more computation will probably be very good. The Neural Turing Machine is a very important step in this direction. Other problems include unsupervised learning, which is completely mysterious and incomprehensible in my opinion as of 8 Jan 2015. Learning very complicated “things” from data without supervision would be nice. All these problems require extensive research.