Peter Flach is Professor of Artificial Intelligence at the University of Bristol, with more than 20 years of experience teaching and researching machine learning. He is an internationally leading researcher in mining highly structured data and in evaluating and improving machine learning models through ROC analysis. He is the author of Simply Logical: Intelligent Reasoning by Example and Editor-in-Chief of the journal Machine Learning. He served as programme co-chair of the 2009 ACM International Conference on Knowledge Discovery and Data Mining and the 2012 European Conference on Machine Learning and Data Mining.
The book is clearly written and logically organized. The author first introduces the fundamentals of machine learning, then skilfully leads the reader forward step by step, offering many valuable conclusions, insights into the performance of several machine learning techniques, and high-level pseudocode for many of the core algorithms. ——Fernando Berzal, Computing Reviews
In the film Ex Machina, the protagonist uses data research and information processing to approach artificial intelligence. Can machine learning really make AI a reality? Is AI a threat or a helper to mankind?
Clearly the ability to learn and improve through experience and interaction with the environment is central to any form of intelligence, natural or artificial. So yes, machine learning will be a big part of AI, but many other capabilities are needed too, such as common-sense reasoning and planning.
Whether AI is a threat depends on how we use it and what safeguards we build into it. AI technology is already replacing people in certain kinds of jobs, and so we need to make sure that those people are offered alternatives. The more general threat of autonomous robots pursuing their own goals against human interests is still quite a way off, but we do need to think about how to avoid such a situation — if only because it is philosophically interesting!
How does machine learning work in pilot studies on big data? Could you give us a brief example?
The first question to ask is whether the data contains enough information to solve your problem. In order to establish that, it is often a good idea to start from a much simplified version of the problem — something that you can almost solve by hand — and see whether you can build a machine learning model from the data to solve that simpler problem. If that works you can gradually make the problem harder — if not, you need to get better data!
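The "simplify first" advice can be made concrete with a tiny sketch (the data and the 1-nearest-neighbour learner here are made up for illustration): pick a toy version of the problem you can check by hand, and confirm that a trivial model recovers the rule from the data before making the problem harder.

```python
# A hand-solvable toy problem: values below 5 are "small", the rest are
# "large". If even a trivial learner recovers this rule from the data,
# the data contains enough information for the simplified problem.

def nearest_neighbour(train, x):
    """1-nearest-neighbour: predict the label of the closest training point."""
    closest = min(train, key=lambda pair: abs(pair[0] - x))
    return closest[1]

train = [(1, "small"), (2, "small"), (3, "small"),
         (7, "large"), (8, "large"), (9, "large")]

for x in (2.5, 8.5):
    print(x, "->", nearest_neighbour(train, x))
```

If the toy model fails even here, that is a strong hint the data, not the algorithm, is the problem.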
At Interspeech 2015, the top conference in speech recognition, papers on recognition models for robust speech recognition were mainly based on deep neural networks. Does that mean speech enhancement, noise reduction and filtering of speech signals are out of date?
Deep learning is indeed very successful in speech and image recognition. Deep learning is very powerful but also very data-hungry and computation-intensive. It is furthermore a black box, meaning that it can solve a problem for you but it cannot teach you how to do it yourself. Techniques derived from first principles such as filtering are easier for us humans to understand. Whether or not this is important depends on how you use the technology. For example, many people do not really know how their car works and that is mostly fine, but not if you are planning to drive through the Sahara!
An interesting approach to opening the black box that people are working on currently is to first train a deep neural network to get good performance, then train a shallow network or other more explanatory technique on the outputs of the deep network to gain understanding.
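A minimal sketch of that idea, with a hypothetical stand-in for the trained deep network: the "teacher" is a black-box scoring function, and the "student" is a simple one-threshold rule fitted to the teacher's outputs rather than to the raw labels, making the learned behaviour easy to read off.

```python
# Hypothetical teacher: stands in for a trained deep network that
# outputs the probability of class 1 for an input x.
def teacher(x):
    return 1 / (1 + 2.718281828 ** (-(x - 4.0)))

xs = [i / 2 for i in range(0, 17)]        # inputs 0.0 .. 8.0
soft_labels = [teacher(x) for x in xs]    # the teacher's soft predictions

# Student: a one-threshold rule. Choose the threshold t that best
# reproduces the teacher's hard decisions (soft label >= 0.5).
def agreement(t):
    return sum((x >= t) == (s >= 0.5)
               for x, s in zip(xs, soft_labels)) / len(xs)

best_t = max(xs, key=agreement)
print("student rule: predict class 1 when x >=", best_t)
```

The student is far weaker than the teacher in general, but its single threshold is something a human can inspect and reason about.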
Could you comment on the match between AlphaGo and Lee Sedol from a machine learning perspective? How does AlphaGo search for possible moves to counter Lee Sedol's play? How does it make a global judgement to choose the best move from a vast number of possibilities?
In order to play a game like chess or Go the computer needs to navigate a very large tree of possible moves of either player. AlphaGo learned to navigate that tree by playing an enormous number of games against itself — many more games than a human could ever expect to play in a lifetime. As a result it has two deep networks: one to score each possible move, and another to score each possible board position. The learning technique used is called reinforcement learning — it is not currently covered in my book but I might add a chapter on it if I can find the time to work on the second edition!
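As a toy illustration of navigating a game tree — not AlphaGo's actual method, which combines learned policy and value networks with Monte Carlo tree search — here is plain minimax on the simple game of Nim (take 1 or 2 sticks; whoever takes the last stick wins). Go's tree is far too large for this exhaustive search, which is exactly why AlphaGo needs learned scoring networks.

```python
# Minimax on Nim: score a position by looking ahead through the whole
# tree of moves, assuming both players play perfectly.

def best_score(sticks):
    """Return +1 if the player to move can force a win, else -1."""
    if sticks == 0:
        return -1   # the previous player took the last stick and won
    # The opponent's score after our move is negated to get our score;
    # we pick the move that maximises it.
    return max(-best_score(sticks - take)
               for take in (1, 2) if take <= sticks)

print(best_score(3))   # -1: with 3 sticks the player to move loses
print(best_score(4))   #  1: with 4 sticks, take 1 and win
```

AlphaGo's value network plays the role that the exhaustive recursion plays here: estimating who is winning from a position without searching all the way to the end of the game.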
Do you think the computer's success in the man-machine match will encourage more people to do research on machine learning?
When I started as an academic not everybody was convinced that computer science was a bona-fide academic discipline, let alone something as exotic as AI or machine learning. Now Hollywood films are made about the subject and high-profile competitions are run between humans and machines. This is mostly a good thing, although inevitably there is a lot of hype which leads to unrealistic expectations. The role of an academic is to seek nuance, not hype, and the more researchers there are to pursue this the better!
Could you give some suggestions to self-learners in machine learning? What kinds of preparations should they first make?
I tried to write the book with self-learners in mind, but some background knowledge is of course needed: a little bit of probability and statistics, a bit of logic, and a bit of linear algebra. Furthermore, it would probably be good to play around with some machine learning software: Python's scikit-learn is very popular nowadays, but R and Matlab have plenty of machine learning libraries too. This will help you get some initial understanding of what machine learning can do; my book will then help you understand how it works.
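For readers who want a concrete starting point, a first experiment with scikit-learn might look like the following sketch (assuming scikit-learn is installed, e.g. via `pip install scikit-learn`): train a decision tree on the bundled Iris dataset and measure accuracy on held-out data.

```python
# Train a decision tree on the classic Iris dataset and evaluate it on
# a held-out test split.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Swapping `DecisionTreeClassifier` for another scikit-learn estimator is a one-line change, which makes it easy to compare techniques and build the kind of intuition the book then explains.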