Which of the claims in the book Deep Learning have been validated, extended, or overturned by the latest research?

The Chinese edition of Deep Learning is now freely available for download
The book's subject is, specifically, machine learning: a set of techniques that enable computer systems to improve with experience and data. Deep learning is a particular kind of machine learning, both powerful and flexible, that represents the world as a nested hierarchy of concepts, with complex concepts defined through relations between simpler ones and highly abstract representations built up from more general abstractions. As for the book's structure, Part I introduces the basic mathematical tools and machine learning concepts, Part II describes the most established deep learning algorithms, and Part III discusses forward-looking ideas that are widely regarded as the focus of future deep learning research.
The book thus covers the full range of deep learning topics, from the underlying mathematics to the various families of deep methods, both broadly and in depth. The translators also believe that releasing the Chinese translation as an open PDF will help readers gain a better understanding of both the fundamentals and the frontiers of deep learning, and that opening up a high-quality professional book makes a read-first, pay-later model possible.
The Chinese edition of Deep Learning can be read and downloaded directly; the translators suggest that readers rely on the Chinese edition as the primary text and the original English edition as a supplement.
For more details, head to GitHub. The translators still welcome feedback, which you can submit as issues on GitHub. A PDF download link and an online-reading version are also available.
Note that, for copyright reasons, the online version does not include the figures.
[The Alpha Series] regularly introduces the latest industry research and a range of alpha strategies. To follow the series, click "Follow" below my avatar.
[Related Reading]
At this year's International Conference on Machine Learning (ICML, June 2016), the dominant theme was deep learning. The coverage below is organized into four areas: Recurrent Neural Networks, Unsupervised Learning, Supervised Training Methods, and Deep Reinforcement Learning. The papers in the attachment represent the latest frontier of each of these research directions.
Deep learning is a branch of machine learning. Several families of deep architectures already exist, including deep neural networks, convolutional neural networks, deep belief networks, and recurrent neural networks. Some have been applied to computer vision, speech recognition, natural language processing, audio recognition, and bioinformatics with excellent results.
It is worth noting that AlphaGo, the Go-playing program developed by Google's subsidiary DeepMind, applied exactly this kind of deep reinforcement learning. In March 2016 it defeated the Korean 9-dan professional Lee Sedol in a five-game match, the first time in history a machine beat a top human Go player. Because Go is vastly more complex than any other game, people had long assumed that a machine victory over humans was still far off. Over the course of the five games, public attitudes moved from dismissal and doubt, to surprise and confusion, and finally to a sobering realization of what the technology can do. AlphaGo's victory points to the boundless potential of artificial intelligence, a milestone worthy of a place in human history.
Meanwhile, the quantitative investment community is also exploring how machine learning can be applied. Many hedge funds (for example Man Group, Two Sigma, and D. E. Shaw) have poured people and money into the area in order to gain an edge. What follows is a summary of this year's ICML by Vinod Valsalam, an expert at the well-known U.S. hedge fund Two Sigma (the 17 papers are in the attachment). It is easy to imagine machine learning becoming a decisive weapon in quantitative investing in the not-too-distant future.
Machine learning offers powerful techniques to find patterns in data for solving challenging predictive problems. The dominant track at the International Conference on Machine Learning (ICML) in New York this year was deep learning, which uses artificial neural networks to solve problems by learning feature representations from large amounts of data.
Significant recent successes in applications such as image and speech recognition and natural language processing have helped fuel an explosion of interest in deep learning. New research in the field continues to push the boundaries of applications, techniques, and theory. Below, Two Sigma research scientist Vinod Valsalam provides an overview of some of the most interesting research presented at ICML 2016, covering recurrent neural networks, unsupervised learning, supervised training methods, and deep reinforcement learning.
1. Recurrent Neural Networks
Unlike feed-forward networks, the outputs of recurrent neural networks (RNNs) can depend on past inputs, providing a natural framework for learning from time series and sequential data. But training them for tasks that require long-term memory is especially difficult due to the vanishing and exploding gradients problem, i.e., the error signals for adapting network weights become increasingly difficult to propagate through the network. Specialized network architectures such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) mitigate this problem by utilizing gating units, a technique that has been very successful in tasks such as speech recognition and language modeling. An alternative approach that is now gaining more focus is to constrain the weight matrices in a way that is more conducive to gradient propagation, as explored in the following papers.
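To make the gradient-propagation issue concrete, here is a minimal NumPy sketch (illustrative only, not from any of the papers; all names and sizes are made up) that backpropagates an error signal through a purely linear recurrence h_t = W h_{t-1} and shows how the gradient norm shrinks or blows up depending on the spectral radius of W:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 100, 32  # sequence length, hidden size

def grad_norm_through_time(spectral_radius):
    """Norm of dL/dh_0 after backpropagating through h_t = W h_{t-1}."""
    W = rng.standard_normal((n, n))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    grad = rng.standard_normal(n)   # arbitrary upstream gradient dL/dh_T
    for _ in range(T):
        grad = W.T @ grad           # chain rule through one linear recurrence step
    return np.linalg.norm(grad)

for rho in (0.9, 1.0, 1.1):
    print(f"spectral radius {rho}: |dL/dh_0| ~ {grad_norm_through_time(rho):.3e}")
# rho < 1: the gradient vanishes; rho > 1: it explodes -- the problem that
# gating units (LSTM/GRU) and constrained weight matrices try to mitigate.
```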
Unitary Evolution Recurrent Neural Networks
Arjovsky, M., Shah, A., & Bengio, Y. (2016)
The problem of vanishing and exploding gradients occurs when the magnitudes of the eigenvalues of the weight matrices deviate from 1. Therefore, the authors use weight matrices that are unitary to guarantee that the eigenvalues have magnitude 1. The challenge with this constraint is to ensure that the matrices remain unitary when updating them during training, without performing excessive computations. Their strategy is to decompose each unitary weight matrix into the product of several simple unitary matrices. The resulting parameterization makes it possible to learn the weights efficiently while providing sufficient expressiveness. They demonstrate state-of-the-art performance on standard benchmark problems such as the copy and addition tasks. An additional benefit of their approach is that it is relatively insensitive to parameter initialization, since unitary matrices preserve norms.
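A rough NumPy sketch of the idea (not the authors' exact parameterization, which also uses Fourier and permutation factors): compose simple unitary factors such as diagonal phase matrices and a complex Householder reflection, each with only O(n) parameters; the product is unitary by construction and therefore preserves norms exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

def diag_phase(theta):
    """Diagonal unitary matrix with entries exp(i * theta_k)."""
    return np.diag(np.exp(1j * theta))

def householder(v):
    """Complex Householder reflection I - 2 v v^H / (v^H v); unitary."""
    v = v.reshape(-1, 1)
    return np.eye(len(v)) - 2.0 * (v @ v.conj().T) / (v.conj().T @ v)

# Product of simple unitary factors, each cheap to store and apply.
U = diag_phase(rng.uniform(0, 2 * np.pi, n)) \
    @ householder(rng.standard_normal(n) + 1j * rng.standard_normal(n)) \
    @ diag_phase(rng.uniform(0, 2 * np.pi, n))

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.allclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # True: norms preserved
print(np.allclose(U.conj().T @ U, np.eye(n)))                 # True: U is unitary
```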
Recurrent Orthogonal Networks and Long-Memory Tasks
Henaff, M., Szlam, A., & LeCun, Y. (2016)
In this paper, the authors construct explicit solutions based on orthogonal weight matrices for the copy and addition benchmark tasks. Orthogonal matrices avoid the vanishing and exploding gradients problem in the same way as unitary matrices, but they have real-valued entries instead of complex-valued entries. The authors show that their hand-designed networks work well when applied to the task for which they are designed, but produce poor results when applied to other tasks. These experiments illustrate the difficulty of designing general networks that perform well on a range of tasks.
Strongly-Typed Recurrent Neural Networks
Balduzzi, D., & Ghifary, M. (2016)
Physics has the notion of dimensional homogeneity, i.e. it is only meaningful to add quantities of the same physical units. Types in programming languages express a similar idea. The authors extend these ideas to constrain RNN design. They define a type as an inner product space with an orthonormal basis. The operations and transformations that a neural network performs can then be expressed in terms of types. For example, applying an activation function to a vector preserves its type. In contrast, applying an orthogonal weight matrix to a vector transforms its type. The authors argue that the feedback loop of RNNs produces vectors that are type-inconsistent with the feed-forward vectors for addition. While symmetric weight matrices are one way to preserve types in feedback loops, the authors tweak the LSTM and GRU networks to produce variants that have strong types. Experiments were inconclusive in showing better generalization of typed networks, but they are an interesting avenue for further research.
2. Unsupervised Learning
The resurgence of deep learning in the mid-2000s was made possible to a large extent by using unsupervised learning to pre-train deep neural networks to establish good initial weights for later supervised training. Later, using large labeled data sets for supervised training was found to obviate the need for unsupervised pre-training. But more recently, there has been renewed interest in utilizing unsupervised learning to improve the performance of supervised training, particularly by combining both into the same training phase.
Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification
Zhang, Y., Lee, K., & Lee, H. (2016)
This paper starts out with a brief history of using unsupervised and semi-supervised methods in deep learning. The authors show how such methods can be scaled to solve large-scale problems. Using their approach, existing neural network architectures for image classification can be augmented with unsupervised decoding pathways for image reconstruction. The decoding pathways consist of a deconvolutional network that mirrors the original network using autoencoders. The weights of the encoding pathway are initialized from the original network, and those of the decoding pathway with random values. Initially, only the decoding pathway is trained while the encoding pathway is kept fixed; the full network is then fine-tuned with a reduced learning rate. Applying this method to a state-of-the-art image classification network boosted its performance significantly.
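A hedged PyTorch-style sketch of the general recipe (module names, layer sizes, and the 0.1 loss weight are made up, not from the paper): attach a decoder head that mirrors the encoder, keep the classification head, and optimize a weighted sum of the classification and reconstruction losses.

```python
import torch
import torch.nn as nn

class AugmentedClassifier(nn.Module):
    """Classifier augmented with an auxiliary reconstruction (decoding) pathway."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)   # supervised head
        self.decoder = nn.Linear(hidden, in_dim)         # unsupervised head

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h), self.decoder(h)

model = AugmentedClassifier()
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(64, 784)                    # dummy batch of flattened images
y = torch.randint(0, 10, (64,))             # dummy labels
logits, recon = model(x)
loss = ce(logits, y) + 0.1 * mse(recon, x)  # joint supervised + reconstruction loss
opt.zero_grad()
loss.backward()
opt.step()
```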
Deconstructing the Ladder Network Architecture
Pezeshki, M., Fan, L., Brakel, P., Courville, A., & Bengio, Y. (2016)
A different approach for combining supervised and unsupervised training of deep neural networks is the Ladder Network architecture. It also improves the performance of an existing classifier network by augmenting it with an auxiliary decoder network, but it has additional lateral connections between the original and decoder networks. The resultant network forms a deep stack of denoising autoencoders that is trained to reconstruct each layer from a noisy version. In this paper, the authors studied the ladder architecture systematically by removing its components one at a time to see how much each component contributed to performance. They found that the lateral connections are the most important, followed by the injection of noise, and finally by the choice of the combinator function that combines the vertical and lateral connections. They also introduced a new combinator function that improved the already impressive performance of the ladder network on the Permutation-Invariant MNIST handwritten digit recognition task, both for the supervised and semi-supervised settings.
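For intuition, here is a minimal sketch of one possible element-wise combinator (a sigmoid-gated blend; purely illustrative and not one of the specific combinator functions studied in the paper) that merges the lateral signal z from the noisy encoder with the vertical signal u from the decoder above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_combinator(z_lateral, u_vertical, w, b):
    """Element-wise blend of lateral and vertical signals.

    The gate g in (0, 1) decides, per unit, how much of the reconstruction
    comes from the lateral shortcut versus the top-down decoder path.
    """
    g = sigmoid(w * u_vertical + b)           # gate computed from the vertical signal
    return g * z_lateral + (1.0 - g) * u_vertical

rng = np.random.default_rng(2)
z, u = rng.standard_normal(5), rng.standard_normal(5)
print(gated_combinator(z, u, w=1.0, b=0.0))
```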
Continued below...
3. Supervised Training Methods
Historically, deep neural networks were known to be difficult to train using standard random initialization and gradient descent. However, new algorithms for initializing and training deep neural networks proposed in the last decade have produced remarkable successes. Research continues in this area to better understand existing training methods and to improve them.
Dropout distillation
Rota Bulò, S., Porzi, L., & Kontschieder, P. (2016)
Dropout is a regularization technique that was proposed to prevent neural networks from overfitting. It drops units from the network randomly during training by setting their outputs to zero, thus reducing co-adaptation of the units. This procedure implicitly trains an ensemble of exponentially many smaller networks sharing the same parametrization. The predictions of these networks must then be averaged at test time, which is unfortunately intractable to compute precisely. But the averaging can be approximated by scaling the weights of a single network.
However, this approximation may not produce sufficient accuracy in all cases. The authors introduce a better approximation method called dropout distillation that finds a predictor with minimal divergence from the ideal predictor by applying stochastic gradient descent. The distillation procedure can even be applied to networks already trained using dropout by utilizing unlabeled data. Their results on benchmark problems show consistent improvements over standard dropout.
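For reference, the standard weight-scaling approximation that distillation improves on looks roughly like this (a minimal NumPy sketch under made-up sizes; not the authors' code): drop units randomly at training time, then at test time scale the weights by the keep probability instead of averaging the exponentially many thinned networks.

```python
import numpy as np

rng = np.random.default_rng(3)
keep_prob = 0.5
W = rng.standard_normal((4, 8))   # weights of one hidden layer
x = rng.standard_normal(8)

def layer_train(x):
    """Training-time forward pass: randomly drop input units."""
    mask = rng.random(x.shape) < keep_prob
    return np.maximum(0.0, W @ (x * mask))

def layer_test(x):
    """Test-time shortcut: scale the weights by keep_prob."""
    return np.maximum(0.0, (keep_prob * W) @ x)

# Monte-Carlo average over sampled dropout masks vs. the weight-scaling shortcut:
mc = np.mean([layer_train(x) for _ in range(20000)], axis=0)
print(mc)
print(layer_test(x))  # close, but not identical -- this gap is what distillation targets
```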
Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks
Arpit, D., Zhou, Y., Kota, B., & Govindaraju, V. (2016)
One of the difficulties of training deep neural networks is that the distribution of input activations to each hidden layer may shift during training. One way to address this problem, known as internal covariate shift, is to normalize the input activations to each hidden layer using the Batch Normalization (BN) technique. However, BN has a couple of drawbacks: (1) its estimates of the mean and standard deviation of the input activations are inaccurate, especially during initial iterations, because they are based on mini-batches of training data, and (2) it cannot be used with a batch size of one. To address these drawbacks, the authors introduce normalization propagation, which is based on a data-independent closed-form estimate of the mean and standard deviation for every layer. It follows from the observation that the pre-activation values of ReLUs in deep networks follow a Gaussian distribution. The normalization property can then be forward-propagated to all hidden layers during training. The authors show that their method achieves better convergence stability than BN during training. It is also faster because it does not have to compute a running estimate of the mean and standard deviation of the hidden-layer activations.
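The data-independent estimate rests on a fact that is easy to check numerically: if a pre-activation is (approximately) standard normal, the mean and variance of its ReLU output have closed forms. A quick NumPy sanity check (illustrative only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(4)
pre = rng.standard_normal(1_000_000)   # pre-activations assumed ~ N(0, 1)
post = np.maximum(0.0, pre)            # ReLU output

# Closed-form moments that allow normalizing each layer without batch statistics:
mean_cf = 1.0 / np.sqrt(2.0 * np.pi)       # E[relu(x)]   for x ~ N(0, 1)
var_cf = 0.5 - 1.0 / (2.0 * np.pi)         # Var[relu(x)] for x ~ N(0, 1)

print(post.mean(), mean_cf)   # both ~0.3989
print(post.var(), var_cf)     # both ~0.3408

# A layer output can then be normalized as (relu(x) - mean_cf) / sqrt(var_cf),
# with no running batch estimates required.
```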
Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters
Luketina, J., Raiko, T., Berglund, M., & Greff, K. (2016)
Tuning hyperparameters is often necessary to get good results with deep neural networks. Typically, the tuning is performed either by manual trial and error or by automated search, evaluating performance on a validation set. The authors propose a gradient-based method that is less tedious and less computationally expensive for finding good regularization hyperparameters.
Unlike previous methods, their method is simpler and computationally lightweight, and it updates both hyperparameters and regular parameters using stochastic gradient descent in the same training run. The gradient of the hyperparameters is obtained from the cost of the unregularized model on the validation set. Although the authors show that their method is effective in finding good regularization hyperparameters, they haven't extended it to common training techniques such as dropout regularization and learning rate adaptation.
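A heavily simplified sketch of the general idea (a toy linear regression with an L2 penalty; the data, learning rates, and update scheme are illustrative assumptions, not the authors' algorithm): after each parameter update, take a gradient step on the regularization strength lambda using the validation loss, differentiating through the most recent parameter update.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: noisy linear targets, split into training and validation sets.
X_tr, X_val = rng.standard_normal((200, 20)), rng.standard_normal((200, 20))
w_true = rng.standard_normal(20)
y_tr = X_tr @ w_true + 2.0 * rng.standard_normal(200)
y_val = X_val @ w_true + 2.0 * rng.standard_normal(200)

def grad_mse(X, y, w):
    return 2.0 * X.T @ (X @ w - y) / len(y)

w, lam = np.zeros(20), 0.1
lr_w, lr_lam = 0.05, 0.01

for step in range(500):
    w_old = w
    # Regularized training step: w' = w - lr_w * (grad_train + lam * w).
    w = w - lr_w * (grad_mse(X_tr, y_tr, w) + lam * w)
    # Hyperparameter gradient through that step:
    # dC_val(w') / dlam = grad_val(w') . dw'/dlam = grad_val(w') . (-lr_w * w_old).
    dlam = grad_mse(X_val, y_val, w) @ (-lr_w * w_old)
    lam = max(0.0, lam - lr_lam * dlam)   # keep the penalty non-negative

print("learned regularization strength:", lam)
```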
4. Deep Reinforcement Learning
The researchers at DeepMind extended the breakthrough successes of deep learning in supervised tasks to the challenging reinforcement learning domain of playing Atari 2600 games. Their basic idea was to leverage the demonstrated ability of deep learning to extract high-level features from raw high-dimensional data by training a deep convolutional network. However, reinforcement learning tasks such as playing games do not come with training data that are labeled with the correct move for each turn.
Instead, they are characterized by sparse, noisy, and delayed reward signals. Furthermore, training data are typically correlated and non-stationary. They overcame these challenges using stochastic gradient descent and experience replay to stabilize learning, essentially jump-starting the field of deep reinforcement learning.
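A minimal sketch of the experience-replay mechanism referred to above (illustrative only, not DeepMind's implementation; the class name and capacity are made up): transitions go into a fixed-size buffer, and training batches are sampled uniformly at random, which breaks the temporal correlation of consecutive frames.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions are evicted

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling de-correlates consecutive transitions.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(50):
    buf.push(state=t, action=t % 4, reward=float(t % 2), next_state=t + 1, done=False)
print(len(buf), buf.sample(8)[1])   # buffer size and a random batch of actions
```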
Asynchronous Methods for Deep Reinforcement Learning
Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., & Kavukcuoglu, K. (2016)
The experience replay technique stabilizes learning by making it possible to batch or sample the training data randomly. However, it requires more memory and computation, and it applies only to off-policy learning algorithms such as Q-learning. In this paper, the authors introduce a new method based on asynchronously executing multiple agents on different instances of the environment. The resulting parallel algorithm effectively de-correlates the training data and makes it more stationary. Moreover, it makes it possible to extend deep learning to on-policy reinforcement learning algorithms such as SARSA and actor-critic methods. Their method, combined with the actor-critic algorithm, improved upon previous results on the Atari domain while using far fewer computational resources.
Dueling Network Architectures for Deep Reinforcement Learning
Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., & de Freitas, N. (2016)
This work, which won the Best Paper award, introduces a new neural network architecture that complements the algorithmic advances in deep Q-learning networks (DQN) and experience replay. The authors point out that the value of an action choice from a given state needs to be estimated only if that action has a consequence on what happens. The dueling network architecture leverages this observation by inserting two parallel streams of fully connected layers after the final convolutional layer of a regular DQN. One stream estimates the state-value function, while the other estimates the state-dependent advantage of taking an action. The output module of the network combines the activations of these two streams to produce the Q-values for each action. This architecture learns state-value functions more efficiently and produces better policy evaluations when actions have similar values or the number of actions is large.
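The key output-module computation can be written in a few lines (a sketch of the aggregation the paper describes, with made-up values and action count): Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a'), where subtracting the mean advantage keeps the value and advantage streams identifiable.

```python
import numpy as np

rng = np.random.default_rng(6)
n_actions = 6

# Outputs of the two parallel streams for one state (dummy values):
state_value = rng.standard_normal()           # V(s): scalar stream
advantages = rng.standard_normal(n_actions)   # A(s, a): one value per action

# Dueling aggregation: subtract the mean advantage so V and A are identifiable.
q_values = state_value + (advantages - advantages.mean())

print(q_values)
print(np.argmax(q_values))   # greedy action under the dueling Q estimate
```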
Opponent Modeling in Deep Reinforcement Learning
He, H., Boyd-Graber, J., Kwok, K., & Daumé III, H. (2016)
The authors introduce an extension of the deep Q-network (DQN) called Deep Reinforcement Opponent Network (DRON) for multi-agent settings, where the action outcome of the agent being controlled depends on the actions of the other agents (opponents). If the opponents use fixed policies, then standard Q-learning is sufficient.
However, opponents with non-stationary policies occur when they learn and adapt their strategies over time. In this scenario, treating the opponents as part of the world in a standard Q-learning setup masks changes in opponent behavior. Therefore, the joint policy of the opponents must be taken into consideration when defining the Q-function. The DRON architecture implements this idea by employing an opponent network to learn opponent policies and a Q-network to evaluate actions for a state. The outputs of the two networks are combined using a Mixture-of-Experts network [13] to obtain the expected Q-value. DRON outperformed DQN in simulated soccer and a trivia game by discovering different strategy patterns of opponents.
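A rough sketch of the mixture-of-experts combination step (the shapes, expert count, and gating inputs are illustrative assumptions, not the paper's network): the opponent network produces a gating distribution over K experts, and the expected Q-values are the gate-weighted sum of the experts' Q outputs.

```python
import numpy as np

rng = np.random.default_rng(7)
n_actions, n_experts = 5, 3

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Per-expert Q-value heads for the current state (dummy values):
expert_q = rng.standard_normal((n_experts, n_actions))   # Q_k(s, a)
# Gating weights produced by the opponent network from opponent features:
gate = softmax(rng.standard_normal(n_experts))           # w_k(opponent)

q_expected = gate @ expert_q    # expected Q(s, a) under the opponent model
print(q_expected, np.argmax(q_expected))
```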
Conclusions
Deep learning is experiencing a phase of rapid growth due to its strong performance in a number of domains, producing state of the art results and winning machine learning competitions. However, these successes have also contributed to a fair amount of hype. The papers presented at ICML 2016 provided an unvarnished view of a vibrant field in which researchers are working actively to overcome challenges in making deep learning techniques more powerful, and in extending their successes to other domains and larger problems.
(Thanks to olderp, iRolly, the moderators, and chenyi112982 for their kind help.)