Realizing Continual Learning by Modeling the Learning System as a Fiber Bundle

The human brain has an innate capability known as continual learning, whereas today's mainstream neural networks suffer from a problem called catastrophic forgetting: abruptly and thoroughly learning a new set of patterns causes previously learned patterns to be completely forgotten. This article proposes a general learning model that treats the learning system as a fiber bundle. By comparing the proposed model's learning ability with that of traditional models across a series of machine-learning experiments, the author finds that the proposed model not only exhibits excellent continual-learning ability but also has a large information capacity. Moreover, in some learning scenarios, making the model aware of time (analogous to the episodic-memory mechanism of the human brain) further strengthens its continual-learning ability. Finally, the forgetting behavior of the proposed model corresponds well to the forgetting behavior of human memory. This work may deepen our understanding of how the human brain learns.
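As a rough illustration only (not the paper's actual construction), one way to read the fiber-bundle idea is that a layer's weights are not fixed but are generated from a context vector such as a time or task embedding: the context plays the role of the base space, and each context selects its own set of weights, like a fiber over that point. The minimal PyTorch sketch below, with the hypothetical `FiberLinear` module, shows what such a context-dependent layer could look like under that interpretation.

```python
# Hypothetical sketch of a context-dependent ("fiber-like") layer.
# This is an illustrative interpretation, not the model from the paper.
import torch
import torch.nn as nn

class FiberLinear(nn.Module):
    """A linear layer whose weights are generated from a context vector."""

    def __init__(self, in_dim, out_dim, ctx_dim):
        super().__init__()
        # A small "hypernetwork" maps the context to this layer's weights.
        self.weight_gen = nn.Linear(ctx_dim, in_dim * out_dim)
        self.bias_gen = nn.Linear(ctx_dim, out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, ctx):
        # ctx: (batch, ctx_dim) -> per-sample weight matrices (batch, out, in)
        W = self.weight_gen(ctx).view(-1, self.out_dim, self.in_dim)
        b = self.bias_gen(ctx)
        # Apply each sample's own weight matrix to its own input.
        return torch.bmm(W, x.unsqueeze(-1)).squeeze(-1) + b

if __name__ == "__main__":
    layer = FiberLinear(in_dim=8, out_dim=4, ctx_dim=3)
    x = torch.randn(5, 8)      # inputs
    t = torch.randn(5, 3)      # context, e.g. a time/task embedding
    print(layer(x, t).shape)   # torch.Size([5, 4])
```

In this reading, weights used for one context need not overwrite those used for another, which is one intuitive route to mitigating catastrophic forgetting; the paper should be consulted for the actual mechanism.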

AI-Reviser: An automated writing reviser based on cutting-edge artificial intelligence technology.

This article introduces an automated English-writing reviser (AI-Reviser) based on advanced AI/ML technology. It was developed in my spare time during the summer of 2017 and has gradually gained popularity (more than 78,000 page views in about six months), especially among students whose native language is not English. Table of Contents: Motivation, Product design, …

Deep Learning, Natural Language Processing, and Representations

Deep Learning, NLP, and Representations. Posted on July 7, 2014. Keywords: neural networks, deep learning, representations, NLP, recursive neural networks. Introduction: In the last few years, deep neural networks have dominated pattern recognition. They blew the previous state of the art out of the water for many computer vision tasks. Voice recognition is also moving that way. But despite the results, …

Neural Networks, Manifolds, and Topology

Original article: "Neural Networks, Manifolds, and Topology", http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/ Keywords: topology, neural networks, deep learning, manifold hypothesis. Recently, there's been a great deal of excitement and interest in deep neural networks because they've achieved breakthrough results in areas such as computer vision.[1] However, there remain a number of concerns about them. One is that it …