Learning Efficient Algorithms with Hierarchical Attentive Memory

Authors
  • Andrychowicz, Marcin
  • Kurach, Karol
Type
Preprint
Publication Date
Feb 23, 2016
Submission Date
Feb 09, 2016
Identifiers
arXiv ID: 1602.03218
Source
arXiv

Abstract

In this paper, we propose and investigate a novel memory architecture for neural networks called Hierarchical Attentive Memory (HAM). It is based on a binary tree with leaves corresponding to memory cells, which allows HAM to perform memory access in O(log n) time, a significant improvement over the standard attention mechanism, which requires O(n) operations, where n is the size of the memory. We show that an LSTM network augmented with HAM can learn algorithms for problems like merging, sorting, or binary searching from pure input-output examples. In particular, it learns to sort n numbers in time O(n log n) and generalizes well to input sequences much longer than those seen during training. We also show that HAM can be trained to act like classic data structures: a stack, a FIFO queue, and a priority queue.
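To make the O(log n) access pattern concrete, below is a minimal, framework-free Python sketch of the idea the abstract describes: memory cells sit at the leaves of a complete binary tree, and each access walks from the root to a single leaf, touching one node per level instead of attending over all n cells. The averaging of children and the sigmoid routing rule are illustrative placeholders standing in for the paper's learned transformations (its JOIN and SEARCH networks), not the paper's actual parametrization; the class and function names are hypothetical.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class HAMSketch:
    """Toy model of HAM's access pattern: a complete binary tree in heap
    layout, with memory cells at the leaves. One access descends from the
    root to a single leaf, so it visits O(log n) nodes rather than
    attending over all n cells as standard soft attention does."""

    def __init__(self, cells):
        # Pad the cell count up to a power of two so the tree is complete.
        n = 1
        while n < len(cells):
            n *= 2
        self.n = n
        # tree[1] is the root; the leaves live at tree[n : 2n].
        self.tree = [0.0] * (2 * n)
        self.tree[n:n + len(cells)] = cells
        # Bottom-up pass: every inner node summarizes its two children.
        # The paper learns this combination (JOIN); plain averaging here
        # is a placeholder assumption, not the paper's method.
        for i in range(n - 1, 0, -1):
            self.tree[i] = 0.5 * (self.tree[2 * i] + self.tree[2 * i + 1])

    def route_right(self, node_value, query):
        # Placeholder for the learned routing decision (SEARCH): whether
        # to descend into the right child given the query.
        return sigmoid(node_value * query) > 0.5

    def access(self, query):
        # Descend from the root: exactly log2(n) binary decisions.
        i = 1
        while i < self.n:
            i = 2 * i + 1 if self.route_right(self.tree[i], query) else 2 * i
        return i - self.n, self.tree[i]

if __name__ == "__main__":
    mem = HAMSketch([0.3, -1.2, 0.7, 2.0, -0.5])
    leaf, value = mem.access(query=1.0)
    print(f"reached leaf {leaf} holding {value}")  # O(log n) node visits

The heap-array layout mirrors the structural point of the abstract: the cost of one access is the depth of the tree, O(log n), while building or refreshing the inner-node summaries is a separate bottom-up pass. In the actual architecture these decisions are soft and trained end to end; the hard threshold above is only for readability.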
