DeepMind's latest paper: linear-time neural machine translation – Sohu Technology

Academic | DeepMind's latest paper: linear-time neural machine translation – Sohu Technology. Selected from arXiv; compiled by Synced (机器之心); contributor: Li Yazhou.

Abstract: We present a neural architecture for sequence processing. ByteNet is a stack of two dilated convolutional neural networks: one network encodes the source sequence, and the other decodes the target sequence, with the decoder dynamically unfolding on top of the encoder to generate variable-length output. ByteNet has two core properties: it runs in time that is linear in the length of the sequences, and it preserves the sequences' temporal resolution. The ByteNet decoder attains state-of-the-art results on character-level language modeling, surpassing the best results previously obtained with recurrent neural networks. ByteNet also achieves performance on raw character-level machine translation that approaches the best results of neural translation models, which run in quadratic time. The latent structure learned by ByteNet reflects the expected alignment between the sequences.

© Compiled by Synced (机器之心). Please contact the official account for authorization before reproducing.
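The linear-time claim follows from the use of dilated (causal) convolutions: each layer touches every time step a constant number of times, and stacking layers with exponentially increasing dilation grows the receptive field exponentially while the cost stays O(T) per layer. The sketch below is a minimal NumPy illustration of this idea, not the paper's implementation; the function names and the dilation schedule are illustrative assumptions.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """Causal 1-D convolution with a given dilation (illustrative sketch).

    x: (T,) input sequence; w: (K,) filter taps.
    y[t] depends only on x[t], x[t-d], ..., x[t-(K-1)*d], so one pass
    costs O(T * K) -- linear in the sequence length T.
    """
    T, K = len(x), len(w)
    y = np.zeros(T)
    for t in range(T):
        for k in range(K):
            idx = t - k * dilation  # look back k*d steps, never forward
            if idx >= 0:            # implicit zero padding at the start
                y[t] += w[k] * x[idx]
    return y

def receptive_field(kernel_size, dilations):
    # A stack of dilated convolutions with dilations d_1..d_L and
    # kernel size K sees 1 + (K-1) * sum(d_i) past time steps.
    return 1 + (kernel_size - 1) * sum(dilations)

# With kernel size 2 and dilations 1, 2, 4, 8 (a WaveNet-style doubling
# schedule), four linear-time layers already cover 16 time steps.
rf = receptive_field(2, [1, 2, 4, 8])
```

With this doubling schedule, covering a sequence of length T needs only O(log T) layers, so the whole encoder runs in O(T log T) layer work with O(T) per layer, and the output keeps the same temporal resolution as the input (no pooling or downsampling).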