A Machine Heart (Synced) original article

Authors: Jiang Siyuan, Lu Xue

Over the past year, a number of summits, from AAAI to NIPS, have focused on artificial intelligence and machine learning, and their attendance and submission numbers broadly reflect the level of activity in the field. In this article, Machine Heart surveys the 2017 AI summits: attendance, paper submissions and acceptances, the presence of Chinese researchers, and the award-winning papers. We hope readers can pick out some of this year's trends and research topics from these observations.


The article is divided into two parts. The first is an overview of the 2017 summits, covering submissions and acceptances at ten top conferences this year and the Chinese researchers represented there. The second focuses on the award-winning papers, grouped into six topics including computer vision, natural language processing, the learning process, and data issues, and summarizes the viewpoints and findings of the corresponding papers.


Overview of the top-conference papers


Paper submissions and acceptances

AAAI, CVPR, IJCAI, ICCV, and NIPS each received more than 2,000 submissions and accepted more than 600 papers this year. ICLR 2017 is the fifth edition of the conference; its acceptance rate was 38.7% last year and reached about 40% this year. KDD's acceptance rate of 18.9% is the lowest among the ten conferences covered here. (Counts may be off by ±5.)


Below we provide a brief overview of these conferences and the papers received this year.


1. General AI conferences

  • ICML is one of the top conferences in computer science. According to statistics, ICML 2017 reviewed a total of 1,676 papers and accepted 434, with an acceptance rate of 25.89%.
  • A total of 3,240 papers were submitted to NIPS this year, an all-time high, of which 678 were accepted as conference papers, an acceptance rate of 20.9%; 40 are oral papers and 112 are spotlight papers.
  • AAAI is the top annual event in artificial intelligence, focusing on AI research and development and attracting AI researchers from all over the world. AAAI 2017 received 2,571 submissions, of which 639 were accepted, an acceptance rate of just under 25%.
  • IJCAI (the International Joint Conference on Artificial Intelligence) is a top general conference in artificial intelligence, rated Class A in the list of recommended international academic conferences of the China Computer Federation. This year IJCAI received 2,540 submissions and accepted 660 of them, an acceptance rate of 26%.


2. Computer vision conferences

  • According to the 2017 edition of Google Scholar's academic metrics, CVPR (the IEEE Conference on Computer Vision and Pattern Recognition) is the most influential venue for publishing work in computer vision and pattern recognition. This year's CVPR received 2,680 valid submissions, of which 2,620 were fully reviewed. A total of 783 papers were accepted (29% of all submissions); of these, 71 were presented as long oral talks and 144 as short spotlight talks.
  • The IEEE International Conference on Computer Vision (ICCV), together with CVPR and the European Conference on Computer Vision (ECCV), is one of the three top conferences in computer vision. A total of 2,143 papers were submitted to ICCV this year, of which 621 were accepted, a 29% acceptance rate, including 45 oral and 56 spotlight presentations. According to figures announced at the conference, there were 3,107 attendees.


3. Natural language processing conferences

  • The Association for Computational Linguistics (ACL) is one of the most influential and dynamic international academic organizations in the field. This year ACL received 1,419 submissions and accepted 344, an acceptance rate of 24%.
  • EMNLP is a premier conference in natural language processing. This year EMNLP received 1,466 submissions and accepted 323, including 216 long papers and 107 short papers, an acceptance rate of 22%.


4. Deep learning conferences

ICLR is the annual event for deep learning. Deep learning pioneers Yoshua Bengio and Yann LeCun chaired the first ICLR in 2013, and after several years of development, with deep learning as hot as it is today, ICLR has become one of the must-watch events in artificial intelligence. Topics covered by ICLR include:

  • Unsupervised, semi-supervised, and supervised representation learning
  • Representation learning for planning and reinforcement learning
  • Metric learning and kernel learning
  • Sparse coding and dimensionality expansion
  • Hierarchical models
  • Optimization for representation learning
  • Learning representations of outputs or states
  • Implementation issues, parallelism, software platforms, and hardware
  • Applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field

Last year ICLR received 507 submissions and accepted 196, an acceptance rate of 38.7%. This year's review results have been released: 491 papers were submitted, of which 15 were accepted as oral papers (3%) and 183 as posters (37.3%), an overall acceptance rate of about 40%.


5. Data mining conferences

KDD is the premier international conference on data mining. KDD 2017 received a total of 1,144 submissions and accepted 216, an acceptance rate of 18.9%.


Chinese researchers at the summits

At the top conferences in computer vision, Chinese researchers can be seen everywhere. Many scholars attending are pleasantly surprised to find a large number of Chinese names on CVPR's list of accepted papers, and ICCV 2017 gave both its best paper and best student paper awards to Kaiming He and his co-authors. Below are the Chinese authors of this year's award-winning computer vision papers (incomplete statistics):

  • Gao Huang and Zhuang Liu, co-first authors of the CVPR 2017 best paper "Densely Connected Convolutional Networks", are both Chinese. Gao Huang received his PhD from Tsinghua University and is a postdoctoral fellow at Cornell University; Zhuang Liu is also from Tsinghua University. Wenda Wang, a co-author of the other best paper, "Learning from Simulated and Unsupervised Images through Adversarial Training", is a graduate of Carnegie Mellon University and currently a machine learning engineer at Apple.
  • At ICCV 2017, Facebook AI researcher Kaiming He won the best paper award and was one of the authors of the best student paper. The first author of the best student paper, "Focal Loss for Dense Object Detection", is Tsung-Yi Lin, who graduated from National Taiwan University and completed his PhD at Cornell University. In addition, the Caffe team led by Yangqing Jia won the Everingham Team Award.

In natural language processing, perhaps the biggest highlight is that five papers from China were selected as ACL 2017 outstanding papers, from Peking University, Fudan University, Tsinghua University, and the Institute of Automation of the Chinese Academy of Sciences. The details follow (incomplete statistics):


Also of note, Xing Shi, a Tsinghua University graduate, is currently a PhD student at the University of Southern California. The following five ACL 2017 outstanding papers are from China:

  • Adversarial Multi-Criteria Learning for Chinese Word Segmentation. Authors: Xinchi Chen, Zhan Shi, Xipeng Qiu, Xuanjing Huang (Fudan University)
  • Visualizing and Understanding Neural Machine Translation. Yanzhuo Ding, Yang Liu, Huanbo Luan, Maosong Sun (Tsinghua University)
  • Abstractive Document Summarization with a Graph-Based Attentional Neural Model. Jiwei Tan, Xiaojun Wan (Peking University)
  • Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme. Suncong Zheng, Feng Wang, Hongyun Bao (Institute of Automation, Chinese Academy of Sciences)
  • A Two-Stage Parsing Method for Text-Level Discourse Analysis. Yizhong Wang, Sujian Li, Houfeng Wang (Peking University)

The EMNLP 2017 best long paper, "Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints", was written by Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang, of whom Jieyu Zhao, Tianlu Wang, and Kai-Wei Chang are Chinese. First author Jieyu Zhao is a second-year doctoral student at UCLA advised by Professor Kai-Wei Chang, and her research covers natural language processing and machine learning. She received her bachelor's and master's degrees in computer science from Beihang University and completed her first year of doctoral study in computer science at the University of Virginia before moving to UCLA.

At the general AI and deep learning summits, papers by Chinese authors also frequently won awards. Notable among them is ICLR 2017's highly influential best paper on rethinking generalization, which was likewise written by a Chinese author.

  • The IJCAI 2017 best student paper, "Tag Disentangled Generative Adversarial Networks for Object Image Re-rendering", comes from the UBTECH Sydney AI Centre. Chaoyue Wang is a third-year PhD student in the Faculty of Engineering and Information Technology (FEIT) at the University of Technology Sydney and a visiting student at the UBTECH Sydney AI Centre, supervised by Professor Dacheng Tao. Dacheng Tao is UBTECH's chief AI scientist, and his UBTECH Sydney AI Centre had 13 papers accepted this year. Besides this one, the centre's paper "General Heterogeneous Transfer Distance Metric Learning via Knowledge Fragments Transfer" was among the final three candidates for the award.
  • Percy Liang, co-author of the ICML 2017 best paper "Understanding Black-box Predictions via Influence Functions", is a noted scholar of Chinese descent and a faculty member at Stanford University.
  • ICLR 2017 had three best papers. Among them, "Understanding Deep Learning Requires Rethinking Generalization" was first-authored by Chiyuan Zhang, who graduated from Zhejiang University and is now a doctoral student at MIT. Dawn Song, a co-author of "Making Neural Programming Architectures Generalize via Recursion", graduated from Tsinghua University and now works at the University of California, Berkeley.


Analysis of the award-winning papers

We counted the papers awarded at the 2017 AI summits and, after a brief screening, compiled some interesting information about their areas of focus and keywords. The distribution of the conferences' award-winning papers is shown below. As a rough figure, there were approximately 56 award winners in 2017, including classic-paper awards and papers published in other years.

In addition to a best paper and a best student paper, AAAI 2017 gave out ten other awards, including the classic paper award and the application development award. We will only analyze the topics and keywords of the two best papers, but the other winners are also interesting: the classic paper award went to work that pioneered the application of particle filtering, providing efficient and scalable methods for robot localization, and the applied award papers focused on online recruitment and on the synthesis and characterization of physical materials. Beyond the paper awards, Professor Fangzhen Lin of HKUST was named an AAAI Fellow for his significant contributions to knowledge representation, non-monotonic logic, and theories of action.

The situation at the other conferences is similar, so we likewise set aside classic papers and papers from other years. For example, this year's ICML classic paper, from 2007, focused on combining online and offline knowledge in UCT to build a strong 9x9 Go system. Techniques such as deep reinforcement learning and self-play have recently achieved great results in Go, so we will not discuss this topic in depth.

It is worth noting that support vector machines come up in several of the top conferences' classic papers. For example, the classic paper "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM" proposes a simple and effective stochastic sub-gradient descent algorithm for the optimization problem posed by support vector machines (SVMs). Another, "Training Linear SVMs in Linear Time", proposes a cutting-plane algorithm and proves that training time is O(sn) for classification problems and O(sn log(n)) for ordinal regression problems, where s is the number of non-zero features and n is the number of training samples. "Random Features for Large-Scale Kernel Machines" proposes mapping input data into a random low-dimensional feature space and then applying existing fast linear methods.
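To make the Pegasos idea concrete, here is a minimal NumPy sketch of the update described above (our own illustration, not the authors' reference implementation): at step t, sample one training example, use step size 1/(λt), and apply the sub-gradient of the hinge loss.

```python
import numpy as np

def pegasos(X, y, lam=0.01, n_iters=10000, seed=0):
    """Stochastic sub-gradient SVM solver in the spirit of Pegasos.
    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)                      # step size 1/(lambda * t)
        if y[i] * X[i].dot(w) < 1:                 # hinge loss is active
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                      # only the regularizer contributes
            w = (1 - eta * lam) * w
        # Optional projection onto the ball of radius 1/sqrt(lambda),
        # as in the paper's analysis.
        radius = 1.0 / np.sqrt(lam)
        norm = np.linalg.norm(w)
        if norm > radius:
            w *= radius / norm
    return w
```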

KDD, meanwhile, is primarily the top conference in data mining and knowledge discovery, so although it is strongly connected to artificial intelligence, we did not include it in our statistics on research topics. Thematically, this year's KDD focused mainly on sequential data and graph algorithms, and roughly 40% of the accepted papers touched on these areas. The conference's best paper explored how to learn simple structured representations: it combines crowdsourcing with recurrent neural networks to extract vector representations from product descriptions, and these learned vectors can find analogous products more accurately and quickly than traditional information retrieval methods. The best applied paper focuses on defending against Android malware: it detects malware by analyzing the different relationships between APIs to build higher-level semantic information. This year's KDD certainly offered many insights and ideas, but given our theme and focus, we did not include KDD-related information in the statistics and analysis.

Therefore, of the roughly 56 papers awarded at this year's top conferences, we discuss the topics and keywords of 32.


Distribution of research topics

Based on these 32 winning papers, we first analyzed the award-winning research topics of AAAI, ICLR, ICCV, NIPS, and the other conferences (excluding KDD). We divided the topics of these papers into six categories, which may overlap: a paper might, for example, use reinforcement learning methods to study a natural language processing problem. Note that the learning-process topic covers issues such as optimization methods, model fitting, and model validation, while the data-issues topic covers matters such as novel datasets, data privacy, and data bias. The topic distribution of some of this year's award-winning papers is shown below:

The most discussed topics were computer vision and natural language processing. These are also the most popular research areas at present, as can be seen from CVPR and ICCV, which focus on computer vision, and ACL and EMNLP, which focus on natural language processing. General conferences such as AAAI, ICML, and IJCAI focus more on the learning process and data issues. In addition, cutting-edge topics such as reinforcement learning and transfer learning are frequently mentioned in the award-winning papers across the conferences.


1. Computer vision

In computer vision, CVPR and ICCV of course contribute the most, though award-winning papers at other conferences such as IJCAI also touch on related topics. These papers mainly concern object detection, image captioning, image generation, semantic segmentation, and convolutional network architectures. The one paper awarded this year for studying convolutional architectures is "Densely Connected Convolutional Networks" from Cornell and Tsinghua University. The authors found that convolutional networks can be substantially deeper, more accurate, and more efficient to train if they contain shorter connections between layers close to the input and layers close to the output. Accordingly, they proposed the dense convolutional network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. The paper has been received very well: many researchers believe DenseNet builds on ResNet with a better dense-connectivity pattern that both makes features more robust and yields faster convergence. Although some scholars point out that DenseNet has a large memory footprint and high training cost, other researchers' tests show that it requires less memory than ResNet at inference time. The basic architecture of DenseNet is shown below:
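To illustrate the dense-connectivity pattern, here is a rough PyTorch sketch of a single dense block (our own illustration; the BN-ReLU-Conv ordering follows the paper's composite function, but the sizes are arbitrary): every layer receives the concatenation of all earlier feature maps.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """A dense block: layer i takes the concatenation of the input and
    the outputs of layers 0..i-1, and emits `growth_rate` new channels."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            channels = in_channels + i * growth_rate
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier features
            features.append(out)
        return torch.cat(features, dim=1)

# e.g. a block turning 64 input channels into 64 + 4 * 32 = 192 channels:
# block = DenseBlock(in_channels=64, growth_rate=32, num_layers=4)
```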

Beyond convolutional architectures, one of the most influential award-winning papers on semantic segmentation, or more precisely object instance segmentation, is Mask R-CNN, proposed by Kaiming He and colleagues: a simple, flexible, and efficient general framework for object instance segmentation. Mask R-CNN extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding-box recognition. As a result, the method not only detects objects in an image effectively but also generates a high-quality segmentation mask for each instance. It is worth noting that He is the first author of this year's best paper and a co-author of the best student paper; counting the CVPR 2009 and CVPR 2016 best papers as well, he has four best-paper awards at top computer vision conferences.

The Mask R-CNN framework
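The parallel-branch idea can be sketched in a few lines of PyTorch (a simplification of ours, not the paper's exact head design; the paper applies RoIAlign at different resolutions for the box and mask heads):

```python
import torch.nn as nn

class RoIHeads(nn.Module):
    """Class/box branches as in Faster R-CNN, plus a parallel fully
    convolutional branch that predicts a per-class binary mask."""
    def __init__(self, channels, num_classes):
        super().__init__()
        flat = channels * 7 * 7                       # 7x7 RoI features
        self.cls = nn.Linear(flat, num_classes)       # class scores
        self.box = nn.Linear(flat, num_classes * 4)   # box regression deltas
        self.mask = nn.Sequential(                    # parallel mask branch
            nn.Conv2d(channels, 256, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 256, 2, stride=2), nn.ReLU(),
            nn.Conv2d(256, num_classes, 1),           # one mask per class
        )

    def forward(self, roi_feats):                     # (N, C, 7, 7) from RoIAlign
        flat = roi_feats.flatten(1)
        return self.cls(flat), self.box(flat), self.mask(roi_feats)
```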


Among computer vision research topics, the one most discussed by this year's winning papers may be object detection. In "YOLO9000: Better, Faster, Stronger", the authors propose the YOLOv2 and YOLO9000 detection systems. YOLOv2 greatly improves the YOLO model and achieves better results at very high FPS, while YOLO9000 is an architecture that can detect over 9,000 object classes in real time, mainly thanks to a WordTree representation that mixes object detection and image classification datasets for joint training. In "Focal Loss for Dense Object Detection", the researchers propose a new focal loss that concentrates training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. They show that RetinaNet trained with the focal loss can match the speed of one-stage detectors on object detection tasks while being more accurate than the best two-stage detectors.
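The focal loss itself is compact enough to state directly. A hedged PyTorch sketch of the binary form described in the paper, FL(p_t) = -α_t (1 - p_t)^γ log(p_t), with the paper's default α = 0.25 and γ = 2:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss. `targets` are floats in {0.0, 1.0}.
    Easy examples (p_t near 1) are down-weighted by (1 - p_t)^gamma."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    loss = alpha_t * (1 - p_t) ** gamma * ce
    # The paper normalizes the sum by the number of foreground anchors.
    return loss.sum()
```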

Image generation is another topic among this year's winning papers. For example, Apple's "Learning from Simulated and Unsupervised Images through Adversarial Training" proposes a simulated-plus-unsupervised learning method and shows significant improvements in the use of synthetic images. Another paper, "Tag Disentangled Generative Adversarial Networks for Object Image Re-rendering", proposes the tag disentangled generative adversarial network (TDGAN). TDGAN re-renders a new image of an object of interest from a single image by specifying multiple scene attributes, such as viewpoint, lighting, and expression. Given an input image, the disentangling network extracts disentangled, interpretable representations, which are then fed into the generative network to produce the image.


2. Natural language processing

Natural language processing (NLP) is the other research area receiving great attention besides computer vision; there are even more winning papers on NLP this year than on computer vision. ACL and EMNLP naturally contribute the most here, and this year's winning papers cover a wide range of topics, including machine translation, speech register, word segmentation, language generation, and NLP-related data problems. It is worth noting that NLP has as many compelling applications as computer vision, most notably neural machine translation. While last year saw significant advances in neural machine translation, this year many researchers worked to improve its performance through encoder-decoder architectures, attention mechanisms, reinforcement learning methods, and even the LSTM and GRU structures themselves. Many other aspects of NLP have also improved greatly. Below we introduce this year's NLP winning papers.

Many of this year's NLP award winners lean toward linguistics. For instance, in "Probabilistic Typology: Deep Generative Models of Vowel Inventories", the researchers describe a family of deep stochastic point processes and compare them with previous computational, simulation-based approaches; the paper presents the first probabilistic treatment of a fundamental problem in phonological typology. It aims to learn, with deep neural networks, a trainable probabilistic generative distribution over vowel spaces, so as to study the dispersion and focalization of vowels in linguistic typology. At the end of the ACL conference, the authors remarked that NLP tools should be a means of scientific research rather than just engineering artifacts; this paper is an attempt to combine deep learning with traditional NLP research in that spirit. In addition, "The Role of Prosody and Speech Register in Word Segmentation: A Computational Modelling Perspective" explores the role of speech register and prosody in word segmentation tasks. The authors found that the difference between registers is smaller than previously thought, and that prosodic boundary information helps segmentation more for adult-directed speech than for infant-directed speech.

Another very interesting winning NLP paper this year is "Hafez: An Interactive Poetry Generation System", which proposes an automatic poetry generation system that integrates a recurrent neural network (RNN) with a finite-state acceptor (FSA), so it can generate sonnets on any given topic. Hafez also lets users modify and polish the generated poems by adjusting various style configurations.
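In very rough terms, the RNN proposes the next word while the FSA rules out words that would break the poem's formal constraints (rhyme, meter). A greatly simplified greedy sketch of that interaction (the real system performs beam search over the FSA; `rnn_step` and `fsa` here are hypothetical interfaces of ours):

```python
import numpy as np

def constrained_decode(rnn_step, fsa, vocab, state, fsa_state, max_len=20):
    """Greedy decoding where an FSA masks out words that would violate
    the formal constraints encoded in its transitions."""
    words = []
    for _ in range(max_len):
        probs, state = rnn_step(state)        # RNN's distribution over vocab
        allowed = np.array([fsa.allows(fsa_state, w) for w in vocab], float)
        masked = probs * allowed              # zero out forbidden words
        if masked.sum() == 0:                 # dead end under the constraints
            break
        idx = int(np.argmax(masked))
        words.append(vocab[idx])
        fsa_state = fsa.step(fsa_state, vocab[idx])
    return words
```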

In addition to the research results above, some NLP papers won awards for important contributions in data or tooling. Datasets, data bias, and corpora are discussed in detail in a later section, since beyond NLP, topics such as image captioning also raise data issues in the winning papers. An open-source neural machine translation toolkit from Harvard's NLP group exemplifies the engineering side. In "OpenNMT: Open-Source Toolkit for Neural Machine Translation", the researchers introduce an open-source toolkit for neural machine translation that prioritizes efficiency, modularity, and extensibility, supporting NMT research into model architectures and feature representations in open-source form. The Harvard NLP group states on its website that the system has reached production-ready quality.

OpenNMT can be used as a production system by major translation service providers. The system is simple to use and easy to extend while maintaining efficiency and state-of-the-art translation accuracy. Its features include:

  • A simple, general-purpose interface requiring only source and target files;
  • Speed and memory optimizations for high-performance GPU training;
  • Features drawn from the latest research that improve translation performance;
  • Pre-trained models for multiple language pairs;
  • Extensibility to other sequence generation tasks, such as summarization and image-to-text generation.


3. Learning process

In our classification, the learning process is a very broad research area that includes optimization methods, training procedures, maximum likelihood estimation and other loss constructions, generalization, black-box problems, and similar topics. Strictly speaking, the learning process refers to training or optimization, but here we stretch the concept to cover problems common to machine learning models in general, such as the black-box problem, random perturbations, and new validation methods. This line of work has indeed received growing attention recently, with many papers asking whether there are better gradient descent methods, better model interpretations, or better parameter estimation methods, a tendency also reflected in this year's award-winning papers. In total we grouped seven winning papers into this category; they examine machine learning models from several angles and are well worth your attention.

In fact, the paper rated second-highest in the ongoing ICLR 2018 review examines optimization methods in detail. In "On the Convergence of Adam and Beyond", the researchers observe that RMSProp, Adam, AdaDelta, and Nadam all rest on an exponential moving average of the squares of past gradients, whose square root is used to scale the current gradient when updating the weights. The paper shows that, precisely because of this exponential moving average, these algorithms sometimes fail to converge to the optimal solution (or to a critical point in non-convex settings). The researchers therefore propose a new variant of Adam that fixes the convergence problem by endowing the algorithm with "long-term memory" of past gradients. In the NIPS 2017 best paper "Variance-based Regularization with Convex Objectives", the researchers study a risk minimization and stochastic optimization method that provides a convex surrogate for variance, allowing a trade-off between approximation and estimation error. They show that the procedure comes with certificates of optimality and, by trading off approximation and estimation error well, achieves faster rates of convergence than empirical risk minimization in fairly general settings. In short, the former paper exposes the limitations of Adam and related algorithms and proposes an improvement, while the latter directly proposes a method that can outperform standard empirical risk minimization on many classification problems.

Performance comparison of Adam and AMSGrad on a simple one-dimensional synthetic example
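The proposed variant, AMSGrad, changes Adam in essentially one line: the second-moment estimate is never allowed to shrink, which is what gives the "long-term memory" described above. A minimal NumPy sketch of one update step (our illustration; bias correction is omitted for brevity, as in the paper's analysis):

```python
import numpy as np

def amsgrad_step(w, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update. `state` holds the moments m, v and the
    running maximum v_hat; initialize all three to zeros like w."""
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    # The fix over Adam: keep the element-wise max of all v's seen so far.
    state["v_hat"] = np.maximum(state["v_hat"], state["v"])
    return w - lr * state["m"] / (np.sqrt(state["v_hat"]) + eps)
```

Plain Adam would scale by sqrt(v) instead of sqrt(v_hat), which is what lets its effective learning rate grow again and break convergence on the paper's counterexamples.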


Optimization is the canonical learning-process topic, but generalization and black-box problems are also closely tied to the learning process: for example, how to keep a model from overfitting during training, or how to interpret a model's hyperparameters and learned parameters. In "Understanding Deep Learning Requires Rethinking Generalization", the authors note that conventional thinking attributes small generalization error to properties of the model family or to regularization techniques used during training, but these traditional explanations cannot account for the good generalization of large neural networks in practice. Through theoretical construction and empirical study, the authors show that as long as the number of parameters exceeds the number of data points, a simple two-layer neural network already has perfect finite-sample expressivity. Similarly, in "Understanding Black-box Predictions via Influence Functions", the researchers use influence functions, a classic technique from robust statistics, to trace a model's predictions back through the learning algorithm to the training data, thereby identifying the training points most responsible for a given prediction. They show that even on non-convex and non-differentiable models where the theory fails, approximations to the influence function can still provide valuable information for understanding black-box model predictions.
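For reference, the central quantity in the influence functions paper, in its notation, is the influence of up-weighting a training point z on the loss at a test point z_test:

```latex
\mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}})
  = -\,\nabla_\theta L(z_{\mathrm{test}}, \hat\theta)^{\top}
     H_{\hat\theta}^{-1}\,
     \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat\theta)
```

Ranking training points by this score surfaces the examples most responsible for a given prediction without retraining the model once per point.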


4. Data issues

This year's conferences indeed paid attention to data-related issues such as data bias, data privacy, and large datasets. These topics can be roughly divided into two parts: new datasets, corpora, and knowledge bases on the one hand, and the characteristics and problems of the data itself on the other. Several datasets were proposed this year; readers may be familiar with Fashion-MNIST, which aims to replace MNIST, and with the new generation of datasets Facebook built for StarCraft AI research. Such strong datasets push deep learning and machine learning forward. In addition, large companies such as Apple and Microsoft are thinking further about data privacy. Microsoft, for example, introduced PrivTree this year, which uses a differential privacy algorithm to protect location privacy, while Apple's differential privacy work defines privacy in mathematically rigorous terms, based on the idea that carefully calibrated noise can hide user data. This year's IJCAI and EMNLP summits also had award-winning papers on data topics.
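The core idea behind such differential privacy mechanisms fits in a few lines. A minimal sketch of the classic Laplace mechanism (a standard construction, not the specific algorithm inside PrivTree or Apple's system): noise scaled to the query's sensitivity divided by the privacy budget ε makes the released value ε-differentially private.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release `true_value` with Laplace noise of scale sensitivity/epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# e.g. releasing a count over user records (sensitivity 1, epsilon = 0.5):
# private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```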

In "A Corpus of Natural Language for Visual Reasoning", the researchers propose a new visual reasoning language dataset containing 92,244 pairs of natural-language descriptions of synthetic images (3,962 unique sentences). The dataset demonstrates linguistic phenomena that mostly require set-theoretic reasoning, so it should remain challenging in future studies. In "YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia", an extension of the YAGO knowledge base, the researchers show that it is built automatically from Wikipedia, GeoNames, and WordNet and covers 447 million facts about 9.8 million entities. Human assessment confirmed 95% of these facts.

The visual reasoning language dataset of Alane Suhr et al.


On data bias and data privacy: "Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints" investigates the data and models associated with multi-label object classification and visual semantic role labeling. The authors found that the datasets for these tasks contain significant gender bias, and that models trained on them amplify that bias: for example, cooking was 33% more likely to involve women than men in the training set, and the trained model amplified the disparity to 68% on the test set. The researchers therefore propose injecting corpus-level constraints to calibrate existing structured prediction models, with an inference algorithm based on Lagrangian relaxation. Separately, researchers at Google Brain and elsewhere note in "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data" that models may unintentionally and implicitly memorize some of their training data, so careful analysis can reveal sensitive information. To address this, they propose Private Aggregation of Teacher Ensembles (PATE), which combines, in a black-box fashion, multiple models trained on disjoint datasets. Because these models rely on sensitive data they are not published, but they serve as "teachers" for a "student" model; even an attacker who can query the student and inspect its internal workings cannot directly access the parameters or data of any single teacher.
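PATE's aggregation step is simple to picture: each teacher votes for a label, Laplace noise is added to the vote counts, and the noisy winner becomes the student's training label. A minimal sketch under those assumptions (the `teachers` objects are hypothetical; the paper adds Lap(1/γ) noise to the counts, with γ controlling the privacy/accuracy trade-off):

```python
import numpy as np

def noisy_teacher_vote(teacher_preds, num_classes, gamma=0.05, rng=None):
    """Aggregate teacher votes with Laplace noise and return the winner."""
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, 1.0 / gamma, size=num_classes)  # Lap(1/gamma)
    return int(np.argmax(counts))

# e.g. 250 teachers trained on disjoint shards label one student query x:
# votes = np.array([t.predict(x) for t in teachers])
# label = noisy_teacher_vote(votes, num_classes=10)
```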


5. Other topics

Many of the winning papers at this year's conferences also focused on reinforcement learning and applications. On the reinforcement learning side, "The Option-Critic Architecture" argues that temporal abstraction is key to scaling up learning and planning in reinforcement learning. The authors derive policy gradient theorems for options and propose a new option-critic architecture that can learn both the internal policies and the termination conditions of options, without providing any additional rewards or subgoals. On the application side, "Making Neural Programming Architectures Generalize via Recursion" proposes augmenting neural architectures with recursion, implemented in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. The authors conclude that concepts such as recursion are necessary for neural architectures to learn program semantics robustly.


Conclusion

This year machine learning, and deep learning in particular, was remarkably active, as the submission and attendance numbers attest. On the first day of NIPS 2017, for example, the registration line at the Long Beach Convention Center was "long enough for you to read a few papers." All of these well-known summits show that this is a time when a great many ideas and possibilities can be realized through research and discussion. Finally, we wish you all the best in the new year in implementing your ideas and leaving a mark on the thriving AI field and machine learning community. Machine Heart will continue to observe academic conferences in depth in 2018 and showcase the most lovable aspects of this booming field.

