Issue #13: July 2020
- IEEE CIS Neural Networks Pioneer Award 2021
- GPT-3 codes up a website for you
- ICML 2020 Outstanding Paper Awards
- CVPR, ICCV, WACV videos
- ACL 2020 Best Paper Award
- Deep Learning with PyTorch
- MIT takes down Tiny Images dataset due to offensive content
- International Symposium on Artificial Intelligence and Brain Science
IEEE CIS Neural Networks Pioneer Award 2021
IARAI’s very own Sepp Hochreiter has received the prestigious Neural Networks Pioneer Award 2021 for his contributions to the development of the Long Short-Term Memory (LSTM) architecture. The award is presented annually by the Computational Intelligence Society (CIS) of the Institute of Electrical and Electronics Engineers (IEEE). The Neural Networks Pioneer Award recognizes groundbreaking contributions to early concepts and sustained developments in the field of neural networks. The prize includes a plaque, a US$2,500 honorarium, and travel support for the recipient and one companion to attend the award presentation at a major IEEE CIS-sponsored conference in 2021.
LSTM (Long Short-Term Memory) is an artificial recurrent neural network architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections, so it can process not only single data points but entire sequences of data. LSTM networks are well suited to classifying, processing, and making predictions based on time series data.
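At its core, an LSTM cell combines a persistent cell state with forget, input, and output gates. A minimal pure-Python sketch of one cell step with scalar state (toy weights chosen for illustration only; real implementations such as PyTorch's `torch.nn.LSTM` are vectorized over many hidden units):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM cell step; w holds scalar weights/biases per gate."""
    f = sigmoid(w["f_x"] * x + w["f_h"] * h_prev + w["f_b"])    # forget gate
    i = sigmoid(w["i_x"] * x + w["i_h"] * h_prev + w["i_b"])    # input gate
    o = sigmoid(w["o_x"] * x + w["o_h"] * h_prev + w["o_b"])    # output gate
    g = math.tanh(w["g_x"] * x + w["g_h"] * h_prev + w["g_b"])  # candidate update
    c = f * c_prev + i * g   # cell state: keep a gated part of the old, add new
    h = o * math.tanh(c)     # hidden state exposed to the next layer/step
    return h, c

# Toy weights: all gains 1.0, all biases 0.0.
w = {k: 1.0 for k in ("f_x", "f_h", "i_x", "i_h", "o_x", "o_h", "g_x", "g_h")}
w.update({"f_b": 0.0, "i_b": 0.0, "o_b": 0.0, "g_b": 0.0})

# Process a short sequence, carrying (h, c) across steps -- this carried
# state is the "feedback connection" that feedforward networks lack.
h, c = 0.0, 0.0
for x in [0.5, -0.1, 0.3]:
    h, c = lstm_step(x, h, c, w)
```

The additive update of `c` (rather than repeated multiplication) is what lets gradients flow across long sequences without vanishing, which is the architecture's key idea.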
Sharif Shameem demonstrated how you can give instructions in natural language to the recently released GPT-3 API (OpenAI) and the system will generate near-perfect code for you, shown with the example of the Google homepage.
Source: Twitter @sharifshameem
🚀Find a collection of more amazing GPT-3 demos here.
Organizers of the 37th International Conference on Machine Learning (ICML) have announced their Outstanding Paper awards, recognizing papers from the current conference that are “strong representatives of solid theoretical and empirical work in our field.”
This year’s acceptance rate of 21.8 percent (1,088/4,990) is slightly lower than 2019’s 22.6 percent (774/3,424).
Outstanding Paper Awards:
- On Learning Sets of Symmetric Elements | Authors: H. Maron, O. Litany, G. Chechik, E. Fetaya
- Tuning-free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems | Authors: K. Wei, A. Aviles-Rivero, J. Liang, Y. Fu, C. Schönlieb, H. Huang
Outstanding Paper (Honorable Mentions):
- Efficiently sampling functions from Gaussian process posteriors | Authors: J. Wilson, S. Borovitskiy, A. Terenin, P. Mostowsky, M. Deisenroth
- Generative Pretraining from Pixels | Authors: M. Chen, A. Radford, R. Child, J. K Wu, H. Jun, D. Luan, I. Sutskever
CVPR, ICCV, and WACV videos are freely available on the CVF (Computer Vision Foundation) YouTube channel.
You can also find the list of conference links here.
The ACL 2020 Best Paper Award went to “Beyond Accuracy: Behavioral Testing of NLP Models with CheckList” by Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. [paper]
Honorable mentions:
- “Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics” by Nitika Mathur, Timothy Baldwin, and Trevor Cohn
- “Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks” by Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith [paper]
📌 ACL 2020
The full version of the “Deep Learning with PyTorch” book by Luca Antiga, Eli Stevens, and Thomas Viehmann is now available.
Deep Learning with PyTorch provides a detailed, hands-on introduction to building and training neural networks with PyTorch, a popular open source machine learning framework. This full book includes:
- Introduction to deep learning and the PyTorch library
- Pre-trained networks
- Tensors
- The mechanics of learning
- Using a neural network to fit data
- Using convolutions to generalize
- Real-world examples: building a neural network designed for cancer detection
- Deploying to production
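“The mechanics of learning” the book covers reduces to gradient descent on a loss function. A minimal sketch in plain Python (hypothetical toy data generated from y = 2x + 1, with gradients derived by hand rather than via PyTorch’s autograd):

```python
# Toy data from y = 2x + 1; the model should recover w ~ 2, b ~ 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]

w, b = 0.0, 0.0  # parameters to learn
lr = 0.01        # learning rate

for _ in range(5000):
    # Gradients of the mean-squared-error loss with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # Gradient descent step: move each parameter against its gradient.
    w -= lr * grad_w
    b -= lr * grad_b
```

In PyTorch the same loop would use tensors, `loss.backward()` to compute the gradients automatically, and an optimizer to apply the update, but the underlying mechanics are exactly these two lines.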
MIT takes down the 80 Million Tiny Images dataset due to offensive content.
From the official MIT statement:
“The dataset is too large (80 million images) and the images are so small (32 x 32 pixels) that it can be difficult for people to visually recognize its content. Therefore, manual inspection, even if feasible, will not guarantee that offensive images can be completely removed. We therefore have decided to formally withdraw the dataset. We ask the community to refrain from using it in future and also delete any existing copies of the dataset that may have been downloaded.
Why it is important to withdraw the dataset: biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community — precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data. Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”
The dataset was created in 2006 by taking 53,464 different nouns from WordNet and automatically downloading images of each noun from the Internet search engines of the time, collecting the 80 million images (at 32×32 resolution).
📌 Source: Official statement by MIT professors Bill Freeman and Antonio Torralba and NYU professor Rob Fergus published on the MIT CSAIL website https://groups.csail.mit.edu/vision/TinyImages/
📌 Image source
Recent advances in “deep learning” have produced artificial intelligence (AI) that surpasses humans in certain tasks, such as visual object recognition and game playing. Today’s AI, however, still lacks the versatility and flexibility of human intelligence, which motivates AI researchers to study the brain’s working principles. Neuroscientists, in turn, need the help of AI to make sense of massive data from sequencing, imaging, and so forth. The aim of this symposium is to bring together researchers advancing the forefront of AI and neuroscience, to identify the next targets in creating brain-like intelligence and in further advancing neuroscience.
📌 Date: Saturday 10th October to Monday 12th October 2020
📌 Source: Correspondence and Fusion of Artificial Intelligence and Brain Science