AI DIGEST #APRIL20

By Alisher Abdulkhaev and Suzana Ilić

Issue #10: April 2020

  • Facebook AI releases Blender
  • Virtual ICLR 2020
  • Facebook and AWS introduce TorchServe
  • PyTorch v1.5 release
  • MONAI: An Open Source AI Framework for Healthcare Research
  • The AI For Medicine Specialization
  • OpenAI Microscope
  • We lost John Horton Conway
  • ML Code Completeness Checklist
  • Image Matching Benchmark and Challenge
  • Waymo: Automated Data Augmentation
  • ACM Prize in Computing Awarded to AlphaGo Lead David Silver

Facebook AI releases Blender

Facebook AI has built and open-sourced Blender, the largest-ever state-of-the-art open-domain chatbot. It outperforms others in terms of engagement and also feels more human, according to human evaluators. The team pretrained large Transformer neural networks (up to 9.4 billion parameters) on large amounts of conversational data.

The team recently introduced a novel task called Blended Skill Talk (BST) for training and evaluating desirable chatbot skills. BST consists of the following skills, leveraging previous research:

  • Engaging use of personality (PersonaChat)
  • Engaging use of knowledge (Wizard of Wikipedia)
  • Display of empathy (Empathetic Dialogues)
  • Ability to blend all three seamlessly (BST)

Find all information, including the paper and code, here.


Virtual ICLR 2020

Initially planned to take place in Addis Ababa, Ethiopia, ICLR 2020 was organized as a fully virtual conference this year due to COVID-19, with many highlights such as virtual paper visualization and exploration tools.


Facebook and AWS introduce TorchServe

TorchServe: a PyTorch model serving framework

Facebook and AWS announced TorchServe, a new model-serving framework for deploying PyTorch machine learning models at scale without custom code. TorchServe is a collaboration between AWS and Facebook, and it’s available as part of the PyTorch open source project.
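As a rough sketch of what serving looks like once a model has been registered with a running TorchServe instance, the snippet below queries the REST inference API from Python (the model name and input file are hypothetical placeholders; see the Quick Start guides below for the full packaging and startup workflow):

```python
# Minimal sketch: send an image to a model already served by TorchServe.
# TorchServe exposes inference at POST /predictions/{model_name} (port 8080 by default).
import requests

with open("kitten.jpg", "rb") as f:          # hypothetical input image
    payload = f.read()

resp = requests.post(
    "http://localhost:8080/predictions/densenet161",  # "densenet161" is a placeholder model name
    data=payload,
)
print(resp.json())  # e.g. class probabilities returned by the model's handler
```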

Basic Features

  • Serving Quick Start – Basic server usage tutorial
  • Model Archive Quick Start – Tutorial that shows you how to package a model archive file.
  • Installation – Installation procedures
  • Serving Models – Explains how to use torchserve.
  • REST API – Specification of the TorchServe REST API endpoints
  • Packaging Model Archive – Explains how to package a model archive file using model-archiver.
  • Logging – How to configure logging
  • Metrics – How to configure metrics
  • Batch inference with TorchServe – How to create and serve a model with batch inference in TorchServe

Advanced Features

  • Advanced settings – Describes advanced TorchServe configurations.
  • Custom Model Service – Describes how to develop custom inference services.
  • Unit Tests – Housekeeping unit tests for TorchServe.
  • Benchmark – Use JMeter to put TorchServe through its paces and collect benchmark data.

PyTorch v1.5 release

PyTorch v1.5 highlights: a new autograd API for computing Hessians and Jacobians, a stable C++ frontend with 100% parity with Python, better GPU and CPU performance via the 'channels last' tensor memory format, stable distributed.rpc, and custom C++ class binding.
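A quick sketch of two of these additions, the functional autograd API and the channels-last memory format (the function and tensors below are illustrative):

```python
import torch
from torch.autograd.functional import jacobian, hessian

# New functional autograd API: compute Jacobians and Hessians directly
def f(x):
    return (x ** 2).sum()

x = torch.randn(3)
print(jacobian(f, x))  # equals 2 * x, shape (3,)
print(hessian(f, x))   # equals 2 * identity, shape (3, 3)

# 'channels last' memory format for 4D NCHW tensors (can speed up convolutions)
img = torch.randn(8, 3, 224, 224).to(memory_format=torch.channels_last)
print(img.is_contiguous(memory_format=torch.channels_last))  # True
```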


MONAI: An Open Source AI Framework for Healthcare Research

NVIDIA and King’s College London announced MONAI, an open source AI framework for healthcare research: a domain-specific, PyTorch-based project that aids researchers developing AI in healthcare.

MONAI is user-friendly, delivers reproducible results and is domain-optimized for the demands of healthcare data — equipped to handle the unique formats, resolutions and specialized meta-information of medical images. The first public release provides domain-specific data transforms, neural network architectures and evaluation methods to measure the quality of medical imaging models.
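As a rough illustration of what these domain-specific pieces look like in code (exact class names and arguments may differ between MONAI releases, so treat this as a sketch rather than a reference):

```python
import numpy as np
import torch
from monai.transforms import Compose, AddChannel, ScaleIntensity, ToTensor
from monai.networks.nets import UNet

# Illustrative preprocessing pipeline for a single-channel medical image volume
preprocess = Compose([AddChannel(), ScaleIntensity(), ToTensor()])

# A 3D U-Net for segmentation, built from MONAI's network definitions
net = UNet(
    dimensions=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)

volume = np.random.rand(96, 96, 96).astype("float32")  # stand-in for a CT/MRI volume
x = preprocess(volume).unsqueeze(0)                     # add a batch dimension
with torch.no_grad():
    logits = net(x)
print(logits.shape)  # torch.Size([1, 2, 96, 96, 96])
```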

End-to-end process pipeline

GitHub | Project Website
Source: NVIDIA Blog


The AI For Medicine Specialization

deeplearning.ai (founded by Andrew Ng) announced its new three-course AI for Medicine Specialization:

  • Course 1: AI For Medical Diagnosis
  • Course 2: AI For Medical Prognosis
  • Course 3: AI For Medical Treatment

The courses are available on Coursera. Each course typically takes 3-4 weeks to complete, at 4-6 hours per week.

In this Specialization, you’ll gain practical experience applying machine learning to concrete problems in medicine. You’ll learn how to:

  • Diagnose diseases from x-rays and 3D MRI brain images
  • Predict patient survival rates more accurately using tree-based models
  • Estimate treatment effects on patients using data from randomized trials
  • Automate the task of labeling medical datasets using natural language processing

Source: deeplearning.ai


OpenAI Microscope

OpenAI introduced Microscope – a collection of visualizations of layers and neurons of several common deep learning models that are often studied in interpretability. Microscope makes it easier to analyze the features that form inside these neural networks.

Source: OpenAI


John Horton Conway dies aged 82

John Horton Conway was an English mathematician active in the theory of finite groups, knot theory, number theory, combinatorial game theory and coding theory. Conway spent the first half of his long career at the University of Cambridge in England, and the second half at Princeton University in New Jersey, where he held the title John von Neumann Professor Emeritus.

John Conway was perhaps best known for his invention, the Game of Life. The Game of Life is a cellular automaton and a zero-player game, meaning that its evolution is determined entirely by its initial state and requires no further input.
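The rules are simple enough to sketch in a few lines of Python (this follows the standard formulation of the rules, not any particular implementation): a live cell survives with two or three live neighbours, and a dead cell with exactly three live neighbours becomes alive.

```python
from itertools import product

def step(live):
    """Advance one generation. `live` is a set of (row, col) tuples of live cells."""
    counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                counts[(r + dr, c + dc)] = counts.get((r + dr, c + dc), 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between horizontal and vertical
blinker = {(1, 0), (1, 1), (1, 2)}
print(step(blinker))  # {(0, 1), (1, 1), (2, 1)}
```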

Source: John Horton Conway | Conway’s Game of Life


ML Code Completeness Checklist

Papers with Code, which collects paper implementations in one place, compiled best practices from various popular research repositories into the ML Code Completeness Checklist. The checklist is now part of the official NeurIPS 2020 code submission process.

The ML Code Completeness Checklist assesses a code repository for:

  • Dependencies
  • Training scripts
  • Evaluation scripts
  • Pretrained models
  • Results

Source: ML Code Completeness Checklist
GitHub: paperswithcode, releasing-research-code


Image Matching Benchmark and Challenge

The second Image Matching Challenge, seeking the best end-to-end solutions for 3D image reconstruction, has been announced. The winners will present their approaches at the Local Features and Beyond workshop at CVPR 2020.

Reconstructing 3D objects and buildings from a series of images is a well-known problem in computer vision, known as Structure-from-Motion (SfM). It has diverse applications in photography and cultural heritage preservation and powers many services across Google Maps, such as the 3D models created from StreetView and aerial imagery.

The authors hope this benchmark, dataset and challenge will help advance the state of the art in 3D reconstruction with heterogeneous images.

Source: Google AI
Challenge: Image Matching Challenge – 2020
Paper: Image Matching across Wide Baselines: From Paper to Practice


Waymo: Automated Data Augmentation

Building a new augmentation strategy for lidar point clouds

“Each augmentation operation is associated with a probability and specific parameters. For example, the GroundTruthAugmentor has parameters denoting the probability for sampling vehicles, pedestrians, cyclists, whereas the GlobalTranslateNoise operation has parameters for the distortion magnitude of translation operation on x, y and z coordinates.

To automate the process of finding good augmentation policies for lidar point clouds, we created a new automated data augmentation algorithm – Progressive Population Based Augmentation (PPBA). PPBA builds on our previous Population Based Training (PBT) work, where we train neural nets with evolutionary computation, which uses principles similar to Darwin’s Natural Selection Theory. PPBA learns to optimize augmentation strategies effectively and efficiently by narrowing down the search space at each population iteration and adopting the best parameters discovered in past iterations.”
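As a purely conceptual sketch of the population-based idea described above (not Waymo's implementation, and with a made-up objective), each candidate augmentation policy is scored by a short training run, and the next population mutates around the best performer within a range that shrinks over iterations:

```python
import random

def train_and_evaluate(policy):
    """Stand-in for a short training run that returns a validation score."""
    # Made-up objective: pretend the ideal translate-noise magnitude is 0.3
    return -abs(policy["translate_noise"] - 0.3) + random.gauss(0, 0.01)

population = [{"translate_noise": random.uniform(0.0, 1.0)} for _ in range(8)]
search_width = 0.5
best = population[0]

for iteration in range(5):
    best = max(population, key=train_and_evaluate)
    # Progressive narrowing: mutate around the best policy within a shrinking range
    search_width *= 0.7
    population = [best] + [
        {"translate_noise": min(1.0, max(0.0,
            best["translate_noise"] + random.uniform(-search_width, search_width)))}
        for _ in range(7)
    ]

print("best policy found:", best)
```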

Source: Waymo


ACM Prize in Computing Awarded to AlphaGo Lead David Silver

David Silver has been named the recipient of the 2019 ACM Prize in Computing for breakthrough advances in computer game-playing using deep reinforcement learning. Silver is a Professor at University College London and a Principal Research Scientist at DeepMind. His most highly publicized achievement was leading the team that developed AlphaGo, which defeated the world champion at the game of Go. Silver developed AlphaGo by combining ideas from deep learning, reinforcement learning, traditional tree search and large-scale computing. AlphaGo is recognized as a milestone in AI research and was ranked by New Scientist magazine as one of the top 10 discoveries of the last decade.

ACM Prize for David Silver



Support MLT on Patreon. 💙
