By Alisher Abdulkhaev

Issue #17: November 2020

AlphaFold: a solution to a 50-year-old grand challenge in biology

  • The latest version of AlphaFold (AlphaFold-2) has been recognised as a solution to one of biology’s grand challenges – the “protein folding problem”.
  • It was validated at CASP14, the biennial Critical Assessment of protein Structure Prediction.
  • We’re excited about the potential impact AlphaFold may have on the future of biological research and scientific discovery.

📌 Source: DeepMind Blog

NeurIPS Meetup Japan 2020

  • NeurIPS is one of the top conferences for Machine Learning and Computational Neuroscience.
  • NeurIPS Meetup is a local event hosted during the NeurIPS conference, leveraging conference videos and live local content, with a duration ranging from a few hours to a full week, and bringing together participants from one or more companies, universities, and/or the public.
  • NeurIPS Meetup Japan will take place this December.
  • The NeurIPS Meetup is organized by RIKEN AIP, MLT, Keio University, the University of Tokyo, Tokyo Institute of Technology, NTT, and MathWorks Japan.

📌 Source: NeurIPS Meetups

📌 Source: NeurIPS Meetup Japan 2020

gradslam

  • gradslam is a PyTorch based open-source framework providing differentiable building blocks for SLAM systems.
  • SLAM (simultaneous localization and mapping) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent’s location within it.
  • SLAM algorithms are used in navigation, robotic mapping and odometry for virtual reality or augmented reality.
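The core idea of differentiable SLAM is that localization can be posed as gradient-based optimization of a pose estimate. Below is a minimal toy sketch of that idea in plain numpy (recovering a 2D translation by gradient descent on an alignment loss); it is an illustration only, not gradslam's API, which builds full differentiable SLAM components on top of PyTorch.

```python
import numpy as np

# Toy "differentiable SLAM" sketch: recover an unknown 2D displacement that
# aligns an observed point cloud to a known map, by gradient descent on a
# differentiable mean-squared alignment loss.

rng = np.random.default_rng(0)
map_points = rng.normal(size=(50, 2))    # landmark positions in the map
true_shift = np.array([0.7, -0.3])       # unknown displacement to recover
observed = map_points + true_shift       # landmarks seen from the new pose

shift = np.zeros(2)                      # pose estimate to optimize
lr = 0.1
for _ in range(200):
    residual = (map_points + shift) - observed  # per-point alignment error
    grad = 2.0 * residual.mean(axis=0)          # gradient of the MSE loss
    shift -= lr * grad                          # gradient-descent pose update

print(shift)  # converges toward true_shift = [0.7, -0.3]
```

Because every step is differentiable, the same machinery lets gradients flow from a map-quality loss back into the pose (and, in gradslam, into upstream learned components).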

📌 Source: Grad SLAM

📌 Paper: ∇SLAM: Dense SLAM meets Automatic Differentiation

The Language Interpretability Tool (LIT)

  • The Language Interpretability Tool (LIT) is an open-source platform for visualization and understanding of NLP models.
  • LIT is for researchers and practitioners looking to understand NLP model behavior through a visual, interactive, and extensible tool.
  • LIT contains many built-in capabilities but is also customizable, with the ability to add custom interpretability techniques, metrics calculations, counterfactual generators, visualizations, and more.

📌 Source: The Language Interpretability Tool (LIT)

Image Expert Models

  • Image Expert Models is a collection of pre-trained image representations that have been tailored for different data distributions.
  • 48 models from the Scalable Transfer Learning with Expert Models paper have been added to TFHub, increasing the diversity of pre-trained image representations.

📌 Source: TensorFlow Hub

📌 Paper: Scalable Transfer Learning with Expert Models

MinDiff Framework

  • MinDiff is a new regularization technique, available in the TF Model Remediation library, for effectively and efficiently mitigating unfair biases when training ML models.
  • Given two sets of examples from our dataset, MinDiff penalizes the model during training for differences in the distribution of scores between the two sets. The less distinguishable the two sets are based on prediction scores, the smaller the penalty that will be applied.
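A penalty of this kind can be built from a kernel-based measure of how distinguishable two score distributions are, such as maximum mean discrepancy (MMD). The numpy sketch below illustrates that idea only; it is not the TF Model Remediation API, and the kernel choice and bandwidth here are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=0.5):
    # Pairwise Gaussian kernel between two 1-D arrays of prediction scores.
    diff = a[:, None] - b[None, :]
    return np.exp(-(diff ** 2) / (2 * bandwidth ** 2))

def mmd_penalty(scores_a, scores_b):
    # Squared maximum mean discrepancy between the two score sets:
    # small when the distributions match, large when they differ.
    k_aa = gaussian_kernel(scores_a, scores_a).mean()
    k_bb = gaussian_kernel(scores_b, scores_b).mean()
    k_ab = gaussian_kernel(scores_a, scores_b).mean()
    return k_aa + k_bb - 2 * k_ab

rng = np.random.default_rng(0)
group_a = rng.normal(0.6, 0.1, size=200)  # scores for one group
similar = rng.normal(0.6, 0.1, size=200)  # matching distribution -> small penalty
shifted = rng.normal(0.3, 0.1, size=200)  # shifted distribution -> large penalty

print(mmd_penalty(group_a, similar) < mmd_penalty(group_a, shifted))  # True
```

During training, such a penalty is added to the task loss, nudging the model toward score distributions that are harder to tell apart across the two groups.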

📌 Source: TF Model Remediation library | Google AI Blog

Apple M1 chip

  • Apple announced the new M1 chip, the first chip designed specifically for the Mac, which delivers incredible performance, custom technologies, and revolutionary power efficiency.
  • According to Apple, with a giant leap in performance per watt, every Mac with M1 is transformed into a completely different class of product: "This isn't an upgrade. It's a breakthrough."
  • According to the official TensorFlow Blog, the M1 chip enables accelerated training through a Mac-optimised version of TensorFlow together with Apple's new ML Compute framework.

📌 Source: Apple M1 | TensorFlow Blog
