
THE ML ENGINEER
Issue #3

 
 
This week in Issue #3:
Extreme ML with Apache Kafka, programming explainable ML, a broad overview of computer vision, going beyond accuracy, deep slow-mo generators, guidelines for trustworthy AI, open-source explainability libraries and more!
 
Support the ML Engineer!
Forward the email, or share the online version on 🐦 Twitter, 💼 LinkedIn and 📕 Facebook!
 
If you would like to suggest articles, ideas, tutorials or libraries, or to provide feedback, just hit reply or send us an email at a@ethical.institute!
 
 
 
 
Kafka is a great framework, and one that we have been advocating for a few months now. Kai Wähner has put together a great blog post and video talk on "how to build ML models at extreme scale and productionize the built models in mission-critical real-time apps by leveraging open-source components in the public cloud".
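As a taste of the pattern Kai describes, here is a minimal sketch of real-time model scoring with the kafka-python client; the topic names, event schema and model file are illustrative assumptions, not taken from his post:

```python
import json

import joblib
from kafka import KafkaConsumer, KafkaProducer

# Hypothetical pre-trained scikit-learn model serialised with joblib.
model = joblib.load("model.joblib")

# Consume raw events, score them, and publish predictions downstream.
consumer = KafkaConsumer(
    "raw-events",                       # assumed input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    features = message.value["features"]          # assumed event schema
    prediction = model.predict([features])[0]
    producer.send("scored-events", {"prediction": float(prediction), "features": features})
```

A common design choice here is to embed the model directly in the streaming application rather than call a separate model server per event, which keeps per-message latency low.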
 
 
One of the best insights on practical approaches towards explainable machine learning, from Mark Hammond, founder of Bons.AI and now Director of Business AI at Microsoft. Mark outlines three key categories that tackle the question of explainability: Deep Explanation, Model Induction and Machine Teaching. The last one is key, namely combining machine learning with subject matter expertise (aka Principle #3, "Explainability by Justification", in our Principles for Responsible ML 🤖).
 
 
A very comprehensive blog post that provides a brief and concise overview of all things computer vision, including: classification, object detection, segmentation, pose estimation, action recognition, and enhancement and restoration.
 
 
One of our core principles advocates for statistical metrics that go beyond accuracy (Principle #6). ROC curves and precision-recall curves are great tools to open up a model and assess its performance. This Machine Learning Mastery blog post provides a great overview of the key concepts, including ROC curves, precision and recall, and more!
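As a quick refresher, here is a minimal scikit-learn sketch of both curves on a toy imbalanced dataset (the dataset and classifier are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, precision_recall_curve, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Toy dataset where roughly 90% of samples belong to the negative class.
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Use predicted probabilities, not hard labels, so we can sweep thresholds.
probs = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

# ROC curve: true positive rate vs false positive rate across thresholds.
fpr, tpr, _ = roc_curve(y_test, probs)
print("ROC AUC:", roc_auc_score(y_test, probs))

# Precision-recall curve: often more informative on imbalanced data.
precision, recall, _ = precision_recall_curve(y_test, probs)
print("PR AUC:", auc(recall, precision))
```

The precision-recall curve is usually the more informative of the two when the positive class is rare, which is exactly the regime where accuracy is misleading.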
 
 
NVIDIA is back in the spotlight thanks to a community PyTorch implementation of their paper "Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation". It uses deep learning video interpolation techniques to generate intermediate frames, resulting in ultra-slow-motion videos.
 
 
We are thrilled to announce that the European Commission has launched the draft of the AI Ethics Guidelines produced by the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG), which The Institute for Ethical AI & Machine Learning is part of. If you have time, do have a look and provide feedback through the European AI Alliance portal.
 
 
MLOps = ML Operations
We have added a new section to the Machine Learning Operations list specifically around explainability of predictions (aka Principle #3), and we are very excited about this new addition. The machine learning explainability libraries we're showcasing this week are listed below, with a short SHAP sketch after the list:
 
  • SHAP - SHapley Additive exPlanations is a unified approach to explain the output of any machine learning model.
  • LIME - Local Interpretable Model-agnostic Explanations for machine learning models.
  • ELI5 - "Explain Like I'm 5" is a Python package which helps to debug machine learning classifiers and explain their predictions.
  • TensorBoard's What-If Tool - a TensorBoard dashboard to analyse the interactions between inference results and data inputs.
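To give a flavour of how these libraries are used, here is a minimal SHAP sketch on a toy regression model; the dataset and model choice are illustrative assumptions rather than anything prescribed by the library:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative model: a random forest on the scikit-learn diabetes dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])

# Summary plot: each point shows one feature's contribution to one prediction.
shap.summary_plot(shap_values, data.data[:100], feature_names=data.feature_names)
```

Note that TreeExplainer is specific to tree ensembles; SHAP's model-agnostic KernelExplainer covers arbitrary models at a higher computational cost.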
 
If you know of any libraries that are not in the "Awesome MLOps" list, please give us a heads-up or feel free to open a pull request!