THE ML ENGINEER 🤖
Issue #63
 
 
This week in Issue #63:
 
 
Forward this email, or share the online version on 🐦 Twitter, 💼 LinkedIn and 📕 Facebook!
 
If you would like to suggest articles, ideas, papers, libraries, jobs, events or provide feedback, just hit reply or send us an email at a@ethical.institute! We have received a lot of great suggestions in the past; thank you very much for everyone's support!
 
 
 
Machine learning interpretability techniques are often studied and analysed in isolation. This post explores the powerful interfaces that arise when interpretability techniques are combined, as well as the rich structure of the resulting combinatorial space. The post provides an intuitive explanation, together with visual representations of these building blocks for machine learning interpretability.
 
 
 
The Linux Foundation has put together an excellent resource that covers key topics forming the foundations of Ethics in AI and Big Data. The course gives a brief overview of AI, as well as principles for building responsible AI, together with several initiatives and open source drivers that surround this topic. The course starts this week, so it's perfect timing to join in.
 
 
 
An incredibly insightful paper that reviews the traditional frameworks used to infer emotions from facial expressions. The article provides a (very specific but relevant) insight into the power of context in data used for machine learning, especially for something as complex as facial expressions. Most importantly, use-cases in this space involve a large number of critical considerations, including privacy, transparency and explainability.
 
 
 
The Data Exchange podcast dives into machine learning explainability and transparency with Fiddler Labs CEO Krishna Gade. The episode covers guidelines for companies that want to start incorporating model explainability into their data products, as well as the relationship between model explainability (transparency) and security (ML that can resist adversarial attacks).
 
 
 
Microsoft has put together a post that dives into the Team Data Science Process (TDSP), an agile and iterative data science methodology for delivering predictive analytics solutions and intelligent applications efficiently. This article provides an overview of TDSP and its main components.
 
 
 
 
 
 
The topic for this week's featured production machine learning libraries is Adversarial Robustness. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to add a PR. The four featured libraries this week are:
  • Alibi Detect - alibi-detect is a Python package focused on outlier, adversarial and concept drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. The outlier detection methods should allow the user to identify global, contextual and collective outliers (a short usage sketch follows this list).
  • CleverHans - a library for benchmarking adversarial attacks and defenses, maintained by some of the biggest names in adversarial ML, namely Ian Goodfellow (ex-Google Brain, now Apple) and Nicolas Papernot (Google Brain). Comes with some nice tutorials, and a short sketch below!
  • Foolbox - the second biggest adversarial library, with an even longer list of attacks, but no defenses or evaluation metrics. Geared more towards computer vision; its code is easier to understand and modify than ART's (IBM's Adversarial Robustness Toolbox), and it is also better suited to exploring black-box attacks on surrogate models (sketched below).
  • AdvBox - generates adversarial examples from the command line with zero coding, using PaddlePaddle, PyTorch, Caffe2, MXNet, Keras and TensorFlow. Includes 10 attacks and 6 defenses. Used to implement StealthTshirt at DEFCON!
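 
To make these libraries more concrete, here are a few minimal, illustrative sketches (not definitive implementations; check each project's docs for the current API). First, outlier detection with Alibi Detect's isolation forest detector, where the threshold and the synthetic data are purely illustrative:
 
    import numpy as np
    from alibi_detect.od import IForest

    rng = np.random.RandomState(0)
    X_train = rng.randn(1000, 4).astype(np.float32)  # "normal" tabular data

    od = IForest(threshold=0.0)  # illustrative decision threshold on the outlier score
    od.fit(X_train)

    # Mix five inliers with five obvious outliers shifted far from the training data.
    X_test = np.vstack([rng.randn(5, 4), 6.0 + rng.randn(5, 4)]).astype(np.float32)
    preds = od.predict(X_test)
    print(preds["data"]["is_outlier"])  # 1 flags an outlier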
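 
Next, a quick FGSM attack with CleverHans. This assumes the PyTorch interface of recent CleverHans releases (import paths have moved between versions), and the toy model is just a placeholder:
 
    import torch
    import torch.nn as nn
    from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST-shaped classifier
    x = torch.rand(8, 1, 28, 28)  # batch of dummy images in [0, 1]

    # Perturb each image by up to eps = 0.1 in the L-infinity norm (FGSM).
    x_adv = fast_gradient_method(model, x, eps=0.1, norm=float("inf"))
    print((x_adv - x).abs().max())  # stays within the eps budget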
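 
Finally, an L-infinity PGD attack with Foolbox, following the Foolbox 3 quickstart; torchvision's pretrained ResNet-18 stands in as the victim model:
 
    import torchvision.models as models
    import foolbox as fb

    model = models.resnet18(pretrained=True).eval()
    preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
    fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

    # Grab a small batch of sample images bundled with foolbox.
    images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)

    # Run the attack with a perturbation budget of 8/255 per pixel.
    attack = fb.attacks.LinfPGD()
    raw, clipped, success = attack(fmodel, images, labels, epsilons=8 / 255)
    print(f"attack success rate: {success.float().mean().item():.0%}")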
 
If you know of any libraries that are not in the "Awesome MLOps" list, please do give us a heads up or feel free to open a pull request.
 
 
 
 
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI guidelines, principles, ethics frameworks, etc. However, there are so many resources that they are hard to navigate. Because of this we started an open source initiative that aims to map the ecosystem and make it simpler to navigate. Every week we will be showcasing three resources from our list so we can check them out. This week's resources are:
  • ACM's Code of Ethics and Professional Conduct - The code of ethics put together in 1992 by the Association for Computing Machinery and updated in 2018.
  • From What to How - An initial review of publicly available AI Ethics Tools, Methods and Research to translate principles into practices.
 
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to open a pull request.
 
 
 
© 2018 The Institute for Ethical AI & Machine Learning