

THE ML ENGINEER 🤖
Issue #105
 
This week in Issue #105:
 
 
Forward this email, or share the online version on 🐦 Twitter, 💼 LinkedIn and 📕 Facebook!
 
If you would like to suggest articles, ideas, papers, libraries, jobs, events or provide feedback, just hit reply or send us an email at a@ethical.institute! We have received a lot of great suggestions in the past - thank you very much for everyone's support!
 
 
 
Deploying AI ethically and responsibly will involve cross-functional team collaboration, new tools and processes, and proper support from key stakeholders. The Gradient Flow team has put together a great overview of the ecosystem of tools and resources around "Responsible AI".
 
 
 
The nlp-tutorial repository is one of the most highly starred tutorial repositories for anyone studying NLP (Natural Language Processing) with PyTorch. The repo covers a broad range of models, most of which are implemented in fewer than 100 lines of code.
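To give a flavour of what those sub-100-line implementations look like, here is a minimal sketch of a tiny PyTorch text classifier. This is an illustrative example, not code taken from the nlp-tutorial repo; the TinyTextClassifier name and all hyperparameters are placeholders.

# Minimal sketch of a tiny PyTorch NLP model, in the spirit of the short
# implementations in nlp-tutorial (not code from the repo itself).
import torch
import torch.nn as nn

class TinyTextClassifier(nn.Module):  # hypothetical example model
    def __init__(self, vocab_size, embed_dim, num_classes):
        super().__init__()
        # EmbeddingBag averages the embeddings of all tokens in each sentence
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.embedding(token_ids, offsets))

model = TinyTextClassifier(vocab_size=10000, embed_dim=64, num_classes=2)
tokens = torch.tensor([1, 5, 42, 7, 9])   # two "sentences" flattened together
offsets = torch.tensor([0, 3])            # where each sentence starts
print(model(tokens, offsets).shape)       # torch.Size([2, 2])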
 
 
 
Microsoft has published a blog post outlining some of their research highlights, showcasing work across AI, graphics, language and domain-specific applications, among many others.
 
 
 
Machine Learning Mastery has put together a comprehensive deep dive into the topic of concept drift in machine learning. In this post they cover the challenge of data changing over time, how concept drift is defined, and how to handle concept drift in predictive modelling pipelines.
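As a concrete illustration of one common way to monitor for drift (a generic sketch, not the exact approach from the Machine Learning Mastery post), the snippet below compares a feature's recent values against a reference window using a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 threshold are illustrative assumptions.

# Sketch: flag possible drift by comparing a feature's recent values against
# a reference window with a two-sample KS test. Windows and the 0.05
# threshold are illustrative choices, not a standard.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)  # e.g. training-time data
recent = rng.normal(loc=0.5, scale=1.0, size=1000)     # e.g. last week's data

statistic, p_value = ks_2samp(reference, recent)
if p_value < 0.05:
    print(f"Possible drift (KS={statistic:.3f}, p={p_value:.4f}) - consider retraining")
else:
    print("No significant distribution shift detected")

In practice the reference window would come from the training data and the recent window from live traffic, with alerting or retraining wired in behind the check.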
 
 
 
Uber serves millions of rides and deliveries a day, generating hundreds of petabytes of raw data, which demands innovative approaches to serve all the analytics teams across the organisation. In this post the Uber engineering team covers the road towards a unified workflow management system.
 
 
 
 
 
The topic for this week's featured production machine learning libraries is Adversarial Robustness. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to add a PR. The four featured libraries this week are below, with a minimal attack sketch after the list:
  • AdvBox - generates adversarial examples from the command line with zero coding, using PaddlePaddle, PyTorch, Caffe2, MXNet, Keras and TensorFlow. Includes 10 attacks and 6 defenses. Used to implement StealthTshirt at DEFCON!
  • Foolbox - the second largest adversarial library. Has an even longer list of attacks, but no defenses or evaluation metrics. Geared more towards computer vision. The code is easier to understand and modify than ART's, and it is also better suited for exploring black-box attacks on surrogate models.
  • IBM Adversarial Robustness 360 Toolbox (ART) - at the time of writing this is the most complete off-the-shelf resource for testing adversarial attacks and defenses. It includes a library of 15 attacks, 10 empirical defenses, and some nice evaluation metrics. Neural networks only.
  • CleverHans - library for testing adversarial attacks / defenses maintained by some of the most important names in adversarial ML, namely Ian Goodfellow (ex-Google Brain, now Apple) and Nicolas Papernot (Google Brain). Comes with some nice tutorials!
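As promised above, here is a minimal hand-rolled sketch of the core idea these libraries build on - the fast gradient sign method (FGSM) - written in plain PyTorch rather than with any of the libraries listed; the toy model, inputs and epsilon are all placeholder assumptions.

# Sketch of the fast gradient sign method (FGSM) in plain PyTorch, to show the
# core idea the libraries above build on. Model and epsilon are placeholders.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.1):
    # Perturb inputs x in the direction that increases the loss for labels y
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel by epsilon in the sign of its gradient, then clip to [0, 1]
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Toy usage with a placeholder linear "classifier" over flattened 28x28 inputs
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)            # fake batch of images in [0, 1]
y = torch.randint(0, 10, (4,))          # fake labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())          # perturbation bounded by epsilon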
 
If you know of any libraries that are not in the "Awesome MLOps" list, please do give us a heads up or feel free to add a pull request.
 
 
 
 
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI guidelines, principles, ethics frameworks, etc.; however, there are so many resources that they are hard to navigate. Because of this we started an open-source initiative that aims to map the ecosystem and make it simpler to navigate. You can find multiple principles in the repo - some examples include the following:
 
 
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request.
 
 
 
© 2018 The Institute for Ethical AI & Machine Learning