Subscribe to the Machine Learning Engineer Newsletter

Receive curated articles, tutorials and blog posts from experienced Machine Learning professionals.


THE ML ENGINEER 🤖
Issue #104
 
This week in Issue #104:
 
 
Forward this email, or share the online version on 🐦 Twitter, 💼 LinkedIn and 📕 Facebook!
 
If you would like to suggest articles, ideas, papers, libraries, jobs, events or provide feedback, just hit reply or send us an email at a@ethical.institute! We have received a lot of great suggestions in the past; thank you very much for everyone's support!
 
 
 
The lifecycle of a machine learning model only begins once it’s in production. This article presents an end-to-end example showcasing best practices, principles, patterns and techniques around monitoring of machine learning models. It covers standard microservice monitoring techniques adapted towards deployed machine learning models, as well as more advanced paradigms including concept drift, outlier detection and AI explainability.
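As a minimal sketch of one of the monitoring ideas mentioned, drift in a single feature can be flagged with a two-sample Kolmogorov–Smirnov test comparing live traffic against the training-time distribution. All names, distributions and the threshold below are illustrative, not taken from the article:

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(reference, live, alpha=0.05):
    """Flag drift when the live feature distribution differs
    significantly from the training reference (two-sample KS test)."""
    stat, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=2000)  # feature as seen at training time
shifted = rng.normal(1.5, 1.0, size=500)     # production traffic after drift

print(feature_drift(reference, reference))   # same data: no drift flagged
print(feature_drift(reference, shifted))     # mean shifted by 1.5: drift flagged
```

In practice this check would run per feature on a schedule, with alerts wired into the same pipeline as standard microservice metrics.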
 
 
 
Metadata lineage and discovery is critical in large-scale machine learning and data systems. LinkedIn Principal Engineer Shirshanka Das has put together a comprehensive overview of the key architectural, conceptual and infrastructural components in metadata management systems. This covers search, access control, lineage, compliance, etc., as well as common architecture patterns.
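At its core, the lineage component of such a system maintains a graph of upstream/downstream edges between datasets, jobs and models, which can then be traversed for impact analysis or compliance queries. The class and dataset names below are a hypothetical sketch, not an API from the overview:

```python
from collections import defaultdict

class LineageGraph:
    """Minimal lineage store: edges point from an upstream
    dataset/job to the downstream artifact it produces."""

    def __init__(self):
        self.downstream = defaultdict(set)
        self.upstream = defaultdict(set)

    def add_edge(self, source, target):
        self.downstream[source].add(target)
        self.upstream[target].add(source)

    def ancestors(self, node):
        """All upstream entities an artifact depends on, e.g. for
        answering 'which raw sources feed this model?'."""
        seen, stack = set(), [node]
        while stack:
            for parent in self.upstream[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

g = LineageGraph()
g.add_edge("raw_events", "cleaned_events")
g.add_edge("cleaned_events", "training_set")
g.add_edge("member_profiles", "training_set")
g.add_edge("training_set", "ranking_model_v3")
print(g.ancestors("ranking_model_v3"))
```

Real systems layer search indexes, access control and compliance annotations on top of a graph like this.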
 
 
 
A hands-on course on MLOps for software engineers, data scientists and product managers. This course covers end-to-end examples, building from the very basics to the advanced knowledge required to train, improve and productionise machine learning models.
 
 
 
The Montreal AI Ethics Institute has put together a fantastic panel with several thought leaders in the AI Ethics space, following the release of their AI Ethics report. In this panel they cover a broad range of topics and dive into a lot of interesting resources, which they have linked in their transcript.
 
 
 
The Free & Open Source Developers European Meeting (FOSDEM) is an annual software conference which, for the first time, will take place online in 2021. This fantastic conference has a broad range of tracks; one of them focuses on HPC, Big Data and Data Science, and it is currently looking for proposals. Do feel free to submit a proposal, or share with anyone relevant who would be keen to showcase best practices, learnings, case studies, etc. on ML-related topics.
 
 
 
 
 
The topic for this week's featured production machine learning libraries is Adversarial Robustness. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to open a pull request. The four featured libraries this week are:
  • AdvBox - generate adversarial examples from the command line with zero coding, using PaddlePaddle, PyTorch, Caffe2, MXNet, Keras and TensorFlow. Includes 10 attacks and 6 defenses. Used to implement StealthTshirt at DEFCON!
  • Foolbox - the second biggest adversarial library. Has an even longer list of attacks, but no defenses or evaluation metrics. Geared more towards computer vision. The code is easier to understand and modify than ART's, and it is better suited to exploring black-box attacks on surrogate models.
  • IBM Adversarial Robustness 360 Toolbox (ART) - at the time of writing, the most complete off-the-shelf resource for testing adversarial attacks and defenses. It includes a library of 15 attacks, 10 empirical defenses and some nice evaluation metrics. Neural networks only.
  • CleverHans - a library for testing adversarial attacks and defenses, maintained by some of the most important names in adversarial ML, namely Ian Goodfellow (ex-Google Brain, now Apple) and Nicolas Papernot (Google Brain). Comes with some nice tutorials!
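For intuition on the kind of attack these libraries implement, here is a from-scratch sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression model. This is illustrative code, not taken from any of the libraries above; the weights and epsilon are made up:

```python
import numpy as np

# Assumed fixed "model": logistic regression with hand-picked weights.
w = np.array([2.0, -1.0])
b = 0.0

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """Perturb x by eps in the direction that increases the log-loss for
    true label y; for logistic regression the input gradient is (p - y) * w."""
    grad = (predict_proba(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.0])          # confidently classified as positive
x_adv = fgsm(x, y=1.0, eps=1.0)   # adversarial copy of x

print(predict_proba(x))           # above 0.5
print(predict_proba(x_adv))       # flips to the negative class
```

The libraries above generalise exactly this idea to deep networks, many attack variants and corresponding defenses.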
 
If you know of any libraries that are not in the "Awesome MLOps" list, please do give us a heads up or feel free to add a pull request.
 
 
 
 
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI Guidelines, Principles, Ethics Frameworks, etc., but there are so many resources that they are hard to navigate. Because of this we started an Open Source initiative that aims to map the ecosystem and make it simpler to navigate. You can find multiple principles in the repo - some examples include the following:
 
 
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request.
 
 
 
© 2018 The Institute for Ethical AI & Machine Learning