

THE ML ENGINEER 🤖
Issue #64
 
 
This week in Issue #64:
 
 
Forward the email, or share the online version on 🐦 Twitter, 💼 LinkedIn and 📕 Facebook!
 
If you would like to suggest articles, ideas, papers, libraries, jobs, events or provide feedback, just hit reply or send us an email at a@ethical.institute! We have received a lot of great suggestions in the past; thank you very much to everyone for your support!
 
 
 
The Kubeflow Project has officially released v1.0, a great milestone towards bringing machine learning on Kubernetes to everyone. Kubeflow provides end-to-end machine learning capabilities, including development with managed Jupyter Notebooks, building with Kubeflow Fairing, training across multiple ML frameworks, and deployment with KFServing and Seldon Core.
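As a taste of the deployment side, here is a minimal sketch of a model server class written for the Seldon Core Python wrapper; the class name, artifact path and model type are illustrative assumptions rather than a prescribed setup.

    # IrisModel.py - minimal sketch of a Seldon Core Python model server.
    # The Python wrapper loads a user-defined class and calls its predict()
    # method for every inference request it receives.
    import joblib

    class IrisModel:
        def __init__(self):
            # Assumed artifact path; in practice this would be baked into
            # the container image or pulled from object storage at startup.
            self.model = joblib.load("model.joblib")

        def predict(self, X, features_names=None):
            # X arrives as a numpy array built from the request payload;
            # returning probabilities keeps the response informative.
            return self.model.predict_proba(X)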
 
 
 
One of the biggest challenges we face in machine learning is how to deploy and scale models in production. This article breaks down the core concepts that make machine learning deployment and productionisation different from traditional software, as well as the core components of the machine learning lifecycle: training, validation, testing, serving and monitoring.
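For a rough picture of how those stages hang together, here is a hedged sketch in Python; the dataset, model, validation gate and drift check are all placeholder assumptions, not the article's implementation.

    # Toy walk through the lifecycle stages: train, validate, test, serve, monitor.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)

    # Training / validation / testing splits.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    assert model.score(X_val, y_val) > 0.9          # validation gate before promotion
    print("test accuracy:", model.score(X_test, y_test))

    # Serving: the thin prediction function an API layer would call.
    def serve(features: np.ndarray) -> np.ndarray:
        return model.predict_proba(features)

    # Monitoring: a toy drift check comparing live feature means to training means.
    def drift_alert(live_batch: np.ndarray, tolerance: float = 3.0) -> bool:
        z = np.abs(live_batch.mean(axis=0) - X_train.mean(axis=0)) / (X_train.std(axis=0) + 1e-9)
        return bool((z > tolerance).any())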
 
 
 
Machine learning systems are increasingly deployed to aid high-impact decision-making, such as criminal sentencing, child welfare assessments, decisions about who receives medical attention, and many other settings. Understanding whether such systems are fair is crucial, and requires an understanding of models' short- and long-term effects. Google released a research paper which outlines a set of components for building simple simulations that explore the potential long-run impacts of deploying machine learning-based decision systems in social environments.
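The components in the paper revolve around agent/environment style simulations; the toy loop below is a heavily simplified sketch of that idea (a lending-style population with a fixed decision threshold, invented purely for illustration and not the paper's actual API).

    # Toy long-run simulation: a threshold-based decision system acting on two
    # groups whose scores shift in response to the decisions they receive.
    import numpy as np

    rng = np.random.default_rng(0)
    scores = rng.normal(loc=[[55.0], [45.0]], scale=10.0, size=(2, 1000))  # two groups
    THRESHOLD = 50.0

    for step in range(20):
        approved = scores > THRESHOLD
        # Simplistic dynamics: approved individuals drift up on average,
        # rejected individuals drift down slightly.
        scores += np.where(approved,
                           rng.normal(1.0, 2.0, scores.shape),
                           rng.normal(-0.5, 2.0, scores.shape))
        rates = approved.mean(axis=1)
        print(f"step {step:2d}  approval rate A: {rates[0]:.2f}  B: {rates[1]:.2f}")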
 
 
 
Peer review has been an integral part of scientific research for more than 300 years. But even before peer review was introduced, reproducibility was a primary component of the scientific method. Now we hear warnings that Artificial Intelligence (AI) and Machine Learning (ML) face their own reproducibility crisis. This article dives into insights gained from continuously attempting to reproduce ML algorithms from papers, leading into a framework to assess and quantify how reproducible a specific resource is.
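As a sense of what quantifying reproducibility could look like, here is a toy weighted-checklist score; the criteria and weights are our own assumptions, not the framework from the article.

    # Toy reproducibility score: weighted share of satisfied checklist items.
    CRITERIA = {
        "code_available": 0.30,
        "data_available": 0.25,
        "dependencies_pinned": 0.15,
        "seeds_reported": 0.15,
        "hyperparameters_reported": 0.15,
    }

    def reproducibility_score(resource: dict) -> float:
        """Return a 0-1 score for a paper/repo described by boolean flags."""
        return sum(weight for name, weight in CRITERIA.items() if resource.get(name))

    print(round(reproducibility_score({"code_available": True, "seeds_reported": True}), 2))  # 0.45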
 
 
 
It can be hard to stay up to date with published papers in the field of adversarial examples, where we have seen massive growth in the number of papers written each year. This resource attempts to address just that by putting together a huge list of papers from arXiv related to adversarial examples.
 
 
 
 
 
 
The topic for this week's featured production machine learning libraries is Adversarial Robustness. We are currently looking for more libraries to add; if you know of any that are not listed, please let us know or feel free to add a PR. The four featured libraries this week are listed below, followed by a short attack sketch to make the terminology concrete:
  • Alibi Detect - a Python package focused on outlier, adversarial and concept drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series, with outlier detection methods that let the user identify global, contextual and collective outliers.
  • CleverHans - a library for testing adversarial attacks and defenses, maintained by some of the most important names in adversarial ML, namely Ian Goodfellow (ex-Google Brain, now Apple) and Nicolas Papernot (Google Brain). Comes with some nice tutorials!
  • Foolbox - the second biggest adversarial library. It has an even longer list of attacks, but no defenses or evaluation metrics, and is geared more towards computer vision. Its code is easier to understand and modify than ART's, and it is also better suited for exploring blackbox attacks on surrogate models.
  • AdvBox - generate adversarial examples from the command line with zero coding, using PaddlePaddle, PyTorch, Caffe2, MXNet, Keras and TensorFlow. Includes 10 attacks and also 6 defenses. Used to implement StealthTshirt at DEFCON!
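To make the attack/defense terminology above concrete, here is a short framework-agnostic sketch of the classic Fast Gradient Sign Method (FGSM) written directly in PyTorch; it is not the API of any of the libraries listed, each of which wraps attacks like this behind its own interface.

    # Minimal FGSM sketch in plain PyTorch (illustrative only, not a library API).
    import torch

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Perturb inputs x in the direction that increases the loss on labels y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Take a single step of size epsilon along the sign of the input gradient.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0, 1).detach()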
 
If you know of any libraries that are not in the "Awesome MLOps" list, please do give us a heads up or feel free to add a pull request.
 
 
 
 
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI guidelines, principles, ethics frameworks, etc., but there are so many that it is hard to navigate them all. Because of this we started an open source initiative that maps the ecosystem to make it simpler to navigate. Each week we will be showcasing a few resources from our list so you can check them out. This week's resources are:
  • ACM's Code of Ethics and Professional Conduct - The code of ethics put together in 1992 by the Association for Computing Machinery and updated in 2018.
  • From What to How - An initial review of publicly available AI Ethics Tools, Methods and Research to translate principles into practices
 
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request.
 
 
 
© 2018 The Institute for Ethical AI & Machine Learning