THE ML ENGINEER 🤖
Issue #28
If you would like to suggest articles, ideas, papers, libraries, jobs or events, or to provide feedback, just hit reply or send us an email at a@ethical.institute! We have received a lot of great suggestions in the past; thank you very much for everyone's support!
During KubeCon Shanghai 2019 we presented a high-level overview of the themes that are becoming increasingly critical in the world of production machine learning. The talk covers more than 10 themes, spanning explainability, privacy, model versioning, adversarial robustness and beyond. The accompanying repository contains a set of slides that dive into three key themes: black-box explainability, model versioning and ML orchestration. Each theme comes with a high-level explanation and a hands-on Jupyter notebook example, including an end-to-end NLP pipeline, tabular explainers and a PyTorch Hub integration.
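To give a flavour of the black-box tabular explainability theme, below is a minimal sketch using scikit-learn and the shap package; the libraries, dataset and variable names here are our own assumptions for illustration, and the repository's notebooks may well use different tooling.

    # Minimal black-box tabular explainer sketch (assumes scikit-learn and shap;
    # the KubeCon notebooks may rely on different libraries and datasets).
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # KernelExplainer only needs a prediction function, so the model is treated
    # as a black box; a small background sample keeps the estimation cheap.
    explainer = shap.KernelExplainer(model.predict_proba, X[:50])
    shap_values = explainer.shap_values(X[:1])  # per-feature attributions for one row

The same recipe carries over to any model that exposes a prediction function, which is precisely what makes black-box explainers attractive in production pipelines.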
Machine learning at scale introduces new challenges, as managing a large number of models that perform increasingly critical tasks becomes more complex. O'Reilly Chief Scientist Ben Lorica has put together a great overview of the ecosystem and tools available in the machine learning governance and operations world.
Hugging Face scientist Victor Sanh has put together an extensive list of resources on key themes in NLP, including transfer learning, representation learning and neural dialogue, as well as other miscellaneous pieces of research that have contributed to the growth of the field over the last two years.
TensorFlow Tutorials has launched a new deep dive on adversarial examples, covering the conceptual, theoretical and practical aspects of the topic. It provides the code you need to craft an adversarial example that can trick a classifier.
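As a taste of the practical side, the snippet below is a minimal fast gradient sign method (FGSM) sketch in TensorFlow 2; the pretrained MobileNetV2 and the helper name are our own assumptions for illustration rather than a copy of the tutorial's code.

    import tensorflow as tf

    # Assumed setup: a pretrained ImageNet classifier and a single image tensor
    # `image` already preprocessed to the model's expected input range.
    model = tf.keras.applications.MobileNetV2(include_top=True, weights="imagenet")
    loss_object = tf.keras.losses.CategoricalCrossentropy()

    def fgsm_perturbation(image, label):
        # Gradient of the loss with respect to the input pixels, reduced to its sign.
        with tf.GradientTape() as tape:
            tape.watch(image)
            prediction = model(image)
            loss = loss_object(label, prediction)
        gradient = tape.gradient(loss, image)
        return tf.sign(gradient)

    # An adversarial example is then image + epsilon * fgsm_perturbation(image, label)
    # for a small epsilon (e.g. 0.01), clipped back into the valid pixel range.

Because the perturbation follows the sign of the gradient, it maximally increases the loss within a given L-infinity budget, which is why even visually imperceptible values of epsilon can flip the classifier's prediction.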
OSS: Adversarial Robustness
The theme for this week's featured ML libraries is Adversarial Robustness, covering tools for adversarial attacks and defenses. These libraries are an incredibly exciting addition that falls under our Responsible ML Principle #8, and the whole section was contributed by one of the Fellows at the Institute, Ilja Moisejevs from Calipso AI. The four featured libraries this week are:
- CleverHans - a library for testing adversarial attacks and defenses, maintained by some of the most important names in adversarial ML, namely Ian Goodfellow (ex-Google Brain, now Apple) and Nicolas Papernot (Google Brain). Comes with some nice tutorials!
- Foolbox - the second biggest adversarial library. Has an even longer list of attacks, but no defenses or evaluation metrics. Geared more towards computer vision. Its code is easier to understand and modify than ART's, and it is also better for exploring black-box attacks on surrogate models (a short usage sketch follows this list).
- IBM Adversarial Robustness Toolbox (ART) - at the time of writing, the most complete off-the-shelf resource for testing adversarial attacks and defenses. It includes a library of 15 attacks, 10 empirical defenses and some nice evaluation metrics. Neural networks only.
- AdvBox - generate adversarial examples from the command line with zero coding, using PaddlePaddle, PyTorch, Caffe2, MXNet, Keras and TensorFlow. Includes 10 attacks as well as 6 defenses. Used to implement StealthTshirt at DEFCON!
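To show what these libraries look like in practice, here is a minimal Foolbox sketch; it assumes the Foolbox 3.x API, a pretrained torchvision ResNet-18 and the sample images bundled with Foolbox, so adjust the bounds and preprocessing to your own model.

    import foolbox as fb
    import torchvision.models as models

    # Assumed setup: a pretrained PyTorch classifier taking inputs in [0, 1].
    model = models.resnet18(pretrained=True).eval()
    preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
    fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

    # A small batch of correctly labelled sample images shipped with Foolbox.
    images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)

    # Run an L-infinity fast gradient attack at a fixed perturbation budget.
    attack = fb.attacks.LinfFastGradientAttack()
    raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
    print("attack success rate:", is_adv.float().mean().item())

Swapping in a different attack is usually a one-line change, which is what makes these toolboxes so convenient for quickly benchmarking a model's robustness.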
We feature conferences that have core ML tracks (primarily in Europe for now) to help our community stay up to date with great events coming up.
Technical & Scientific Conferences
- AI Conference Beijing [18/06/2019] - O'Reilly's signature applied AI conference in Asia, held in Beijing, China.
- Data Natives [21/11/2019] - Data conference in Berlin, Germany.
- ODSC Europe [19/11/2019] - The Open Data Science Conference in London, UK.
Business Conferences
- Big Data LDN 2019 [13/11/2019] - Conference for strategy and tech on big data in London, UK.
© 2018 The Institute for Ethical AI & Machine Learning