THE ML ENGINEER 🤖
Issue #94
This week in Issue #94:
If you would like to suggest articles, ideas, papers, libraries, jobs or events, or provide feedback, just hit reply or email us at a@ethical.institute! We have received a lot of great suggestions in the past - thank you to everyone for your support!
Twitter Machine Learning Engineer Andrew Bean has written a series of posts outlining their journey enabling distributed training for sparse machine learning models. In this first part they cover how Twitter increased performance by 100x over the standard TensorFlow distribution strategies, allowing for faster iterations on model training.
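For context, the standard baseline the post benchmarks against is TensorFlow's built-in tf.distribute API. Below is a minimal, illustrative sketch of that baseline; the model and dataset here are placeholder assumptions for the example, not Twitter's actual setup:

    import tensorflow as tf

    # Standard data-parallel baseline: variables are mirrored across the local
    # GPUs and gradients are combined with an all-reduce at each step.
    strategy = tf.distribute.MirroredStrategy()

    with strategy.scope():
        # The model and optimizer must be created inside the strategy scope so
        # that their variables are replicated on every device.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")

    # `dataset` is assumed to be a tf.data.Dataset yielding (features, labels)
    # batches; model.fit distributes it across the replicas automatically.
    # model.fit(dataset, epochs=5)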
The State of AI Report 2020 has been released, covering the top highlights in AI for 2020. The report includes key developments across research, talent, industry and politics, together with predictions for next year.
This post covers some of the key concepts in the area of data monitoring, as well as the challenges and motivations for logging in machine learning. Logging is an increasingly important component of production ML due to growing requirements around auditability and reproducibility.
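As a toy illustration of the kind of structured prediction logging the post motivates, here is a minimal sketch using only the Python standard library; the schema fields and the model_version value are assumptions made for the example rather than a recommended standard:

    import json
    import logging
    import uuid
    from datetime import datetime, timezone

    # Write one JSON line per prediction so requests can be audited and
    # replayed later.
    logger = logging.getLogger("prediction_audit")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler("predictions.jsonl"))

    def log_prediction(features: dict, prediction: float, model_version: str) -> None:
        record = {
            "request_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "features": features,
            "prediction": prediction,
        }
        logger.info(json.dumps(record))

    # Example usage:
    log_prediction({"age": 42, "country": "UK"}, prediction=0.83, model_version="v1.2.0")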
D2IQ Senior ML Engineer Ian Hellstrom has put together an archaeological overview of machine learning platforms through the decades. It includes a high-level timeline highlighting the ML libraries, frameworks and platforms that have been released from the early 2000s to date.
Later this month we'll be hosting a meetup titled "AI Ethics - Whose Ethics? An Analysis Across Eastern & Western Philosophy". During this session, we will dive into the similarities and differences in foundational philosophical concepts such as the meaning of good, continuity & the self, and we'll analyse published resources in the space of AI Ethics & Principles across the globe.
The topic for this week's featured production machine learning libraries is Privacy Preserving ML. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to open a PR. The four featured libraries this week are listed below, followed by a short illustrative sketch of the core differential privacy idea:
- Google's Differential Privacy - This is a C++ library of ε-differentially private algorithms, which can be used to produce aggregate statistics over numeric data sets containing private or sensitive information.
- Intel Homomorphic Encryption Backend - The Intel HE transformer for nGraph is a Homomorphic Encryption (HE) backend to the Intel nGraph Compiler, Intel's graph compiler for Artificial Neural Networks.
- Microsoft SEAL - Microsoft SEAL is an easy-to-use open-source (MIT licensed) homomorphic encryption library developed by the Cryptography Research group at Microsoft.
- PySyft - A Python library for secure, private Deep Learning. PySyft decouples private data from model training, using Multi-Party Computation (MPC) within PyTorch.
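To give a flavour of what ε-differential privacy means in practice, here is a minimal, hypothetical sketch of the Laplace mechanism in plain Python with NumPy; the function name and the salary data are illustrative assumptions, and this does not use the API of any of the libraries above:

    import numpy as np

    def private_count(data, predicate, epsilon, sensitivity=1.0, rng=None):
        """Return an epsilon-differentially private count of records matching predicate.

        The Laplace mechanism adds noise drawn from Laplace(0, sensitivity / epsilon)
        to the true answer; for a counting query the sensitivity is 1.
        """
        rng = rng or np.random.default_rng()
        true_count = sum(1 for x in data if predicate(x))
        noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_count + noise

    # Example usage: count salaries above a threshold with a privacy budget of epsilon = 0.5.
    salaries = [52_000, 61_500, 48_200, 75_000, 58_300]
    print(private_count(salaries, lambda s: s > 55_000, epsilon=0.5))

Smaller values of epsilon add more noise and therefore give stronger privacy guarantees, at the cost of less accurate aggregate statistics.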
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI guidelines, principles, ethics frameworks, etc; however, there are so many that they are hard to navigate. Because of this we started an open source initiative that aims to map the ecosystem and make it simpler to navigate. Every week we showcase a few resources from our list so we can check them out together. This week's resources are:
- ACM's Code of Ethics and Professional Conduct - The code of ethics put together in 1992 by the Association for Computing Machinery and updated in 2018
- From What to How - An initial review of publicly available AI Ethics Tools, Methods and Research to translate principles into practices
About us
The Institute for Ethical AI & Machine Learning is a Europe-based research centre that carries out world-class research into responsible machine learning.
© 2018 The Institute for Ethical AI & Machine Learning