THE ML ENGINEER 🤖
Issue #50
If you would like to suggest articles, ideas, papers, libraries, jobs, events or provide feedback, just hit reply or send us an email at a@ethical.institute! We have received a lot of great suggestions in the past; thank you to everyone for your support!
It’s easy and fun to ship a prototype, whether that’s in software or data science. What’s much, much harder is making it resilient, reliable, scalable, fast, and secure. Ravelin co-founder and CTO Leonard Austin has written an excellent blog post outlining best practices for machine learning that are borrowed from software engineering.
Sir Tim Berners-Lee has launched what he has called a 'Contract for the Web', intended to govern the behaviour of both internet giants, such as Google and Facebook, and governments. The Contract describes itself as "a global plan of action to make our online world safe and empowering for everyone".
The videos for Deep Learning Indaba 2019 are out! The Deep Learning Indaba is the annual meeting of the African machine learning community, and its mission is to strengthen African machine learning. The 2019 Indaba aimed to bring together 700 members of Africa's artificial intelligence community for a week-long event of teaching, research, exchange, and debate around the state of the art in machine learning and artificial intelligence.
While we usually cannot guarantee that our models are absolutely perfect, we can use information about how certain they are about their predictions. That way, in cases of high uncertainty, we can perform more extensive tests or pass the case to a human, and so avoid potentially wrong results. This, however, requires our models to be aware of how reliable their predictions are for a given input. This article breaks down exactly that.
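As a rough illustration of the idea (a minimal sketch, not code from the article), the snippet below uses Monte Carlo dropout in PyTorch: dropout stays active at inference time, several stochastic forward passes are averaged, and predictions with high predictive entropy are flagged for human review. The toy model, batch and threshold are illustrative assumptions.

    # Monte Carlo dropout sketch: estimate predictive uncertainty and
    # defer uncertain cases to a human reviewer. The model, data and
    # entropy threshold are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.5),
        nn.Linear(64, 3),
    )

    def mc_dropout_predict(model, x, n_samples=30):
        model.train()  # keep dropout layers stochastic at inference time
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
            )
        mean_probs = probs.mean(dim=0)  # averaged class probabilities
        entropy = -(mean_probs * mean_probs.clamp_min(1e-9).log()).sum(dim=-1)
        return mean_probs, entropy

    x = torch.randn(5, 20)  # dummy batch of 5 inputs
    mean_probs, entropy = mc_dropout_predict(model, x)
    needs_review = entropy > 0.8  # hypothetical uncertainty threshold
    print(mean_probs.argmax(dim=-1), needs_review)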
Google has entered the XAI ecosystem with its new Google Cloud AI Explanations product, which is targeted at model developers and data scientists. Alongside the new system, they have released a whitepaper that outlines their approach to XAI, together with a high-level overview of its motivations and features.
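As a rough, generic illustration of what per-feature explanations look like (this is not the Cloud AI Explanations API, just a minimal gradient-times-input sketch with a made-up PyTorch model):

    # Gradient x input attribution sketch: roughly how much each input
    # feature contributed to the model's score. Model and input are
    # illustrative assumptions.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    x = torch.randn(1, 4, requires_grad=True)

    score = model(x).sum()
    score.backward()  # populates x.grad with d(score)/d(x)

    attribution = (x.grad * x).detach().squeeze()
    print({f"feature_{i}": round(v.item(), 4) for i, v in enumerate(attribution)})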
The theme for this week's featured ML libraries is Adversarial Robustness. The four featured libraries this week are listed below, followed by a short illustrative attack sketch:
- Alibi Detect - Open source library with algorithms for outlier, adversarial and concept drift detection, optimised for massive scale machine learning deployments
- CleverHans - Library for testing adversarial attacks and defenses, maintained by some of the best-known names in adversarial ML, namely Ian Goodfellow (ex-Google Brain, now Apple) and Nicolas Papernot (Google Brain). Comes with some nice tutorials.
- Foolbox - The second biggest adversarial library. It has an even longer list of attacks, but no defenses or evaluation metrics. Geared more towards computer vision, its code is easier to understand and modify than ART's, and it is also better suited to exploring black-box attacks on surrogate models.
- IBM Adversarial Robustness 360 Toolbox (ART) - At the time of writing, the most complete off-the-shelf resource for testing adversarial attacks and defenses. It includes a library of 15 attacks, 10 empirical defenses, and some nice evaluation metrics. Neural networks only.
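As a rough sketch of the kind of evasion attack these libraries automate, here is a minimal fast gradient sign method (FGSM) example written in plain PyTorch rather than through any of the libraries above; the toy model, input and epsilon are illustrative assumptions.

    # FGSM sketch: perturb each input dimension in the direction that
    # increases the loss, within an epsilon budget, then compare predictions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(1, 1, 28, 28)   # dummy "image" with pixels in [0, 1]
    y = torch.tensor([3])          # assumed true label
    epsilon = 0.1                  # perturbation budget

    x.requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # Step in the sign of the gradient, then clip back to the valid range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    print("clean prediction:", model(x).argmax(dim=-1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=-1).item())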
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI Guidelines, Principles, Ethics Frameworks, etc.; however, there are so many that they are hard to navigate. Because of this, we started an open source initiative that aims to map the ecosystem to make it simpler to navigate. Each week we will be showcasing a handful of resources from our list so you can check them out. This week's resources are:
- Oxford's Recommendations for AI Governance - A set of recommendations from Oxford's Future of Humanity Institute which focus on the infrastructure and attributes required for efficient design, development, and research around the ongoing work of building and implementing AI standards.
- San Francisco City's Ethics & Algorithms Toolkit - A risk management framework for government leaders and staff who work with algorithms, providing a two-part assessment: an algorithmic risk assessment, and a process to address the identified risks.
- ISO/IEC's Standards for Artificial Intelligence - The ISO's initiative for Artificial Intelligence standards, which includes a large set of standards spanning Big Data, AI terminology, machine learning frameworks, and more.
- Linux Foundation AI Landscape - The official list of tools in the AI landscape curated by the Linux Foundation, which contains well-maintained and widely used tools and frameworks.
We feature conferences that have core ML tracks (primarily in Europe for now) to help our community stay up to date with great events coming up.
Technical & Scientific Conferences
- Data Natives [21/11/2019] - Data conference in Berlin, Germany.
- ODSC Europe [19/11/2019] - The Open Data Science Conference in London, UK.
Business Conferences
- Big Data LDN 2019 [13/11/2019] - Conference for strategy and tech on big data in London, UK.
About us
The Institute for Ethical AI & Machine Learning is a UK-based research centre that carries out world-class research into responsible machine learning systems.
© 2018 The Institute for Ethical AI & Machine Learning