THE ML ENGINEER 🤖
Issue #44
If you would like to suggest articles, ideas, papers, libraries, jobs, events or provide feedback, just hit reply or send us an email at a@ethical.institute! We have received a lot of great suggestions in the past; thank you very much to everyone for your support!
As AI systems become more prevalent in society, we face bigger and tougher societal and ethical challenges. Recently there has been an increase in content that attempts to address these challenges in the form of "Principles", "Ethics Frameworks", "Checklists" and beyond. Navigating so many resources is not easy, which is why we created and now maintain "The Awesome AI Guidelines List", a repository that maps the ecosystem of guidelines, principles, codes of ethics, standards and regulation related to AI 🚀 If there is any guideline or framework we have not included, please let us know, or feel free to submit an issue or pull request!
When dealing with production machine learning systems, we face challenges that don't appear at the experimentation stage. One of the key challenges is managing a large number of machine learning models, potentially from many users, and being able to compare them and promote them through staging and production environments. Databricks announced a new model management feature that tackles this issue by providing a workflow system to version models and manage them across multiple stages.
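The version-and-promote workflow described above can be illustrated with a toy, in-memory registry. This is a hypothetical sketch for illustration only; the actual Databricks feature has its own API, and all names below are made up:

```python
# Toy in-memory model registry illustrating versioning and stage promotion.
# Hypothetical sketch -- not the Databricks API; names are invented.

STAGES = ("None", "Staging", "Production", "Archived")

class ModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of version records

    def register(self, name, artifact):
        """Register a new version of a named model; versions auto-increment."""
        versions = self._versions.setdefault(name, [])
        version = len(versions) + 1
        versions.append({"version": version, "artifact": artifact, "stage": "None"})
        return version

    def transition(self, name, version, stage):
        """Promote or demote a specific version to a new lifecycle stage."""
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self._versions[name][version - 1]["stage"] = stage

    def latest(self, name, stage):
        """Return the newest version of a model currently in `stage`, if any."""
        matches = [v for v in self._versions[name] if v["stage"] == stage]
        return matches[-1] if matches else None

registry = ModelRegistry()
v1 = registry.register("churn-model", artifact="model-v1.pkl")
v2 = registry.register("churn-model", artifact="model-v2.pkl")
registry.transition("churn-model", v1, "Production")  # v1 serves traffic
registry.transition("churn-model", v2, "Staging")     # v2 awaits validation
prod = registry.latest("churn-model", "Production")
```

The point of the pattern is that serving code asks the registry for "the Production version of model X" rather than hard-coding an artifact path, so promotion becomes a metadata change instead of a redeploy.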
When dealing with challenges that involve a lot of data, it's often hard to choose the best visualisations to use at different stages of a project. This post provides a set of best practices for approaching this challenge, suggesting how to leverage complex charts for data analysis and classic charts for communicating data. They also provide a case study of how this looks in practice in their team.
When training complex models like neural networks, we often gain accuracy but trade off explainability. Fortunately, there has been a recent increase in tools and methods that can extract explanations from various types of machine learning models. Last week we gave a talk about the approaches and tools you can use to introduce interpretability techniques into your experimentation workflow. The talk also shows how you can leverage explainability techniques in the production stage of your machine learning lifecycle.
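One simple, model-agnostic technique in this family is permutation feature importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below is purely illustrative (a toy model and hand-rolled metric, not the tooling from the talk):

```python
# Minimal sketch of permutation feature importance, a model-agnostic
# explainability technique. Illustrative only -- toy model and data.
import random

def permutation_importance(predict, X, y, n_features, metric):
    """Score drop observed when each feature column is shuffled independently."""
    rng = random.Random(0)
    baseline = metric(y, [predict(row) for row in X])
    importances = []
    for j in range(n_features):
        shuffled_col = [row[j] for row in X]
        rng.shuffle(shuffled_col)
        # Rebuild the dataset with only column j permuted.
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, shuffled_col)]
        score = metric(y, [predict(row) for row in X_perm])
        importances.append(baseline - score)
    return importances

# Toy "model": the prediction depends only on feature 0.
predict = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y_true, y_pred: sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

imps = permutation_importance(predict, X, y, n_features=2, metric=accuracy)
# Shuffling feature 0 hurts accuracy; shuffling the unused feature 1 does not.
```

Because it only needs a predict function and a metric, this style of explanation works equally well in experimentation and against a deployed model.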
There are many ways to tackle building a machine learning model. This post proposes a 6-step approach towards building any machine learning model. It provides a sensible breakdown that covers the important pieces, from defining the problem and identifying high-level risks, to questions around interpretability, tuning and inference time.
- Seldon - Open source platform for deploying and monitoring machine learning models in Kubernetes
- KFServing - Serverless framework to deploy and monitor machine learning models in Kubernetes
- Redis-AI - A Redis module for serving tensors and executing deep learning models. Expect changes in the API and internals.
- Model Server for Apache MXNet (MMS) - A model server for Apache MXNet from Amazon Web Services that is able to run MXNet models as well as Gluon models (Amazon's SageMaker runs a custom version of MMS under the hood)
We feature conferences that have core ML tracks (primarily in Europe for now) to help our community stay up to date with great events coming up.
Technical & Scientific Conferences
- Data Natives [21/11/2019] - Data conference in Berlin, Germany.
- ODSC Europe [19/11/2019] - The Open Data Science Conference in London, UK.
Business Conferences
- Big Data LDN 2019 [13/11/2019] - Conference for strategy and tech on big data in London, UK.
© 2018 The Institute for Ethical AI & Machine Learning