THE ML ENGINEER 🤖
Issue #25
Support the ML Engineer!
If you would like to suggest articles, ideas, papers, libraries, jobs, events, or provide feedback, just hit reply or send us an email at a@ethical.institute! We have received a lot of great suggestions in the past; thank you very much for everyone's support!
Peter Norvig, Google's current research director and former NASA chief scientist, dives into how machine learning will change the way we program. He focuses heavily on machine learning model evaluation, as well as on the tools we use. You can find the hour-long video here, and the full slides here.
AutoML provides methods and processes to make machine learning accessible to both ML experts and non-experts. It has achieved considerable success in recent years, and an ever-growing number of disciplines rely on it. AutoML.org provides a free e-book that covers all things AutoML, along with extensive resources to dive into hands-on use of this powerful set of tools and techniques.
Face detection is a computer vision problem that involves finding faces in photos. This hands-on tutorial explains the challenges and opportunities of face detection, and walks through an example of how state-of-the-art face detection can be achieved with a Multi-task Cascaded CNN via the MTCNN library.
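Below is a minimal sketch of the kind of detection loop the tutorial builds, assuming the `mtcnn` pip package; the image filename is a placeholder and the loop simply prints the detections rather than drawing them.

```python
# Minimal face detection sketch using the mtcnn package (pip install mtcnn).
# "group_photo.jpg" is a placeholder; any RGB photo will do.
import matplotlib.pyplot as plt
from mtcnn.mtcnn import MTCNN

image = plt.imread("group_photo.jpg")    # RGB pixel array

detector = MTCNN()                       # builds the cascaded P-Net/R-Net/O-Net model
faces = detector.detect_faces(image)     # list of dicts with 'box', 'confidence', 'keypoints'

for face in faces:
    x, y, width, height = face["box"]
    print(f"Face at ({x}, {y}), size {width}x{height}, "
          f"confidence {face['confidence']:.2f}")
```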
This great article provides a set of tips and best practices for structuring your ETL data pipelines so they remain scalable and maintainable over the medium and long term. The tips fall into four key themes: 1) building a chain of simple tasks, 2) using a workflow management tool, 3) leveraging SQL where possible, and 4) implementing data quality checks.
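As a rough illustration of themes 1 and 2, here is a minimal sketch of a chain of simple, single-purpose tasks expressed as an Apache Airflow DAG (one common workflow management tool); the DAG name and task bodies are hypothetical, and the import paths follow Airflow 1.x, current at the time.

```python
# Sketch of a chain of small, single-purpose ETL tasks in Airflow 1.x.
# DAG name and task callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def extract():
    print("pull raw data from the source system")


def transform():
    print("clean and reshape the raw data")


def check_quality():
    print("run row-count and null checks before loading")


def load():
    print("write the validated data to the warehouse")


with DAG(dag_id="example_etl",
         start_date=datetime(2019, 1, 1),
         schedule_interval="@daily") as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_check = PythonOperator(task_id="quality_check", python_callable=check_quality)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Each task does one thing; the chain keeps failures easy to localise.
    t_extract >> t_transform >> t_check >> t_load
```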
A counterfactual explanation describes a causal situation of the form: "If X had not occurred, Y would not have occurred." In interpretable machine learning, counterfactual explanations can be used to explain the predictions of individual instances. Seldon's interpretable machine learning library Alibi has released its v0.2.0 version, which adds counterfactual explanations and provides an example of how to find counterfactual instances using the MNIST dataset.
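As a rough illustration, here is a hedged sketch of a counterfactual query with Alibi's `CounterFactual` explainer; the trained classifier `model`, the instance `x`, and the parameter values are assumptions for illustration, and the exact structure of the returned explanation depends on the Alibi version.

```python
# Sketch of a counterfactual query with Alibi (pip install alibi).
# `model` is an already-trained MNIST classifier and `x` is a single
# (1, 28, 28, 1) image -- both are assumed here, not defined.
import numpy as np
from alibi.explainers import CounterFactual

shape = (1, 28, 28, 1)                    # shape of a single input instance
predict_fn = lambda X: model.predict(X)   # black-box prediction function

cf = CounterFactual(predict_fn,
                    shape=shape,
                    target_class='other',  # any class other than the original
                    target_proba=0.9,
                    max_iter=1000)

# Search for a nearby instance that flips the prediction; the explanation's
# fields (counterfactual instance, class, probability) vary by version.
explanation = cf.explain(x)
print("original class:", np.argmax(predict_fn(x)))
print(explanation)
```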
One of the most familiar settings for a machine learning engineer is having access to a lot of data but only modest resources to annotate it. Everyone in that predicament eventually asks what to do with limited supervised data but lots of unlabelled data, and the literature appears to have a ready answer: semi-supervised learning. This post provides a brief introduction to the concept of semi-supervised machine learning, as well as references to papers that provide insight into the topic.
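To make the idea concrete, here is a small sketch of one of the simplest semi-supervised techniques, self-training (pseudo-labelling), using scikit-learn; the dataset, confidence threshold, and number of rounds are arbitrary illustrative choices rather than anything from the post.

```python
# Self-training sketch: train on the labelled set, pseudo-label the most
# confident unlabelled points, and retrain. Threshold and model choice
# are illustrative, not recommendations.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
# Pretend only 10% of the data is labelled.
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, train_size=0.1, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_lab, y_lab)

for _ in range(5):                               # a few self-training rounds
    proba = model.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95         # keep only confident predictions
    if not confident.any():
        break
    pseudo_labels = model.classes_[proba[confident].argmax(axis=1)]
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, pseudo_labels])
    X_unlab = X_unlab[~confident]
    model.fit(X_lab, y_lab)                      # retrain on the expanded set
```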
MLOps: Featured OS Libraries
The theme for this week's featured ML libraries is AutoML frameworks, which falls under our Responsible ML Principle #4. The four featured AutoML libraries this week are:
- auto-sklearn - Framework to automate algorithm selection and hyperparameter tuning for scikit-learn (see the sketch after this list)
- TPOT - Automation of sklearn pipeline creation (including feature selection, pre-processing, etc.)
- Columbus - A scalable framework for exploratory feature selection, implemented in R
- automl - Automated feature engineering, feature/model selection, and hyperparameter optimisation
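As a quick taste of the first library, here is a minimal sketch using auto-sklearn's AutoSklearnClassifier; the dataset and time budget are arbitrary illustrative choices.

```python
# Sketch of auto-sklearn automating algorithm and hyperparameter search.
# The 2-minute budget is only for illustration.
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import autosklearn.classification

X, y = sklearn.datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=120,   # total search budget in seconds
    per_run_time_limit=30,         # cap per candidate pipeline
)
automl.fit(X_train, y_train)

predictions = automl.predict(X_test)
print("accuracy:", sklearn.metrics.accuracy_score(y_test, predictions))
```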
We feature conferences that have core ML tracks (primarily in Europe for now) to help our community stay up to date with great upcoming events.
Technical & Scientific Conferences
- AI Conference Beijing [18/06/2019] - O'Reilly's signature applied AI conference in Asia, held in Beijing, China.
- Data Natives [21/11/2019] - Data conference in Berlin, Germany.
- ODSC Europe [19/11/2019] - The Open Data Science Conference in London, UK.
Business Conferences
- Big Data LDN 2019 [13/11/2019] - Conference for strategy and tech on big data in London, UK.
© 2018 The Institute for Ethical AI & Machine Learning