Support the ML Engineer!
If you would like to suggest articles, ideas, tutorials, or libraries, or to provide feedback, just hit reply or send us an email at a@ethical.institute!
Kafka is a great streaming platform we have been advocating for a few months now. Kai Wähner has put together a great blog post and video talk on "how to build ML models at extreme scale and productionize the built models in mission-critical real-time apps by leveraging open-source components in the public cloud".
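To give a flavour of the pattern, below is a minimal sketch of streaming inference with Kafka: a consumer reads events, scores them with a pre-trained model and publishes the predictions back to Kafka. The broker address, topic names, message schema and model file are illustrative assumptions, not taken from the post.

```python
# Minimal streaming-inference sketch with kafka-python.
# Assumes a local broker, hypothetical topics "transactions" and
# "predictions", and a scikit-learn model serialised to model.pkl.
import json
import pickle

from kafka import KafkaConsumer, KafkaProducer

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    features = message.value["features"]           # e.g. a list of floats
    score = model.predict_proba([features])[0][1]  # probability of class 1
    producer.send("predictions", {"id": message.value["id"], "score": float(score)})
```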
One of the best insights on practical approaches towards explainable machine learning comes from Mark Hammond, founder of Bons.AI and current Director of Business AI at Microsoft. Mark outlines three key categories that tackle the question of explainability: Deep Explanation, Model Induction and Machine Teaching. The last one is key, namely combining machine learning with subject matter expertise (aka Principle #3, "Explainability by Justification", in our Principles for Responsible ML 🤖).
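To make Model Induction concrete, here is a minimal sketch of the idea with scikit-learn: a shallow decision tree is fitted to a black-box model's predictions rather than to the ground-truth labels, so the resulting rules describe the model's behaviour in human-readable form. The dataset and model choices are illustrative, not from the talk.

```python
# Model-induction sketch: approximate a black box with an
# interpretable surrogate. Dataset and models are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a shallow tree on the black box's *predictions*, not the labels,
# so the tree explains the model rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=list(data.feature_names)))
```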
A very comprehensive blog post that provides a concise overview of all things computer vision, including: classification, object detection, segmentation, pose estimation, action recognition, and enhancement and restoration.
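As a taste of the first task in that list, here is a minimal image-classification sketch assuming torchvision's pretrained ResNet-18 and a hypothetical local image file; the post itself is framework-agnostic.

```python
# Classify one image with a pretrained ImageNet model (torchvision).
# "cat.jpg" is a placeholder for any local image file.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    logits = model(image)
print(logits.argmax(dim=1).item())  # index of the predicted ImageNet class
```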
We are thrilled to announce that the European Commission has launched the draft of the AI Ethics Guidelines produced by its High-Level Expert Group on Artificial Intelligence (AI HLEG), of which The Institute for Ethical AI & Machine Learning is currently part. If you have time, do have a look and provide feedback through the European AI Alliance portal.
MLOps = ML Operations
We have added a new section to the Machine Learning Operations list, specifically around explainability of predictions (aka Principle #3), and we are very excited about this addition. The machine learning explainability libraries we're showcasing this week are listed below, followed by a short usage sketch:
- SHAP - SHapley Additive exPlanations is a unified approach to explain the output of any machine learning model.
- LIME - Local Interpretable Model-agnostic Explanations for machine learning models.
- ELI5 - "Explain Like I'm 5" is a Python package which helps to debug machine learning classifiers and explain their predictions.
- TensorBoard's What-If Tool - a TensorBoard dashboard to analyse the interactions between inference results and data inputs.
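To give a flavour of these tools in practice, below is a minimal SHAP sketch on a tree ensemble. The dataset and model are illustrative choices, but TreeExplainer, shap_values and summary_plot are part of SHAP's documented API.

```python
# SHAP sketch: per-feature contributions for a tree ensemble.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# TreeExplainer computes exact SHAP values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# Global view of which features drive the model's predictions.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```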