Survey on Top ML Challenges
Thrilled that our survey is featured in TheNewStack, which dives into the state of production ML and uncovers key insights across challenges, tech stacks, trends, and demographics. There are actionable insights for practitioners, such as the observability and monitoring challenges in production ML, as well as the operational challenges that come with scaling applications toward robust Day 1 and Day 2 practices. We also continue to see key trends in MLOps, such as teams favoring custom-built solutions over vendor tools across their tech stacks, and products such as MLflow leading in model tracking and Airflow in workflow orchestration. Check it out for a refresher on the state of production ML!
---
Understanding ML: From Theory to Algorithms
This is a fantastic (free) 450-page book on machine learning theory and practice covering the foundations of the domain. It offers a deep dive into foundational topics such as the bias-variance tradeoff, VC-dimension, and PAC learning, extending to core concepts such as convex optimization, generalization bounds, and much more. Whether you are a seasoned practitioner or an interested enthusiast, this is a great resource for the algorithmic paradigms that form the backbone of the field, such as stochastic gradient descent, boosting, support vector machines, and kernel methods, while also covering essential topics like model selection, regularization, and validation techniques.
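To make one of those paradigms concrete, here is a minimal, illustrative sketch of stochastic gradient descent fitting a one-variable linear model (this is our own toy example, not code from the book; all names and hyperparameters are illustrative):

```python
import random

def sgd_linear_regression(data, lr=0.01, epochs=100, seed=0):
    """Fit y ≈ w*x + b by stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)  # visit samples in a random order each epoch
        for x, y in data:
            err = (w * x + b) - y  # prediction error for one sample
            w -= lr * 2 * err * x  # gradient of err**2 w.r.t. w
            b -= lr * 2 * err      # gradient of err**2 w.r.t. b
    return w, b

# Noise-free samples from y = 2x + 1; SGD should recover w ≈ 2, b ≈ 1.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
w, b = sgd_linear_regression(data, lr=0.1, epochs=500)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The key idea the book formalizes is visible here: each update uses the gradient of the loss on a single sample rather than the full dataset, trading per-step accuracy for much cheaper iterations.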
---
The 13 Software Laws to Live By
The 13 laws of tech come up more often than you may think, so it's definitely worth a quick refresher:
1. Parkinson's law: Work expands to fill the available time.
2. Hofstadter's law: It always takes longer than you expect, even when you take into account Hofstadter's law.
3. Brooks' law: Adding manpower to a late software project makes it later.
4. Conway's law: Organizations produce designs which are copies of the communication structures of those organizations.
5. Cunningham's law: The best way to get the right answer on the internet is to post the wrong answer.
6. Sturgeon's law: 90% of everything is crap.
7. Zawinski's law: Every program attempts to expand until it can read mail; those which cannot so expand are replaced by ones which can.
8. Hyrum's law: With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.
9. Price's law: 50% of the work is done by the square root of the total number of people.
10. Ringelmann effect: The tendency for individual members of a group to become increasingly less productive as the size of their group increases.
11. Goodhart's law: When a measure becomes a target, it ceases to be a good measure.
12. Gilb's law: Anything you need to quantify can be measured in some way that is superior to not measuring it at all.
13. Murphy's law: Anything that can go wrong will go wrong.
This is a great compilation of the laws of software, as they truly do appear more often than one would like on a day-to-day basis.
---
Meta Released Llama 4
Meta has released Llama 4! It is quite exciting to see the continuous contribution to the ML community, particularly with the increasing competition (e.g. from China). This release comes with two models with 17B active parameters that leverage a mixture-of-experts (MoE) architecture: Llama 4 Scout uses 16 experts, fits on a single NVIDIA H100 GPU (efficiency seems to be a growing trend), and supports a huge 10M-token context window. Llama 4 Maverick uses 128 experts and claims to beat GPT-4.5 across reasoning, coding, and visual benchmarks; however, we don't see comparisons to recent Chinese models such as Tencent's and DeepSeek's. It is also interesting to see the adoption of safety and bias mitigation strategies, and it will be interesting to see what the community is able to build from these.
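For readers less familiar with mixture-of-experts, here is a minimal sketch of sparse top-k routing (this is a generic toy illustration in NumPy, not Meta's implementation; all names, shapes, and the gating scheme are assumptions). The point is that the router selects only a few experts per token, which is how such models keep only a fraction of their parameters active per token:

```python
import numpy as np

def moe_layer(x, gate_w, expert_ws, top_k=1):
    """Toy sparse mixture-of-experts forward pass: route each token to its
    top_k experts and combine their outputs, weighted by the gate."""
    logits = x @ gate_w  # (tokens, n_experts) router scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # chosen expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        weights = np.exp(chosen - chosen.max())
        weights /= weights.sum()  # softmax over the selected experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ expert_ws[e])  # only top_k experts run
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 5
x = rng.normal(size=(tokens, d))
gate_w = rng.normal(size=(d, n_experts))
expert_ws = rng.normal(size=(n_experts, d, d))
y = moe_layer(x, gate_w, expert_ws, top_k=1)
print(y.shape)  # → (5, 8)
```

With `top_k=1` each token runs exactly one expert, so compute per token stays constant even as the total number of experts (and parameters) grows.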
---
Reinforcement Learning From Scratch
What better way to dive into the field of reinforcement learning than by building some of its core foundational concepts yourself? This is a great resource that puts together approachable tutorials across reinforcement learning by building core components from scratch in Python. It is targeted at ML practitioners, but should be approachable by anyone interested in learning more about this important field (which is also powering some of the most innovative GenAI models). The repo is set up as detailed Jupyter notebooks covering everything from basic exploration and tabular methods (like Q-Learning and SARSA) to advanced techniques (such as PPO, DDPG, and multi-agent algorithms).
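As a taste of the tabular methods the notebooks start with, here is a minimal from-scratch sketch of Q-learning on a tiny one-dimensional chain environment (our own toy example, not taken from the repo; environment and hyperparameters are illustrative):

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D chain: start at state 0, actions are
    move left (0) / move right (1); reaching the last state gives reward 1."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection (ties broken toward action 1)
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the best next-state action
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
greedy = [1 if q[s][1] >= q[s][0] else 0 for s in range(4)]
print(greedy)  # → [1, 1, 1, 1]: the learned policy always moves toward the goal
```

SARSA differs from this in one line only: it bootstraps from the action actually taken next rather than `max(q[s2])`, which is exactly the kind of comparison the notebooks walk through.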
---
Upcoming MLOps Events
The MLOps ecosystem continues to grow at breakneck speed, making it ever harder for us as practitioners to stay up to date with relevant developments. A fantastic way to keep on top of relevant resources is through the great community and events that the MLOps and production ML ecosystem offers. This is why we have started curating a list of upcoming events in the space, outlined below. Upcoming conferences where we're speaking: Other upcoming MLOps conferences in 2025:
In case you missed our talks:
---
Check out the fast-growing ecosystem of production ML tools & frameworks at the GitHub repository, which has reached over 10,000 ⭐ GitHub stars. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to add a PR. Four featured libraries in the GPU acceleration space are outlined below.
- Kompute - Blazing fast, lightweight, and mobile-enabled GPU compute framework optimized for advanced data processing use cases.
- CuPy - A NumPy-compatible multi-dimensional array implementation on CUDA. CuPy consists of the core multi-dimensional array class, cupy.ndarray, and many functions on it.
- JAX - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more.
- cuDF - Built on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.
If you know of any open source and open community tools or frameworks that are not listed, do give us a heads up so we can add them!
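To illustrate the NumPy-compatible API that makes CuPy attractive, here is a small sketch written purely against the NumPy interface (run with NumPy here, since CuPy requires a CUDA GPU; on a GPU machine the same function works unchanged on a `cupy.ndarray`):

```python
import numpy as np  # on a CUDA machine, `import cupy as np` is a near drop-in swap

def column_standardize(a):
    """Standardize each column to zero mean and unit variance.
    Written against the NumPy array API, so the same code can run
    on the GPU via CuPy's cupy.ndarray without modification."""
    return (a - a.mean(axis=0)) / a.std(axis=0)

a = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
z = column_standardize(a)
print(z.mean(axis=0))  # → [0. 0.]
```

This API compatibility is the main design choice behind CuPy: existing NumPy code paths can be moved to the GPU largely by changing the import, rather than rewriting kernels.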
---
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI guidelines, principles, ethics frameworks, etc.; however, there are so many resources that they are hard to navigate. Because of this we started an open source initiative that aims to map the ecosystem to make it simpler to navigate. You can find multiple principles in the repo - some examples include the following:
- MLSecOps Top 10 Vulnerabilities - An initiative that aims to further the field of machine learning security by identifying the top 10 most common vulnerabilities in the machine learning lifecycle, as well as best practices.
- AI & Machine Learning: 8 principles for Responsible ML - The Institute for Ethical AI & Machine Learning has put together 8 principles for responsible machine learning, to be adopted by individuals and delivery teams designing, building, and operating machine learning systems.
- An Evaluation of Guidelines - The Ethics of Ethics; a research paper that analyses multiple ethics principles.
- ACM's Code of Ethics and Professional Conduct - The code of ethics put together in 1992 by the Association for Computing Machinery and updated in 2018.
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request!
---
The Institute for Ethical AI & Machine Learning is a European research centre that carries out world-class research into responsible machine learning.
---
You received this email because you are registered with The Institute for Ethical AI & Machine Learning's newsletter "The Machine Learning Engineer".
---
© 2023 The Institute for Ethical AI & Machine Learning