The State of MLOps 2025 Survey 🔥 Did you know that last year fewer than 50% of practitioners had monitoring in place for their production machine learning? We are capturing insights in this year's MLOps Survey, which we will be able to share very soon!! We still need your support to keep collecting diverse perspectives and map the ecosystem! Please help us with your response, and by sharing with your colleagues 🚀🚀🚀 If you have a few minutes, your contribution will make a significant difference to the whole production ML ecosystem 🥳 The results will be open sourced like last year!! You can add your response directly at: https://bit.ly/state-of-ml-2025 🔥
---
Anthropic Outage Post-Mortem Anthropic has published a transparent post-mortem of their recent outages, which provides quite an interesting perspective on the challenges a top AI lab faces in massive-scale production MLOps: subtle infrastructure bugs can go unnoticed, degrade model quality, and result in huge user/customer impact. Between August and September, Anthropic traced intermittent quality regressions in Claude to what they describe as three overlapping infrastructure bugs: 1) a routing error that misdirected short-context requests to 1M-token servers, 2) a TPU misconfiguration that corrupted token outputs, and 3) an XLA-TPU precision bug that broke their approximate top-k sampling. These issues were compounded by load balancing changes and proved hard to diagnose due to platform heterogeneity, noisy evaluations, and privacy limits on inspecting user data. This really does hit home, as it reminds us ML Engineering practitioners of the importance of production-grounded evaluations, strong monitoring, and the right privacy-preserving processes & tooling for efficient debugging.
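For readers unfamiliar with the sampling step that one of those bugs broke, here is a minimal sketch of (exact) top-k sampling in plain Python. This is an illustration of the general technique, not Anthropic's implementation: accelerated serving stacks typically use an approximate top-k kernel for speed, which is where the precision bug described in the post-mortem surfaced.

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Keep only the k highest logits, apply a numerically stable softmax
    over them, and sample one token index. Exact top-k for illustration;
    production kernels often approximate this step on accelerators."""
    # Indices of the k largest logits
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in top)                    # subtract max for stability
    weights = [math.exp(logits[i] - m) for i in top]   # unnormalized softmax
    return rng.choices(top, weights=weights, k=1)[0]

# Example: with k=2, only the two most likely tokens can ever be sampled
logits = [0.1, 5.0, 4.2, -2.0]
token = top_k_sample(logits, k=2)
print(token)  # always 1 or 2
```

The appeal of top-k is that it truncates the long tail of unlikely tokens before sampling, which is also why a subtle numerical error in the truncation can silently change which tokens a model is even allowed to emit.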
---
Harvard on LLM National Bias Harvard researchers published in 2023 a still highly relevant study that exposes the large (and growing) gaps in existing LLMs when it comes to cultural and geographical diversity. Unsurprisingly, several of the most popular LLMs reflect the distributions of specific demographics (i.e. western, industrialized, etc.) due to the inherent bias in their training data and alignment processes. The study uses the World Values Survey and cognitive tasks to show that LLM responses most closely resemble U.S. and Northern European populations and diverge sharply (r = –0.70) as cultural distance increases. This skew appears in values, politics, thinking styles, and assumptions about self-concept, meaning that LLMs systematically misrepresent the psychological diversity of most of humanity - this becomes increasingly important as these tools are used in critical contexts (and as further outages / learnings arise).
---
The State of Devs in 2025 This is a fantastic survey that brings a new "personal" lens to the state of developers in 2025, with insights on career mobility, mental health, hobbies, and other topics that have not come up in previous surveys. The results cover responses from more than 8,000 developers and span demographics, career, workplace, technology, health, worldview and hobbies. Some expected insights include a still male-dominated workforce, active career mobility, education-linked income gaps, and a strong preference for remote/hybrid setups. Career mobility and higher education seem to correlate with income, though burnout, poor management, discrimination, and health issues (notably poor sleep, mental health, and back pain) are also widely reported concerns. It is also worth highlighting that the site is organised in quite a neat structure; the color schemes may make it a bit harder to read at times, but certainly allow for interesting deep dives.
---
Build Containers from Scratch This is a classic, and hands down one of the best tech talks available online: it is basically an opportunity to build a container from scratch, and a highly recommended watch! As a brief high-level overview, Liz Rice (Isovalent's Chief OSS Officer) shows the simplicity of containers by building one from scratch, hands-on live on stage, in a way that is super simple to follow. This is definitely one of the best explain-like-I'm-5 sessions to break down the world of containers, so for any ML practitioner that has not yet watched it, do check it out!
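The core idea the talk demonstrates (in Go) is that a "container" is just an ordinary process with extra isolation applied before exec. A very rough Python sketch of that fork/isolate/exec skeleton, under the assumption of a Linux host; the privileged pieces (chroot, and in a real runtime namespace unsharing and cgroup limits) are guarded so the sketch still runs unprivileged:

```python
import os

def run_contained(cmd, new_root=None):
    """Fork a child, optionally chroot it for filesystem isolation, then exec
    the command. A real container runtime would also unshare namespaces
    (PID, UTS, mount, network) and apply cgroup limits -- those, like chroot,
    require root privileges, so they are guarded or omitted here."""
    pid = os.fork()
    if pid == 0:  # child process
        if new_root and os.geteuid() == 0:
            os.chroot(new_root)  # jail the child's filesystem view (root only)
            os.chdir("/")
        os.execvp(cmd[0], cmd)   # replace the child with the requested command
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

# Run a command in the forked child; returns its exit code (0 on success)
exit_code = run_contained(["echo", "hello from the 'container'"])
print(exit_code)
```

The talk layers the real isolation primitives on top of exactly this skeleton, which is why it is such an effective demystifier: nothing magical, just syscalls.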
---
Upcoming MLOps Events The MLOps ecosystem continues to grow at break-neck speed, making it ever harder for us as practitioners to stay up to date with relevant developments. A fantastic way to stay on top of relevant resources is through the great community and events that the MLOps and Production ML ecosystem offers. This is the reason why we have started curating a list of upcoming events in the space, which are outlined below. Upcoming conferences where we're speaking: Other upcoming MLOps conferences in 2025:
In case you missed our talks:
---
Check out the fast-growing ecosystem of production ML tools & frameworks at the github repository, which has reached over 10,000 ⭐ github stars. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to add a PR. Four featured libraries in the GPU acceleration space are outlined below.
- Kompute - Blazing fast, lightweight and mobile phone-enabled GPU compute framework optimized for advanced data processing use cases.
- CuPy - An implementation of NumPy-compatible multi-dimensional array on CUDA. CuPy consists of the core multi-dimensional array class, cupy.ndarray, and many functions on it.
- Jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
- CuDF - Built based on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.
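A quick illustration of why CuPy's NumPy compatibility matters in practice: the same array code can run on the GPU when CuPy (and a CUDA device) is available, and fall back to NumPy on CPU otherwise. The try/except import is a common pattern, sketched here under the assumption that at least NumPy is installed:

```python
# CuPy mirrors the NumPy API, so one alias can cover both backends.
try:
    import cupy as xp   # GPU-backed arrays (requires a CUDA device)
except ImportError:
    import numpy as xp  # CPU fallback with the same API

a = xp.arange(6, dtype=xp.float32).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]
col_sums = a.sum(axis=0)
print(col_sums)  # [3. 5. 7.]
```

The same backend-agnostic style carries over to JAX and cuDF to varying degrees, which is a large part of why these libraries have been adopted so widely: existing NumPy/pandas code ports with minimal changes.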
If you know of any open source and open community events that are not listed do give us a heads up so we can add them!
---
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI Guidelines, Principles, Ethics Frameworks, etc; however, there are so many resources that they are hard to navigate. Because of this we started an Open Source initiative that aims to map the ecosystem to make it simpler to navigate. You can find multiple principles in the repo - some examples include the following:
- MLSecOps Top 10 Vulnerabilities - This is an initiative that aims to further the field of machine learning security by identifying the top 10 most common vulnerabilities in the machine learning lifecycle as well as best practices.
- AI & Machine Learning 8 principles for Responsible ML - The Institute for Ethical AI & Machine Learning has put together 8 principles for responsible machine learning that are to be adopted by individuals and delivery teams designing, building and operating machine learning systems.
- An Evaluation of Guidelines - The Ethics of Ethics; A research paper that analyses multiple Ethics principles.
- ACM's Code of Ethics and Professional Conduct - This is the code of ethics first published in 1992 by the Association for Computing Machinery and updated in 2018.
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request!
---
The Institute for Ethical AI & Machine Learning is a European research centre that carries out world-class research into responsible machine learning.
---
You received this email because you are registered with The Institute for Ethical AI & Machine Learning's newsletter "The Machine Learning Engineer".
---
© 2023 The Institute for Ethical AI & Machine Learning