Microsoft on GenAI Making Us Dumb GenAI is making us dumber, and the Microsoft Research team has a survey that dives into it: whilst generative AI tools can significantly reduce the cognitive effort required for routine tasks, they simultaneously shift critical thinking toward merely verifying and integrating AI outputs. The survey shows that knowledge workers with high confidence in AI tend to engage less critically, which results in over-reliance on AI and hence a decline in independent problem-solving skills. This holds interesting lessons for all of us as production ML practitioners: we have to design GenAI systems so that they do not only automate processes, but also incorporate features like transparent feedback and query steering that keep a human in the loop.
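One way to bake that design lesson into a system is to make sure raw model output is never silently forwarded to the user. The sketch below is purely illustrative (all function names are hypothetical, and `generate` stands in for any real LLM client); it shows the shape of a human-in-the-loop wrapper, not a specific library's API.

```python
# Hypothetical sketch: wrap an LLM call so outputs are never auto-accepted.
# `generate` is a stand-in for any real GenAI client; names are illustrative.

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return f"draft answer for: {prompt}"

def generate_with_review(prompt: str, reviewer) -> str:
    """Return model output only after an explicit human decision.

    `reviewer` receives the draft and returns either approved/corrected
    text, or None to fall back to the draft -- the key point is that the
    system surfaces the draft for verification rather than hiding it.
    """
    draft = generate(prompt)
    decision = reviewer(draft)
    return decision if decision is not None else draft

# Usage: a reviewer that marks (rather than hides) the AI provenance.
approved = generate_with_review(
    "summarise Q3 churn drivers",
    reviewer=lambda draft: f"[human-reviewed] {draft}",
)
```

The design choice worth noting is that the review step is in the call path, not an optional afterthought, which is exactly the kind of friction the survey suggests preserves critical engagement.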

---

Text to SQL: The Ultimate Guide Text-to-SQL GenAI tools are helping organisations accelerate analytics productivity; however, the ecosystem is still nascent, so understanding the best tools and best practices can help speed up adoption. There are four key approaches to consider when diving into text-to-SQL GenAI tooling: 1) directly prompting large language models with full schema context, 2) leveraging retrieval-augmented generation (RAG) to filter and use only the relevant data, 3) deploying multi-agent systems for improved error recovery and query refinement, and 4) fine-tuning customized contextual LLMs for high accuracy and enterprise-grade performance. There is a great article on this which evaluates each method on cost, latency, accuracy, and data security, offering us as production ML practitioners some clear guidance.
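To make approach (2) concrete, here is a minimal sketch of RAG-style schema filtering: instead of stuffing the full schema into the prompt, only the tables relevant to the question are retrieved. The schema, table names, and keyword-matching retrieval are all toy assumptions for illustration; a production system would use embedding similarity instead.

```python
# Toy sketch of RAG-filtered text-to-SQL prompting (approach 2).
# Schema and retrieval logic are illustrative assumptions only.

SCHEMA = {
    "orders":    "orders(id, customer_id, total, created_at)",
    "customers": "customers(id, name, country)",
    "inventory": "inventory(sku, warehouse, quantity)",
}

def retrieve_tables(question: str, schema: dict) -> list:
    """Naive retrieval: keep tables whose (singularised) name appears in
    the question. A real system would rank tables by embedding similarity;
    this just shows the shape of the filtering step."""
    words = question.lower()
    return [ddl for name, ddl in schema.items() if name.rstrip("s") in words]

def build_prompt(question: str) -> str:
    # Only the retrieved subset of the schema goes into the prompt,
    # cutting token cost and reducing distractor context for the LLM.
    context = "\n".join(retrieve_tables(question, SCHEMA))
    return f"Schema:\n{context}\n\nWrite SQL for: {question}"

prompt = build_prompt("How many orders did each customer place?")
```

The resulting prompt contains only the `orders` and `customers` definitions, which is the trade-off the article's RAG option targets: lower cost and latency than full-schema prompting, at the price of a retrieval step that can miss tables.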

---

Perplexity Deep Research As OpenAI, Gemini and the rest have released Deep Research agents, Perplexity follows suit, launching its own AI-driven tool that automates in-depth research by iteratively searching, reading, and reasoning through vast amounts of data to produce comprehensive reports. It is interesting to see that the foundation LLM space is innovating at an increasing pace, yet the path is quite linear; when one player releases something, the rest follow with the same release, this latest one with the exact same name! Either way, the domain of deep research certainly holds opportunity for innovation, and it will be interesting to see how products like this change the current state of science across the board.

---

Anthropic Economic Index Anthropic has released an Economic Index based on millions of anonymized Claude conversations mapped to specific work tasks, providing insights into the impact of these tools: Anthropic published its analysis using the O*NET occupational framework via its Clio tool to quantify AI's impact on the labor market for its tooling. The study finds that AI usage is concentrated in technical fields such as engineering, and is mainly employed to augment rather than automate tasks. Approximately 36% of occupations use AI for at least 25% of their tasks, though only 4% see AI in 75% or more, with mid-to-high-wage roles like programmers and data scientists leading the trend.

---

Build Your Own <Anything> What better way to learn something than by building it from scratch! This open source repo has compiled every possible "Build your own <anything>" tutorial across every available language: it is a fantastic source of knowledge for expanding our practical skills through hands-on tutorials that walk you through building various technologies from scratch, from databases to neural networks and visual recognition systems!
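To give a flavour of what the "build your own neural network" tutorials in the list cover, here is a deliberately tiny sketch (not from the repo itself): a single neuron with one weight, trained by hand-derived gradient descent in pure Python, learning the mapping y = 2x.

```python
# Minimal "from scratch" flavour: one neuron, one weight, no libraries.
# The model is y_hat = w * x; we minimise mean squared error by hand.

def train(xs, ys, lr=0.01, epochs=200):
    w = 0.0
    for _ in range(epochs):
        # d/dw of (w*x - y)^2 is 2*(w*x - y)*x; average over the dataset.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # gradient descent step
    return w

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # converges to w ~= 2.0
```

Everything a full tutorial adds (more weights, nonlinearities, backpropagation through layers) is this same loop, scaled up.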

---

Upcoming MLOps Events The MLOps ecosystem continues to grow at break-neck speed, making it ever harder for us as practitioners to stay up to date with relevant developments. A fantastic way to keep on top of relevant resources is through the great community and events that the MLOps and Production ML ecosystem offers. This is the reason why we have started curating a list of upcoming events in the space, which are outlined below. Upcoming conferences where we're speaking: Other upcoming MLOps conferences in 2025:
In case you missed our talks:

---

Check out the fast-growing ecosystem of production ML tools & frameworks at the GitHub repository, which has reached over 10,000 ⭐ GitHub stars. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to add a PR. Four featured libraries in the GPU acceleration space are outlined below.
- Kompute - Blazing fast, lightweight and mobile-enabled GPU compute framework optimized for advanced data processing use cases.
- CuPy - An implementation of NumPy-compatible multi-dimensional array on CUDA. CuPy consists of the core multi-dimensional array class, cupy.ndarray, and many functions on it.
- JAX - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT-compile to GPU/TPU, and more.
- CuDF - Built on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.
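The NumPy-compatible API that CuPy advertises means array code can be written once and run on GPU or CPU. The snippet below sketches that drop-in pattern; it assumes at least one of CuPy or NumPy is installed, and falls back to NumPy when no CUDA/CuPy environment is available.

```python
# Drop-in GPU/CPU pattern enabled by CuPy's NumPy-compatible API.
# Assumes CuPy (GPU) or NumPy (CPU fallback) is installed.
try:
    import cupy as xp   # GPU arrays when CUDA + CuPy are available
except ImportError:
    import numpy as xp  # identical API for this code path on CPU

a = xp.arange(6, dtype=xp.float32).reshape(2, 3)
total = float(a.sum())  # CuPy returns a device scalar; float() transfers it
```

Because `xp` exposes the same functions either way, library code written against this alias needs no branching beyond the import.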
If you know of any open source and open community events that are not listed, do give us a heads up so we can add them!

---

As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI guidelines, principles, ethics frameworks, etc.; however, there are so many resources that they are hard to navigate. Because of this we started an open source initiative that aims to map the ecosystem to make it simpler to navigate. You can find multiple principles in the repo - some examples include the following:
- MLSecOps Top 10 Vulnerabilities - This is an initiative that aims to further the field of machine learning security by identifying the top 10 most common vulnerabilities in the machine learning lifecycle, as well as best practices.
- AI & Machine Learning 8 principles for Responsible ML - The Institute for Ethical AI & Machine Learning has put together 8 principles for responsible machine learning that are to be adopted by individuals and delivery teams designing, building and operating machine learning systems.
- An Evaluation of Guidelines - "The Ethics of Ethics", a research paper that analyses multiple ethics principles.
- ACM's Code of Ethics and Professional Conduct - This is the code of ethics that was put together in 1992 by the Association for Computing Machinery and updated in 2018.
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request!

---

The Institute for Ethical AI & Machine Learning is a European research centre that carries out world-class research into responsible machine learning.

---

You received this email because you are registered with The Institute for Ethical AI & Machine Learning's newsletter "The Machine Learning Engineer".

---

© 2023 The Institute for Ethical AI & Machine Learning