This week we continue to celebrate a major milestone towards democratising AI inference: our Vulkan Kompute project has been adopted as one of the backends for the llama.cpp and GPT4All frameworks! |
|
|
|
---|
|
|
Search Engine in 80 Lines of Python Writing a search engine in 80 lines of Python: what better way to build an intuition for how search engines work than with a hands-on exercise? This endeavour showcases 80 lines of code covering the core components of a micro search engine: 1) a crawler that leverages asynchronous programming for efficiency, 2) an inverted index for mapping keywords to documents, 3) a BM25 ranker for sorting search results, and 4) a simple user interface built with FastAPI. Despite its limitations, such as the lack of query operators and semantic search capabilities, the project serves as a practical learning tool, offering insights into search engine operations and the advantages of asynchronous code in handling I/O-bound tasks, with future plans to incorporate semantic search features. This is a great exercise for production machine learning practitioners interested in the foundational aspects and development process of a basic search engine. |
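To make the moving parts concrete, here is a minimal sketch (not the article's actual code) of two of the components described above: an inverted index mapping terms to documents, and a BM25 scorer over it. The class name `MicroIndex` and the parameters `k1` and `b` are illustrative choices of our own, not taken from the post.

```python
import math
from collections import defaultdict


class MicroIndex:
    """Toy inverted index with BM25 ranking (whitespace tokenisation only)."""

    def __init__(self, k1: float = 1.5, b: float = 0.75):
        self.k1, self.b = k1, b
        self.postings = defaultdict(set)   # term -> set of doc ids containing it
        self.doc_terms = {}                # doc id -> list of its terms

    def add(self, doc_id: str, text: str) -> None:
        terms = text.lower().split()
        self.doc_terms[doc_id] = terms
        for term in terms:
            self.postings[term].add(doc_id)

    def search(self, query: str):
        n_docs = len(self.doc_terms)
        avg_len = sum(len(t) for t in self.doc_terms.values()) / n_docs
        scores = defaultdict(float)
        for term in query.lower().split():
            docs = self.postings.get(term, set())
            if not docs:
                continue
            # Standard BM25 inverse document frequency for this term
            idf = math.log((n_docs - len(docs) + 0.5) / (len(docs) + 0.5) + 1)
            for doc_id in docs:
                tf = self.doc_terms[doc_id].count(term)
                norm = 1 - self.b + self.b * len(self.doc_terms[doc_id]) / avg_len
                scores[doc_id] += idf * tf * (self.k1 + 1) / (tf + self.k1 * norm)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


index = MicroIndex()
index.add("doc1", "the cat sat on the mat")
index.add("doc2", "dogs and cats living together")
print(index.search("cat mat"))   # doc1 ranks first
```

The article's version pairs this kind of index with an asynchronous crawler that feeds documents in, and exposes search through a FastAPI endpoint.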
|
---|
|
One Trillion Row Challenge The One Trillion Row Challenge is launched and tackled by the Python Dask team, calling for ever-faster and more optimised submissions: a recent extension of the One Billion Row Challenge, now taken to the trillion mark, designed to be a catalyst for innovation by testing and comparing the performance of big data tools at a significantly larger scale. You are tasked with writing a program that calculates the minimum, mean, and maximum temperature per weather station from a dataset of one trillion rows, stored across 100,000 Parquet files on AWS S3. This challenge is open to any tool or method, with the primary objective being to foster innovation and discussion within the data science community rather than competition. Participants are encouraged to share their solutions in the 1TRC repository, including hardware used, runtime, and a reproducible code snippet, to facilitate community learning and exploration of different big data tools and techniques. |
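For a sense of what a submission could look like, below is a hedged sketch using Dask DataFrame. The S3 path and the column names ("station", "measure") are placeholders rather than the official dataset layout, so check the 1TRC repository for the real schema and data location before running anything.

```python
import dask.dataframe as dd
from dask.distributed import Client

# Local cluster for illustration only; a real 1TRC run would need a much
# larger distributed cluster (e.g. provisioned via Coiled or Kubernetes).
client = Client()

# Placeholder path and column names -- see the 1TRC repo for the real ones.
df = dd.read_parquet(
    "s3://<1trc-bucket>/measurements-*.parquet",
    storage_options={"anon": True},
)

result = (
    df.groupby("station")["measure"]
      .agg(["min", "mean", "max"])   # per-station statistics
      .compute()
)
print(result.head())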
|
|
---|
|
Comparing LLMs to Lawyers Better Call GPT instead of your lawyer? An insightful study comparing large language models against (junior & senior) lawyers with promising results: This study explores the efficacy of Large Language Models in legal contract review, comparing them with Junior Lawyers and Legal Process Outsourcers (LPOs) across accuracy, speed, and cost-efficiency metrics. It concludes that LLMs, particularly GPT4-1106, offer comparable or superior accuracy in identifying legal issues, drastically reduce review times to seconds, and cut costs by approximately 99.97% compared to traditional methods. These findings indicate a significant shift towards LLMs in the legal sector, suggesting they could greatly enhance the efficiency and accessibility of legal services, while potentially disrupting current legal practices and employment. The paper underscores the need for further research, especially in contract negotiation, and highlights LLMs' potential to transform the legal industry fundamentally. |
|
---|
|
Infra Decisions Endorsed or Regretted Every infrastructure decision endorsed or regretted after 4 years running infrastructure at a startup: A great resource that retrospectively analyses infra choices and their impact down the line, categorising each as "Endorse 🟩" or "Regret 🟧". Endorse 🟩: AWS services, EKS for Kubernetes, and RDS for database management, for their reliability and seamless integration. Endorse 🟩: tools like GitOps, Notion, Slack, and Terraform, for enhancing operational efficiency and team collaboration. Regret 🟧: EKS managed addons, the costly AWS premium support, and not adopting an identity platform like Okta sooner. Key lessons highlight the importance of selecting scalable, flexible infrastructure tools and processes that balance cost, efficiency, and the ability to customise, underscoring the continuous evolution and learning in infrastructure management for startups. |
|
---|
|
AI Generated Calls Now Illegal The US FCC (i.e. the Federal Communications Commission) has declared AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act, granting State Attorneys General new enforcement tools against voice cloning scams: This measure addresses the rising concern over AI-enabled fraud, which can impersonate individuals for malicious purposes, by expanding legal actions against the use of such technology in unsolicited calls. The ruling is effective immediately and is part of a broader effort to combat the misuse of AI in communication technologies, with a coalition of 26 State Attorneys General supporting the move. While this represents a significant step in regulating AI-generated content, experts argue that further legislation is needed to fully tackle the dissemination of AI-manipulated media, highlighting the ongoing challenge of adapting legal frameworks to evolving technological threats. |
|
---|
|
Upcoming MLOps Events The MLOps ecosystem continues to grow at breakneck speed, making it ever harder for us as practitioners to stay up to date with relevant developments. A fantastic way to stay on top of relevant resources is through the great community and events that the MLOps and production ML ecosystem offers. This is the reason why we have started curating a list of upcoming events in the space, which are outlined below. Upcoming conferences where we're speaking: Other upcoming MLOps conferences in 2024:
In case you missed our talks:
|
|
---|
| |
Check out the fast-growing ecosystem of production ML tools & frameworks at the GitHub repository, which has reached over 10,000 ⭐ GitHub stars. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to add a PR. Four featured libraries in the GPU acceleration space are outlined below. - Kompute - Blazing fast, lightweight and mobile phone-enabled GPU compute framework optimized for advanced data processing use cases.
- CuPy - An implementation of NumPy-compatible multi-dimensional array on CUDA. CuPy consists of the core multi-dimensional array class, cupy.ndarray, and many functions on it (see the short usage sketch after this list).
- Jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
- CuDF - Built based on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.
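As a quick illustration of one of the libraries above, here is a minimal CuPy sketch showing its drop-in, NumPy-style API. It assumes a CUDA-capable GPU and a matching cupy installation; the array size is arbitrary.

```python
import cupy as cp  # assumes a CUDA-capable GPU and a matching CuPy build

x = cp.random.rand(1_000_000)   # array allocated on the GPU
y = cp.sqrt(x) + x ** 2         # element-wise kernels execute on the GPU
total = float(y.sum())          # reduction runs on the GPU, scalar copied to host
print(f"sum = {total:.2f}")
```

The same lines run under NumPy by swapping the import, which is the main appeal when porting existing array code to the GPU.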
If you know of any open source and open community projects that are not listed, do give us a heads up so we can add them! |
|
---|
| |
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI Guidelines, Principles, Ethics Frameworks, etc., however there are now so many of them that the landscape is hard to navigate. Because of this we started an Open Source initiative that aims to map the ecosystem to make it simpler to navigate. You can find multiple principles in the repo - some examples include the following: - MLSecOps Top 10 Vulnerabilities - This is an initiative that aims to further the field of machine learning security by identifying the top 10 most common vulnerabilities in the machine learning lifecycle, as well as best practices.
- AI & Machine Learning 8 principles for Responsible ML - The Institute for Ethical AI & Machine Learning has put together 8 principles for responsible machine learning that are to be adopted by individuals and delivery teams designing, building and operating machine learning systems.
- An Evaluation of Guidelines - The Ethics of Ethics; a research paper that analyses multiple ethics principles.
- ACM's Code of Ethics and Professional Conduct - This is the code of ethics that was put together in 1992 by the Association for Computing Machinery and updated in 2018.
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request!
|
|
---|
| |
| | The Institute for Ethical AI & Machine Learning is a European research centre that carries out world-class research into responsible machine learning. | | |
|
|
---|
|
|
You received this email because you are registered with The Institute for Ethical AI & Machine Learning's newsletter "The Machine Learning Engineer" |
| | |
|
|
---|
|
© 2023 The Institute for Ethical AI & Machine Learning |
|
---|
|
|
|