UK AI Policy Adopts Proposals We are very excited to announce that the UK Government has adopted 13 of the 14 recommendations we made for the 2023 UK AI Regulation proposal! It feels surreal to see tangible positive change in such important policy documents that further the global AI ecosystem 🚀🚀🚀 It was an honour to lead this publication, and to collaborate with renowned academics and industry thought leaders from across both Europe and the US.
---
Alibaba's Realistic AI Video Alibaba has released a mind-blowing new ML architecture to generate realistic videos from still images using Audio2Video Diffusion: the framework, named EMO, supports generation of realistic and expressive "talking head" videos directly from audio cues, without relying on intermediate 3D models or facial landmarks. The methodology uses "FrameEncoding" to preserve character identity. The model was trained on a diverse audio-video dataset comprising over 250 hours of footage, and the results demonstrate that EMO surpasses current state-of-the-art methods in terms of realism and expressiveness.
---
Don't Mock ML (in Unit Tests) Don't mock machine learning (in unit tests): A great piece by Eugene Yan which showcases the unique challenges ML practitioners face when unit testing ML code, as opposed to traditional software development. In ML the logic may be neither static nor deterministic; instead we face dynamic entities that learn from data. To tackle these challenges, we can use simple data samples, test against models with random or empty weights, and write critical tests against actual models while avoiding testing external libraries. Another great resource from Eugene Yan, providing practical advice for practitioners that accommodates the complexities of ML code and models.
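The "random weights instead of mocks" idea can be sketched in a few lines. This is a minimal illustration in our own words, not code from the article; the model class and test names are hypothetical:

```python
import numpy as np

class TinyLinearModel:
    """Illustrative stand-in for a trained model: y = Wx + b."""
    def __init__(self, n_features, n_outputs, seed=None):
        rng = np.random.default_rng(seed)
        # Random, untrained weights are enough to test the plumbing.
        self.W = rng.normal(size=(n_outputs, n_features))
        self.b = np.zeros(n_outputs)

    def predict(self, x):
        return self.W @ x + self.b

# Exercise the real predict() path with tiny fixed samples,
# rather than mocking the model out entirely.
def test_output_shape():
    model = TinyLinearModel(n_features=4, n_outputs=2, seed=0)
    assert model.predict(np.ones(4)).shape == (2,)

def test_prediction_is_finite():
    model = TinyLinearModel(n_features=4, n_outputs=2, seed=0)
    assert np.all(np.isfinite(model.predict(np.zeros(4))))
```

Because the weights are random rather than trained, these tests run in milliseconds yet still catch shape bugs and NaN propagation in the real code path.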
---
Amazon Billion Param TTS ML Amazon releases an indistinguishable-from-human text-to-speech large AI model trained on 100k hours of public domain speech data: BASE TTS represents a significant advancement in text-to-speech technology, being the largest TTS model to date with 1 billion parameters, trained on 100,000 hours of speech data. The architecture introduces a novel approach to TTS, utilizing autoregressive Transformers to convert text into discrete speech codes, which are then turned into waveforms by a convolution-based decoder, allowing for incremental, streamable speech synthesis. The model showcases emergent abilities for handling complex sentences with natural prosody as the dataset and model size increase, a phenomenon observed in large language models but relatively unexplored in TTS. BASE TTS employs a unique speech tokenization technique that disentangles speaker ID and compresses speech data using byte-pair encoding, significantly enhancing speech naturalness and efficiency compared to existing large-scale TTS systems like YourTTS, Bark, and TortoiseTTS.
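The two-stage shape of that pipeline can be sketched as below. This is a purely illustrative toy, not BASE TTS itself: the codebook size, frame length, and both function bodies are our own stand-ins for the autoregressive Transformer and the convolutional decoder.

```python
import numpy as np

CODEBOOK_SIZE = 256   # illustrative value, not the paper's
FRAME_SAMPLES = 80    # waveform samples produced per speech code

def generate_speech_codes(n_codes, seed=0):
    """Stand-in for stage 1 (autoregressive Transformer): emit discrete
    speech codes one at a time, each conditioned on the previous one."""
    rng = np.random.default_rng(seed)
    codes, prev = [], 0
    for _ in range(n_codes):
        prev = (prev + rng.integers(1, CODEBOOK_SIZE)) % CODEBOOK_SIZE
        codes.append(prev)
    return np.array(codes)

def conv_decode(codes, seed=0):
    """Stand-in for stage 2 (convolution-based decoder): embed each code
    into a fixed-length frame, then smooth with a 1-D convolution."""
    rng = np.random.default_rng(seed)
    embed = rng.normal(size=(CODEBOOK_SIZE, FRAME_SAMPLES))  # code -> frame
    kernel = np.array([0.25, 0.5, 0.25])                     # smoothing filter
    waveform = embed[codes].reshape(-1)                      # concat frames
    return np.convolve(waveform, kernel, mode="same")

codes = generate_speech_codes(10)
audio = conv_decode(codes)   # 10 codes -> 10 * FRAME_SAMPLES samples
```

The design point the sketch illustrates: because each discrete code decodes to a fixed-length chunk of waveform, audio can be produced incrementally as codes arrive, which is what makes the synthesis streamable.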
---
Karpathy Tutorial on GPT Tokens A significant number of the limitations we face in LLMs can arise from the tokenization process - Andrej Karpathy has put together a fantastic hands-on tutorial to build the intuition required on tokenization: a great hands-on tutorial for machine learning practitioners, focusing on developing a tokenizer for Large Language Models (LLMs). Karpathy outlines the tokenizer's role in converting strings to tokens and vice versa, using Byte Pair Encoding for its training. He highlights the impact of tokenization on LLM performance, including issues related to language handling, whitespace management, and the potential improvements in newer GPT versions. The video also includes practical coding examples, discussions on Unicode encodings, and insights into the tokenizer's influence on model behaviours.
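The core Byte Pair Encoding training loop the tutorial builds up can be sketched in a few lines. This is a generic illustration of the algorithm, not Karpathy's exact code:

```python
from collections import Counter

def most_common_pair(ids):
    """Count adjacent token pairs and return the most frequent one."""
    pairs = Counter(zip(ids, ids[1:]))
    return max(pairs, key=pairs.get)

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    """BPE training: start from raw UTF-8 bytes, then repeatedly merge
    the most frequent adjacent pair into a new token id (>= 256)."""
    ids = list(text.encode("utf-8"))
    merges = {}
    for new_id in range(256, 256 + num_merges):
        pair = most_common_pair(ids)
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return ids, merges
```

Starting from bytes means the base vocabulary is always the 256 byte values, so any Unicode string can be tokenized; the learned `merges` table is what distinguishes one tokenizer from another.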
---
Upcoming MLOps Events The MLOps ecosystem continues to grow at break-neck speed, making it ever harder for us as practitioners to stay up to date with relevant developments. A fantastic way to stay on top of relevant resources is through the great community and events that the MLOps and Production ML ecosystem offers. This is the reason why we have started curating a list of upcoming events in the space, which are outlined below.
Upcoming conferences where we're speaking:
Other upcoming MLOps conferences in 2024:
In case you missed our talks:
---
Check out the fast-growing ecosystem of production ML tools & frameworks at the GitHub repository, which has reached over 10,000 GitHub stars ⭐. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to add a PR. Four featured libraries in the GPU acceleration space are outlined below.
- Kompute - Blazing fast, lightweight and mobile-enabled GPU compute framework optimized for advanced data processing use cases.
- CuPy - An implementation of a NumPy-compatible multi-dimensional array on CUDA. CuPy consists of the core multi-dimensional array class, cupy.ndarray, and many functions on it.
- JAX - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more.
- cuDF - Built on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.
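As a quick illustration of what "NumPy-compatible" means in practice for CuPy: the snippet below runs on CPU with NumPy, and (assuming a CUDA GPU and `cupy` installed) the same code runs on the GPU by swapping the import line.

```python
import numpy as np  # swap for `import cupy as np` to run on a CUDA GPU

# CuPy mirrors the NumPy API, so array code like this is
# largely backend-agnostic.
x = np.arange(1_000_000, dtype=np.float32).reshape(1000, 1000)
col_means = x.mean(axis=0)   # per-column reduction
normalised = x - col_means   # broadcasting works identically
print(normalised.mean())     # close to 0 after centring
```

The drop-in compatibility is the design point: you can prototype on CPU with NumPy and move the same array code to GPU without rewriting it.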
If you know of any open source and open community projects that are not listed, do give us a heads-up so we can add them!
---
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI Guidelines, Principles, Ethics Frameworks, etc. However, there are so many resources that they are hard to navigate. Because of this we started an Open Source initiative that aims to map the ecosystem to make it simpler to navigate. You can find multiple principles in the repo - some examples include the following:
- MLSecOps Top 10 Vulnerabilities - This is an initiative that aims to further the field of machine learning security by identifying the top 10 most common vulnerabilities in the machine learning lifecycle, as well as best practices.
- AI & Machine Learning 8 principles for Responsible ML - The Institute for Ethical AI & Machine Learning has put together 8 principles for responsible machine learning that are to be adopted by individuals and delivery teams designing, building and operating machine learning systems.
- An Evaluation of Guidelines - The Ethics of Ethics: a research paper that analyses multiple ethics guidelines.
- ACM's Code of Ethics and Professional Conduct - This is the code of ethics that was put together in 1992 by the Association for Computing Machinery and updated in 2018.
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request!
---
The Institute for Ethical AI & Machine Learning is a European research centre that carries out world-class research into responsible machine learning.
---
You received this email because you are registered with The Institute for Ethical AI & Machine Learning's newsletter "The Machine Learning Engineer".
---
© 2023 The Institute for Ethical AI & Machine Learning