Development Update – 4th January 2024

LLM

Over Christmas and the New Year we’ve been doing a general sweep-up of all those little tasks that accumulated on the list over the year. Our focus remains on how we can use AI to improve our grant results and tooling. We’re now testing code that uses RAG (Retrieval-Augmented Generation) to improve the quality of responses by grounding the LLM (Large Language Model) in external sources of knowledge. This is how you can add your own data to an existing model. The good thing about this approach is that it can all be done locally, without sending your data to a third party, and it can be applied to any source of data. We’ll be expanding our work on grants AI into a general solution so you can talk to your own data using AI!

If you want to read more about RAG then here’s a good overview: https://aws.amazon.com/what-is/retrieval-augmented-generation/
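To give a flavour of the idea, here’s a minimal sketch of the retrieval step in RAG: find the documents most relevant to a question, then prepend them to the prompt so the model answers from your data. Everything here is illustrative (the toy keyword scoring, the sample knowledge base); a real pipeline would use embeddings, a vector store, and a locally hosted LLM to complete the augmented prompt.

```python
# Illustrative sketch of RAG retrieval and prompt augmentation.
# A real system would replace the naive scoring with embedding similarity
# and send the final prompt to a local LLM.

def score(query: str, doc: str) -> int:
    """Naive relevance score: how many query words appear in the document."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in set(query.lower().split()) if w in doc_words)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model by prepending retrieved context to the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Use only the context below to answer.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical local knowledge base, e.g. your own grant documents.
knowledge = [
    "Grant applications close on 31 March each year.",
    "The maximum award for a development grant is 50,000.",
    "Office hours are Monday to Friday, nine to five.",
]

prompt = build_prompt("When do grant applications close?", knowledge)
print(prompt)
```

The key property, as the update notes, is that nothing leaves your machine: the retrieval runs over local documents and the augmented prompt can be handed to a locally running model.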