Hyper-personalization in LLMs

What’s next in the LLM revolution?

Tags: complexity, systems, LLM

Published: March 19, 2023

As of early 2023, most of us are still grappling with the opportunities, challenges, and unforeseen implications of Large Language Models (LLMs), such as GPT. GPT-4 is several days old, Microsoft has just announced Copilot, its "co-pilot for work" integrated across Office 365, and Khan Academy has announced a GPT-4-powered tutoring integration. And this is just the tip of the iceberg.

As a thought exercise, I tried to identify a potential next frontier. The answer I arrived at is privacy-preserving hyper-personalization:

What data do LLMs currently not have (for good reason)? Your genome, your medical history, and your personal data (texts, voice recordings, etc.), to name a few. Yet the benefits of putting a natural language interface over these data would be huge. Imagine you are planning a new jogging route. It is safe to assume that, from now on, you will describe what you want in natural language, and a service such as Bing will come up with an optimal route. Unfortunately, that route would not take your current medical status into account. Here, a personalized layer on top of the LLM could warn you against selecting it. Another potential use case is personal knowledge management: you would like to converse with your past notes to understand how you arrived at specific ideas, or to filter out information.
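To make the "personalized layer" idea concrete, here is a minimal sketch of what such a layer might look like for the jogging example: the LLM service proposes routes, and a small check running entirely on the user's device vets each one against a private health profile that never leaves the device. All names (`HealthProfile`, `Route`, `screen_route`) and the specific constraints are hypothetical illustrations, not a real API.

```python
from dataclasses import dataclass


@dataclass
class HealthProfile:
    """Private medical constraints, stored and evaluated only on-device (hypothetical)."""
    max_distance_km: float
    max_elevation_gain_m: float


@dataclass
class Route:
    """A route suggestion returned by a remote LLM-backed service (hypothetical)."""
    name: str
    distance_km: float
    elevation_gain_m: float


def screen_route(route: Route, profile: HealthProfile) -> tuple[bool, str]:
    """Run locally: decide whether an LLM-suggested route fits the user's profile.

    The remote service never sees the profile; it only ever receives the
    natural-language request, while this filter applies the private constraints.
    """
    if route.distance_km > profile.max_distance_km:
        return False, f"{route.name}: distance exceeds your current limit"
    if route.elevation_gain_m > profile.max_elevation_gain_m:
        return False, f"{route.name}: elevation gain exceeds your current limit"
    return True, f"{route.name}: within your limits"
```

A usage example: with `HealthProfile(max_distance_km=8.0, max_elevation_gain_m=150.0)`, a flat 5 km loop passes the screen, while a 6 km route with 300 m of climbing is flagged, even though both might look equally good to the remote service.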

Those use cases seem inevitable, but before they can be brought to life safely and ethically, the most critical challenge must be addressed: how do we keep the data and the personalized model inference secure, solely the individual user's property, and under their complete control? Computation and inference should happen on the edge, but what about the data? Once those challenges are solved, new use cases will emerge. The dream of personalized medicine could be within reach, since end users would be able (if willing) to grant selective access to their medical history or genome, and perhaps even monetize the data or donate it to research.
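The "selective access" part can be sketched as a user-controlled consent ledger: each grant names a party, a data category, and an expiry, and the user can revoke it at any time. This is only an illustration of the control model, not a security design; the names (`Grant`, `ConsentLedger`) and categories are hypothetical, and a real system would also need authentication, auditing, and enforcement at the data store.

```python
import time
from dataclasses import dataclass, field


@dataclass
class Grant:
    """One permission: a grantee may read one data category until expiry (hypothetical)."""
    grantee: str      # e.g. a research study identifier
    category: str     # e.g. "genome" or "medical_history"
    expires_at: float  # Unix timestamp


@dataclass
class ConsentLedger:
    """User-owned record of who may read which personal data category."""
    grants: list[Grant] = field(default_factory=list)

    def grant(self, grantee: str, category: str, ttl_s: float) -> None:
        """Add a time-limited grant (access expires automatically)."""
        self.grants.append(Grant(grantee, category, time.time() + ttl_s))

    def revoke(self, grantee: str, category: str) -> None:
        """Withdraw consent immediately, regardless of remaining time."""
        self.grants = [
            g for g in self.grants
            if not (g.grantee == grantee and g.category == category)
        ]

    def allows(self, grantee: str, category: str) -> bool:
        """Check whether an unexpired grant covers this grantee and category."""
        now = time.time()
        return any(
            g.grantee == grantee and g.category == category and g.expires_at > now
            for g in self.grants
        )
```

Keeping the ledger (and the data it guards) on the user's own device is what makes access selective and revocable rather than a one-time, irreversible upload.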

The possibilities are staggering, but a safe and ethical future is only possible if we design such systems correctly today.