diff --git a/_posts/2024-05-09-speech-conversational-llms.md b/_posts/2024-05-09-speech-conversational-llms.md
index 32ee3e5a..17c60b5e 100644
--- a/_posts/2024-05-09-speech-conversational-llms.md
+++ b/_posts/2024-05-09-speech-conversational-llms.md
@@ -19,7 +19,7 @@ come.
 [Earlier](/speech-first-conversational-ai-revisited/) we discussed how spoken
 conversations are richer than pure text and how the gap would be not bridged by
-LLMs purely working on transcriptions. In one of our recent experiments we build
+LLMs purely working on transcriptions. In one of our recent experiments we built
 an efficient multi-modal LLM that takes speech directly to provide better
 conversational experience. For production usage, the constraint here is that
 this should happen without losing the flexibility that you get in a text-only