
Commit cb7388b

update readmes (#271)
1 parent 267e9ad commit cb7388b

File tree

4 files changed (+56 -10 insertions are not shown separately: +56 -45 lines changed)

README.md (+12 -10)
@@ -8,19 +8,21 @@

## Getting started

-This repository contains several Typescript SDKs for the Hume Empathic Voice Interface.
+This monorepo contains multiple packages for using the Hume Empathic Voice Interface within browser based web applications.

-EVI API
+| Package | Version | README | npm URL |
+| :--- | :--- | :--- | :--- |
+| [@humeai/voice-react](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/react) | ![https://img.shields.io/npm/v/%40humeai%2Fvoice-react](https://img.shields.io/npm/v/%40humeai%2Fvoice-react) | [README](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/react/README.md) | <https://npmjs.com/package/@humeai/voice-react> |
+| [@humeai/voice-embed](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/embed) | ![https://img.shields.io/npm/v/%40humeai%2Fvoice-embed](https://img.shields.io/npm/v/%40humeai%2Fvoice-embed) | [README](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/embed/README.md) | <https://npmjs.com/package/@humeai/voice-embed> |
+| [@humeai/voice-embed-react](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/embed-react) | ![https://img.shields.io/npm/v/%40humeai%2Fvoice-embed-react](https://img.shields.io/npm/v/%40humeai%2Fvoice-embed-react) | [README](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/embed-react/README.md) | <https://npmjs.com/package/@humeai/voice-embed-react> |

-- [https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/core](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/core)
-- [https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/react](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/react)
+> [!IMPORTANT]
+> `@humeai/voice` has been deprecated and replaced with `hume`
+>
+> - GitHub: <https://github.com/humeai/hume-typescript-sdk>
+> - npm: <https://npmjs.com/package/hume>

-Embedded Widget
-
-- [https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/embed-react](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/embed-react)
-- [https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/embed](https://github.com/HumeAI/empathic-voice-api-js/tree/main/packages/embed)
-
-Aside from the Typescript SDKs, you may visit [our documentation page](https://dev.hume.ai/docs/empathic-voice-interface-evi/overview) for guides on how to use the EVI API.
+More information on how to use the EVI API may be found in [our documentation site](https://dev.hume.ai/docs/empathic-voice-interface-evi/overview).

## Example applications
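
As an illustrative sketch of the deprecation note above: the `hume` package linked there replaces `@humeai/voice`. The `HumeClient` import and constructor shape shown below are assumptions drawn from the hume-typescript-sdk repository linked in the note, not part of this commit, so verify against that README before relying on it.

```ts
// Assumption: the `hume` TypeScript SDK exposes a `HumeClient` configured with an API key.
// Check https://github.com/humeai/hume-typescript-sdk for the exact constructor options.
import { HumeClient } from 'hume';

const client = new HumeClient({ apiKey: process.env.HUME_API_KEY ?? '' });
```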

packages/embed-react/README.md (+1 -1)

@@ -1,6 +1,6 @@

<div align="center">
  <img src="https://storage.googleapis.com/hume-public-logos/hume/hume-banner.png">
-  <h1>Hume AI Voice Embed React SDK</h1>
+  <h1>@humeai/voice-embed-react</h1>
  <p>
    <strong>Integrate Hume's Empathic Voice Interface directly into your web application</strong>
  </p>

packages/embed/README.md (+1 -1)

@@ -1,6 +1,6 @@

<div align="center">
  <img src="https://storage.googleapis.com/hume-public-logos/hume/hume-banner.png">
-  <h1>Hume AI Voice Embed SDK</h1>
+  <h1>@humeai/voice-embed</h1>
  <p>
    <strong>Integrate Hume's Empathic Voice Interface directly into your web application</strong>
  </p>

packages/react/README.md (+42 -33)

@@ -1,17 +1,23 @@

<div align="center">
  <img src="https://storage.googleapis.com/hume-public-logos/hume/hume-banner.png">
-  <h1>Hume AI EVI React SDK</h1>
+  <h1>@humeai/voice-react</h1>
  <p>
    <strong>Integrate Hume's Empathic Voice Interface in your React application</strong>
  </p>
</div>

## Overview

-This is the React SDK for Hume's Empathic Voice Interface, making it easy to integrate the voice API into your own front-end application. The SDK abstracts the complexities of managing websocket connections, capturing user audio via the client's microphone, and handling the playback of the interface's audio responses.
+This package streamlines all of the required state management for building client side applications using the [EVI Chat WebSocket](https://dev.hume.ai/reference/empathic-voice-interface-evi/chat/chat) through a `<VoiceProvider>` component and `useVoice()` hook. It provides a WebSocket, Microphone Interface, Audio Playback Queue, and Message History that are all designed to work closely together.
+
+> [!NOTE]
+> This package uses Web APIs for microphone input and audio playback that are not compatible with React Native.

## Prerequisites

+> [!IMPORTANT]
+> This package is built for use within modern web based React applications using a bundler like `Next.js`, `Webpack`, or `Vite`
+
Before installing this package, please ensure your development environment meets the following requirement:

- Node.js (`v18.0.0` or higher).
@@ -45,22 +51,18 @@ import { VoiceProvider } from '@humeai/voice-react';

To use the SDK, wrap your components in the `VoiceProvider`, which will enable your components to access available voice methods. Here's a simple example to get you started:

```tsx
-import React, { useState } from 'react';
-import { EmbeddedVoice } from '@humeai/voice-react';
+import { VoiceProvider } from '@humeai/voice-react';

function App() {
-  const apiKey = process.env.HUME_API_KEY || '';
-  const [isEmbedOpen, setIsEmbedOpen] = useState(false);
+  const apiKey = process.env.HUME_API_KEY;

  return (
-    <>
-      <VoiceProvider
-        auth={{ type: 'apiKey', value: apiKey }}
-        hostname={process.env.HUME_VOICE_HOSTNAME || 'api.hume.ai'}
-      >
-        <ExampleComponent />
-      </VoiceProvider>
-    </>
+    <VoiceProvider
+      auth={{ type: 'apiKey', value: apiKey }}
+      configId={/* Optional: Your EVI Configuration ID */}
+    >
+      {/* ... */}
+    </VoiceProvider>
  );
}
```
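
For completeness, here is an illustrative variant of the snippet above that uses a server-minted access token instead of a raw API key. The `accessToken` auth type and the literal `configId` string are assumptions based on Hume's documented patterns, not part of this diff.

```tsx
import { VoiceProvider } from '@humeai/voice-react';

// Sketch only: assumes an access token minted on your server is passed down as a prop,
// and that `auth` accepts an access-token object as described in Hume's documentation.
function App({ accessToken }: { accessToken: string }) {
  return (
    <VoiceProvider
      auth={{ type: 'accessToken', value: accessToken }}
      configId="your-evi-config-id" /* hypothetical placeholder */
    >
      {/* ... */}
    </VoiceProvider>
  );
}
```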
@@ -89,32 +91,39 @@ The table below outlines the props accepted by `VoiceProvider`:

After you have set up your voice provider, you will be able to access various properties and methods to use the voice in your application. In any component that is a child of `VoiceProvider`, access these methods by importing the `useVoice` custom hook.

-```jsx
-// ExampleComponent is nested within VoiceProvider
+For example, to include a button to start a call, you could create a button like this:
+
+```tsx
import { useVoice } from '@humeai/voice-react';

-export const ExampleComponent = () => {
+export function StartCallButton() {
  const { connect } = useVoice();
+
+  return (
+    <button onClick={() => connect()}>
+      Start Call
+    </button>
+  )
};
```

### Methods

| Method | Usage |
| :--- | :--- |
| `connect: () => Promise` | Opens a socket connection to the voice API and initializes the microphone. |
| `disconnect: () => void` | Disconnect from the voice API and microphone. |
| `clearMessages: () => void` | Clear transcript messages from history. |
| `mute: () => void` | Mute the microphone |
| `unmute: () => void` | Unmute the microphone |
| `muteAudio: () => void` | Mute the assistant audio |
| `unmuteAudio: () => void` | Unmute the assistant audio |
| `sendSessionSettings: (text: string) => void` | Send new session settings to the assistant. This overrides any session settings that were passed as props to the VoiceProvider. |
| `sendUserInput: (text: string) => void` | Send a user input message. |
| `sendAssistantInput: (text: string) => void` | Send a text string for the assistant to read out loud. |
| `sendToolMessage: (toolMessage: ToolResponse \| ToolError) => void` | Send a tool response or tool error message to the EVI backend. |
| `pauseAssistant: () => void` | Pauses responses from EVI. Chat history is still saved and sent after resuming. |
| `resumeAssistant: () => void` | Resumes responses from EVI. Chat history sent while paused will now be sent. |

### Properties
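
To illustrate the table above, here is a minimal sketch that wires several of these methods into simple controls. Only the `useVoice()` method names come from the table; the component name, layout, and message text are invented for the example.

```tsx
import { useVoice } from '@humeai/voice-react';

// Minimal sketch: mute/unmute the microphone, send a typed message, and end the call.
export function CallControls() {
  const { mute, unmute, sendUserInput, disconnect } = useVoice();

  return (
    <div>
      <button onClick={() => mute()}>Mute mic</button>
      <button onClick={() => unmute()}>Unmute mic</button>
      <button onClick={() => sendUserInput('Hello, EVI!')}>Send text</button>
      <button onClick={() => disconnect()}>End call</button>
    </div>
  );
}
```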

@@ -139,7 +148,7 @@ export const ExampleComponent = () => {

| `callDurationTimestamp` | `string` or `null` | The length of a call. This value persists after the conversation has ended. |
| `toolStatusStore` | `Record<string, { call?: ToolCall; resolved?: ToolResponse \| ToolError }>` | A map of tool call IDs to their associated tool messages. |
| `chatMetadata` | `ChatMetadataMessage` or `null` | Metadata about the current chat, including chat ID, chat group ID, and request ID. |
| `playerQueueLength` | `number` | The number of assistant audio clips that are queued up, including the clip that is currently playing. |

## Support
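
As a companion sketch for the properties documented in this file, the example below reads `callDurationTimestamp` and `playerQueueLength` from `useVoice()`. The component itself is invented for illustration.

```tsx
import { useVoice } from '@humeai/voice-react';

// Sketch: a simple status readout built from two documented properties.
export function CallStatus() {
  const { callDurationTimestamp, playerQueueLength } = useVoice();

  return (
    <p>
      Call duration: {callDurationTimestamp ?? 'not started'} | queued clips: {playerQueueLength}
    </p>
  );
}
```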
