More information on how to use the EVI API can be found on [our documentation site](https://dev.hume.ai/docs/empathic-voice-interface-evi/overview).
<strong>Integrate Hume's Empathic Voice Interface in your React application</strong>
</p>
</div>
## Overview
This package streamlines all of the required state management for building client-side applications using the [EVI Chat WebSocket](https://dev.hume.ai/reference/empathic-voice-interface-evi/chat/chat) through a `<VoiceProvider>` component and a `useVoice()` hook. It provides a WebSocket, Microphone Interface, Audio Playback Queue, and Message History that are all designed to work closely together.
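To make the "Audio Playback Queue" idea concrete, here is a hypothetical sketch (not the SDK's actual code) of ordered, one-at-a-time playback: clips arriving from the socket are queued and played in order, and pending clips can be dropped, for example when the user interrupts the assistant. The SDK manages all of this internally, so you never write this yourself.

```typescript
// Hypothetical illustration of an ordered audio playback queue; this is NOT
// the SDK's implementation, only a model of the behavior it provides.
type Clip = { id: string };

class PlaybackQueue {
  private queue: Clip[] = [];
  private playing = false;
  played: string[] = []; // record of played clip ids, for illustration

  // Enqueue a clip and start playback if nothing is currently playing.
  add(clip: Clip): void {
    this.queue.push(clip);
    if (!this.playing) this.playNext();
  }

  // Drop pending clips, e.g. when the user interrupts the assistant.
  clear(): void {
    this.queue = [];
  }

  private playNext(): void {
    const clip = this.queue.shift();
    if (!clip) {
      this.playing = false;
      return;
    }
    this.playing = true;
    this.played.push(clip.id); // stand-in for actually decoding/playing audio
    this.playNext();
  }
}
```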
> [!NOTE]
> This package uses Web APIs for microphone input and audio playback that are not compatible with React Native.
## Prerequisites
> [!IMPORTANT]
> This package is built for use within modern web-based React applications that use a bundler such as `Next.js`, `Webpack`, or `Vite`.
Before installing this package, please ensure your development environment meets the following requirement:
- Node.js (`v18.0.0` or higher).
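With that requirement met, the package can be installed from the npm registry (shown here with `npm`; `yarn` or `pnpm` work just as well):

```shell
npm install @humeai/voice-react
```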
To use the SDK, wrap your components in the `VoiceProvider`, which will enable your components to access available voice methods. Here's a simple example to get you started:

```jsx
import { VoiceProvider } from '@humeai/voice-react';

function App() {
  return (
    <VoiceProvider
      auth={{ type: 'accessToken', value: accessToken }}
      configId={/* Optional: Your EVI Configuration ID */}
    >
      {/* ... */}
    </VoiceProvider>
  );
}
```
After you have set up your voice provider, you will be able to access various properties and methods to use the voice in your application. In any component that is a child of `VoiceProvider`, access these methods by importing the `useVoice` custom hook.
For example, to include a button to start a call, you could create a button like this:

```jsx
import { useVoice } from '@humeai/voice-react';

// ExampleComponent is nested within VoiceProvider
function ExampleComponent() {
  const { connect } = useVoice();

  return (
    <button
      onClick={() => {
        connect().catch(() => {
          /* handle connection error */
        });
      }}
    >
      Start Call
    </button>
  );
}
```
| Method | Description |
| ------ | ----------- |
|`connect: () => Promise`| Opens a socket connection to the voice API and initializes the microphone. |
|`disconnect: () => void`| Disconnects from the voice API and microphone. |
|`clearMessages: () => void`| Clears transcript messages from history. |
|`mute: () => void`| Mutes the microphone. |
|`unmute: () => void`| Unmutes the microphone. |
|`muteAudio: () => void`| Mutes the assistant audio. |
|`unmuteAudio: () => void`| Unmutes the assistant audio. |
|`sendSessionSettings: (text: string) => void`| Sends new session settings to the assistant. This overrides any session settings that were passed as props to the `VoiceProvider`. |
|`sendUserInput: (text: string) => void`| Sends a user input message. |
|`sendAssistantInput: (text: string) => void`| Sends a text string for the assistant to read out loud. |
|`sendToolMessage: (toolMessage: ToolResponse \| ToolError) => void`| Sends a tool response or tool error message to the EVI backend. |
|`pauseAssistant: () => void`| Pauses responses from EVI. Chat history is still saved and sent after resuming. |
|`resumeAssistant: () => void`| Resumes responses from EVI. Chat history sent while paused will now be sent. |
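The `pauseAssistant`/`resumeAssistant` semantics can be pictured as a hold-back buffer. The sketch below is a hypothetical model of that behavior, not the SDK's implementation: messages arriving while paused are buffered and flushed in their original order on resume.

```typescript
// Hypothetical model of pause/resume behavior; the real SDK handles this for
// you. While paused, assistant output is held back; resuming delivers the
// buffered messages in order.
class AssistantPauseModel {
  private paused = false;
  private buffer: string[] = [];
  delivered: string[] = []; // messages actually surfaced to the user

  pauseAssistant(): void {
    this.paused = true;
  }

  resumeAssistant(): void {
    this.paused = false;
    // Flush everything held back while paused, preserving order.
    this.delivered.push(...this.buffer);
    this.buffer = [];
  }

  // Called for each assistant message coming off the socket.
  onAssistantMessage(message: string): void {
    if (this.paused) this.buffer.push(message);
    else this.delivered.push(message);
  }
}
```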