
Improve Chat Input with Auto-Sizing Textarea #12785

Merged: 7 commits into ggml-org:master on Apr 8, 2025

Conversation

characharm (Contributor)

This PR proposes an enhancement to the chat input textarea in the web UI.

Problem:

Currently, the chat input textarea has a fixed initial height and relies on manual resizing via the handle in the bottom-right corner. This manual resizing can feel unintuitive, especially because the sticky-positioned input area visually expands upwards when resized from the bottom. Furthermore, composing longer, multi-line messages often requires frequent manual resizing or scrolling within a constrained area.

Proposed Solution:

This change replaces the manual resizing behavior with an auto-sizing mechanism:

  1. The textarea now automatically adjusts its height vertically to fit the content as the user types or pastes text.
  2. A maximum height constraint (max-h-48 / 12rem by default in the current implementation) is applied.
  3. Once the content exceeds this maximum height, a vertical scrollbar appears inside the textarea, preventing the input area from growing indefinitely.
  4. The native browser resize handle has been disabled (resize: none).

Implementation Details:

  • A new custom React hook, useAutosizeTextarea, has been created to encapsulate the logic.
  • This hook utilizes a ref to directly interact with the textarea DOM element.
  • On the onInput event, the hook calculates the scrollHeight of the textarea and updates its inline height style accordingly, respecting the max-height set via CSS.
  • This implementation preserves the existing optimization of avoiding component re-renders on every keystroke (see the sketch below).
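
For illustration, a minimal sketch of such a hook, assuming a React + Tailwind setup like the web UI's. This is not the actual PR code: apart from the `useAutosizeTextarea` name and the `scrollHeight`-based resize described above, everything here is a placeholder.

```tsx
import { useRef, useCallback } from 'react';

// Minimal auto-sizing textarea hook (sketch). The height lives in the DOM
// inline style rather than React state, so typing never triggers a re-render.
export function useAutosizeTextarea() {
  const ref = useRef<HTMLTextAreaElement>(null);

  const onInput = useCallback(() => {
    const el = ref.current;
    if (!el) return;
    // Collapse first so scrollHeight reflects the current content, then grow.
    // A CSS max-height (e.g. max-h-48) caps the growth; past that point the
    // textarea scrolls internally instead of expanding further.
    el.style.height = 'auto';
    el.style.height = `${el.scrollHeight}px`;
  }, []);

  return { ref, onInput };
}

// Usage sketch:
// const { ref, onInput } = useAutosizeTextarea();
// <textarea ref={ref} onInput={onInput} className="max-h-48 resize-none overflow-y-auto" />
```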

Benefits:

  • Provides a more fluid and intuitive user experience for writing multi-line messages.
  • Eliminates the need for manual resizing for typical message lengths.
  • Offers predictable behavior by limiting excessive growth with max-height.

Note:

Please consider this pull request as more of a suggestion.
master:

Screen.Recording.2025-04-07.030314.mp4

pr:

Screen.Recording.2025-04-07.030551.mp4

useAutosizeTextarea to encapsulate the logic.

@ngxson ngxson left a comment


The overall idea is good, it just needs a bit of cleanup. Also, don't forget to run `npm run build`.

Comment on lines -297 to -298
// This is a workaround to prevent the textarea from re-rendering when the inner content changes
// See https://github.com/ggml-org/llama.cpp/pull/12299

We should preserve these comments (and you should also add comments to explain your code if necessary)

ngxson commented Apr 7, 2025

Also, I'm wondering whether we should disable this on mobile view. Otherwise, it will take up a lot of space on small screens.

Replaces the manual textarea resizing with an automatic height adjustment based on content.

- `useChatTextarea` hook to manage textarea state and auto-sizing logic via refs, preserving the existing re-render optimization.
- Textarea now grows vertically up to a maximum height (`lg:max-h-48`) on large screens (lg breakpoint and up).
- Disables auto-sizing and enables manual vertical resizing (`resize-vertical`) on smaller screens for better mobile usability.
- Aligns the "Send" button to the bottom of the textarea (`items-end`) for consistent positioning during resize.
refactor: replace OptimizedTextareaValue with AutosizeTextareaApi in VSCode context hook
refactor: AutosizeTextareaApi -> chatTextareaApi
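
As a rough sketch of how these pieces might fit together in the markup: the class names `lg:max-h-48` and `items-end` come from the commit messages above, while the component name, import path, and the use of Tailwind's `resize-y` for the manual handle are assumptions, not the PR's actual code.

```tsx
import { useAutosizeTextarea } from './useAutosizeTextarea';

// Hypothetical chat input layout. The real PR also gates the auto-sizing
// logic itself by breakpoint (disabled below lg), which is omitted here.
export function ChatInputSketch({ onSend }: { onSend: () => void }) {
  const { ref, onInput } = useAutosizeTextarea();
  return (
    // items-end keeps the Send button pinned to the bottom edge of the
    // textarea while it grows.
    <div className="flex items-end gap-2">
      <textarea
        ref={ref}
        onInput={onInput}
        // Small screens: manual vertical resize handle (resize-y).
        // lg and up: handle disabled, auto-sizing capped at 12rem (lg:max-h-48).
        className="grow resize-y lg:resize-none lg:max-h-48 overflow-y-auto"
      />
      <button className="btn" onClick={onSend}>
        Send
      </button>
    </div>
  );
}
```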
@characharm characharm requested a review from ngxson April 8, 2025 07:09
@ngxson ngxson left a comment

Thanks for adding comments, it makes the code much easier to understand!

@ngxson ngxson merged commit 8ca6e1c into ggml-org:master Apr 8, 2025
6 checks passed
tastelikefeet added a commit to tastelikefeet/llama.cpp that referenced this pull request Apr 10, 2025
* master: (123 commits)
  cuda : add f32 to bf16 copy op (ggml-org#12806)
  llava: improve clip_ctx destructor to not memleak load_image_size (ggml-org#12834)
  llama : fix FA when KV cache is not used (i.e. embeddings) (ggml-org#12825)
  server : fix thread.join() on exit (ggml-org#12831)
  llava: add more helper functions to check projector types in clip context (ggml-org#12824)
  arg : Including limits file on AIX (ggml-org#12822)
  server : webui : Improve Chat Input with Auto-Sizing Textarea (ggml-org#12785)
  Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (ggml-org#12812)
  gguf-py : support lazy tensor splitting (ggml-org#12809)
  llama : Support llama 4 text-only (ggml-org#12791)
  opencl: better identify Adreno GPU (ggml-org#12760)
  hellaswag: display estimated score confidence interval (ggml-org#12797)
  cuda : fix HIP and MUSA BF16 (#0)
  sync : ggml
  ggml : simplify Arm fp16 CPU logic (ggml/1177)
  CUDA: don't convert BF16 weights to FP32 (ggml/1174)
  cpu: move all the operators into a separate c++ file (except mul_mat) (ggml/1167)
  sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (ggml-org#12734)
  ci : no curl on ggml-ci (ggml-org#12796)
  cmake : enable curl by default (ggml-org#12761)
  ...

# Conflicts:
#	common/arg.cpp
#	common/common.cpp
#	common/common.h