Streaming of content to a TextArea component

Can someone please point me in the right direction for streaming text to a TextArea component?
The code below renders the tokens as one complete block of text instead of streaming them token by token. Can someone suggest a way to have the tokens streamed incrementally, as ChatGPT does?

        @Override
        public void onNext(String token) {
            getUI().ifPresent(ui -> ui.access(() -> {
                responseTextArea.setValue(responseTextArea.getValue() + token);
            }));
        }

This code, by contrast, streams each token to the console as expected:

        @Override
        public void onNext(String token) {
            System.out.println(token);
        }

The full method:

public void generateResponse(String userMessage) {

    // Streaming model backed by a local Ollama server
    StreamingChatLanguageModel model = OllamaStreamingChatModel.builder()
            .baseUrl("http://localhost:11434")
            .modelName(MODEL_NAME)
            .build();

    CompletableFuture<Response<AiMessage>> futureResponse = new CompletableFuture<>();
    model.generate(userMessage, new StreamingResponseHandler<AiMessage>() {

        @Override
        public void onNext(String token) {
            // Append each new token to the text area; UI changes from a
            // background thread must go through ui.access()
            getUI().ifPresent(ui -> ui.access(() -> {
                responseTextArea.setValue(responseTextArea.getValue() + token);
            }));
        }

        @Override
        public void onComplete(Response<AiMessage> response) {
            futureResponse.complete(response);
        }

        @Override
        public void onError(Throwable error) {
            futureResponse.completeExceptionally(error);
        }
    });
}

Did you add @Push as described here? Enabling Push in Your Application

Did you check that the UI is actually present? Typically the optional won’t return anything when called from another thread, so you normally want to grab the UI up front and pass it to your asynchronous handler.
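
For reference, a minimal sketch of that pattern (not your exact code; it assumes this method runs on the UI thread, e.g. from a click listener, and reuses the model and responseTextArea from your snippet):

    public void generateResponse(String userMessage) {
        // Capture the UI while still on the request thread;
        // UI.getCurrent() returns null on background threads.
        UI ui = UI.getCurrent();

        model.generate(userMessage, new StreamingResponseHandler<AiMessage>() {

            @Override
            public void onNext(String token) {
                // Use the captured UI instead of getUI(); requires @Push
                ui.access(() -> responseTextArea.setValue(
                        responseTextArea.getValue() + token));
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                // nothing else to do here
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }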

Thanks for your suggestion which I will need to investigate since I had not stumbled upon @Push before today. I am curious to understand why the onNext method does not work as expected.

It’s all described on the page linked above. Push is required for asynchronous UI updates, which is exactly what your onNext does.

In addition to adding the Push annotation like @knoobie suggested, it’s worth considering whether you could stream into a text component instead and use this workaround by @Leif for more efficient streaming updates.

We need to add some convenience methods to our components to allow appending content without having to re-send the entire content. The way it works right now is that for each new token, the server sends the entire previous response plus the new token to the browser. That’s fine for small texts, but it quickly becomes very inefficient if you stream pages of content.
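
A rough sketch of that idea, assuming the reply can be shown in a Div instead of a TextArea (executeJs is standard Flow API, but the field and method names here are illustrative):

    private final Div responseDiv = new Div();

    private void appendToken(UI ui, String token) {
        // Ship only the new token to the browser and let the client
        // append it, instead of re-sending the whole accumulated text.
        ui.access(() -> responseDiv.getElement()
                .executeJs("this.textContent += $0", token));
    }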


You guys are super quick replying to questions (which is to be applauded).
Before seeing this message I had a good read of the suggested solution at Server Push | Advanced Topics | Vaadin Docs, which made sense. However, I fell at the first hurdle when adding the @Push annotation to the existing view class, as shown below:

@Push
@Route("chat")
public class OpenAIChatUI extends VerticalLayout {

The mvn install completed successfully, but running the application caused the following error:

Found app shell configuration annotations in non AppShellConfigurator classes.
Please create a custom class implementing AppShellConfigurator and move the following annotations to it:
- Push from com.example.demo.service.ai.OpenAIChatUI

At which point I wondered if going down the route suggested would be beneficial, especially if other issues were to be encountered.

I remembered watching this excellent presentation just this week: https://www.youtube.com/watch?v=jXgVe06yvf4&t=1427s

The demo satisfied my use case but was based on Hilla instead of Flow; to be honest, I have been putting off React development after 20+ years of developing with Java.
Long story short, after playing with the GitHub demo and using Codeium, I achieved what I needed in a few lines of code and with less hassle, without having to create an additional class just to stream content to a TextArea component.


The @Push annotation needs to go on the application’s AppShellConfigurator class rather than on some specific view class. (There might be some old examples with @Push on views but that approach has been deprecated for a long time already.)

If you use the typical project setup, then the Application class already implements AppShellConfigurator which means that you can add @Push to the Application class.
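
For instance, assuming the typical Spring Boot-based starter, the result looks roughly like this:

    import com.vaadin.flow.component.page.Push;
    import com.vaadin.flow.server.AppShellConfigurator;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    @Push
    public class Application implements AppShellConfigurator {

        // @Push lives on the app shell, not on individual views
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }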

A question (related to the topic): why stream the chatbot response to a TextArea component (meant for inputting text) instead of to a Paragraph (meant for showing text)? You aren’t the first, so I assume there is some actual reason, but I believe there could be room for improvement.

This might be handy BTW, even if you only care about raw text: flow-viritin/src/main/java/org/vaadin/firitin/components/messagelist/MarkdownMessage.java at v24 · viritin/flow-viritin · GitHub. Available in Viritin, if you don’t like copy-pasting.
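
Usage would be along these lines (method names are my recollection of the linked class, so verify them against the source before relying on this):

    MarkdownMessage reply = new MarkdownMessage("Assistant");
    add(reply);

    // In onNext: appends the token; the async variant is assumed to
    // handle ui.access() internally — check the linked source
    reply.appendMarkdownAsync(token);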

One use case that I’ve heard about is that the LLM helps write the initial draft of an email message that the user can then customize before sending. Another is a GitHub Copilot-style assistant that co-edits the text.

Neither of those is technically a chat bot, but they’re still close enough.
