mirror of https://github.com/mudler/LocalAI.git — synced 2024-12-18 20:27:57 +00:00
fix(go-grpc-server): always close resultChan
By not closing the channel, a client call to a server that does not implement PredictStream would hang indefinitely, waiting for resultChan to be consumed. Now we close the channel when the prediction stream returns, and then wait for the goroutine to finish.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
parent 2553de0187
commit 83110891fd
@@ -144,6 +144,8 @@ func (s *server) PredictStream(in *pb.PredictOptions, stream pb.Backend_PredictS
 	}()
 
 	err := s.llm.PredictStream(in, resultChan)
+	// close the channel, so if resultChan is not closed by the LLM (maybe because does not implement PredictStream), the client will not hang
+	close(resultChan)
 	<-done
 
 	return err