A very experimental PLC implementation that uses BFT consensus for decentralization

Fix unbounded memory use growth during snapshot application

Not sure what I was thinking when I moved the writes to a goroutine. I suppose I assumed the pipe would be read at roughly the same rate it was being written, which is obviously not true: whenever the reader lags behind, every pending goroutine keeps its chunk alive in memory, so usage grows without bound.

gbl08ma.com 93104729 0cf9deed

verified
+6 -14
abciapp/snapshots.go
@@ -665,22 +665,14 @@
 		a.importerWg.Go(a.streamingImporter)
 	}
 
-	isLastChunk := chunkIndex == len(a.expectedChunkHashes)-1
-	go func(b []byte) {
-		// From the docs:
-		// It is safe to call Read and Write in parallel with each other or with Close.
-		// Parallel calls to Read and parallel calls to Write are also safe:
-		// the individual calls will be gated sequentially.
-
-		// so even if not everything gets written from this chunk (e.g. because the zstd decoder decided not to advance)
-		// it'll eventually be written, in the correct order
-		_, _ = a.pipeWriter.Write(b)
-		if isLastChunk {
-			_ = a.pipeWriter.Close()
-		}
-	}(chunkBytes)
+	_, err := a.pipeWriter.Write(chunkBytes)
+	if err != nil {
+		return stacktrace.Propagate(err)
+	}
 
+	isLastChunk := chunkIndex == len(a.expectedChunkHashes)-1
 	if isLastChunk {
+		_ = a.pipeWriter.Close()
 		// wait for importer to finish reading and importing everything
 		a.importerWg.Wait()