
proposal: io: add Seq for efficient, zero-copy I/O #73154

Open
rogpeppe opened this issue Apr 3, 2025 · 20 comments

@rogpeppe
Contributor

rogpeppe commented Apr 3, 2025

[Edited to add more benchmarks]
[Edited to add reservations about iterator use]

Proposal Details

This proposal adds two new functions and one new type to the io package:

type Seq = iter.Seq2[[]byte, error]
func SeqFromReader(r Reader, bufSize int) Seq
func ReaderFromSeq(it Seq) ReadCloser

These enable more efficient and ergonomic I/O pipelines by leveraging Go's iterator functionality.

Background

When io.Reader was introduced into Go, its API was designed following the time-honored API of the Unix read system call.
The caller provides a buffer and the callee copies data into that buffer.

However, its API is notoriously hard to use. Witness the long doc comment. I'm sure most
of us have struggled to write io.Reader implementations and been caught out by gotchas when
using it.

It's also somewhat inefficient. Although it amortizes byte-by-byte streaming cost, in general it's
not possible to turn an io.Writer (convenient for producing data) into an io.Reader without
copying data, because once the data has been copied there is no clear way
of signalling back to the producer that the data is done with. So io.Pipe is as good
as we can get, and that involves goroutines and copying.

However, now that iterators are a thing, there is a potential alternative.

I propose that we add a way to bridge from an iterator to io.Reader and vice versa.

Proposal

I propose adding one type and two functions to the io package:

Seq defines a standard type for these sequences.

SeqFromReader returns a Seq of byte slices, yielding chunks of data read from the Reader using an internal buffer. ReaderFromSeq wraps a Seq as a ReadCloser, implementing Reader semantics.

The semantics of Seq slices are similar to those of Reader.Read buffers: callers must not retain or mutate slices outside the current iteration. The sequence terminates at the first non-nil error, including EOF.

// Seq represents a sequence of byte slices. It's somewhat equivalent to
// [Reader], although simpler in some respects.
// See [SeqFromReader] and [ReaderFromSeq] for a way to convert
// between [Seq] and [Reader].
//
// Each element in the sequence must have either a non-nil byte slice or
// a non-nil error; a producer should never produce either (nil, nil) or
// a non-nil slice and a non-nil error.
//
// The sequence always ends at the first error: if there are temporary
// errors, it's up to the producer to deal with them.
//
// The code ranging over the sequence must not use the slice outside of
// the loop or across iterations; that is, the receiver owns a slice
// until that particular iteration ends.
//
// Callers must not mutate the slice. [TODO perhaps it might be OK to
// allow callers to mutate, but not append to, the slice].
type Seq = iter.Seq2[[]byte, error]

// SeqFromReader returns a [Seq] that reads from r, allocating a buffer
// of the given size to do so.
func SeqFromReader(r Reader, bufSize int) Seq

// ReaderFromSeq converts an iterator into a ReadCloser.
// Close must be called on the reader if it hasn't returned an error.
func ReaderFromSeq(it Seq) ReadCloser
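
To make the ownership and error semantics concrete, a consumer might look like this (a sketch against the proposed API; process is a hypothetical per-chunk function, and io.EOF is assumed to be surfaced as the sequence's final element, which is one reading of the contract below):

func consume(r io.Reader) error {
	for data, err := range io.SeqFromReader(r, 32*1024) {
		if err == io.EOF {
			return nil // normal end of stream
		}
		if err != nil {
			return err
		}
		process(data) // hypothetical; data must not be retained past this iteration
	}
	return nil
}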

Discussion

In general, no allocation or copying needs to take place when using this API.
Byte slices can be passed by reference directly between producer and consumer,
and the strong ownership conventions of an iterator make this a reasonable
approach. The coroutine-based iter.Pull function makes ReaderFromSeq considerably more
efficient than io.Pipe while providing many of the same advantages.
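
For intuition, SeqFromReader needs little more than a single reused buffer. A minimal sketch, again assuming that the final io.EOF is yielded as the last element:

func SeqFromReader(r Reader, bufSize int) Seq {
	return func(yield func([]byte, error) bool) {
		buf := make([]byte, bufSize)
		for {
			n, err := r.Read(buf)
			if n > 0 && !yield(buf[:n], nil) {
				return
			}
			if err != nil {
				yield(nil, err) // including EOF, on this reading of the contract
				return
			}
		}
	}
}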

The API is arguably easier to deal with:

  • no need to deal with partial reads or writes
  • less confusion over the multiple combinations of the return values from Read

The fact that we can write a bridge between Seq and Reader means that
this new abstraction could fit nicely into the existing Go ecosystem.

It might also be useful to add another abstraction to make it easier to use
Writer-oriented APIs with a generator. Perhaps something like this:

// SeqWriter returns a [Writer] that operates on the yield
// function passed into a [Seq] iterator. Writes will succeed until
// the iteration is terminated, upon which Write will return
// [ErrSequenceTerminated].
//
// The returned Writer should not be used outside the scope
// of the iterator function, following the same rules as any yield
// function.
func SeqWriter(yield func([]byte, error) bool) Writer
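
For example, a producer could drive a write-oriented encoder through SeqWriter like this (a sketch assuming the API above; gzip is chosen purely for illustration):

// gzipSeq returns a Seq producing the gzip-compressed contents of r.
func gzipSeq(r io.Reader) io.Seq {
	return func(yield func([]byte, error) bool) {
		zw := gzip.NewWriter(io.SeqWriter(yield))
		if _, err := io.Copy(zw, r); err != nil {
			if err != io.ErrSequenceTerminated {
				yield(nil, err)
			}
			return
		}
		if err := zw.Close(); err != nil && err != io.ErrSequenceTerminated {
			yield(nil, err)
		}
	}
}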

Performance

I tried a few benchmarks to get an idea for performance:

Benchmark         old (sec/op)   new (sec/op)   Δ sec/op   old (B/s)      new (B/s)       Δ B/s
PipeBase64        11.187µ ± 1%   6.008µ ± 2%    -46.30%    698.4Mi ± 1%   1300.5Mi ± 2%   +86.22%
PipeNoop          537.0n ± 2%    144.8n ± 2%    -73.05%    14.21Gi ± 2%   52.72Gi ± 2%    +271.11%
ReaderNoop        2.263n ± 1%    3.584n ± 6%    +58.41%    3.299Ti ± 2%   1.948Ti ± 3%    -40.96%
ReaderFillIndex   1.110µ ± 2%    1.109µ ± 1%    ~          6.874Gi ± 2%   6.881Gi ± 1%    ~

The Pipe benchmarks measure the performance when using SeqReader as
a substitute for io.Pipe with various workloads between the source
and the sink (a base64 encoder and a no-op pass-through).
This is to demonstrate how Seq can be used to improve performance
of some existing tasks.

The Reader benchmarks measure performance when using a Seq vs an io.Reader;
the no-op does nothing at all on producer or consumer; the FillIndex
just fills the buffer and runs bytes.Index on it (a fairly minimal workload).
This is to demonstrate how Seq is a very low overhead primitive for producing
readable data streams. Writing this benchmark, it was immediately obvious
that writing the io.Reader benchmark code was harder: I had to create an
auxiliary struct type with fields to keep track of iterations, rather than just
write a simple for loop. This is the classic advantage of storing data in control flow.
So while we might pay for this abstraction with a nanosecond of overhead, that
seems well worth the cost.
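
To illustrate that point, compare a sketch of the two producer styles (the names and the fill helper are illustrative, not taken from the benchmark code):

// As a Seq, the producer's state lives in ordinary locals and a plain loop.
func chunks(n, bufSize int) io.Seq {
	return func(yield func([]byte, error) bool) {
		buf := make([]byte, bufSize)
		for i := 0; i < n; i++ {
			fill(buf, i) // hypothetical: produce the data for iteration i
			if !yield(buf, nil) {
				return
			}
		}
	}
}

// As an io.Reader, the same state must survive between Read calls,
// so it moves into struct fields.
type chunkReader struct {
	i, n    int
	bufSize int
}

func (r *chunkReader) Read(p []byte) (int, error) { /* ... */ return 0, io.EOF }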

However, it's not all roses. Seq is very convenient when we want to write push-oriented
code, but if callers are usually going to convert it to a reader with SeqReader and
it's not too hard to write the code in a pull-based style, we should do so:

Benchmark                        old (sec/op)   new (sec/op)    Δ sec/op    old (B/s)        new (B/s)      Δ B/s
ReaderVsSeqFromReaderNoop        3.341n ± 3%    128.550n ± 1%   +3747.65%   2283.51Gi ± 3%   59.35Gi ± 1%   -97.40%
ReaderVsSeqFromReaderFillIndex   1.089µ ± 2%    1.221µ ± 1%     +12.08%     7.006Gi ± 2%     6.253Gi ± 1%   -10.76%

Note that the overhead here is per-iteration, not per-byte, so as the buffer size grows and the per-iteration work grows proportionately, the overhead will reduce.

PoC code (including the benchmark code) is available at https://github.com/rogpeppe/ioseq.

Reservations about using iterators in this way

My biggest reservation about this proposal is the way that it uses iterators. In general, iterators work best when the values produced are independent of the loop. This enables us to use functions such as slices.Collect and maps.Insert which take values received from the iterator and store them elsewhere. This proposal uses iterators differently. The values produced are emphatically intended not to escape the loop in this way.

That said, iterators of the general form iter.Seq2[T, error] aren't that useful with collectors of this type anyway because of the final error value.

I'm very much in two minds on this. On the one hand, the above properties are definitely nice to have. On the other hand, there are many pre-Go-iterator-support iterator-like APIs which it seems to me would be nicer phrased as for-range loops. But those iterator-like APIs often return values which are only valid for the extent of one iteration. bufio.Scanner.Bytes is a good example of this. In fact it feels like a great example because that's essentially exactly what Seq is doing. You can even use Scanner to mimic what SeqFromReader does.
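
For instance, something very close to SeqFromReader falls out of a bufio.Scanner (a sketch; the split function and buffer sizing are left as details):

func scannerSeq(r io.Reader) io.Seq {
	return func(yield func([]byte, error) bool) {
		sc := bufio.NewScanner(r)
		for sc.Scan() {
			// sc.Bytes is only valid until the next Scan call,
			// exactly the per-iteration ownership rule Seq proposes.
			if !yield(sc.Bytes(), nil) {
				return
			}
		}
		if err := sc.Err(); err != nil {
			yield(nil, err)
		}
	}
}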

Another example of this dilemma is #64341.

Modulo explicitly "escaping" functions like slices.Collect, iterators do seem like a good fit here. The scope of a given iterator variable is well-defined, much like the argument to a function (which it is, of course, under the hood). And in general we don't seem to have a problem with saying that it's not OK to store the argument to a function outside the span of that function's invocation.

So ISTM that we need to decide if it's OK to "bless" this kind of pattern. If it's not OK, then this proposal should be declined.

@gopherbot gopherbot added this to the Proposal milestone Apr 3, 2025
@rogpeppe rogpeppe changed the title proposal: io: add Seq for efficient, allocation-free I/O proposal: io: add Seq for efficient, zero-copy I/O Apr 3, 2025
@zombiezen
Contributor

Drive-by observation: one implication of using an iter.Seq2 is that there will be no compile-time feedback given if the user of a for/range statement does not use the second iteration variable. For example, someone using this API could write:

for data := range io.SeqFromReader(r, 4096) {
  // ...
}

and not realize that there's an error that's being ignored.

I don't know whether this is a problem, but something to consider.

@rogpeppe
Contributor Author

rogpeppe commented Apr 4, 2025

Drive-by observation: one implication of using an iter.Seq2 is that there will be no compile-time feedback given if the user of a for/range statement does not use the second iteration variable.

Yes, that's an issue with [T, error] iterators in general, and AFAIR was the reason why initially it was required to mention all the values in the range statement. I see it in the same kind of area as not checking an error result in general, and it's amenable to the same kind of tooling: it's easy for static analysis tools to diagnose this issue.

@renthraysk

Just a note on the PoC. It should yield with the slice capacity explicitly set: yield(buf[:n:n], nil). This would prevent the consumer from being able to write into regions of the buffer with append (it forces an allocation and copy instead) or from growing the slice.
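
Concretely, the suggestion is a full slice expression at the yield site (a minimal sketch):

n, _ := r.Read(buf)
// cap == len, so a consumer's append must allocate its own backing
// array rather than scribbling over the producer's buffer.
yield(buf[:n:n], nil)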

@DeedleFake

This is potentially nice for certain things, but I think that the documentation should note that if you need to read in a pull manner then you should use a regular io.Reader, not iter.Pull(), if at all possible. I built a wrapper like this for something a little while back and then tested it with iter.Pull() and it had a ton of overhead and massively slowed down the entire thing.

Here's some results from benchstat I just threw together:

goos: linux
goarch: amd64
pkg: seqread
cpu: AMD Ryzen 9 3900X 12-Core Processor
        │ simple.txt  │           push.txt            │               pull.txt                │
        │   sec/op    │   sec/op     vs base          │    sec/op     vs base                 │
Read-24   245.9n ± 1%   246.2n ± 2%  ~ (p=0.725 n=10)   1137.0n ± 2%  +362.29% (p=0.000 n=10)

        │  simple.txt  │             push.txt             │               pull.txt               │
        │     B/op     │     B/op      vs base            │     B/op      vs base                │
Read-24   1.000Ki ± 0%   1.000Ki ± 0%  ~ (p=1.000 n=10) ¹   1.422Ki ± 0%  +42.19% (p=0.000 n=10)
¹ all samples are equal

        │ simple.txt │            push.txt            │               pull.txt                │
        │ allocs/op  │ allocs/op   vs base            │  allocs/op   vs base                  │
Read-24   1.000 ± 0%   1.000 ± 0%  ~ (p=1.000 n=10) ¹   15.000 ± 0%  +1400.00% (p=0.000 n=10)
¹ all samples are equal

Here's the code that was used to test: https://gist.github.com/DeedleFake/b931c0393e4385dd39118ea8088a5729

@rogpeppe
Contributor Author

rogpeppe commented Apr 4, 2025

the documentation should note that if you need to read in a pull manner then you should use a regular io.Reader, not iter.Pull(), if at all possible

Absolutely! That was why I included that final benchmark. This is no panacea, just another tool in the toolbox.
Currently we've got the pull-oriented reader API (io.Reader) and the push-oriented writer API (io.Writer). This would add a push-oriented reader API. If you can implement a pull-oriented API easily, then you definitely should, but it's often really hard to do, which is why we have APIs like base64.NewEncoder and gzip.NewWriter that are write-oriented.

I suspect that many of the APIs that currently take a Writer as argument and return a Writer could be straightforwardly and advantageously rewritten as func(io.Reader) io.Seq. This would probably lead to simplifications in the code, because the pull-oriented Read is generally more convenient to use than being forced to store enough state to handle arbitrary-length writes (the storing-data-in-control-flow point again).

@rogpeppe
Contributor Author

rogpeppe commented Apr 4, 2025

I've just realised that this function should probably be part of the API too:

// CopySeq is like [io.Copy] but ranges over r, writing
// all the data to w. It returns the total number of bytes
// read.
func CopySeq(w io.Writer, r Seq) (int64, error)

That would make it easy to efficiently bridge between io.Seq values and io.Writer APIs
such as hash.Hash - no need for iter.Pull.
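
A minimal sketch of what CopySeq could look like, again assuming io.EOF is yielded as the sequence's final element:

func CopySeq(w io.Writer, r Seq) (int64, error) {
	var n int64
	for data, err := range r {
		if err == io.EOF {
			return n, nil
		}
		if err != nil {
			return n, err
		}
		m, werr := w.Write(data)
		n += int64(m)
		if werr != nil {
			return n, werr
		}
	}
	return n, nil
}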

@DeedleFake

Absolutely! That was why I included that final benchmark.

Ah, somehow I completely missed that that was a benchmark for a pull iterator. Sorry about the duplication.

@jrick
Contributor

jrick commented Apr 4, 2025

Feels to me like SeqFromReader should live in bufio if it is buffering the reader.

nvm, I think I misunderstood the proposed semantics.

@Merovius
Contributor

Merovius commented Apr 4, 2025

The API is arguably easier to deal with:

* no need to deal with partial reads or writes

I don't really agree on this point. For one, io.ReadFull exists. For another, there is no way to signal to the iter.Seq how many bytes you need/want. When I read from a Reader, I usually want to process that data. And that usually depends on some kind of condition, like the end of a token or line or whatever. With the iter.Seq API, I need to set up a temporary buffer and copy the data I get into it. With an io.Reader I can allocate a "large enough" []byte and use io.ReadFull to fill it.

That same problem applies to composing the iter.Seq. The FromReader function can basically slice and dice the data however it wishes, and I have no reason to assume that its chunking is consistent with the units I want to process in my pipeline. So, the next step in the pipeline will, in general, need to buffer stuff. That doesn't seem super nice.
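
For illustration, the pull-style idiom being described, where headerSize stands for some caller-known length:

buf := make([]byte, headerSize)
if _, err := io.ReadFull(r, buf); err != nil {
	return err
}
// buf now holds exactly the bytes this stage needs.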

As for efficiency, could we use the same underlying coroutine switches to make io.Pipe as efficient?

@rogpeppe
Contributor Author

rogpeppe commented Apr 4, 2025

The API is arguably easier to deal with:

* no need to deal with partial reads or writes

I don't really agree on this point. For one, io.ReadFull exists. For another, there is no way to signal to the iter.Seq how many bytes you need/want. When I read from a Reader, I usually want to process that data. And that usually depends on some kind of condition, like the end of a token or line or whatever. With the iter.Seq API, I need to set up a temporary buffer and copy the data I get into it. With an io.Reader I can allocate a "large enough" []byte and use io.ReadFull to fill it.

Yes, those are fair points. What you're talking about there is a pull-based API, where the caller can decide how much they want and when to acquire it. This Seq API is, like all iterators, fundamentally a push-based API, but phrased in terms of iterators, which makes it somewhat more convenient to pull the values that are being pushed. That's why we have iter.Pull and the range syntax sugar.

That same problem applies to composing the iter.Seq. The FromReader function can basically slice and dice the data however it wishes, and I have no reason to assume that its chunking is consistent with the units I want to process in my pipeline. So, the next step in the pipeline will, in general, need to buffer stuff. That doesn't seem super nice.

Yup, there are certain fundamentals here that are unavoidable. Some abstraction combinations require a buffer, some don't.

Somewhat related, here's an interesting transform I discovered earlier:

func WriterFuncToSeq(f func(w io.Writer) io.WriteCloser) func(r Seq) Seq

That is, we can rewrite any writer-to-writer function as a seq-to-seq function. This requires no buffer or data copying or coroutines. (Aside: I'm looking for a good name for this function).

That means my previous assertion that "many of the APIs that currently take a Writer as argument and return a Writer could straightforwardly and advantageously be rewritten as func(io.Reader) io.Seq" is kinda redundant.
All those functions can already be transformed into that form as desired.

Essentially what's going on under the hood there is that we're just passing a Writer implementation that invokes a callback function when Write is called, nothing special. But the syntax sugar and the Pull support make it something a bit more, I think.
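
Here's a sketch of that transform, assuming only the Seq alias from this proposal and that io.EOF ends the sequence (the real ioseq code may differ in such details):

func WriterFuncToSeq(f func(io.Writer) io.WriteCloser) func(Seq) Seq {
	return func(src Seq) Seq {
		return func(yield func([]byte, error) bool) {
			errStopped := errors.New("iteration stopped")
			// The adapted writer forwards every Write straight to yield:
			// no buffer, no copying.
			wc := f(writerFunc(func(p []byte) (int, error) {
				if !yield(p, nil) {
					return 0, errStopped
				}
				return len(p), nil
			}))
			for data, err := range src {
				if err == io.EOF {
					break // flush below, then propagate EOF
				}
				if err != nil {
					yield(nil, err)
					return
				}
				if _, err := wc.Write(data); err != nil {
					if err != errStopped {
						yield(nil, err)
					}
					return
				}
			}
			// Close flushes any remaining output through yield.
			if err := wc.Close(); err != nil {
				if err != errStopped {
					yield(nil, err)
				}
				return
			}
			yield(nil, io.EOF)
		}
	}
}

// writerFunc adapts an ordinary function to io.Writer.
type writerFunc func([]byte) (int, error)

func (f writerFunc) Write(p []byte) (int, error) { return f(p) }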

As for efficiency, could we use the same underlying coroutine switches to make io.Pipe as efficient?

I'm pretty sure that's impossible. io.Pipe gives you a handle on both the read side and the write side simultaneously.
Iterators fundamentally rely on the fact that the write side of the pipe (the yield function) is only available while a function that you call is running. It's illegal to pass it outside of that scope, so there's no way to obtain it, and therefore no way to implement the same API as io.Pipe using iterator coroutines. ReaderFromSeq is as good as we can get AFAICS, and perhaps the main fundamental justification for including the Seq type in the standard library.

@jonjohnsonjr
Contributor

That is, we can rewrite any writer-to-writer function as a seq-to-seq function. This requires no buffer or data copying or coroutines. (Aside: I'm looking for a good name for this function).
...

As for efficiency, could we use the same underlying coroutine switches to make io.Pipe as efficient?

I'm pretty sure that's impossible... ReaderFromSeq is as good as we can get AFAICS, and perhaps the main fundamental justification for including the Seq type in the standard library.

I want to claim a small amount of partial credit for nerd-sniping Roger into this weird WriterFuncToSeq thing. I was trying to figure out if it's possible to use iterators to turn a writer into a reader. My hope was that this would be better than the usual method that uses an io.Pipe with a goroutine. As you can tell by the type signature of WriterFuncToSeq, it isn't that straightforward, and I couldn't figure it out by myself.

A common example of this is if you want to have gzipped content in the form of an io.Reader, e.g. to use as the body of an HTTP request. To gzip the content, you can use a gzip.Writer, which gives you an io.Writer. But to upload it, you need an io.Reader to set as the http.Request.Body. Square peg, round hole.

The normal way of dealing with this mismatch might look something like:

func main() {
	pr, pw := io.Pipe()
	go func() {
		zw := gzip.NewWriter(pw)

		if _, err := io.Copy(zw, os.Stdin); err != nil {
			pw.CloseWithError(err)
			return
		}
		if err := zw.Close(); err != nil {
			pw.CloseWithError(err)
			return
		}
		pw.Close()
	}()

	resp, err := http.Post("http://example.com", "application/gzip", pr)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
}

You can instead compose a bunch of things from this proposal into:

func main() {
	in := ioseq.SeqFromReader(os.Stdin, 1024*32) // Same buffer size as io.Copy

	cb := ioseq.WriterFuncToSeq(func(w io.Writer) io.WriteCloser {
		return gzip.NewWriter(w)
	})

	body := ioseq.ReaderFromSeq(cb(in))

	resp, err := http.Post("http://example.com", "application/gzip", body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
}

Even with something CPU-intensive like gzip, I see ~10% speedup using roger's zany package over io.Pipe. For something like base64.NewEncoder, this is closer to ~100% speedup.

@rogpeppe
Contributor Author

rogpeppe commented Apr 5, 2025

If we make WriterFuncToSeq generic on the return type, then it becomes a bit more ergonomic. A bunch of functions in the stdlib (including gzip.NewWriter) satisfy func(io.Writer) W for some type W that implements io.Writer but isn't io.Writer itself.
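
Concretely, the generic signature would be something like this (a sketch):

func WriterFuncToSeq[W io.WriteCloser](f func(w io.Writer) W) func(Seq) Seq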

So the boilerplate above can be a bit shorter:

	cb := ioseq.WriterFuncToSeq(gzip.NewWriter)
	in := ioseq.SeqFromReader(os.Stdin, 1024*32) // Same buffer size as io.Copy
	body := ioseq.ReaderFromSeq(cb(in))

You could even encapsulate the above into a function, I guess:

// PipeThrough returns a reader that pipes the content from r through f.
func PipeThrough[W io.WriteCloser](r io.Reader, f func(io.Writer) W) io.Reader {
	return ReaderFromSeq(WriterFuncToSeq(f)(SeqFromReader(r, 1024 * 32)))
}
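
With that, the earlier gzip-upload example collapses further (a sketch reusing the names from above):

body := PipeThrough(os.Stdin, gzip.NewWriter)
resp, err := http.Post("http://example.com", "application/gzip", body)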

@rogpeppe
Contributor Author

rogpeppe commented Apr 5, 2025

Even with something CPU-intensive like gzip, I see ~10% speedup using roger's zany package over io.Pipe. For something like base64.NewEncoder, this is closer to ~100% speedup.

It should be better now that I've realised that it makes sense to implement WriterTo on the reader returned by ReaderFromSeq. When WriteTo is called before any Read call (the usual case, and the way that http.Post behaves), we don't need iter.Pull at all - we can just iterate directly. I see another 8% improvement with that for the base64 workload.
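
A sketch of that fast path, reusing the CopySeq sketch from earlier (seqReader is a hypothetical internal type):

type seqReader struct {
	seq  Seq
	next func() ([]byte, error, bool) // from iter.Pull2, created lazily on first Read
}

// WriteTo lets io.Copy (and callers like http.Post) drain the sequence
// by ranging over it directly, never starting the iter.Pull2 coroutine.
// The real code must fall back to the pull path if Read has already been called.
func (r *seqReader) WriteTo(w io.Writer) (int64, error) {
	return CopySeq(w, r.seq)
}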

@mauri870 mauri870 added Performance LibraryProposal Issues describing a requested change to the Go standard library or x/ libraries, but not to a tool labels Apr 5, 2025
@mauri870 mauri870 moved this to Incoming in Proposals Apr 5, 2025
@eihigh

eihigh commented Apr 6, 2025

FYI: Here is the implementation for a "concurrent" Push iterator, which is the opposite of iter.Pull.
#72083 (comment)

@rogpeppe
Contributor Author

rogpeppe commented Apr 6, 2025

@eihigh

FYI: Here is the implementation for a "concurrent" Push iterator, which is the opposite of iter.Pull. #72083 (comment)

That's interesting - I hadn't seen that. Thanks.

However, do you see that as directly relevant here? AFAICS that primitive is targeted at more general coroutine-oriented APIs, and this doesn't seem like it would benefit, but I'm probably missing something.

@nussjustin
Contributor

I closely follow the Go proposal process and try to read and understand all proposals and design documents. I also like to read the Go code of different projects I use.

Normally this is no problem: I can easily understand everything immediately and have no trouble wrapping my head around actual code.

And I love this about Go. It's probably the main reason I use it.

But this proposal is different, at least for me.

I feel like this is the first time in a long time (months? years?) that I actually had to take some code and stare at it for what felt like minutes to fully understand what it does and why we would need it.

When trying out some of the code I actually ended up inlining some code, as it made it far easier for me to understand what was happening, even if the code was "much longer". Still, even then I never found the result to be easy to understand.

The example from rogpeppe here is a good example:

// PipeThrough returns a reader that pipes the content from r through f.
func PipeThrough[W io.WriteCloser](r io.Reader, f func(io.Writer) W) io.Reader {
	return ReaderFromSeq(WriterFuncToSeq(f)(SeqFromReader(r, 1024 * 32)))
}

I understand what this does and that it can be quite useful, but it still took me a while to understand the implementation. Part of this is probably because I find the function names (ReaderFromSeq, WriterFuncToSeq and SeqFromReader) hard to understand, especially when combining the results of all 3 of them.

And that is my complaint here and the reason I gave a thumbs down: I feel like this is too much complexity for what (in my opinion) little benefit it provides.

I would really dislike seeing this added to the stdlib, and even more seeing the complexity find its way across the stdlib.

Given that one can convert from the interfaces to sequences, maybe this could be implemented outside the stdlib? Or at least in a different package, with other stdlib packages continuing to use the interfaces as much as possible.

I also think it is important to mention that io.Reader and io.Writer are probably some of the most used and most important interfaces in Go. Changing how we work with them, even without looking at the exact proposal, already feels like it could easily lead to bigger change (disruption?) than for example the introduction of the context package.

@earthboundkid
Contributor

earthboundkid commented Apr 8, 2025

In iter.Seq2[[]byte, error], is the []byte reusable between iterations? No, right? If not, wouldn't it make more sense to define a sequence as something like

type Seq interface {
    Bytes() []byte
    Text() string
    Err() error
    Advance() func(func() bool) // iter.Seq0
    // No Close() error method because Advance auto-closes when iteration finishes
}

// usage like

for range seq.Advance() {
    if seq.Err() != nil {
        // do something, break, return
    }
    process(seq.Bytes())
}

@mvdan
Member

mvdan commented Apr 8, 2025

@earthboundkid that seems like a significant increase in complexity - particularly given that it doesn't really protect from reusing or keeping each []byte.

@earthboundkid
Contributor

earthboundkid commented Apr 8, 2025

I think we don't really know how to use fallible sequences in Go yet. For my file walking iterator package, I ended up with what I think is an interesting API.

A walker.Ranger takes a walker.ErrorPolicy. A walker.ErrorPolicy is a func that receives iteration results and decides whether to continue iteration or not. The walker package comes with error policies for halt-on-error, panic-on-error, ignore-error, collect-errors, and ignore-permission-error-but-otherwise-halt, and you can always make custom policies. Because errors are managed by the error policy, a Ranger has convenient infallible iterators for all entries, just file entries (no directories), just the path strings, etc. There aren't any iter.Seq2[T, error] iterators.

I don't know if that's the right approach to managing fallible iterators in Go, but it's one I'm experimenting with. I think you could probably do something similar with io, but I'm not ready to throw out a proposal yet.

@apparentlymart

I'm concerned that there are currently various proposals all independently inventing patterns for fallible sequences as a side-effect of proposing something else, and that the first one that gets accepted would effectively make a decision for the whole standard library as to what the idiomatic pattern is.

The current support for infallible sequences was proposed and discussed separately from any single specific application of it, and I think it would be best to follow a similar approach for fallible sequences so that the proposal can consider the many different potential applications of that pattern at once and hopefully choose a compromise that is broadly applicable.

I don't mean to say that the new capabilities in this proposal are not worthwhile -- it does seem like a nice improvement -- but accepting it seems to imply also accepting that iter.Seq2[T, error] is an idiomatic way to represent fallible sequences, and I'd prefer that to be decided as a separate proposal that can take a broader view on it.

(I will be explicit that I'm still somewhat unconvinced that we should be trying to apply range-over-func to fallible sequences at all, vs. a more explicit pattern using normal function/method calls, but for me that specific preference is overridden by the desire for there to be a single idiomatic design pattern -- even if it does involve iter.Seq in some way -- used throughout the standard library and hopefully also adopted by third-party libraries for consistency and better composability.)
