
Conversation

@lchanouha

… issues & memory usage with large files

Hello,
I use your library to import / export LDIF with > 150K entries.

This commit addresses:

  • performance issues, by replacing basic string concatenation with buffer-based concatenation (see the sketch after this list)
  • I/O management
  • an intermediate function to stream entries over a channel, without building a huge ldif.LDIF table in memory
  • error handling: a read error is now attached to its entry and can be skipped
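
A minimal, self-contained sketch of the buffer-based concatenation the first bullet describes (this illustrates the general Go technique, not this PR's internal code; the sample DN strings are placeholders):

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	// Naive approach: each += copies the whole string built so far,
	// which becomes quadratic over 150K+ entries.
	s := ""
	for i := 0; i < 3; i++ {
		s += fmt.Sprintf("dn: cn=user%d,dc=example,dc=org\n\n", i)
	}

	// Buffer approach: amortized appends into one growing buffer,
	// which can then be flushed to any io.Writer.
	var buf bytes.Buffer
	for i := 0; i < 3; i++ {
		fmt.Fprintf(&buf, "dn: cn=user%d,dc=example,dc=org\n\n", i)
	}
	fmt.Println(len(s) == buf.Len()) // both build the same bytes
}
```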

I didn't change the behavior of Marshal and Unmarshal (same spec and error handling). To read / write
a lot of entries, these functions are preferred:

  • func MarshalBuffer(l *LDIF, output io.Writer) (err error) -> writes to any io.Writer (memory, file, stdout, ...)
  • func UnmarshalBuffer(r io.Reader, l *LDIF) (error, chan *Entry) -> emits entries on a channel, as sketched below
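
A rough usage sketch of the channel-based reader, assuming the proposed UnmarshalBuffer signature above (error first, entries on a channel) and that ldif.Entry wraps an *ldap.Entry; the file path is a placeholder:

```go
package main

import (
	"fmt"
	"os"

	"github.com/go-ldap/ldif"
)

func main() {
	f, err := os.Open("export.ldif") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var l ldif.LDIF
	// Proposed API: entries arrive on a channel as they are parsed,
	// instead of being accumulated into one large in-memory slice.
	err, entries := ldif.UnmarshalBuffer(f, &l)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for entry := range entries {
		if entry.Entry != nil {
			fmt.Println(entry.Entry.DN)
		}
	}
}
```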

This is some of my first Go code, so the commit may need a careful review (go test passes).

Regards,

@adamluzsi
Contributor

I just realised I did something similar, except using the iterator pattern to achieve the same streaming idiom within a single goroutine.

#25
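
For contrast, a hypothetical illustration of that iterator pattern (the names and shape here are illustrative only, not the actual API of #25): the caller pulls entries one at a time in its own goroutine, so no channel or second goroutine is needed.

```go
package main

import "fmt"

// entryIterator is a stand-in for a parser that yields one entry per
// call; real code would parse from an io.Reader instead of a slice.
type entryIterator struct {
	entries []string
	pos     int
}

// Next returns the next entry and true, or "" and false at the end.
func (it *entryIterator) Next() (string, bool) {
	if it.pos >= len(it.entries) {
		return "", false
	}
	e := it.entries[it.pos]
	it.pos++
	return e, true
}

func main() {
	it := &entryIterator{entries: []string{
		"cn=alice,dc=example,dc=org",
		"cn=bob,dc=example,dc=org",
	}}
	// Pull-based loop: the consumer controls the pace, all in one goroutine.
	for e, ok := it.Next(); ok; e, ok = it.Next() {
		fmt.Println(e)
	}
}
```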

@cpuschma
Member

@adamluzsi Yes, but imo your implementation is cleaner. I'll close this PR once yours is merged. Thank you both anyway for your work! ❤️

@adamluzsi
Contributor

The writing / Encoder idiom is still valuable to scavenge, but it might be worth proving its behaviour with tests.
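
A rough sketch of what such a test might look like, assuming the proposed MarshalBuffer / UnmarshalBuffer pair from this PR and that UnmarshalBuffer also populates the passed *LDIF; the sample LDIF and the assertion are placeholders, and a real test would compare entries field by field:

```go
package ldif_test

import (
	"bytes"
	"strings"
	"testing"

	"github.com/go-ldap/ldif"
)

// Round-trip sketch: unmarshal a small LDIF, marshal it back, and
// check that some output was produced.
func TestMarshalBufferRoundTrip(t *testing.T) {
	sample := "dn: cn=alice,dc=example,dc=org\ncn: alice\n\n"

	var l ldif.LDIF
	err, entries := ldif.UnmarshalBuffer(strings.NewReader(sample), &l)
	if err != nil {
		t.Fatal(err)
	}
	for range entries {
		// drain the channel so parsing runs to completion
	}

	var out bytes.Buffer
	if err := ldif.MarshalBuffer(&l, &out); err != nil {
		t.Fatal(err)
	}
	if out.Len() == 0 {
		t.Fatal("expected non-empty marshalled output")
	}
}
```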
